Source: perpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13.


P1: KAE0521857430pre CUFX049/Zelazo 0 521 85743 0 printer: cupusbw March 3 , 2007 16:33

The Cambridge Handbook of Consciousness

The Cambridge Handbook of Consciousness is the first of its kind in the field, and its appearance marks a unique time in the history of intellectual inquiry on the topic. After decades during which consciousness was considered beyond the scope of legitimate scientific investigation, consciousness re-emerged as a popular focus of research toward the end of the last century, and it has remained so for nearly 20 years. There are now so many different lines of investigation on consciousness that the time has come when the field may finally benefit from a book that pulls them together and, by juxtaposing them, provides a comprehensive survey of this exciting field.

Philip David Zelazo is Professor of Psychology at the University of Toronto, where he holds a Canada Research Chair in Developmental Neuroscience. He is also Co-Director of the Sino-Canadian Centre for Research in Child Development, Southwest University, China. He was Founding Editor of the Journal of Cognition and Development. His research, which is funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada, the Canadian Institutes of Health Research (CIHR), and the Canadian Foundation for Innovation (CFI), focuses on the mechanisms underlying typical and atypical development of executive function – the conscious self-regulation of thought, action, and emotion. In September 2007, he will assume the Nancy M. and John L. Lindahl Professorship at the Institute of Child Development, University of Minnesota.

Morris Moscovitch is the Max and Gianna Glassman Chair in Neuropsychology and Aging in the Department of Psychology at the University of Toronto. He is also a Senior Scientist at the Rotman Research Institute of Baycrest Centre for Geriatric Care. His research focuses on the neuropsychology of memory in humans but also addresses attention, face recognition, and hemispheric specialization in young and older adults, and in people with brain damage.

Evan Thompson is Professor of Philosophy at the University of Toronto. He is the author of Mind in Life: Biology, Phenomenology, and the Sciences of Mind and Colour Vision: A Study in Cognitive Science and the Philosophy of Perception. He is also the co-author of The Embodied Mind: Cognitive Science and Human Experience. He is a former holder of a Canada Research Chair.


The Cambridge Handbook of Consciousness

Edited by

Philip David Zelazo, Morris Moscovitch and Evan Thompson

University of Toronto


CAMBRIDGE UNIVERSITY PRESS

Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521857437

© Cambridge University Press 2007

This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2007

ISBN-13 978-0-511-28923-1 eBook (EBL)
ISBN-10 0-511-28923-5 eBook (EBL)

ISBN-13 978-0-521-85743-7 hardback
ISBN-10 0-521-85743-0 hardback

ISBN-13 978-0-521-67412-6 paperback
ISBN-10 0-521-67412-3 paperback

Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


To the memory of Francisco J. Varela (7 September 1946–28 May 2001)

– ET

To my growing family: Jill, Elana, David, Leora, and Ezra Meir
– MM

For Sam, and the next iteration
– PDZ

And a special dedication to Joseph E. Bogen (13 July 1926–22 April 2005)


Contents

List of Contributors page xi

1. Introduction 1

Part I

THE COGNITIVE SCIENCE OF CONSCIOUSNESS

A. Philosophy

2. A Brief History of the Philosophical Problem of Consciousness 9

William Seager

3. Philosophical Theories of Consciousness: Contemporary Western Perspectives 35

Uriah Kriegel

4. Philosophical Issues: Phenomenology 67

Evan Thompson and Dan Zahavi

5. Asian Perspectives: Indian Theories of Mind 89

Georges Dreyfus and Evan Thompson

B. Computational Approaches to Consciousness

6. Artificial Intelligence and Consciousness 117

Drew McDermott

7. Computational Models of Consciousness: A Taxonomy and Some Examples 151

Ron Sun and Stan Franklin

C. Cognitive Psychology

8. Cognitive Theories of Consciousness 177

Katharine McGovern and Bernard J. Baars

9. Behavioral, Neuroimaging, and Neuropsychological Approaches to Implicit Perception 207

Daniel J. Simons, Deborah E. Hannula, David E. Warren, and Steven W. Day


10. Three Forms of Consciousness in Retrieving Memories 251

Henry L. Roediger III, Suparna Rajaram, and Lisa Geraci

11. Metacognition and Consciousness 289

Asher Koriat

12. Consciousness and Control of Action 327

Carlo Umiltà

D. Linguistic Considerations

13. Language and Consciousness 355

Wallace Chafe

14. Narrative Modes of Consciousness and Selfhood 375

Keith Oatley

E. Developmental Psychology

15. The Development of Consciousness 405

Philip David Zelazo, Helena Hong Gao, and Rebecca Todd

F. Alternative States of Consciousness

16. States of Consciousness: Normal and Abnormal Variation 435

J. Allan Hobson

17. Consciousness in Hypnosis 445

John F. Kihlstrom

18. Can We Study Subjective Experiences Objectively? First-Person Perspective Approaches and Impaired Subjective States of Awareness in Schizophrenia 481

Jean-Marie Danion and Caroline Huron

19. Meditation and the Neuroscience of Consciousness: An Introduction 499

Antoine Lutz, John D. Dunne, and Richard J. Davidson

G. Anthropology/Social Psychology of Consciousness

20. Social Psychological Approaches to Consciousness 555

John A. Bargh

21. The Evolution of Consciousness 571

Michael C. Corballis

22. The Serpent's Gift: Evolutionary Psychology and Consciousness 597

Jesse M. Bering and David F. Bjorklund

23. Anthropology of Consciousness 631

C. Jason Throop and Charles D. Laughlin

H. Psychodynamic Approaches to Consciousness

24. Motivation, Decision Making, and Consciousness: From Psychodynamics to Subliminal Priming and Emotional Constraint Satisfaction 673

Drew Westen, Joel Weinberger, and Rebekah Bradley

Part II

THE NEUROSCIENCE OF CONSCIOUSNESS

A. Neurophysiological Mechanisms of Consciousness

25. Hunting the Ghost: Toward a Neuroscience of Consciousness 707

Petra Stoerig

26. Neurodynamical Approaches to Consciousness 731

Diego Cosmelli, Jean-Philippe Lachaux, and Evan Thompson

B. Neuropsychological Aspects of Consciousness: Disorders and Neuroimaging

27. The Thalamic Intralaminar Nuclei and the Property of Consciousness 775

Joseph E. Bogen


28. The Cognitive Neuroscience of Memory and Consciousness 809

Scott D. Slotnick and Daniel L. Schacter

C. Affective Neuroscience of Consciousness

29. The Affective Neuroscience of Consciousness: Higher-Order Syntactic Thoughts, Dual Routes to Emotion and Action, and Consciousness 831

Edmund T. Rolls

D. Social Neuroscience of Consciousness

30. Consciousness: Situated and Social 863

Ralph Adolphs

Part III

QUANTUM APPROACHES TO CONSCIOUSNESS

31. Quantum Approaches to Consciousness 881

Henry Stapp

Author Index 909

Subject Index 939


List of Contributors

Ralph Adolphs, PhD
Department of Psychology
California Institute of Technology
HSS 228-77
Pasadena, CA 91125 USA
E-mail: [email protected]

Bernard J. Baars, PhD
The Neurosciences Institute
10640 John Jay Hopkins Drive
San Diego, CA 92121 USA
E-mail: [email protected]

John A. Bargh, PhD
Department of Psychology
Yale University
2 Hillhouse Avenue
P.O. Box 208205
New Haven, CT 06520-8205 USA
E-mail: [email protected]

Jesse M. Bering, PhD
Institute of Cognition and Culture
Queen's University, Belfast
4 Fitzwilliam Street
Belfast, Northern Ireland BT7 1NN
E-mail: [email protected]

David F. Bjorklund, PhD
Department of Psychology
Florida Atlantic University
Boca Raton, FL 33431-0091 USA
E-mail: [email protected]

Joseph E. Bogen, MD (Deceased)
Formerly of the University of Southern California and the University of California, Los Angeles

Rebekah Bradley
Department of Psychiatry and Behavioral Sciences
Emory University
1462 Clifton Road
Atlanta, GA 30322 USA
E-mail: [email protected]

Wallace Chafe, PhD
Department of Linguistics
University of California, Santa Barbara
Santa Barbara, CA 93106 USA
E-mail: [email protected]

Michael C. Corballis, PhD
Department of Psychology
University of Auckland
Private Bag 92019
Auckland 1020 NEW ZEALAND
E-mail: [email protected]

Diego Cosmelli, PhD
Centro de Estudios Neurobiológicos
Departamento de Psiquiatría
P. Universidad Católica de Chile
Marcoleta 387, 2° piso
Santiago, Chile


(Also: Laboratoire de Neurosciences Cognitives et Imagerie Cérébrale (LENA)
47 Bd de l'Hôpital, 75651 Paris FRANCE)
E-mail: [email protected]

Jean-Marie Danion, MD
INSERM Unité 405
Hôpital Civil de Strasbourg – Clinique Psychiatrique
1 place de l'Hôpital – BP n° 426
67091 STRASBOURG Cedex FRANCE
E-mail: jean-marie.danion@chru-strasbourg.fr

Richard J. Davidson, PhD
W. M. Keck Laboratory for Functional Brain Imaging and Behavior
Waisman Center
University of Wisconsin-Madison
1500 Highland Avenue
Madison, WI 53703-2280 USA
E-mail: [email protected]

Steven W. Day, BSc
Department of Psychology
University of Illinois
603 East Daniel Street
Champaign, IL 61820 USA

Georges Dreyfus, PhD
Department of Religion
Williams College
E14 Stetson Hall
Williamstown, MA 01267 USA
E-mail: [email protected]

John D. Dunne, PhD
Department of Religion
Emory University
Mailstop: 1535/002/1AA
537 Kilgo Circle
Atlanta, GA 30322 USA
E-mail: [email protected]

Stan Franklin, PhD
Institute for Intelligent Systems
The University of Memphis
Memphis, TN 38152 USA
E-mail: [email protected]

Helena Hong Gao, PhD
School of Humanities and Social Sciences
Nanyang Technological University
Singapore 639798
E-mail: [email protected]

Lisa Geraci, PhD
Department of Psychology
Washington University
One Brookings Drive
Campus Box 1125
St. Louis, MO 63130-4899 USA
E-mail: [email protected]

Deborah E. Hannula
Psychology Department
University of Illinois
603 E. Daniel Street, Room 807
Champaign, IL 61820 USA
E-mail: [email protected]

J. Allan Hobson, MD
Massachusetts Mental Health Center
Psychiatry, S12
74 Fenwood Road
Boston, MA 02115 USA
E-mail: allan [email protected]

Caroline Huron, MD, PhD
INSERM 0117
Service Hospitalo-Universitaire de Santé Mentale et Thérapeutique
Hôpital Sainte-Anne
Université Paris V
Pavillon Broca
2 ter rue d'Alésia
75014 Paris FRANCE
E-mail: [email protected]

John F. Kihlstrom, PhD
Department of Psychology, MC 1650
University of California, Berkeley
Tolman Hall 3210
Berkeley, CA 94720-1650 USA
E-mail: [email protected]

Asher Koriat, PhD
Department of Psychology
University of Haifa
Haifa 31905 ISRAEL
E-mail: [email protected]

Uriah Kriegel, PhD
Department of Philosophy
Social Science Bldg. Rm 213
P.O. Box 210027
Tucson, AZ 85721-0027 USA
E-mail: [email protected]

Jean-Philippe Lachaux
INSERM – Unité 280
Centre Hospitalier Le Vinatier
Bâtiment 452
95 Boulevard Pinel
69500 BRON, FRANCE
E-mail: [email protected]


Charles D. Laughlin, PhD
Department of Sociology and Anthropology
Carleton University
125 Colonel By Drive
Ottawa, ON K1S 5B6 CANADA

Antoine Lutz, PhD
W. M. Keck Laboratory for Functional Brain Imaging and Behavior
Waisman Center
University of Wisconsin-Madison
1500 Highland Avenue
Madison, WI 53703-2280 USA
E-mail: [email protected]

Drew McDermott, PhD
Department of Computer Science
Yale University
P.O. Box 208285
New Haven, CT 06520-8285 USA
E-mail: [email protected]

Katharine McGovern, PhD
California Institute of Integral Studies
1453 Mission Street
San Francisco, CA 94103 USA
E-mail: [email protected]

Keith Oatley, PhD
Department of Human Development and Applied Psychology
Ontario Institute for Studies in Education/University of Toronto
252 Bloor Street West
Toronto, ON M5S 1V6 CANADA
E-mail: [email protected]

Suparna Rajaram, PhD
Department of Psychology
SUNY at Stony Brook
Stony Brook, NY 11794-2500 USA
E-mail: [email protected]

Henry L. Roediger III, PhD
Department of Psychology, Box 1125
Washington University
One Brookings Drive
St. Louis, MO 63130-4899 USA
E-mail: [email protected]

Edmund T. Rolls, PhD
University of Oxford
Department of Experimental Psychology
South Parks Road
Oxford OX1 3UD ENGLAND
E-mail: [email protected]

Daniel L. Schacter, PhD
Department of Psychology
Harvard University
Cambridge, MA 02138 USA
E-mail: [email protected]

William Seager, PhD
Department of Philosophy
University of Toronto at Scarborough
265 Military Trail
Scarborough, ON M1C 1A4 CANADA
E-mail: [email protected]

Daniel J. Simons, PhD
Psychology Department
University of Illinois
603 E. Daniel Street, Room 807
Champaign, IL 61820 USA
E-mail: [email protected]

Scott D. Slotnick
Department of Psychology
Boston College
McGuinn Hall
Chestnut Hill, MA 02467 USA
E-mail: [email protected]

Henry Stapp, PhD
Lawrence Berkeley National Lab
Physics Division
1 Cyclotron Road, Mail Stop 50A-5101
Berkeley, CA 94720-8153 USA
E-mail: [email protected]

Petra Stoerig, PhD
Institute of Physiological Psychology
Heinrich-Heine-University
Düsseldorf D-40225 GERMANY
E-mail: [email protected]

Ron Sun, PhD
Cognitive Science Department
Rensselaer Polytechnic Institute
110 Eighth Street, Carnegie 302A
Troy, NY 12180 USA
E-mail: [email protected]

Evan Thompson, PhD
Department of Philosophy
University of Toronto
15 King's College Circle
Toronto, ON M5S 3H7 CANADA
E-mail: [email protected]

C. Jason Throop, PhD
Department of Anthropology
University of California, Los Angeles
341 Haines Hall
Los Angeles, CA 90095 USA
E-mail: [email protected]


Rebecca Todd, BA
Department of Human Development and Applied Psychology
Ontario Institute for Studies in Education/University of Toronto
252 Bloor Street West
Toronto, Ontario M5S 1V6 CANADA
E-mail: [email protected]

Carlo Umiltà, PhD
Dipartimento di Psicologia Generale
Università di Padova
via 8 Febbraio 2, 35122 Padova ITALY
E-mail: [email protected]

David E. Warren, BSc
Department of Psychology
University of Illinois
603 E. Daniel Street
Champaign, IL 61820 USA
E-mail: [email protected]

Joel Weinberger, PhD
Derner Institute
Adelphi University
Box 701
Garden City, NY 11530 USA
E-mail: [email protected]

Drew Westen, PhD
Department of Psychology
Emory University
532 N. Kilgo Circle
Atlanta, GA 30322 USA
E-mail: [email protected]

Dan Zahavi, PhD
Danish National Research Foundation
Center for Subjectivity Research
Købmagergade 46
DK-1150 Copenhagen K DENMARK
E-mail: [email protected]

Philip David Zelazo, PhD
Department of Psychology
University of Toronto
100 St. George Street
Toronto, ON M5S 3G3 CANADA
E-mail: [email protected]
(After September 2007:
Institute of Child Development
University of Minnesota
51 East River Road
Minneapolis, MN 55455 USA
E-mail: [email protected])


The Cambridge Handbook of Consciousness


P1: JzG0521857430c01 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 5 , 2007 3 :36

CHAPTER 1

Consciousness: An Introduction

Philip David Zelazo, Morris Moscovitch, and Evan Thompson

The Cambridge Handbook of Consciousness brings together leading scholars from around the world who address the topic of consciousness from a wide variety of perspectives, ranging from philosophical to anthropological to neuroscientific. This handbook is the first of its kind in the field, and its appearance marks a unique time in the history of intellectual inquiry on the topic. After decades during which consciousness was considered beyond the scope of legitimate scientific investigation, consciousness re-emerged as a popular focus of research during the latter part of the last century and it has remained so for more than 20 years. Indeed, there are now so many different lines of investigation on consciousness that the time has come when the field may finally benefit from a book that pulls them together and, by juxtaposing them, provides a comprehensive survey of this exciting field.

By the mid-1990s, if not earlier, it was widely agreed that one could not get a full appreciation of psychological phenomena – for example, of perception or memory – without distinguishing between conscious and unconscious processes. The antecedents of this agreement are many, and it would be beyond the scope of this Introduction to do more than highlight a few (for further discussion, see Umiltà & Moscovitch, 1994). One of the most obvious is the so-called cognitive revolution in psychology and the subsequent emergence of cognitive science as an interdisciplinary enterprise. Whereas previously psychologists sought to describe lawful relations between environmental stimuli and behavioral responses, in the mid-1950s or so they began to trace the flow of information through a cognitive system, viewing the mind as a kind of computer program. It eventually became clear, however, that by focusing on the processing of information – the kind of thing a computer can do – psychology left out most of what really matters to us as human beings; as conscious subjects, it left us cold. The cognitive revolution opened the door to the study of such topics as attention and memory, and some time later, consciousness came on through.

The pre-1990s tendency to avoid discussions of consciousness, except in certain contexts (e.g., in phenomenological philosophy and psychoanalytic circles), may have been due, in part, to the belief that consciousness necessarily was a kind of ghost in the machine – one that inevitably courted the awful specter of dualism. Since then, however, our ontological suppositions have evolved, and this evolution may be a consequence of the growing trend toward interdisciplinary investigation – seen, for example, in the emergence of cognitive science and neuroscience as coherent fields. The transdisciplinary perspective afforded by new fields may have engendered an increased openness and willingness to explore problems that earlier were deemed too difficult to address. Certainly, it provided the means that made these problems seem soluble. Indeed, precisely because consciousness is such a difficult problem, progress in solving it probably depends on a convergence of ideas and methodologies: We are unlikely to arrive at an adequate understanding of consciousness in the absence of a transdisciplinary perspective.

Clinical sciences, and in particular neuropsychology, also played a prominent role in helping usher in a new willingness to tackle the problem of consciousness. Various unusual syndromes came to light in the latter half of the 20th century, and these syndromes seemed to demand an explanation in terms of consciousness. Blindsight is a good example: In this syndrome, patients with lesions to the occipital lobe of the brain are phenomenologically blind, but can nonetheless perform normally on a number of visual tasks. Another example is amnesia, in which people who are phenomenologically amnesic as a result of damage to medial temporal lobes or the diencephalon can acquire, retain, and recover information without awareness. Similar examples emerged in other domains, and it soon became clear that processes under conscious control complement, or compete with, unconscious processes in the control of cognition and behavior. These issues are also beginning to play a major role in the rigorous, scientific analysis of psychopathology, the one field in which concerns with the role of conscious and unconscious processes have played a steady role since Freud. Moreover, some of these same atypical phenomena (e.g., blindsight) have also been demonstrated in non-human animals, raising the possibility that consciousness is not associated exclusively with human beings.

A third prominent contribution to the current state of affairs is the development of new techniques that have made it possible to treat consciousness in a more rigorous and scientifically respectable fashion. Foremost among these is the development of neuroimaging techniques that allow us to correlate performance and subjective experience with brain function. These techniques include electrophysiological methods, such as magneto-encephalography (MEG), and various types of functional neuroimaging, including functional magnetic resonance imaging (fMRI). The analytic sophistication of these technologies is growing rapidly, as is the creation of new technologies that will expand our capabilities to look into the brain more closely and seek answers to questions that now seem impossible to address.

There is currently considerable interest in exploring the neural correlates of consciousness. There is also a growing realization, however, that it will not be possible to make serious headway in understanding consciousness without confronting the issue of how to acquire more precise descriptive first-person reports about subjective experience (Jack & Roepstorff, 2003, 2004). Psychologists, especially clinical psychologists and psychotherapists, have grappled with this issue for a long time, but it has gained new prominence thanks to the use of neuroimaging techniques. Here one guiding idea is that it may be possible to recover information about the highly variable neural processes associated with consciousness by collecting more precise, trial-by-trial first-person reports from experimental participants.

If ever it was possible to do so, certainly serious students of the mind can no longer ignore the topic of consciousness. This volume attempts to survey the major developments in a wide range of intellectual domains to give the reader an appreciation of the state of the field and where it is heading. Despite our efforts to provide a comprehensive overview of the field, however, there were several unavoidable omissions. Though we had hoped to include chapters on psychedelic drugs and on split-brain research, in the end we were unable to obtain these chapters in time. Readers interested in the latest scientific writing on drugs and consciousness may wish to see Benny Shanon's (2002) book on ayahuasca. Michael Gazzaniga's (1998) book, The Mind's Past, provides an accessible overview of work on split-brain research and its implications for subjective experience. We note, too, that although we were able to cover philosophical approaches to consciousness from a variety of cultural perspectives, including Continental phenomenology and Asian philosophy (particularly Buddhism), there were inevitably others that we omitted. We apologize for these unfortunate gaps.

The volume is organized mainly around a broad (sometimes untenable) distinction between cognitive scientific approaches and neuroscientific approaches. Although we are mindful of the truly transdisciplinary nature of contemporary work on consciousness, we believe this distinction may be useful for readers who wish to use this handbook as an advanced textbook. For example, readers who want a course in consciousness from a cognitive science perspective might concentrate on Chapters 2–24. Readers approaching the topic from the perspective of neuroscience might emphasize Chapters 25–31. A more sociocultural course could include Chapters 2–4, 13–15, 19–24, and 31. More focused topical treatments are also possible. For example, a course on memory might include Chapters 6–8, 10, 18, and 29.

The topic of consciousness is relevant to all intellectual inquiry – indeed, it is the foundation of this inquiry. As the chapters collected here show, individually and together, by ignoring consciousness, one places unnecessary constraints on our understanding of a wide range of phenomena – and risks grossly distorting them. Many mysteries remain (e.g., what are the neural substrates of consciousness? are there varieties or levels of consciousness within domains of functioning, across domains, across species, and/or across the lifespan?), but there has also been considerable progress. We hope this collection serves a useful function by helping readers see both how far we have come in understanding consciousness and how far we have to go.

Acknowledgments

The editors would like to thank Phil Laughlin, formerly of CUP, who encouraged us to prepare this volume, and Armi Macaballug and Mary Cadette, who helped us during the final production phases. Dana Liebermann provided valuable assistance as we planned the volume, and Helena Hong Gao helped us pull the many chapters together; we are very grateful to them both. We would also like to thank the contributors for their patience during the editorial process (the scope of this volume threatened, at times, to turn this process into an editorial nightmare . . . ). Finally, we note with sadness the death of Joseph Bogen, one of the pioneers in research on consciousness. We regret that he was unable to see his chapter in print.

References

Gazzaniga, M. S. (1998). The mind's past. Berkeley, CA: University of California Press.

Jack, A., & Roepstorff, A. (Eds.). (2003). Trusting the subject? The use of introspective evidence in cognitive science (Vol. 1). Thorverton, UK: Imprint Academic.

Jack, A., & Roepstorff, A. (Eds.). (2004). Trusting the subject? The use of introspective evidence in cognitive science (Vol. 2). Thorverton, UK: Imprint Academic.

Shanon, B. (2002). The antipodes of the mind: Charting the phenomenology of the ayahuasca experience. New York: Oxford University Press.

Umiltà, C., & Moscovitch, M. (Eds.). (1994). Attention and Performance XV: Conscious and nonconscious information processing. Cambridge, MA: MIT Press/Bradford Books.


P1: JzG0521857430c02 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 5 , 2007 5 :3

Part I

THE COGNITIVE SCIENCE OF CONSCIOUSNESS


A. Philosophy


CHAPTER 2

A Brief History of the Philosophical Problem of Consciousness

William Seager

Abstract

The problem of consciousness, generally referred to as the mind-body problem (although this characterization is unfortunately narrow), has been the subject of philosophical reflection for thousands of years. This chapter traces the development of this problem in Western philosophy from the time of the ancient Greeks to the middle of the 20th century. The birth of science in the 17th century and its subsequent astounding success made the problem of mind particularly acute, and produced a host of philosophical positions in response. These include the infamous interactionist dualism of Descartes and a host of dualist alternatives forced by the intractable problem of mind-matter interaction; a variety of idealist positions which regard mind as ontologically fundamental; emergentist theories which posit entirely novel entities, events, and laws which ‘grow’ out of the material substrate; panpsychist, double aspect, and ‘neutral monist’ views in which both mind and matter are somehow reflections of some underlying, barely knowable ur-material; and increasingly sophisticated forms of materialism which, despite failing to resolve the problem of consciousness, seemed to fit best with the scientific view of the world and eventually came to dominate thinking about the mind in the 20th century.

I. Forms of Consciousness

The term ‘consciousness’ possesses a huge and diverse set of meanings. It is not even obvious that there is any one ‘thing’ that all uses of the term have in common which could stand as its core referent (see Wilkes 1988). When we think about consciousness we may have in mind highly complex mental activities, such as reflective self-consciousness or introspective consciousness, of which perhaps only human beings are capable. Or we may be thinking about something more purely phenomenal, perhaps something as apparently simple and unitary as a momentary stab of pain. Paradigmatic examples of consciousness are the perceptual states of seeing and hearing, but the nature of the consciousness involved is actually complex and far from clear. Are the conscious elements of perception made up only of raw sensations from which we construct objects of perception in a quasi-intellectual operation? Or is perceptual consciousness always of ‘completed’ objects with their worldly properties?

The realm of consciousness is hardly exhausted by its reflective, introspective, or perceptual forms. There is distinctively emotional consciousness, which seems to necessarily involve both bodily feelings and some kind of cognitive assessment of them. Emotional states require a kind of evaluation of a situation. Does consciousness thus include distinctive evaluative states, so that, for example, consciousness of pain would involve both bodily sensations and a conscious sense of aversion? Linked closely with emotional states are familiar, but nonetheless rather peculiar, states of consciousness that are essentially other-directed, notably empathy and sympathy. We visibly wince when others are hurt and almost seem to feel pain ourselves as we undergo this unique kind of experience.

Philosophers argue about whether all thinking is accompanied by or perhaps even constituted out of sensory materials (images have been the traditional favorite candidate material), and some champion the idea of a pure thought-consciousness independent of sensory components. In any event, there is no doubt that thought is something that often happens consciously and is in some way different from perception, sensation, or other forms of consciousness.

Another sort of conscious experience is closely associated with the idea of conscious thought but not identical to it: epistemological consciousness, or the sense of certainty or doubt we have when consciously entertaining a proposition (such as ‘2 + 3 = 5’ or ‘the word “eat” consists of three letters’). Descartes famously appealed to such states of consciousness in the ‘method of doubt’ (see his Meditations 1641/1985).

Still another significant if subtle form of consciousness has sometimes been given the name ‘fringe’ consciousness (see Mangan 2001, following James 1890/1950, ch. 9), which refers to the background of awareness which sets the context for experience. An example is our sense of orientation or rightness in a familiar environment (consider the change in your state of consciousness when you recognize the face of someone who at first appeared to be a stranger). Moods present another form of fringe consciousness, with clear links to the more overtly conscious emotional states but also clearly distinct from them.

But I think there is a fundamental commonality to all these different forms of consciousness. Consciousness is distinctive for its subjectivity or its first-person character. There is ‘something it is like’ to be in a conscious state, and only the conscious subject has direct access to this way of being (see Nagel 1974). In contrast, there is nothing it is like to be a rock, no subjective aspect to an ashtray. But conscious beings are essentially different in this respect. The huge variety in the forms of consciousness makes the problem very complex, but the core problem of consciousness focuses on the nature of subjectivity.

A further source of complexity arises from the range of possible explanatory targets associated with the study of consciousness. One might, for instance, primarily focus on the structure or contents of consciousness. These would provide a valid answer to one legitimate sense of the question, What is consciousness? But then again, one might be more interested in how consciousness comes into being, either in a developing individual or in the universe at large. Or one might wonder how consciousness, seemingly so different from the purely objective properties of the material world studied by physics or chemistry, fits in with the overall scientific view of the world. To address all these aspects of the problem of consciousness would require volumes upon volumes. The history presented in this chapter focuses on what has become perhaps the central issue in consciousness studies, which is the problem of integrating subjectivity into the scientific view of the world.


II. The Nature of the Problem

Despite the huge range of diverse opinion, I think it is fair to say that there is now something of a consensus view about the origin of consciousness, which I call here the mainstream view. It is something like the following. The world is a purely physical system created some 13 billion years ago in the prodigious event that Fred Hoyle labeled the big bang. Very shortly after the big bang the world was in a primitive, ultra-hot, and chaotic state in which normal matter could not exist, but as the system cooled the familiar elements of hydrogen and helium, as well as some traces of a few heavier elements, began to form. Then very interesting things started to happen, as stars and galaxies quickly evolved, burned through their hydrogen fuel, and went nova, in the process creating and spewing forth most of the elements of the periodic table into the increasingly rich galactic environments.

There was not a trace of life, mind, or consciousness throughout any of this process. That was to come later. The mainstream view continues with the creation of planetary systems. At first these systems were poor in heavier elements, but after just a few generations of star creation and destruction there were many Earth-like planets scattered through the vast – perhaps infinite – expanse of galaxies, and indeed some 7 or 8 billion years after the big bang, the Earth itself formed along with our solar system.

We do not yet understand it very well, but whether in a warm little pond, around a deeply submerged hydrothermal vent, amongst the complex interstices of some clay-like matrix, as a pre-packaged gift from another world, or in some other way of which we have no inkling, conditions on the early Earth somehow enabled the special – though entirely in accord with physical law – chemistry necessary for the beginnings of life.

But even with the presence of life or proto-life, consciousness still did not grace the Earth. The long, slow processes of evolution by natural selection took hold and ultimately led at some time, somewhere to the first living beings that could feel – pain and pleasure, want and fear – and could experience sensations of light, sound, or odors. The mainstream view sees this radical development as being conditioned by the evolution of neurological behavior control systems in co-evolutionary development with more capable sensory systems. Consciousness thus emerged as a product of increasing biological complexity, from non-conscious precursors composed of non-conscious components.

Here we can raise many of the central questions within the problem of consciousness. Imagine we were alien exobiologists observing the Earth around the time of the emergence of consciousness. How would we know that certain organisms were conscious, while other organisms were not? What is it about the conscious organisms that explains why they are conscious? Furthermore, the appearance of conscious beings looks to be a development that sharply distinguishes them from their precursors, but the material processes of evolution are not marked by such radical discontinuities. To be sure, we do find striking differences among extant organisms. The unique human use of language is perhaps the best example of such a difference, but of course the apes exhibit a host of related, potentially precursor abilities, as do human beings who lack full language use. Thus we have possible models of at least some aspects of our prelinguistic ancestors which suggest the evolutionary path that led to language.

But the slightest, most fleeting spark of feeling is a full-fledged instance of consciousness which entirely differentiates its possessor from the realm of the non-conscious. Note here a dissimilarity to other biological features. Some creatures have wings and others do not, and we would expect that in the evolution from wingless to winged there would be a hazy region where it just would not be clear whether a certain creature’s appendages would count as wings. Similarly, as we consider the evolutionary advance from non-conscious to conscious creatures, there would be a range of creatures about which we would be unclear whether they were conscious or not. But in this latter case, there is a fact about whether or not the creatures in that range are feeling anything, however dimly or weakly, whereas we do not think there must be a fact about whether a certain appendage is or is not a wing (a dim or faint feeling is 100% a kind of consciousness, but a few feathers on a forelimb is not a kind of wing). It is up to us whether to count a certain sort of appendage as a wing or not – it makes no difference, so to speak, to the organism what we call it. But it is not up to us to decide whether or not organism X does or does not enjoy some smidgen of consciousness – it either does or it does not.

Lurking behind these relatively empirical questions is a more basic theoretical, or metaphysical, issue. Given that creatures capable of fairly complex behavior were evolving without consciousness, why is consciousness necessary for the continued evolution of more complex behavior? Just as wings are an excellent solution to the problem of evolving flight, brains (or more generally nervous systems) are wonderful at implementing richly capable sensory systems and coordinated behavior control systems. But why should these brains be conscious? Although perhaps of doubtful coherence, it is useful to try to imagine our alien biologists as non-conscious beings. Perhaps they are advanced machines well programmed in deduction, induction, and abduction. Now, why would they ever posit consciousness in addition to, or as a feature of, complex sensory and behavioral control systems? As Thomas Huxley said, ‘How it is that anything so remarkable as a state of consciousness comes about as a result of irritating nervous tissue, is just as unaccountable as the appearance of Djin when Aladdin rubbed his lamp’ (1866, 8, 210). We might, rather fancifully, describe this core philosophical question about consciousness as how the genie of consciousness gets into the lamp of the brain, or why, to use Thomas Nagel’s (1974) famous phrase, there is ‘something it is like’ to be a conscious entity.

III. Ancient Hints

Of course, the mainstream view has not long been mainstream, for the problem of consciousness cannot strike one at all until a fairly advanced scientific understanding of the world permits development of the materialism presupposed by the mainstream view. A second necessary condition is simply the self-recognition that we are conscious beings possessing a host of mental attributes. And that conception has been around for a long time. Our ancestors initiated a spectacular leap in conceptual technology by devising what is nowadays called folk psychology. The development of the concepts of behavior-explaining states such as belief and desire, motivating states of pleasure and pain, and information-laden states of perceptual sensation, as well as the complex links amongst these concepts, is perhaps the greatest piece of theorizing ever produced by human beings. The power and age of folk psychology are attested by the universal animism of preliterate peoples and the seemingly innate tendencies of very young children to regard various natural or artificial processes as exemplifying agency (see, among many others, Bloom 2004; Gergeley et al. 1995; Perner 1991). The persistence of the core mentalistic notions of goal and purpose in Aristotle’s proto-scientific but highly sophisticated theorizing also reveals the powerful hold these concepts had, and have, on human thought. But to the extent that mentalistic attributes are regarded as ubiquitous, no special problem of relating the mental to the non-mental realm can arise, for there simply is no such realm.

But interesting hints of this problem arise early on in philosophy, as the first glimmerings of a naturalistic world view occur. A fruitful way to present this history is in terms of a fundamental divergence in thought that arose early and has not yet died out in current debate. This is the contrast between emergence and panpsychism. The mainstream view accepts emergence: mind or consciousness appeared out of non-conscious precursors and non-conscious components (note there is both a synchronic and a diachronic sense of emergence). Panpsychism is the alternative view that emergence is impossible and mind must be already and always present, in some sense, throughout the universe (a panpsychist might allow that mind emerges in the trivial sense that the universe may have been created out of nothing and hence out of ‘non-consciousness’; the characteristically panpsychist position here would be that consciousness must have been created along with whatever other fundamental features of the world were put in place at the beginning). Of course, this divergence transcends the mind-body problem and reflects a fundamental difference in thinking about how the world is structured.

The Presocratic philosophers who flourished some 2,500 years ago in the Mediterranean basin were the first in the West to conceive of something like a scientific approach to nature, and it was their conception that eventually led to what we call science. Although their particular theories were understandably crude and often very fanciful, they were able to grasp the idea that the world could be viewed as composed out of elemental features, whose essential characterization might be hidden from human senses and which acted according to constant and universal principles or laws.

The Presocratics immediately recognized the basic dilemma: either mind (or, more generally, whatever apparently macroscopic, high-level, or non-fundamental property is at issue) is an elemental feature of the world, or it somehow emerges from, or is conditioned by, such features. If one opts for emergence, it is incumbent upon one to at least sketch the means by which new features emerge. If one opts for panpsychism (thus broadly construed), then one must account for the all too obviously apparent total lack of certain features at the fundamental level. For example, Anaxagoras (c. 500–425 BCE) flatly denied that emergence was possible and instead advanced the view that ‘everything is in everything’. Anaxagoras explained the obvious contrary appearance by a ‘principle of dominance and latency’ (see Mourelatos 1986), which asserted that some qualities were dominant in their contribution to the behaviour and appearance of things. However, Anaxagoras’s views on mind are complex because he apparently regarded it as uniquely not containing any measure of other things and thus not fully in accord with his mixing principles. Perhaps this can be interpreted as the assertion that mind is ontologically fundamental in a special way; Anaxagoras did seem to believe that everything has some portion of mind in it while refraining from the assertion that everything has a mind (even this is controversial; see Barnes 1982, 405 ff.).

On the other hand, Empedocles, an almost exact contemporary of Anaxagoras, favoured an emergentist account based upon the famous doctrine of the four elements: earth, air, fire, and water. All qualities were to be explicated in terms of ratios of these elements. The overall distribution of the elements, which were themselves eternal and unchangeable, was controlled by ‘love and strife’, whose operations are curiously reminiscent of some doctrines of modern thermodynamics, in a grand cyclically dynamic universe. It is true that Empedocles is sometimes regarded as a panpsychist because of the universal role of love and strife (see Edwards 1967, for example), but there seems little of the mental in Empedocles’s conceptions, which are rather more like forces of aggregation and disaggregation, respectively (see Barnes 1982, 308 ff.).

The purest form of emergentism was propounded by the famed atomist Democritus (c. 460–370 BCE). His principle of emergence was based upon the possibility of multi-shaped, invisibly tiny atoms interlocking to form an infinity of more complex structures. But Democritus, in a way echoing Anaxagoras and perhaps hinting at the later distinction between primary and secondary properties, had to admit that the qualities of experience (what philosophers nowadays call qualia, the subjective features of conscious experience) could not be accounted for in this way and chose, ultimately unsatisfactorily, to relegate them to non-existence: ‘sweet exists by convention, bitter by convention, in truth only atoms and the void’. Sorely missed is Democritus’s account of how conventions themselves – the consciously agreed upon means of common reference to our subjective responses – emerge from the dancing atoms (thus, the ideas of Democritus anticipate the reflexive problem of modern eliminativist materialists [e.g., Churchland 1981] who would enjoin us to consciously accept a view which evidently entails that there is no such thing as conscious acceptance of views – see Chapter 3).

What is striking about these early struggles about the proper form of a scientific understanding of the world is that the mind and particularly consciousness keep rising as special problems. It is sometimes said that the mind-body problem is not an ancient philosophical issue on the basis that sensations were complacently regarded as bodily phenomena (see Matson 1966), but it does seem that the problem of consciousness was vexing philosophers 2,500 years ago, and in a form redolent of contemporary worries. Also critically important is the way that the problem of consciousness inescapably arises within the context of developing an integrated scientific view of the world.

The reductionist strain in the Presocratics was not favoured by the two giants of Greek philosophy, Plato and Aristotle, despite their own radical disagreements about how the world should be understood. Plato utterly lacked the naturalizing temperament of the Presocratic philosophers, although he was well aware of their efforts. He explicitly criticizes Anaxagoras’s efforts to provide naturalistic, causal explanations of human behavior (see Phaedo, Plato 1961).

Of course, Plato nonetheless has a significant role in the debate because he advances positive arguments in favour of the thesis that mind and body are distinct. He also provides a basic, and perpetually influential, tri-component-based psychological theory (see Republic, Book 4, Plato 1961). These facets of his thought illustrate the two basic aspects of the problem of consciousness: the ontological question and the issue of how mind is structured. Plato’s primary motivation for accepting a dualist account of mind and body presumably stems from the doctrine of the forms. These are entities which in some way express the intrinsic essence of things. The form of circle is that which our imperfect drawings of circles imitate and point to. The mind can grasp this form, even though we have never perceived a true circle, but only more or less imperfect approximations. The ability of the mind to commune with the radically non-physical forms suggests that mind itself cannot be physical. In the Phaedo, Plato (putting words in the mouth of Socrates) ridicules the reductionist account of Anaxagoras which sees human action as caused by local physical events. In its place, the mind is proposed as the final (i.e., teleological) cause of action, merely conditioned or constrained by the physical: ‘if it were said that without such bones and sinews and all the rest of them I should not be able to do what I think is right, it would be true. But to say that it is because of them that I do what I am doing, and not through choice of what is best – although my actions are controlled by mind – would be a very lax and inaccurate form of expression’ (Phaedo, 98b ff.).

In general, Plato’s arguments for dualism are not very convincing. Here’s one. Life must come from death, because otherwise, as all living things eventually die, everything would eventually be dead. Life can come from death only if there is a distinct ‘component’, responsible for something being alive, that persists through the life-death-life cycle. That persistent component is soul or mind (Phaedo 72c-d). Another argument which Plato frequently invokes (or presupposes in other argumentation) is based on reincarnation. If we grant that reincarnation occurs, it is a reasonable inference that something persists which is what is reincarnated. This is a big ‘if’ to modern readers of a scientific bent, but the doctrine of reincarnation was widespread throughout ancient times and is still taken seriously by large numbers of people. The kernel of a more powerful argument for dualism lurks here as well, which was deployed by Descartes much later (see below).

Aristotle is famously more naturalistically inclined than Plato (Raphael’s School of Athens shows Plato pointing upward to the heavens while Aristotle gestures downward to Earth as they stare determinedly at each other). But Aristotle’s views on mind are complex and obscure; they are certainly not straightforwardly reductionist (the soul is not, for example, a particularly subtle kind of matter, such as fire). Aristotle’s metaphysics deployed a fundamental distinction between matter and form, and any object necessarily instantiates both. A statue of a horse has its matter, bronze, and its form, horse. Aristotle is not using Plato’s conception of form here. The form of something is not an other-worldly separate entity, but something more like the way in which the matter of something is organized or structured. Nor by matter does Aristotle mean the fundamental physical stuff we refer to by that word; matter is whatever relatively unstructured stuff is ‘enformed’ to make an object (English retains something of this notion in its use of matter to mean topic), so bronze is the matter of a statue, but soldiers would be the matter of an army. Objects can differ in matter but agree in form (two identical pictures, one on paper and another on a computer screen), or vice versa. More abstractly, Aristotle regarded life as the form of plants and animals and named the form of living things soul (‘the form of a natural body having life potentially within it’; 1984, De Anima, bk. 2, ch. 1). Aristotle’s views have some affinity both with modern biology’s conception of life and with the doctrine of psychophysical functionalism insofar as he stresses that soul is not a separate thing requiring another ontological realm, but also cannot be reduced to mere matter because its essential attribute is function and organization (for a close and skeptical look at the link between Aristotle’s philosophy and modern functionalism, see Nelson 1990; see also Nussbaum and Putnam 1992).

Yet there are elements of Aristotle’s account that are not very naturalistic. Early in the De Anima Aristotle raises the possibility that the relation between the body and the mind is analogous to that between sailor and ship, which would imply that mind is independent of body. Later Aristotle apparently endorses this possibility when he discusses, notoriously obscurely, the ‘active intellect’ – the ‘part’ of the soul capable of rational thought (De Anima, bk. 3, chs. 4–5). Aristotle clearly states that the active intellect is separable from body and can exist without it. For Aristotle, like Plato, the problematic feature of mind was its capacity for abstract thought and not consciousness per se, although of course these thinkers were implicitly discussing conscious thought and had no conception of mind apart from consciousness.

Discussion of one particular, and highly interesting if perennially controversial, feature of consciousness can perhaps be traced to Aristotle. This is the self-intimating or self-representing nature of all conscious states. Many thinkers have regarded it as axiomatic that one could not be in a conscious state without being aware of that state, and Aristotle makes some remarks that suggest he may belong to this school of thought. For example, in Book Three of De Anima Aristotle presents, rather swiftly, the following regress argument:

Since we perceive that we see and hear, it is necessarily either by means of the seeing that one perceives that one sees or by another [perception]. But the same [perception] will be both of the seeing and of the colour that underlies it, with the result that either two [perceptions] will be of the same thing, or it [the perception] will be of itself. Further, if the perception of seeing is a different [perception], either this will proceed to infinity or some [perception] will be of itself; so that we ought to posit this in the first instance.

The passage is somewhat difficult to interpret, even in this translation from Victor Caston (2002), which forms part of an intricate (and controversial) mixed philosophical, exegetical, and linguistic argument in favor of the view that Aristotle accepted a self-representational account of conscious states which possessed unique phenomenal properties. Aristotle’s argument appears to be that if it is essential to a conscious state that it be consciously apprehended, then conscious states must be self-representing on pain of an infinite regress of states, each representing (and hence enabling conscious apprehension of) the previous state in the series. The crucial premise that all mental states must be conscious is formally necessary for the regress. Modern representational accounts of consciousness which accept that conscious states are self-intimating, such as the Higher Order Thought theory, can block the regress by positing non-conscious thoughts which make lower-order thoughts conscious by being about them (see Seager 1999, Chapter 3, and see Chapters 3 and 4, this volume).

IV. The Scientific Revolution

Although the philosophy of the Middle Ages was vigorous and compendious, the problem of fitting consciousness into the natural world did not figure prominently (for an argument, following in the tradition of Matson 1966, that the medievals’ views on the nature of sensation precluded the recognition of at least some versions of the mind-body problem, see King 2005). There were many acute studies of human psychology and innovative theoretical work on the content and structure of consciousness and cognition. Of special note is the 4th-century philosopher and Church Father, St. Augustine (354–430 CE). His writings exhibit important insights into the phenomenology of consciousness, especially with regard to the experience of time, will, and the self (see especially Confessions and On Free Will; 400/1998, 426/1998). He was one of the first philosophers to address the problem of other minds, arguing on the basis of introspection and analogy that because others behave as he behaves when he is aware of being in a certain mental state, they too have mental states. In addition, he anticipated certain key features of Descartes’ dualistic account of human beings, including Descartes’ famous argument from his conscious self-awareness to the certainty of his own existence (City of God, Bk. 11, Ch. 21) and the idea that mind and body, although ontologically entirely distinct, somehow are united in the human person. Here Augustine also broaches one of the key puzzles of Cartesian dualism where he admits that the ‘mode of union’ by which bodies and spirits are bound together to become animals is ‘beyond the comprehension of man’ (City of God, Bk. 21, Ch. 10). Although we see here that Augustine did not agree with Descartes in denying minds to animals, we can also note the complete lack of any idea that this mystery poses any special problem for our understanding of the natural world (see O’Daly 1987 for a detailed discussion of Augustine’s philosophy of mind).

In fact, the tenets of Christian dogma, eventually wedded to a fundamentally Aristotelian outlook, conspired to suppress any idea that consciousness or mind could be, should be, or needed to be explained in naturalistic terms. It was the scientific revolution of the 16th and 17th centuries that forced the problem into prominence.

Galileo's distinction between primary and secondary properties, crucial for the development of science insofar as it freed science from a hopelessly premature attempt to explain complex sensible qualities in mechanical terms, explicitly set up an opposition between matter and consciousness: 'I think that tastes, odors, colors, and so on are no more than mere names so far as the object in which we place them is concerned, and that they reside only in the consciousness. Hence if the living creature were removed, all these qualities would be wiped away and annihilated' (1623/1957, 274). The welcome consequence is that if there are no colors in the world, then science is free to ignore them. That was perhaps good tactics in Galileo's time, but it was a strategic time bomb waiting to go off when science could no longer delay investigating the mind itself.


P1: JzG0521857430c02 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 5 , 2007 5 :3

a brief history of the philosophical problem of consciousness 17

The mind-body problem in its modern form is essentially the work of a single genius, René Descartes (1596–1650), who reformed the way we think about mind and consciousness, leaving us with a set of intuitions that persist to this day. To take just one topical example, the basic idea behind the fictional technology of the Matrix films is thoroughly Cartesian: what we experience is not directly related to the state of the environment, but is instead the result of a complex function – involving essential sensory and cognitive mediation based upon neural systems – from the environment to our current state of consciousness. Thus two brains that are in identical states ought to be in the same state of consciousness, no matter what differences there are in their respective environments. It now seems intuitively obvious that this is correct (so contemporary philosophers make exotic and subtle arguments against it) and that, to take another stock philosophical example, a brain in a vat, if kept alive in an appropriate chemical bath and if fed proper input signals into its severed nerve endings (cleverly coupled to the output of the brain's motor output nerves), would have experiences which could be indistinguishable from, say, those you are having at this very moment. This thought experiment reveals another of the reformations of philosophy instituted by Descartes: the invention of modern epistemology, for how could you know that you are not such a brain in a vat?

Descartes was of course also one of the creators of the scientific revolution, providing seminal efforts in mathematics and physics. But he also saw with remarkable prevision the outlines of neuropsychology. With no conception of how the nervous system actually works, and instead deploying a kind of hydraulic metaphor, Descartes envisioned nerve-based sensory and cognitive systems and a kind of network structure in the brain, even – anticipating Hebb – suggesting that connections in the brain are strengthened through associated activation. His notorious discussion of animals as machines can be seen as the precursor of a materialist account of cognition.

But Descartes is most remembered and reviled for his insistence upon the strict separation of mind and body which, we are enjoined to believe, required sundering the world itself into radically distinct realms, thereby fundamentally splitting human beings from nature (including their own), denigrating emotion in favour of reason, and inspiring a lack of respect for animals and nature in general. Why was Descartes a dualist? Some have suggested that Descartes lacked the courage to follow his science to its logical and materialist conclusion (the fate of Galileo is said to have had a strong effect on him, or it may be that Descartes really had no wish to harm the Catholic church). But Descartes did have arguments for his dualism, some of which still have supporters. These arguments also set out one of the basic strategies of anti-materialism.

To show that mind and body are distinct, it will suffice to show that mind has some property that matter lacks. The general principle here, which is that of the alibi, was codified by another 17th-century philosopher, Gottfried Leibniz (1646–1716), and is now known as Leibniz's Law: if x has a property which y lacks, then x and y are not identical. Descartes argued, for example, that although matter is extended in space, mind takes up no space at all. Thus, they could not be identical. It certainly does seem odd to ask how many cubic centimeters my mind takes up (does a broad mind take up more space than a narrow one?). But it is not obvious that this question is anything more than merely a feature of the conventional way we think about minds. An analogy would be an argument that machines cannot think because they are not alive; there is no particular reason to think that the heretofore constant and evident link between life and thought represents anything more than a kind of accident in the way minds happened to be created. In any event, this strategy is still at the core of the problem of consciousness. One current line of argument, for example, contends that consciousness has a kind of first-person subjectivity (the 'what it is like' to experience something), whereas matter is purely third-person objective – hence consciousness and matter must be fundamentally different phenomena.
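The principle can be put compactly in modern quantificational notation (my formalization; the second-order rendering is the standard one, not a form Leibniz himself used):

```latex
% Leibniz's Law in its contraposed form: if x has a property F
% that y lacks, then x and y are distinct.
\exists F\,\bigl(Fx \land \lnot Fy\bigr) \;\rightarrow\; x \neq y
% Equivalently, the indiscernibility of identicals:
% identical things share all their properties.
x = y \;\rightarrow\; \forall F\,\bigl(Fx \leftrightarrow Fy\bigr)
```

Descartes' extension argument instantiates F as 'is extended in space': matter has it, mind allegedly lacks it, so mind and matter are not identical. The weak point, as noted above, is whether 'takes up no space' is a genuine property of minds or merely an artifact of how we talk about them.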

Descartes, in the sixth of his Meditations (1641/1985), also invented an astonishingly novel kind of argument for dualism. The argument is couched in theological terms, but that was merely for purposes of clarity and forcefulness (in the 17th century, using God to explain one's argument was impeccable rhetoric). Descartes asked us to consider whether it was at least possible that God could destroy one's body while leaving one's mind intact. If it was possible, then of course God could perform the feat if He wished. But nothing can be separated from itself! So if it is merely possible that God could sunder mind from body, then they must already be different things. So, anyone who thinks that, say, a consciousness persisting after bodily death is even so much as a bare possibility already thinks that consciousness is not a physical phenomenon. This argument is valid, but it has a little flaw: how do we know that what we think is possible is truly so? Many are the mathematicians labouring to prove theorems which will turn out to be unprovable (think of the centuries-long effort to square the circle) – what do they think they are doing? Nonetheless, it is a highly interesting revelation that the mere possibility of dualism (in the sense considered here) entails that dualism is true.
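The skeleton of the argument can be reconstructed in modal notation (my anachronistic gloss, using the modern principle of the necessity of identity rather than anything Descartes states):

```latex
% Premise 1 (necessity of identity): if mind m and body b are
% identical, they are necessarily identical.
m = b \;\rightarrow\; \Box\,(m = b)
% Premise 2 (separability): it is at least possible that m and b
% come apart.
\Diamond\,(m \neq b)
% Premise 2 gives \lnot\Box\,(m = b); contraposing Premise 1:
\lnot\Box\,(m = b) \;\rightarrow\; m \neq b
\qquad\therefore\; m \neq b
```

The 'little flaw' the text identifies attaches to Premise 2: our ability to conceive the separation does not obviously establish that it is genuinely possible.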

Cartesian dualism also included the doctrine of mind-body interaction. This seems like common sense: when someone kicks me, that causes me to feel pain and anger, and then it is my anger that makes me kick them back. Causation appears to run from body to mind and back again. But as soon as Descartes propounded his theory of mind, this interaction was seen to be deeply problematic. One of Descartes' aristocratic female correspondents, the Princess Elisabeth of Palatine, asked the crucial question: "How can the soul of man determine the spirits of the body, so as to produce voluntary actions (given that the soul is only a thinking substance)?" (from a letter of May 1643). It's a fair question, and Descartes' only answer was that the mind-body union was instituted and maintained by God and was humanly incomprehensible. The Princess declared herself less than fully satisfied with this reply.

It was also noticed that Descartes' dualism conflicted with the emerging understanding of the conservation of certain physical quantities. Descartes himself accepted only that the total amount, but not the direction, of motion was conserved. Thus the mind's ability to wiggle the pineal gland (where Descartes posited the seat of the soul) would redirect motion without violating natural law. But it was soon discovered that it is momentum – or directed motion – that is conserved, and thus the mind-induced motion of the pineal gland would indeed contradict the laws of nature (one might try to regard this as a feature rather than a bug, because at least it makes Descartes' theory empirically testable in principle).
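The physical point can be stated in modern notation (my gloss, not Descartes' own terms): Descartes took the conserved 'quantity of motion' to be a scalar, which leaves direction unconstrained, whereas the law actually discovered conserves the vector sum of momenta:

```latex
% Cartesian conservation (scalar 'quantity of motion'):
% direction is left free.
\sum_i m_i\,\lvert \vec{v}_i \rvert = \text{constant}
% Conservation of momentum (vector): direction is constrained too.
\sum_i m_i\,\vec{v}_i = \text{constant}
```

A pineal nudge that merely rotated a particle's velocity would preserve the first quantity while violating the second, which is why mind-body interaction came to look empirically refutable.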

In addition to the ontological aspect of his views, Descartes had some interesting insights into the phenomenological side of consciousness. For Descartes, the elements of conscious experience are what he called 'ideas' (Descartes pioneered the modern use of this term to stand for mental items), and every idea possesses two kinds of reality: formal and objective. The formal reality of something is simply what it is in itself, whereas the objective reality is what, if anything, it represents (so, the formal reality of a picture of a horse is paper and paint; a horse is the objective reality). Though Descartes is often pilloried as one who believed that we are only ever conscious of our own ideas, it is far from clear that this is Descartes' position. It is possible to read him instead as a precursor of modern representational theories of consciousness [see Chapter 3], in which it is asserted that, although consciousness essentially involves mental representation, what we are conscious of is not the representations themselves but their content (rather in the way that although we must use words to talk about things, we are not thereby always talking about words). Descartes says that 'there cannot be any ideas which do not appear to represent some things . . . ' (Meditation 3), and perhaps this suggests that even in cases of illusion Descartes' view was that our experience is of the representational content of the ideas and that we do not, as it were, see our own ideas.

Finally, because Descartes is often misrepresented as denigrating bodily feelings and emotions in favour of pure reason, it is worth pointing out that he developed a sophisticated account of the emotions which stresses both their importance and the importance of the bodily feelings which accompany them (1649/1985). Descartes – perhaps contra Aristotle – strenuously denied that the mind was 'in' the body the way a pilot is in a ship, for the intimate connection to the body and the host of functionally significant feelings which the body arouses in the mind in the appropriate circumstances meant that the mind-body link was not a mere communication channel. Descartes declared instead that the mind and body formed a 'substantial union' and that emotional response was essential to cognition.

Despite the fact that, if one is willing to endorse a dualism of mind and body, Descartes' interactive version seems the most intuitively reasonable, the difficulties of understanding how two such entirely distinct realms could causally interact created an avid market for alternative theories of the mind-body relation. Two broad streams of theory can be discerned, which I label, not altogether happily, idealist and materialist. Idealists regard mind or consciousness as the fundamental existent and deny the independent existence of the material world; its apparent reality is to be explained as a function of mentality. Materialists cannot follow such a direct route, for they have great difficulty in outright denying the existence of mind and generally content themselves with in some way identifying it with features of matter. The asymmetry in these positions is interesting. Idealists can easily assert that the material world is all illusory. Materialists fall into paradox if they attempt the same strategy – for the assertion that mind is illusory presupposes the existence of illusions, which are themselves mental entities. For a long time (centuries, I mean) the idealist position seemed dominant, but the materialists, like the early mammals scrabbling under the mighty dinosaurs, were to have their day.

Early materialists had to face more than an intellectual struggle, because their doctrine stood in clear contradiction with fundamental beliefs endorsed by the Christian church, and many thinkers have been charged with softening their views to avoid ecclesiastical censure. One such is Pierre Gassendi (1592–1655), who espoused an updated version of ancient Epicurean atomism, but who added immortal and immaterial souls to the dance of the atoms. The souls were responsible, in a familiar refrain, for our higher intellectual abilities. On the materialist core of such a view, nature is ultimately composed of tiny, indivisible, and indestructible physical particles whose interactions account for all the complexity and behaviour of organized matter. Gassendi asserted that the 'sentient soul', as opposed to the immaterial 'sapient soul', was a material component of animals and humans, composed of an especially subtle, quick-moving type of matter which is capable of forming the system of images we call imagination and perception (Gassendi also endorsed the empiricist principle that all ideas are based on prior sensory experience). These are literally little images in the brain. Of course, there is a problem here: who is looking at these images? What good does it do to postulate them? For Descartes, the experience of sensory perception or imagination is similarly dependent upon corporeal imagery, but because the visual experience is a mental act, there really is someone to appreciate the brain's artwork. (Descartes in fact tried to use the imagistic quality of certain experiences as an argument for the existence of material objects, because real images need a material substrate in which they are realized – but Descartes concluded that this argument was far from conclusive.) A subtle distinction here may have directed philosophers' thinking away from this worry.

This is the difference between what are nowadays called substance and property dualism. Descartes is a substance dualist (hence also a property dualist, but that is a rather trivial consequence of his view). Substance in general was understood as that which could exist independently (or perhaps requiring only the concurrence of God). Matter was thus a substance, but properties of matter were not themselves substantial, for properties require substance in which to be instantiated. According to Descartes, mind is a second kind of substance, with, naturally, its own set of characteristically mental properties. Thus one basic form of materialism involves merely the denial of mental substance, and the early materialists were keen to make this aspect of their views clear. But denial of substance dualism leaves open the question of the nature of mental properties or attributes (consciousness can be regarded as a feature of the brain, but is no less mysterious for being labeled a property of a physical object).

The problem is clearer in the work of another early materialist, Thomas Hobbes (1588–1679) who, entranced by the new science inaugurated by Galileo, declared that absolutely everything should be explicable in terms of the motions of matter and the efficient causal interaction of material contact. Eventually coming to consider the mind, Hobbes pursues motion into the brain to account for sensory phenomena: 'the cause of sense is the external body . . . which presses the organ proper to each sense . . . which pressure, by the mediation of the nerves . . . continues inwards to the brain . . . ' (1651/1998, pt. 1, ch. 1). Hobbes goes out of his way to stress that there is nothing immaterial, occult, or supernatural here; there are just the various ways that physical events influence our material sense organs: 'neither in us that are pressed are they anything else but divers motions; for motion produceth nothing but motion' (1651/1998, pt. 1, ch. 1). But then Hobbes makes a curious remark: speaking of these 'divers motions' in the brain he says, 'but their appearance to us is fancy, the same waking that dreaming'. However, he elsewhere states that 'all fancies are motions within us' (1651/1998, pt. 1, ch. 3). Compounding the confusion, he also describes our appetites or motivations as motions, but says that pleasure and pain are the appearances of these motions (1651/1998, pt. 1, ch. 6). It would seem that 'appearance' is Hobbes's term for something like phenomenal consciousness, and he seems to be saying that such consciousness is caused by motions in the brain but is not identical to them, which of course flatly contradicts his claim that motion can only produce motion. Though obviously Hobbes is not clear about this problem, we might anachronistically characterize him as a substance materialist who is also a property dualist.

In any case, materialism was very far from the generally favoured opinion, and the perceived difficulties of Descartes' substance dualism led instead to a series of inventive alternatives to interactive substance dualism, the two most important being those of Baruch de Spinoza (1632–1677) and Leibniz. In an austerely beautiful if forbidding work, the Ethics (1677/1985), Spinoza laid out a theory which perhaps, logically, ought to have been that of Descartes. Spinoza notes that substance is that which exists independently of all other things, and thus there can be only one 'maximal' substance: God. If that is so, then matter and mind can only be features of the God-substance (Spinoza called them attributes and asserted there were an infinite number of them, although we are only aware of two). Spinoza's theory is an early form of what came to be called 'dual aspect theory', which asserts that mind and matter are mere aspects of some underlying kind of thing of which we have no clear apprehension. Particular material or mental individuals (as we would say) are mere modifications of their parent attributes (so your mind is a kind of short-lived ripple in the attribute of mind and your body a small disturbance in the material attribute). The attributes are a perfect reflection of their underlying substance, but only in terms of one aspect (very roughly like having both a climatographic and a topographic map of the same territory). Thus Spinoza believed that the patterns within any attribute would be mirrored in all the others; in particular, mind and body would be synchronized automatically and necessarily. This explains the apparent linkage between mind and body – both are merely aspects of the same underlying substance – while at the same time preserving the causal completeness of each realm. In the illustrative scholium to proposition seven of book two of the Ethics (1677/1985), Spinoza writes, 'A circle existing in nature and the idea of the existing circle, which is also in God, are one and the same thing . . . therefore, whether we conceive nature under the attribute of Extension, or under the attribute of Thought . . . we shall find one and the same order, or one and the same connection of causes . . . '. On the downside, Spinoza does have to assume that every physical event has a corresponding mental event, and he is thus a kind of panpsychist. Even worse (from a 17th-century point of view), Spinoza's view is heretical, because it sees God as being literally in everything and thus as a material thing not separate from the world.

Leibniz never wrote down his metaphysical system in extensive detail (he was doubtless too busy with a multitude of other projects, such as inventing the calculus, rediscovering binary arithmetic, building the first calculating machines, and writing endless correspondence and commentary, not to mention his day job of legal counsel and historian to the Hanoverian house of Brunswick), but his views can be reconstructed from the vast philosophical writings he left us. They can be caricatured, in part, as Spinoza's with an infinite number of substances replacing the unique God-substance. These substances Leibniz called monads (see Leibniz 1714/1989). Because they are true substances, and hence can exist independently of any other thing, and because they are absolutely simple, they cannot interact with each other in any way (nonetheless they are created by God, who is one of them – here Spinoza seems rather more consistent than Leibniz). Yet each monad carries within it complete information about the entire universe. What we call space and time are in reality sets of relations amongst these monads (or, better, the information which they contain), which are in themselves radically non-spatial and perhaps even non-temporal (Leibniz's vision of space and time emerging from some more elementary system of relations has always been tempting, if hard to fathom, and now fuels some of the most advanced physics on the planet).

However, Leibniz does not see the monadic substances as having both mental and material aspects. Leibniz's monads are fundamentally to be conceived mentalistically; they are in a way mentalistic automatons moving from one perceptual or apperceptual state to another, all exactly according to a God-imposed predefined rule. The physical world is a kind of logical construction out of these mental states, one which meets various divinely instituted constraints upon the relation between those aspects matching what we call 'material objects' and those we call 'states of consciousness' – Leibniz called this the pre-established harmony, and it is his explanation for the appearance of mind-body interaction. So Leibniz's view is one that favours the mental realm; that is, it is at bottom a kind of idealism, as opposed to Spinoza's many-aspect theory.

As we shall see, Leibniz's vision here had a much greater immediate impact on subsequent philosophy than Spinoza's. An important difference between the two theories is that, unlike Spinoza, Leibniz can maintain a distinction between things that have minds or mental attributes and those that do not, despite his panpsychism. This crucial distinction hinges on the difference between a 'mere aggregate' and what Leibniz sometimes calls an 'organic unity' or an organism. Each monad represents the world – in all its infinite detail – from a unique point of view. Consider a heap of sand. It corresponds to a set of monads, but there is no monad which represents anything like a point of view of the heap. By contrast, your body also corresponds to a set of monads, but one of these monads – the so-called dominant monad – represents the point of view of the system which is your living body. (There presumably are also sub-unities within you, corresponding to organized and functionally unified physiological, and hence also psychological, subsystems.) Organisms correspond to a hierarchically ordered set of monads; mere aggregates do not. This means that there is no mental aspect to heaps of sand as such, even though at the most fundamental level mind pervades the universe.

One last point: you might wonder why you, a monad that represents every detail of the entire universe, seem so relatively ignorant. The answer depends upon another important aspect of the conception of mentality. Leibniz allows that there are unconscious mental states. In fact, almost all mental states are unconscious, and low-level monads never aspire to consciousness (or what Leibniz calls apperception). You are aware, of course, only of your conscious mental states, and these represent a literally infinitesimal fraction of the life of your mind, most of which is composed of consciously imperceptible petites perceptions (it is galling to think that somewhere within each of our minds lie the invisible answers to such questions as whether there are advanced civilizations in the Andromeda galaxy, but there it is).

For Leibniz the material world is, fundamentally, a kind of illusion, but one of a very special kind. What Leibniz calls 'well grounded' phenomena are those that are in some way directly represented in every monad. Imagine aerial photographs of downtown Toronto taken from a variety of altitudes and angles. The same buildings appear in each photograph, though their appearance is more or less different. But, for example, sun flares caused by the camera lens will not appear in every picture. The buildings would be termed well grounded, the sun flare an illusion. So Leibniz can provide a viable appearance-reality distinction that holds in the world of matter (though it is tricky, because presumably the illusions of any one monad are actually reflected in all monads – hence the weasel word 'directly' above). Nonetheless, it is the domain of consciousness which is fundamental and, in the end, the totality of reality, with the physical world being merely a kind of construction out of the mental.

V. The Idealist Turn

In some way, Leibniz represents the culmination of the tradition of high metaphysics: the idea that reason could reveal the ultimate nature of things and that this nature is radically different from that suggested by common sense. But his model of the material world as mere appearance was taken to its logical next step by the, at least superficially, anti-metaphysical Immanuel Kant (1724–1804). In Kant (see especially 1781/1929) we see the beginning of the idealism which in one form or another dominated philosophy for more than a century afterward.

Once mind is established as the sole reality, the problem of consciousness, and all the other traditional problems of relating matter to mind, virtually disappear. The problem that now looks big and important is in a way the inverse of the problem of consciousness: how exactly is the material world which we evidently experience to be constructed out of pure and seemingly evanescent consciousness? Two modes of response to this problem can be traced that roughly divide the thinkers of the British Isles (forgive me for including Ireland here) from those of continental Europe, although the geographic categorization becomes increasingly misleading as we enter the 20th century. Very crudely, these modes of idealism can be characterized respectively as phenomenalism (material objects are 'permanent possibilities of sensation') and transcendental idealism (a system of material objects represented in experience is a necessary condition for coherent experience and knowledge).

There were, of course, materialists lurking about in this period, though they were nowhere near the heart of philosophical progress; in fact, they were frequently not philosophers at all, and quite a number came from the ranks of intellectually inclined medical doctors. One such was Julien de La Mettrie (1709–1751), who outraged Europe, or at least enough of France to require a retreat to Berlin, with his L'Homme machine (1748/1987) (see also the slightly earlier L'Histoire naturelle de l'âme; 1745). In this brisk polemical work, La Mettrie extends the Cartesian thesis that animals are 'mere' machines to include the human animal. But of note here is the same reluctance to shed all reference to the specialness of the mind that we observed in earlier materialists. La Mettrie is willing to deny that there are immaterial mental substances, but describes matter as having three essential attributes: extension, motion, and consciousness. In L'Histoire naturelle de l'âme (1745), La Mettrie makes the interesting points that the intrinsic nature of matter is utterly mysterious to us and that the attribution of mental properties to it should be no less strange than the attribution of extension and motion, in the sense that we understand what it is about matter itself that supports extension no better – that is, not at all – than we understand how it can or cannot support mental properties. This idea has remained an important, if somewhat marginalized, part of the debate about the relation between mind and matter. Although not always very clear about their own positions, most materialists or quasi-materialists of the period, such as John Toland (1670–1722), Paul-Henri D'Holbach (1723–1789) (see Holbach 1970), Joseph Priestley (1733–1804) (see Priestley 1975), and Pierre-Jean-Georges Cabanis (1757–1808), agreed on the approach that denies substance dualism while allowing that matter may have properties that go beyond motion and extension, prominent among which are the various mental attributes.

The tide of philosophy was, however, running in favour of idealism. A central reason for this was independent of the problem of consciousness, but stemmed from the epistemological crisis brought about by Cartesian philosophy (itself but a partial reflection of the general cultural upheaval occasioned by the scientific revolution). Descartes had argued that the true nature of the world was quite unlike that apparently revealed by the senses, but that reality could be discovered by the 'light of reason'. Unfortunately, although everyone took to heart the skeptical challenge to conventional wisdom, Descartes' positive arguments convinced hardly anybody. The core problem was the disconnection between experience and the material world enshrined in Descartes' dualism. But what if the material world were somehow really a feature of the realm of consciousness, to which we obviously seem to have infallible access? For example, suppose, as did George Berkeley (1685–1753), that material objects are nothing but ordered sequences of perceptions (1710/1998). We know by introspection that we have perceptions and that they obey certain apparent laws of succession. Under Berkeley's identification, we thereby know that there are material objects, and the epistemological crisis is resolved.

On the other side of the English Channel, Kant was investigating the deeper problem of how we could know that our perceptions really do follow law-governed patterns which guarantee that they can be interpreted in terms of a scientifically explicable material world. Kant accepted Leibniz’s view that all we could possibly have knowledge of are constructions out of subjective experience and that any distinction between reality and appearance within the realm of perception and scientific investigation would have to be based upon some set of relations holding amongst our experiences. He added the remarkable idea that these relations were a reflection of the structure of the mind itself – concepts of space, time, and causation are necessary conditions for the existence of experience of an ‘external world’ and are ‘discovered’ in that world because they pre-exist in the mind. There is no reason at all to suppose that they reflect some deeper reality beyond appearances. But they are a necessary condition for having coherent experience at all and hence will and must be discovered in the world, which is a construct out of such experience. Kant called this style of reasoning transcendental argumentation. In one sense, however, Kant was an impure idealist. He allowed for the existence of the ‘thing-in-itself’: the necessarily unknowable, mind-independent basis of experience. In this respect, Kant’s philosophy is somewhat like Spinoza’s, save of course that Spinoza was fully confident that reason could reveal something of the thing-in-itself. Idealists who followed Kant, such as A. Schopenhauer (1788–1869) and the absolute idealist G. Hegel (1770–1831) and many other continental philosophers, as well as the later followers of Hegel, such as the British philosopher F. Bradley (1846–1924), espoused purer forms of idealism (see Bradley 1987/1966; Hegel 1812/1969; Schopenhauer 1819/1966). Kant’s hypothesis that it was a ‘transcendental’ condition for the very possibility of introspectible experience to be lawfully ordered led to a huge philosophical industry focused, to put it crudely, on the mind’s contribution to the structures we find in the external world (an industry that eventually leads into the postmodern deconstructionist ghetto). But this industry, by its nature, did not face the problem of consciousness as defined here and so is not a main player in the drama of this chapter.

VI. Evolution and Emergence

Instead, as we now enter the heart of the 19th century, two crucial non-philosophical developments transformed the problem: the rise of Darwinism in biology and, drawn from the brow of philosophy itself, the beginning of scientific psychology. Above all else, the evolutionary theory of Charles Darwin (1809–1882) promised to unify the simple with the complex by suggesting some way that mere atoms could, guided only by the laws of physics, congregate into such complex forms as plants, animals, and even human beings. This led immediately to two deep questions: what is life, and how does matter organized via natural selection acquire consciousness? These are both questions about emergence, for it certainly appears, if evolution be true, that life springs forth from the lifeless and consciousness appears in beings evolved from non-conscious ancestors composed of utterly non-conscious parts. The first question led to the vitalism controversy, which bears some analogy to the problem of consciousness. Vitalists contended that there was something more to life than mere material organization: a vital spark, or élan vital. This view of life and its conflict with any materialist account can be traced back at least to the 17th century. Another of our philosophically inclined physicians, Nehemiah Grew (1641–1712), who helped found scientific botany and was secretary of the Royal Society in 1677, quaintly put the problem thus (see Garrett 2003), perhaps not atypically confusing the issues of life and consciousness:

The Variety of the Mixture, will not suffice to produce Life . . . Nor will its being mechanically Artificial. Unless the Parts of a Watch, set, as they ought to be, together; may be said to be more Vital, than when they lye in a confused Heap. Nor its being Natural. There being no difference, between the Organs of Art and Nature; saving, that those of Nature are most of all Artificial. So that an Ear, can no more hear, by being an Organ; than an Artificial Ear would do . . . And although we add the Auditory nerves to the Ear, the Brain to the Nerves, and the Spirits to the Brain; yet is it still but adding Body to Body, Art to Subtility, and Engine or Art to Art: Which, howsoever Curious, and Many; can never bring Life out of themselves, nor make one another to be Vital. (Grew 1701, 33)

Vitalism flourished in the 19th century and persisted into the 20th, notably in the writings of Hans Driesch (1867–1941), who had discovered that fragments of sea urchin embryos would develop into normal sea urchins, contrary to then-current mechanist theory (indeed it is hard to understand how a few of the parts of a machine would go on to operate exactly as did the original whole machine). Vitalists thus assumed there must be some special added feature to living things which accounted for the ability to organize and reorganize even in the face of such assaults. It was the unfortunately delayed development of Mendelian ‘information-based’ genetics which suggested the answer to Driesch’s paradox and led to the successful integration of evolution and heredity.

For our purposes, the decline of vitalism not only provides a cautionary tale but also highlights an important disanalogy between the problems of life and consciousness. Life was seen to be problematic from the materialist point of view because of what it could do, as in Driesch’s sea urchins. It seemed hard to explain the behavioral capacities of living things in terms of non-organic science. Perhaps conscious beings, as living things, present the same problem. But this difficulty was ultimately swept away with the rise of genetics as an adjunct to evolutionary theory. However, in addition to and independent of the puzzle of behaviour, consciousness has an internal or subjective aspect, which life, as such, utterly lacks. What is especially problematic about consciousness is the question of why or how purely material systems could become such that there is ‘something it is like’ to be them.

Another aspect of Darwinism played directly into the mind-matter debate. Darwin himself, and for a long time all Darwinists, was a committed gradualist and assumed that evolution worked by the long and slow accumulation of tiny changes, with no infusions of radically new properties at any point in evolutionary history. Applying gradualism to the mind, Darwin went out of his way to emphasize the continuity in the mental attributes of animals and humans (see Darwin 1874).

Gradualism has its difficulties, which have long been noted and persist to this day in talk of punctuated equilibrium (see Eldredge & Gould 1972) and so-called irreducible complexity (Behe 1998, for example). The evolution of the eye was seen as very hard for evolution to explain even by Darwin himself: ‘To suppose that the eye . . . could have been formed by natural selection, seems, I freely confess, absurd in the highest degree’ (1859/1967, 167). Of course, the idea that a fully formed eye could appear as the result of one supremely lucky mutational accident is truly absurd and is not what is at issue here. But Darwin went on to give some basis for how the evolution of the eye was possible, and there are nowadays sophisticated accounts of how complex multi-part organs can evolve, as well as compelling theories of the evolution of particular organs, such as the eye (see e.g., Dawkins 1995).

But as noted above, in the most basic sense of the term, consciousness seems to be an all-or-nothing affair. No non-conscious precursor state seems to give the slightest hint that consciousness would be its evolutionary successor. The tiniest spark of feeling and the weakest and most obscure sensation are fully states of consciousness. Thus the emergence of consciousness at some point in evolutionary history appears to be an intrusion of true novelty at odds with the smoothly evolving complexity of organisms. William Clifford (1845–1879), a tragically short-lived philosophical and mathematical genius (he anticipated general relativity’s unification of gravity with geometry and predicted gravitational waves), put the problem thus:

. . . we cannot suppose that so enormous a jump from one creature to another should have occurred at any point in the process of evolution as the introduction of a fact entirely different and absolutely separate from the physical fact. It is impossible for anybody to point out the particular place in the line of descent where that event can be supposed to have taken place (1874/1886, 266).

So, although Darwinism provided great support for and impetus to the materialist vision of the world, within it lurked the old, and still unresolved, problem of emergence.

Perhaps it was time to tackle the mind directly with the tools of science. During the 19th century, psychology broke away from philosophy to become a scientific discipline in its own right. Despite the metaphysical precariousness of the situation, no one had any doubt that there was a correspondence between certain physical states and mental states and that it ought to be possible to investigate that correspondence scientifically. The pseudo-science of phrenology was founded on reasonably acceptable principles by Franz Gall (1758–1828) with the aim of correlating physical attributes of the brain with mental faculties, of which Gall, following a somewhat idiosyncratic system of categorization, counted some two dozen, including friendship, amativeness, and acquisitiveness. True, the categorization used is quaint and bizarrely ‘high-level’, and Gall’s shortcut methodology of inferring brain structure from bumps on the skull dubious (to say the least), but the core idea retains vigorous life in today’s brain imaging studies and the theory of mental/brain modules. As D. B. Klein said, ‘Gall gave wrong answers to good questions’ (Klein, 1970, 669).

Throughout the 19th century, one of the primary activities of psychological science, and even the main impetus for its creation, was discovering correlations between psychological states and physical conditions, either of the environment of the subject or brain anatomy discovered via postmortem investigation (but not of course brain states, which were entirely inaccessible to 19th-century science). Unlike in the quackery of phrenology, genuine and profound advances were made. Following foundational work on the physical basis of sensation by Hermann Helmholtz (1821–1894), also famed for introducing the hypothesis that unconscious inference accounts for many aspects of cognition and perception, important discoveries included the connection between certain brain regions and linguistic ability in the work of Paul Broca (1824–1880); seminal studies of stimulus strength and the introspected intensity of sensation by Gustav Fechner (1803–1887), who coined the phrase ‘psycho-physical law’ (1946); and the creation of the first psychological laboratory devoted to such studies by Wilhelm Wundt (1832–1920), who also developed the first distinctive research methodology of psychology – that of introspectionism (1892/1894).

From the point of view of the problem of consciousness these developments point to a bifurcation in the issue. Almost all the thinkers associated with the birth of psychology endorsed some form of idealism as the correct metaphysical account of mind and matter, and none of prominence were materialists. They were nonetheless keen on studying what we would call the neural bases of consciousness and never questioned the legitimacy of such studies. It is useful to distinguish the study of the structure of consciousness from the question of the ultimate nature of consciousness and its place in the natural world. The pioneers of psychology were, so to speak, officially interested in the structure of consciousness, both its introspectible experiential structure and its structural link to physical conditions (both internal and external to the body).

The growth of interest in these questions can also be seen in more purely philosophical work in the rise of the phenomenological movement, although of course the philosophers were not particularly interested in investigating correlations between mental and material conditions but rather focused on the internal structure of pure consciousness. Phenomenology was foreshadowed by Franz Brentano (1838–1917), who in a highly influential work, Psychology from an Empirical Standpoint (1874/1973, 121 ff.), advanced the view that mental states were self-intimating, coupled with an updated version of Aristotle’s regress argument (Brentano rather generously credits Aristotle for his whole line of thought here).

Brentano also reminded philosophers of a feature of mental states which had almost been forgotten since it had first been noted in the Middle Ages (though Descartes’ notion of objective reality is closely related). Brentano labeled this feature intentionality, which is the ‘directedness’ or ‘aboutness’ of at least many mental states onto a content, which may or may not refer to an existing object. If I ask you to imagine a unicorn, you are easily able to do so, despite the fact that there are no unicorns. Now, what is your thought about? Evidently not any real unicorn, but neither is your thought about the image of a unicorn produced in your imagination or even just the idea of a unicorn. For if I asked you to think about your image or idea of a unicorn you could do that as well, but it would be a different thought, and a rather more complex one. One way to think about this is to say that any act of imagination has a certain representational content, and imagining a unicorn is simply the having of a particular unicorn-content (in the ‘appropriate’ way as well, for imagination must be distinguished from other content-bearing mental acts). The consciousness involved in such an act of imagination is the presentation of that content to your mind. This is not to say that you are aware of your mental state whenever you imagine, but rather it is through having such a state that you are conscious of what the state represents, although Brentano himself held that any conscious state presented itself as well as its content to the subject. The failure to notice the intentional aspect of consciousness had bedeviled philosophy, leading to a plethora of theories of thought and perception that left us in the awkward position of never being aware of anything but our own mental states.

Brentano went so far as to declare intentionality the mark of the mental, the unique property that distinguished the mental from the physical. Of course, many other things, such as pictures, words, images on television, electronic computation, and so on, have representational content, but arguably these all get their content derivatively, via a mental interpretation. Uniquely mental or not, intentionality poses an extremely difficult question: how is it that mental states (or anything else) can acquire representational content? Perhaps if one accepts, as so many of the thinkers of this period did, that mind is the bedrock reality, then one can accept that it is simply a brute fact, an essential property of mentality, that it carry representational content. No explanation of this basic fact can be given in terms of anything simpler or more fundamental.

However, if one aspires to a materialist account of mind, then one cannot avoid this issue. A frequent metaphor which materialists of the time appealed to was that of biological secretion, perhaps first explicitly articulated by Cabanis in his Rapports du physique et du moral de l’homme (1802/1981), who proclaimed that the brain secretes thought as the liver secretes bile. As it stands, this is little more than a declaration of loyalty to the materialist viewpoint, for we expect there should be an explication of the process and nature of such secretion just as there is such an account of the production of bile. Just which part of the brain generates these ‘secretions’, and how do they manage to possess representational or phenomenal content? Nonetheless, the metaphor was effective. It was approved by Darwin himself (a closet materialist), who (privately) endorsed its repetition by John Elliotson (1791–1868) – physician, phrenologist, mesmerist, and the so-called strongest materialist of the day (see Desmond and Moore 1994, 250 ff.). In one of his private notebooks Darwin modified the metaphor in an interesting way, writing, ‘Why is thought, being a secretion of brain, more wonderful than gravity as a property of matter?’ This comment is striking because it clarifies how the metaphor implicitly suggests that it is a brute fact that brains produce thought, just as it is a brute fact that matter is associated with gravitation. Note also how the power to gravitate seems remote from matter’s core properties of extension, exclusion, and mass and, at least in the Newtonian view, provides the almost miraculous ability to affect all things instantaneously at a distance. Nevertheless, the essential emptiness of the metaphor did not go unremarked. William James (1842–1910) wrote, ‘the lame analogy need hardly be pointed out . . . we know of nothing connected with liver and kidney activity which can be in the remotest degree compared with the stream of thought that accompanies the brain’s material secretions’ (1890/1950, 102–3).

Leaving aside once again this metaphysical issue, workers focused on the structure and meaning of the contents of consciousness along with their empirically determinable relationship to a host of internal and external material conditions. I have referred to early scientific psychology above, but I would also put Sigmund Freud (1856–1939) in this group. Although an advocate of materialism, his theory of mind was focused on psychological structure, rather than explications of how matter gives rise to mind. In philosophy, this emphasis eventually led to the birth of a philosophical viewpoint explicitly dedicated to investigating the inner structure of consciousness: the phenomenology of Edmund Husserl (1859–1938; see Chapter 4). In the newly scientific psychology under the guidance of Wundt, introspection became the paradigmatic research methodology, raising such fundamental questions as whether all thought was necessarily accompanied by, or even constituted out of, mental imagery. Unfortunately, this methodology suffered from inherent difficulties of empirical verification and inter-observer objectivity, which eventually brought it into disrepute, probably overall to the detriment of scientific psychology.

Though James decried the gross metaphor of consciousness as a brain secretion, he introduced one of the most potent and durable metaphors, that of the stream of consciousness. In his remarkably compendious work, The Principles of Psychology, which remains to this day full of fresh insight, James devoted a chapter to the stream of thought in which he noted that ongoing consciousness is continuous, meaning ‘without breach, crack or division’ (1890/1950, 237) and that, by contrast, ‘the breach from one mind to another is perhaps the greatest breach in nature’. James of course allowed that there were noticeable gaps in one’s stream of consciousness, but these are peculiar gaps such that we sense that both sides of the gap belong together in some way; he also noted that the stream is a stream of consciousness and that unnoticed temporal gaps – which are perfectly conceivable – are simply not part of the stream. Throughout his writings James exhibits a keen and durable interest in the structure and contents of the stream of consciousness, even delving enthusiastically into mystical and religious experience.

Along with virtually all psychological researchers of the time, James was no materialist. His metaphysics of mind is complex and somewhat obscure, wavering between a neutral monism and a form of panpsychism (see Stubenberg 2005). James heaped scorn (and powerful counter-arguments) upon crude forms of ‘molecular’ panpsychism, what he called the ‘mind dust’ theory (see 1890/1950, ch. 5), but his monism leaned decidedly towards the mental pole. In a notebook he wrote that ‘the constitution of reality which I am making for is of the psychic type’ (see Cooper 1990).

This lack of clarity may arise from the epistemological asymmetry between our apprehension of mind and matter. We seem to have some kind of direct access to the former – when we feel a pain there is an occurrence, at least some properties of which are made evident to us. We do not seem to have any similarly direct awareness of the nature of matter. Thus the avowed neutrality of neutral monism tends to slide towards some kind of panpsychism. From another point of view, the asymmetry encourages the association of some forms of phenomenalism with neutral monism.

For example, the highly influential British philosopher John Stuart Mill (1806–1873) endorsed a phenomenalism which regarded material objects as ‘permanent possibilities of sensation’. This allows for the interposition of a something-we-know-not-what lurking behind our sensations (what might be called ‘unsensed sensibilia’ – see Mill 1865/1983 and Wilson 2003), but the seemingly unbridgeable gap between this ur-matter and our perceptual experiences creates a constant pressure to replace it with entirely mental sequences of sensations. To be sure, intuition suggests that material objects exist unperceived, but this ‘existence’ can, perhaps, be analysed in terms of dispositions to have certain sensations under certain mentalistically defined conditions. Furthermore, as our relation with the unknowable basis of matter is entirely mentalistic, why not accept that the primal material is itself mental (a view which can lead either back to idealism or to some form of panpsychism)? Bertrand Russell (1872–1970) devoted great effort to developing Mill’s phenomenalism as a kind of neutral monism (see Russell 1927) in which what we call matter has intrinsic mental properties with which we are directly acquainted in experience – thus, Russell’s seemingly bizarre remark that when a scientist examines a subject’s brain he is really observing a part of his own brain (for an updated defense of a Russellian position see Lockwood 1991).

His one-time collaborator, Alfred North Whitehead (1861–1947), pursued the alternative panpsychist option in a series of works culminating in the dense and obscurely written Process and Reality (1929). Roughly speaking, Whitehead proposed a radical reform of our conception of the fundamental nature of the world, placing events (or items that are more event-like than thing-like) and the ongoing process of their creation as the core feature of the world, rather than the traditional triad of matter, space, and time. His panpsychism arises from the idea that the elementary events that make up the world (which he called occasions) partake of mentality in some often extremely attenuated sense, metaphorically expressed in terms of the mentalistic notions of creativity, spontaneity, and perception. Whitehead’s position nicely exposes the difficulty in maintaining a pure neutral monism. Matter must have some underlying intrinsic nature. The only intrinsic nature we seem to be acquainted with is consciousness. Thus it is tempting to simplify our metaphysics by assigning the only known intrinsic nature to matter. We thus arrive at panpsychism rather than neutral monism (for an introduction to Whitehead’s philosophy of mind see Griffin 1998).

Such high metaphysical speculations, though evidently irresistible, seem far from the common-sense view of matter which was more or less enshrined in the world view of 19th-century science, which began then to fund the rapid and perpetual development of technology we are now so familiar with, and which greatly added to the social prestige of science. If we take the scientific picture seriously – and it came to seem irresponsible not to – then the central mystery of consciousness becomes that of the integration of mind with this scientific viewpoint. This is the modern problem of consciousness, which bypasses both idealist metaphysics and phenomenalistic constructionism.

But how could such integration be achieved? An important line of thought begins with some technical distinctions of Mill. In his System of Logic (1843/1963) Mill attempted a compendious classification of scientific law, two forms of which he called ‘homopathic’ and ‘heteropathic’. Homopathic laws are ones in which the resultant properties of a system are the mere additive results of the properties of the system’s components. For example, the laws of motion are homopathic: the motion of an object is the result of all the forces acting on the object, and the resultant force is simply the vector addition of each separate force. Heteropathic laws are ones in which the resultant properties are not simply the sum of the properties of the components. It was George Lewes (1817–1878) – now best remembered as the consort of George Eliot – who coined the term ‘emergent’ to refer to heteropathic effects (he used ‘resultant’ to mean those features which Mill called homopathic effects). Here it is important to distinguish the more general notion of homopathic effects from what is sometimes called part-whole reductionism. The latter may well be false of the world: there are reasonable arguments that some physical properties are non-local and perhaps in some way holistic (both general relativity and quantum mechanics can be invoked to support these contentions). But the crucial question about homopathic versus heteropathic effects is whether the fundamental physical state of the world, along with the basic physical laws, determines all other, higher-level properties and laws. If not, we have true emergence.

The emergentists postulated that consciousness was a heteropathic effect, or emergent property, of certain complex material systems (e.g., brains). Emergentism may seem no more than extravagant metaphysical speculation, except that at the time it was widely conceded that there were excellent candidate emergent properties in areas other than consciousness. That is, it seemed there were independent grounds for endorsing the concept of emergence, which – thus legitimated – could then be fruitfully applied to the mind-body problem. The primary examples of supposedly uncontentious emergent properties were those of chemistry. It was thought that, for example, the properties of water could not be accounted for in terms of the properties of oxygen and hydrogen and the laws of nature which governed atomic-level phenomena. Emergentists, of which two prominent ones were Conwy Lloyd Morgan (1852–1936) (see Morgan 1923) and C. D. Broad (1887–1971), recognized that the complexity of the interactions of the components of a system could present the appearance of emergence when there was none. Broad (1925) liked to imagine a ‘mathematical archangel’ who knew the laws of nature as they applied at the submergent level, knew the configuration of the components, and suffered from no cognitive limitations about deducing the consequences of this information. If the archangel could figure out that water would dissolve salt by considering only the properties of oxygen, hydrogen, sodium, and chlorine, as well as the laws which governed their interaction at the atomic level, then this aspect of H2O would fail to be an emergent property.

Thus the emergentists would have scoffed at current popular examples of emergence, such as John Conway’s Game of Life and chaotic dynamical systems. Such examples represent nothing more than ‘epistemological emergence’. Cognitive and physical limitations – albeit quite fundamental ones – on computational power and data acquisition prevent us (or our machines) from deducing the high-level properties of complex systems, but this is not a metaphysical barrier. The mathematical archangel could figure out the effect of the butterfly’s flight on future weather.

But the emergentists believed that the world really did contain non-epistemological emergence; in fact, it was virtually everywhere. They regarded the world as an hierarchical cascade of emergent features built upon other, lower-level emergent features. Unfortunately for them, their linchpin example, chemistry, was the masterpiece of the new quantum mechanics of the 1920s, which basically provided new laws of nature which opened the door – in principle – to the deduction of chemical properties from atomic states (nowadays we even have de novo calculation of some simple chemical features based on the quantum mechanical description of atomic components). Of course, this does not demonstrate that there is no real emergence in the world, but without any uncontentious example of it, and with the growing ability of physics to provide seemingly complete accounts of the basic structure of the world, the emergentist position was devastated (see McLaughlin 1992).

On the mainstream view articulated in Section II, emergentism seems no less metaphysically extravagant than the other positions we have considered. Emergentism espouses a form of property dualism and postulates that the novel emergent properties of a system would have distinctive causal powers, going beyond those determined solely by the basic physical features of the system (seemingly courting violation of a number of basic conservation laws).

Not that the science of psychology provided a more palatable alternative. Early in the 20th century, the introspectionist methodology, as well as the sophisticated sensitivity to the issues raised by consciousness shown by such psychologists as James, disappeared with the rise of a soulless behaviourism that at best ignored the mind and at worst denied its very existence. It took until halfway through the 20th century before philosophy and psychology grappled with the problem of the mind in new ways. In psychology, the so-called cognitive revolution made appeal to inner mental processes and states legitimate once again (see Neisser 1967 for a classic introduction to cognitive psychology), although scientific psychology largely steered clear of the issue of consciousness until near the end of the 20th century.

In philosophy, the 1950s saw the beginning of a self-conscious effort to understand the mind and, eventually, consciousness as physical through and through in essentially scientific terms. This was part of a broader movement based upon the doctrine of scientific realism, which can be roughly defined as the view that it is science that reveals the ultimate nature of reality, rather than philosophy or any other non-empirical domain of thought. Applied to the philosophy of mind and consciousness, this led to the rise of the identity theory (see, e.g., Smart 1959, or for a more penetrating and rather prescient account, Feigl 1958). This was a kind of turning point in philosophy, in which it was self-consciously assumed that because physical science should be the basis for our beliefs about the ultimate nature of the world, philosophy’s job in this area would henceforth be to show how the mind, along with everything else, would smoothly fit into the scientific picture of the world. It embraced what I earlier called the mainstream view and began with high optimism that a scientific outlook would resolve not just the problem of consciousness but perhaps all philosophical problems. But subsequent work has revealed just how extraordinarily difficult it is to fully explicate the mainstream view, especially with regard to consciousness. In the face of unprecedented expansion in our technical abilities to investigate the workings of the brain and an active and explicit scientific, as well as philosophical, effort to understand how, within the mainstream view, consciousness emerges, we find that the ultimate problem remains intractable and infinitely fascinating.

References

Aristotle (1984). The complete works of Aristotle (J. Barnes, Ed.). Princeton: Princeton University Press.

Augustine (1998). Confessions. Oxford: Oxford University Press.

Augustine (1998). City of God. Cambridge: Cambridge University Press.

Barnes, J. (1982). The Presocratic philosophers. London: Routledge and Kegan Paul.

Behe, M. (1998). Darwin’s black box: The biochemical challenge to evolution. New York: Free Press.

Berkeley, G. (1998). A treatise concerning the principles of human knowledge. Oxford: Oxford University Press. (Original work published 1710.)

Bloom, P. (2004). Descartes’s baby: How the science of child development explains what makes us human. New York: Basic Books.

Bradley, F. (1966). Appearance and reality (2nd ed.). Oxford: Clarendon Press. (Original work published 1897.)

Brentano, F. C. (1973). Psychology from an empirical standpoint. London: Routledge and Kegan Paul. (Original work published 1874.)

Broad, C. D. (1925). The mind and its place in nature. London: Routledge and Kegan Paul.

Cabanis, P. (1981). Rapports du physique et du moral de l’homme (On the relations between the physical and moral aspects of man; G. Mora, Ed., & M. Saidi, Trans.). Paris: Crapart, Caille et Ravier. (Original work published 1802.)

Caston, V. (2002). Aristotle on consciousness. Mind, 111(444), 751–815.

Churchland, P. (1981). Eliminative materialism and propositional attitudes. Journal of Philosophy, 78, 67–90.

Clifford, W. K. (1874). Body and mind. Fortnightly Review, December. (Reprinted in L. Stephen & F. Pollock (Eds.), Lectures and essays, 1876, London: Macmillan.)

Cooper, W. E. (1990). William James’s theory of mind. Journal of the History of Philosophy, 28(4), 571–93.

Darwin, C. (1967). The origin of species by natural selection. London: Dent (Everyman’s Library). (Original work published 1859.)

Darwin, C. (1874). The descent of man. London: Crowell.

Dawkins, R. (1995). River out of Eden. New York: Basic Books.

Descartes, R. (1985). Meditations on first philosophy. In J. Cottingham, R. Stoothoff, & D. Murdoch (Eds.), The philosophical writings of Descartes. Cambridge: Cambridge University Press. (Original work published 1641.)

Descartes, R. (1985). The passions of the soul. In J. Cottingham, R. Stoothoff, & D. Murdoch (Eds.), The philosophical writings of Descartes. Cambridge: Cambridge University Press. (Original work published 1649.)

Desmond, A., & Moore, J. (1994). Darwin: The life of a tormented evolutionist. New York: Norton.

Edwards, P. (1967). Panpsychism. In P. Edwards (Ed.), The encyclopedia of philosophy (Vol. 5). New York: Macmillan.


Eldredge, N., & Gould, S. (1972). Punctuated equilibria: An alternative to phyletic gradualism. In T. Schopf (Ed.), Models in paleobiology. San Francisco: Freeman Cooper.

Fechner, G. (1946). The religion of a scientist (W. Lowrie, Ed. & Trans.). New York: Pantheon.

Feigl, H. (1958). The mental and the physical. In H. Feigl, M. Scriven, & G. Maxwell (Eds.), Minnesota studies in the philosophy of science: Vol. 2. Concepts, theories and the mind-body problem. Minneapolis: University of Minnesota Press.

Galileo, G. (1957). The assayer. In D. Stillman (Ed.), Discoveries and opinions of Galileo. New York: Doubleday. (Original work published 1623.)

Garrett, B. (2003). Vitalism and teleology in the natural philosophy of Nehemiah Grew. British Journal for the History of Science, 36(1), 63–81.

Gergely, G., Nadasdy, Z., Csibra, G., & Biro, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56(2), 165–93.

Grew, N. (1701). Cosmologia sacra: or a discourse of the universe as it is the creature and kingdom of God. London: Rogers, Smith and Walford.

Griffin, D. (1998). Unsnarling the world knot: Consciousness, freedom and the mind-body problem. Berkeley: University of California Press.

Hegel, G. (1969). The science of logic (A. Miller, Trans.). London: Allen and Unwin. (Original work published 1812.)

Hobbes, T. (1998). Leviathan (J. Gaskin, Ed.). Oxford: Oxford University Press. (Original work published 1651.)

Holbach, Baron d’ (1970). Systeme de la nature; ou, Des lois du monde physique et du monde moral (published under the pseudonym J. Mirabaud) (The system of nature: or, laws of the moral and physical world; H. Robinson, Trans.). (Original work published 1770.)

Huxley, T. (1866). Lessons in elementary physiology. London: Macmillan.

James, W. (1950). The principles of psychology (Vol. 1). New York: Dover. (Original work published 1890.)

Kant, I. (1929). Critique of pure reason (N. Kemp Smith, Ed. & Trans.). New York: St. Martin’s Press. (Original work published 1781.)

King, P. (2005). Why isn’t the mind-body problem mediaeval? In H. Lagerlund & O. Pluta (Eds.), Forming the mind: Conceptions of body and soul in late medieval and early modern philosophy. Berlin and New York: Springer Verlag.

Klein, D. (1970). A history of scientific psychology. New York: Basic Books.

La Mettrie, J. (1745). L’histoire naturelle de l’ame. The Hague.

La Mettrie, J. (1987). L’homme machine / Man a machine. La Salle, IL: Open Court. (Original work published 1748.)

Leibniz, G. (1989). Monadology. In R. Ariew & D. Garber (Eds. & Trans.), G. W. Leibniz: Philosophical essays. Indianapolis: Hackett. (Original work published 1714.)

Lockwood, M. (1991). Mind, brain and the quantum. Oxford: Blackwell.

Mangan, B. (2001). Sensation’s ghost: The non-sensory ‘fringe’ of consciousness. Psyche, 7(18). http://psyche.cs.monash.edu.au/v7/psyche-7-18-mangan.html

Matson, W. (1966). Why isn’t the mind-body problem ancient? In P. Feyerabend & G. Maxwell (Eds.), Mind, matter and method: Essays in philosophy and science in honor of Herbert Feigl. Minneapolis: University of Minnesota Press.

McLaughlin, B. (1992). The rise and fall of British emergentism. In A. Beckermann, H. Flohr, & J. Kim (Eds.), Emergence or reduction? Berlin: De Gruyter.

Mill, J. (1963). A system of logic. In J. Robson (Ed.), Collected works of John Stuart Mill (Vols. 7 & 8). Toronto: University of Toronto Press. (Original work published 1843.)

Mill, J. (1963). An examination of Sir William Hamilton’s philosophy. In J. Robson (Ed.), Collected works of John Stuart Mill (Vol. 9). Toronto: University of Toronto Press. (Original work published 1865.)

Morgan, C. (1923). Emergent evolution. London: Williams and Norgate.

Mourelatos, A. (1986). Quality, structure, and emergence in later Pre-Socratic philosophy. Proceedings of the Boston Area Colloquium in Ancient Philosophy, 2, 127–94.

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–50.

Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.

Nelson, J. (1990). Was Aristotle a functionalist? Review of Metaphysics, 43, 791–802.

Nussbaum, M. C., & Putnam, H. (1992). Changing Aristotle’s mind. In M. Nussbaum & A. Rorty (Eds.), Essays on Aristotle’s De Anima (pp. 27–56). Oxford: Clarendon Press.


O’Daly, G. (1987). Augustine’s philosophy of mind. London: Duckworth.

Perner, J. (1991). Understanding the representational mind. Cambridge, MA: MIT Press.

Plato (1961). Plato: Collected dialogues (E. Hamilton & H. Cairns, Eds.). Princeton: Princeton University Press.

Priestley, J. (1975). Disquisitions relating to matter and spirit. New York: Arno Press. (Original work published 1777.)

Russell, B. (1927). The analysis of matter. London: Kegan Paul.

Schopenhauer, A. (1966). The world as will and representation (E. Payne, Trans.). New York: Dover Books. (Original work published 1819.)

Seager, W. (1999). Theories of consciousness. New York: Routledge.

Seager, W. (2005). Panpsychism. In E. Zalta (Ed.), Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/sum2005/entries/panpsychism

Smart, J. (1959). Sensations and brain processes. Philosophical Review, 68, 141–56. (Reprinted in slightly revised form in V. Chappell (Ed.), The philosophy of mind. Englewood Cliffs, NJ: Prentice-Hall.)

Spinoza, B. (1985). Ethics. In E. Curley (Ed. & Trans.), The collected works of Spinoza (Vol. 1). Princeton: Princeton University Press. (Original work published 1677.)

Stubenberg, L. (2005). Neutral monism. In E. Zalta (Ed.), Stanford encyclopedia of philosophy (Spring 2005 ed.). http://plato.stanford.edu/

Toland, J. (1704). Letters to Serena. London: Bernard Lintot.

Whitehead, A. (1929). Process and reality: An essay in cosmology. New York: Macmillan.

Wilkes, K. (1988). Yishi, duh, um, and consciousness. In A. Marcel & E. Bisiach (Eds.), Consciousness in contemporary science. Oxford: Oxford University Press.

Wilson, F. (2003). John Stuart Mill. In E. Zalta (Ed.), Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/fall2003/entries/mill

Wundt, W. (1894). Vorlesungen über die Menschen- und Thierseele (Lectures on human and animal psychology; J. Creighton & E. Titchener, Trans.). Hamburg: L. Voss. (Original work published 1892.)


CHAPTER 3

Philosophical Theories of Consciousness: Contemporary Western Perspectives

Uriah Kriegel

Abstract

This chapter surveys current approaches to consciousness in Anglo-American analytic philosophy. It focuses on five approaches, to which I will refer as mysterianism, dualism, representationalism, higher-order monitoring theory, and self-representationalism. With each approach, I will present in order (i) the leading account of consciousness along its line, (ii) the case for the approach, and (iii) the case against the approach. I will not issue a final verdict on any approach, though by the end of the chapter it should be evident where my own sympathies lie.

Introduction: The Conceptof Consciousness

This chapter surveys current approaches to consciousness in Anglo-American analytic philosophy. It focuses on five approaches, to which I will refer as mysterianism, dualism, representationalism, higher-order monitoring theory, and self-representationalism. With each approach, I will present in order (i) the leading account of consciousness along its line, (ii) the case for the approach, and (iii) the case against the approach.1 I will not issue a final verdict on any approach, though by the end of the chapter it should be evident where my own sympathies lie.

Before starting, let us draw certain distinctions that may help fix our ideas for the discussion to follow. The term “consciousness” is applied in different senses to different sorts of things. It is applied, in one sense, to biological species, as when we say something like “Gorillas are conscious, but snails are not”; in a different sense, to individual organisms or creatures, as when we say “Jim is conscious, but Jill is comatose”; and in a third sense, to particular mental states, events, and processes, as when we say “My thought about Vienna is conscious, but Jim’s belief that there are birds in China is not.” To distinguish these different senses, we may call the first species consciousness, the second creature consciousness, and the third state consciousness.2

There appear to be certain conceptual connections among these three senses, such that they may be analyzable in terms of one another. Plausibly, species consciousness is analyzable in terms of creature consciousness: a species S is species-conscious just in case a prototypical specimen of S is creature-conscious. Creature consciousness may in turn be analyzable in terms of state consciousness: a creature C is creature-conscious just in case C has (or is capable of having) mental states that are state-conscious. If so, state consciousness is the most fundamental notion of the three.

State consciousness is itself ambiguous as between several senses. If Jim tacitly believes that there are birds in China, but has never consciously entertained this belief, whereas Jill often contemplates consciously the fact that there are birds in China, but is not doing so right now, there is a sense of “conscious” in which we may want to say that Jim’s belief is unconscious whereas Jill’s is conscious. Let us call this availability consciousness.3 By contrast, there is a sense of “conscious” in which a mental state is conscious when and only when there is something it is like for the subject – from the inside – to have it.4 Thus, when I take in a spoonful of honey, there is a very specific – sweet, smooth, honey-ish, if you will – way it is like for me to have the resulting conscious experience. Let us call this phenomenal consciousness.

Some of the leading scientific theories of consciousness – such as Baars’ (1988, 1997) Global Workspace Theory and Crick and Koch’s (1990, 2003) synchrony-based “neurobiological theory” – shed much light on availability consciousness and neighboring notions. But there is a persistent feeling that they do not do much to explain phenomenal consciousness. Moreover, there is a widespread sense that there is something principled about the way in which they fail to do so. One way to bring out this feeling is through such philosophers’ concepts as the explanatory gap (Levine, 1983) or the hard problem (Chalmers, 1995). According to Chalmers, for instance, the problems of explaining the various cognitive functions of conscious experiences are the “easy problems” of consciousness; the “hard problem” is that of understanding why there should be something it is like to execute these functions.5 The sense is that an insight of a completely different order would be needed to make scientific theories, and indeed science itself, at all relevant to our understanding of phenomenal consciousness. Some sort of conceptual breakthrough, which would enable us to conceive of the problem of consciousness in new ways, is required. This is where philosophical theories of consciousness come into the picture.6,7

Mysterianism

Some philosophers hold that science cannot and will not, in fact, help us understand consciousness. So-called mysterianists hold that the problem of consciousness – the problem of how there could be something like phenomenal consciousness in a purely natural world – is not a problem we are capable (even in principle) of solving. Thus consciousness is a genuine mystery, not merely a prima facie mystery that we may one day demystify.

We may introduce a conceptual distinction between two kinds of mysterianism – an ontological one and an epistemological one. According to ontological mysterianism, consciousness cannot be demystified because it is an inherently mysterious (perhaps supernatural) phenomenon.8 According to epistemological mysterianism, consciousness is in no way inherently mysterious, and a greater mind could in principle demystify it – but it just so happens that we humans lack the cognitive capacities that would be required.

Epistemological mysterianism has actually been pursued by contemporary Western philosophers. The most comprehensive development of the view is offered in Colin McGinn’s (1989, 1995, 1999, 2004) writings. We now turn to an examination of his account.

McGinn’s Mysterianism

McGinn’s theory of consciousness has two central tenets. First, the phenomenon of consciousness is in itself perfectly natural and in no way mysterious. Second, the human mind’s conceptual capacities are too poor to demystify consciousness. That is, McGinn is an epistemological mysterianist: he does not claim that the world contains, in and of itself, insoluble mysteries, but he does contend that we will never understand consciousness.

At the center of McGinn’s theory is the concept of cognitive closure. McGinn (1989, p. 529) defines cognitive closure as follows: “A type of mind M is cognitively closed with respect to a property P (or a theory T) if and only if the concept-forming procedures at M’s disposal cannot extend to a grasp of P (or an understanding of T).”9 To be cognitively closed to X is thus to lack the procedure for concept formation that would allow one to form the concept of X.

To illustrate the soundness and applicability of the notion of cognitive closure, McGinn adduces the case of animal minds and their constitutive limitations. As James Joyce writes in A Portrait of the Artist as a Young Man, rats’ minds do not understand trigonometry. Likewise, snails do not understand quantum physics, and cats do not understand market economics. Why should humans be spared this predicament? As a natural, evolved mechanism, the human mind must have its own limitations. One such limitation, McGinn suggests, may be presented by the phenomenon of consciousness.

Interestingly, McGinn does not claim that we are cognitively closed to consciousness itself. Rather, his claim is that we are cognitively closed to that property of the brain responsible for the production of consciousness. As someone who does not wish to portray consciousness as inherently mysterious, McGinn is happy to admit that the brain has the capacity to somehow produce conscious awareness. But how the brain does so is something he claims we cannot understand. Our concept-forming procedures do extend to a grasp of consciousness, but they do not extend to a grasp of the causal basis of consciousness in the brain.

The Master Argument for Mysterianism

A natural reaction to McGinn’s view is that it may be based upon an overly pessimistic induction. From the fact that all the theories of consciousness we have come up with to date are hopelessly unsatisfactory, it should not be concluded that our future theories will be the same. It may well be that a thousand years hence we will look back with amusement at the days of our ignorance and self-doubt.

However, McGinn’s main argument for his position is not the inductive argument just sketched. Rather, it is a deductive argument based on consideration of our cognitive constitution. The argument revolves around the claim that we do not have a single mechanism, or faculty, that can access both consciousness and the brain. Our access to consciousness is through the faculty of introspection. Our access to the brain is through the use of our senses, mainly vision. But unfortunately, the senses do not give us access to consciousness proper, and introspection does not give us access to the brain proper. Thus, we cannot see with our eyes what it is like to taste chocolate. Nor can we taste with our buds what it is like to taste chocolate. We can, of course, taste chocolate. But we cannot taste the feeling of tasting chocolate. The feeling of tasting chocolate is something we encounter only through introspection. But alas, introspection fails to give us access to the brain. We cannot introspect neurons, and so could never introspect the neural correlates of consciousness.

Using the term “extrospective” to denote the access our senses give us to the world, McGinn’s argument may be formulated as follows:

1) We can have introspective access to consciousness but not to the brain;

2) We can have extrospective access to the brain but not to consciousness;

3) We have no accessing method that is both introspective and extrospective; therefore,

4) We have no method that can give us access to both consciousness and the brain.
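For readers who like to see the argument’s validity displayed, it can be regimented in first-order notation. This is only a sketch, and the notation is mine rather than McGinn’s: let $A(m, x)$ mean that method $m$ gives us access to $x$, let $I(m)$ and $E(m)$ mean that $m$ is introspective or extrospective, and let $c$ and $b$ name consciousness and the brain. Premise 3 is here read as the claim that every accessing method we possess is introspective or extrospective:

```latex
\begin{align*}
&\text{(P1)} \quad \forall m\,\bigl(I(m) \rightarrow \neg A(m, b)\bigr)
  && \text{introspection does not reach the brain} \\
&\text{(P2)} \quad \forall m\,\bigl(E(m) \rightarrow \neg A(m, c)\bigr)
  && \text{extrospection does not reach consciousness} \\
&\text{(P3)} \quad \forall m\,\bigl(I(m) \lor E(m)\bigr)
  && \text{every accessing method is one or the other} \\
&\;\therefore\quad \forall m\,\neg\bigl(A(m, c) \land A(m, b)\bigr)
  && \text{no method accesses both}
\end{align*}
```

So regimented, the argument is plainly valid: by (P3) any method is introspective or extrospective, and either way (P1) or (P2) ensures it misses one of the two relata. The materialist replies discussed in what follows amount to denying (P1) or (P2).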

As we can see, the argument is based on considerations that are much more principled than a simple pessimistic induction from past theories. Dismayed as we may be by the prospects of mysterianism, we must not mistake McGinn’s position for sheer despair. Instead, we must contend with the argument just formulated.

Some materialists would contest the first premise. Paul Churchland (1985) has repeatedly argued that we will one day be able to directly introspect the neurophysiological states of our brains. Perception and introspection are theory-laden, according to Churchland, and can therefore be fundamentally changed when the theory they are laden with is changed.10 Currently, our introspective practice is laden with a broadly Cartesian theory of mind. But when we mature enough scientifically, and when the right neuroscientific theory of consciousness makes its way to our classroom and living room, this will change and we (or rather our distant offspring) will start thinking about ourselves in purely neurophysiological categories.

Other materialists may deny the second premise of the argument. As long as brain states are considered to be merely correlates of conscious states, the claim that the conscious states cannot be perceived extrospectively is plausible. But according to materialists, conscious states will turn out to be identical with the brain states in question, rather than merely correlated therewith. If so, perceiving those brain states would just be perceiving the conscious states.11 To assume that we cannot perceive the conscious states is to beg the question against the materialist.

The Case Against Mysterianism

To repeat the last point, McGinn appears to assume that conscious states are caused by brain states. His argument does not go through if conscious states are simply identical to brain states. In other words, the argument does not go through unless any identity of conscious states with brain states is rejected.12 But such rejection amounts to dualism. McGinn is thus committed to dualism.13 On the view he presupposes, the conscious cannot be simply identified with the physical. Rather, there are two different kinds of states a person or organism may be in: brain states on the one hand and conscious states on the other.

Recall that McGinn’s mysterianism is of the epistemological variety. The epistemological claim now appears to be conditional upon an ontological claim, namely dualism. So at the end of the day, as far as the ontology of consciousness is concerned, McGinn is a straightforward dualist. The plausibility of his (epistemological) mysterianism depends, to that extent, on the plausibility of (ontological) dualism. In the next section, we consider the plausibility of dualism.

Before doing so, let us raise one more difficulty for mysterianism, and in particular the notion of cognitive closure. It is, of course, undeniable that rats do not understand trigonometry. But observe that trigonometric problems do not pose themselves to rats (Dennett, 1995, pp. 381–383). Indeed, it is precisely because rats do not understand trigonometry that trigonometric problems do not pose themselves to rats. For rats to grapple with trigonometric problems, they would have to understand quite a bit of trigonometry. Arguably, it is a mark of genuine cognitive closure that certain questions do not even pose themselves to the cognitively closed. The fact that certain questions about consciousness do pose themselves to humans may therefore indicate that humans are not cognitively closed to consciousness (or more accurately to the link between consciousness and the brain).14

Dualism

Traditionally, approaches to the ontology of mind and consciousness have been divided into two main groups: monism and dualism. The former holds that there is one kind of stuff in the world; the latter that there are two.15 Within monism, there is a further distinction between views that construe the single existing stuff as material and views that construe it as immaterial; the former are materialist views, the latter idealist.16

Descartes framed his dualism in terms of two different kinds of substance (where a substance is something that can in principle exist all by itself). One is the extended substance, or matter; the other is the thinking substance, or mind. A person, on this view, is a combination of two different objects: a body and a soul. A body and its corresponding soul “go together” for some stretch of time, but being two separate objects, their existence is independent and can therefore come apart.17

Modern dualism is usually of a more subtle sort, framed not in terms of substances (or stuffs), but rather in terms of properties. The idea is that even though there is only one kind of stuff or substance, there are two kinds of properties, mental and physical, and neither can be reduced to the other.18 This is known as property dualism. A particularly cautious version of property dualism claims that although most mental properties are reducible to physical ones, conscious or phenomenal properties are irreducible.

Chalmers’ Naturalistic Dualism

For many decades, dualistic arguments were treated mainly as a challenge to a physicalist worldview, not so much as a basis for a non-physicalist alternative. Thus dualism was not so much an explanation or account of consciousness, but rather the avoidance of one. This state of affairs has been rectified in the past decade or so, mainly through the work of David Chalmers (1995, 1996, 2002a).

Chalmers’ theory of consciousness, which he calls naturalistic dualism, is stronger than ordinary dualism, in that it claims not only that phenomenal properties are not identical to physical properties, but also that they fail to supervene – at least with metaphysical or logical necessity19 – on physical properties.20 We tend to think, for instance, that biological properties necessarily supervene on physical properties, in the sense that two systems cannot possibly differ in their biological properties if all their physical properties are exactly similar. But according to Chalmers, phenomenal properties are different: two systems can be exactly the same physically, but have different phenomenal properties.
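The supervenience claim at issue can be stated precisely. The following is a standard textbook formulation in my notation, not Chalmers’ own: a family of properties $F$ supervenes on a family of properties $G$ just in case systems that are indiscernible with respect to $G$ must be indiscernible with respect to $F$:

```latex
F \text{ supervenes on } G \;\iff\;
\Box\, \forall x\, \forall y\,
\Bigl[ \bigl(\forall g \in G :\ g(x) \leftrightarrow g(y)\bigr)
\;\rightarrow\; \bigl(\forall f \in F :\ f(x) \leftrightarrow f(y)\bigr) \Bigr]
```

Chalmers’ claim is that this biconditional fails when $F$ is the family of phenomenal properties, $G$ the family of physical properties, and $\Box$ is read as metaphysical or logical necessity: physical duplicates need not be phenomenal duplicates.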

At the same time, Chalmers does not take phenomenal properties to be accidental or random superpositions onto the physical world. On the contrary, he takes them to be causally grounded in physical laws. That is, instantiations of phenomenal properties are caused by instantiations of physical properties, and they are so caused in accordance with strict laws of nature.21

This means that phenomenal consciousness can be explained in physical terms. It is just that the explanation will not be a reductive explanation, but rather a causal explanation. To explain an event or phenomenon causally is to cite its cause, that is, to say what brought it about or gave rise to it.22 According to Chalmers, one could in principle explain the instantiation of phenomenal properties by citing their physical causes.

A full theory of consciousness would uncover and list all the causal laws that govern the emergence of phenomenal properties from the physical realm. And a full description of nature and its behavior would have to include these causal laws on top of the causal laws obtained by “ultimate physics.”23

Chalmers himself does not attempt to detail many of these laws. But he does propose a pair of principles to which we should expect such laws to conform. These are the “structural coherence” principle and the “organizational invariance” principle. The former concerns the sort of direct availability for global control that conscious states appear to exhibit, the latter the systematic correspondence between a system’s functional organization and its phenomenal properties.24

The Case for Dualism

The best-known arguments in favor of property dualism about consciousness are so-called epistemic arguments. The two main ones are Frank Jackson’s (1984) “Knowledge Argument” and Thomas Nagel’s (1974) “what is it like” argument. Both follow a similar pattern. After describing a situation in which all physical facts about something are known, it is shown that some knowledge is still missing. It is then inferred that the missing knowledge must be knowledge of non-physical facts.

The Knowledge Argument proceeds as follows. Suppose a baby is kept in a black-and-white environment, so that she never has color experiences. But she grows to become an expert on color and color vision. Eventually, she knows all the physical facts about color and color vision. But when she sees red for the first time, she learns something new: she learns what it is like to see red. That is, she acquires a new piece of knowledge. Since she already knew all the physical facts, this new piece of knowledge cannot be knowledge of a physical fact. It is therefore knowledge of a non-physical fact. So, the fact thereby known (what it is like to see red) is a non-physical fact.

Nagel’s argument, although more obscure in its original presentation, can be “formatted” along similar lines. We can know all the physical facts about bats without knowing what it is like to be a bat. It follows that the knowledge we are missing is not knowledge of a physical fact. Therefore, what it is like to be a bat is not a physical fact.

These arguments have struck many materialists as suspicious. After all, they infer an ontological conclusion from epistemological premises. This move is generally suspicious, and it is also vulnerable to a response that emphasizes what philosophers call the intensionality of epistemic contexts.25 This has been the main response among materialists (Loar, 1990; Tye, 1986). The claim is that the Knowledge Argument’s protagonist does not learn a new fact when she learns what it is like to see red, but rather learns an old fact in a new way; and similarly for the bat student.26

Consider knowledge that the evening star glows and knowledge that the morning star glows. These are clearly two different pieces of knowledge. But the fact thereby known is one and the same – the fact that Venus glows. Knowledge that this is what it is like to see red and knowledge that this is the neural assembly stimulated by the right wavelength may similarly constitute two separate pieces of knowledge that correspond to only one fact being known. So from the acquisition of a new piece of knowledge one cannot infer the existence of a new fact – and that is precisely the inference made in the above dualist arguments.27,28
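The intensionality point can be stated schematically (my formulation, not the chapter’s), writing $K_S\,\varphi(a)$ for “S knows that a is $\varphi$.” The inference pattern the dualist needs is invalid:

```latex
\[
  K_S\,\varphi(a) \;\wedge\; \neg K_S\,\varphi(b)
  \;\not\Rightarrow\; a \neq b
\]
```

The Venus case shows why: with a as “the evening star” and b as “the morning star,” both conjuncts can hold even though a and b co-refer. Epistemic contexts do not license substitution of co-referring terms salva veritate, so two pieces of knowledge do not guarantee two facts known.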

A different argument for dualism that is widely discussed today is Chalmers’ (1996) argument from the conceivability of zombies. Zombies are imaginary creatures that are physically indistinguishable from us but lack consciousness. We seem to be able to conceive of such creatures, and Chalmers wants to infer from this that materialism is false. The argument is often caricatured as follows:

1) Zombies are conceivable;

2) If As are conceivable, then As are (metaphysically) possible;29 therefore,

3) Zombies are possible; but,

4) Materialism entails that zombies are not possible; therefore,

5) Materialism is false.

Or, more explicitly formulated:

1) For any physical property P, it is conceivable that P is instantiated but consciousness is not;

2) For any pair of properties F and G, if it is conceivable that F is instantiated when G is not, then it is (metaphysically) possible that F is instantiated when G is not; therefore,

3) For any physical property P, it is possible that P is instantiated and consciousness is not; but,

4) If a property F can be instantiated when property G is not, then G does not supervene on F;30 therefore,

5) For any physical property P, consciousness does not supervene on P.
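The chain of inferences can be displayed symbolically (a sketch in my own notation, not the chapter’s: C for conceivability, $\Diamond$ for metaphysical possibility, Q for the property of consciousness, and Sup(F, G) for “F supervenes on G”). Note that reaching (5) requires reading premise (4) as denying that the absent property supervenes on the instantiated one:

```latex
\[
\begin{aligned}
&(1)\quad \forall P\,\bigl[\mathrm{Phys}(P) \rightarrow C(P \wedge \neg Q)\bigr]
  &&\text{zombies are conceivable}\\
&(2)\quad C(F \wedge \neg G) \rightarrow \Diamond(F \wedge \neg G)
  &&\text{conceivability entails possibility}\\
&(3)\quad \forall P\,\bigl[\mathrm{Phys}(P) \rightarrow \Diamond(P \wedge \neg Q)\bigr]
  &&\text{from (1), (2)}\\
&(4)\quad \Diamond(F \wedge \neg G) \rightarrow \neg\,\mathrm{Sup}(G, F)
  &&\text{possibility defeats supervenience}\\
&(5)\quad \forall P\,\bigl[\mathrm{Phys}(P) \rightarrow \neg\,\mathrm{Sup}(Q, P)\bigr]
  &&\text{from (3), (4)}
\end{aligned}
\]
```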

To this argument it is objected that the second premise is false: the conceivability of something does not entail its possibility. Thus, we can conceive of water not being H2O, but this is in fact impossible; Escher triangles are conceivable, but not possible.31

The zombie argument is more subtle than this, however. One way to get at the real argument is this.32 Let us distinguish between the property of being water and the property of appearing to be water, or being apparent water.33 For a certain quantity of stuff to be water, it must be H2O. But for it to appear to be water, it need only be clear, drinkable, liquid, and so on – or perhaps only strike normal subjects as clear, drinkable, liquid, etc. Now, although the unrestricted principle that conceivability entails possibility is implausible, a version of the principle restricted to what we may call appearance properties is quite plausible. Thus, if we can conceive of apparent water not being H2O, then it is indeed possible that apparent water should not be H2O.

Once the restricted principle is accepted, there are two ways a dualist may proceed. The zombie argument seems to be captured more accurately as follows:34

1) For any physical property P, it is conceivable that P is instantiated but apparent consciousness is not;

2) For any pair of properties F and G, such that F or G is an appearance property, if it is conceivable that F is instantiated when G is not, then it is (metaphysically) possible that F is instantiated when G is not; therefore,

3) For any physical property P, it is possible that P is instantiated when apparent consciousness is not; but,

4) If a property F can be instantiated when property G is not, then G does not (metaphysically) supervene on F; therefore,

5) For any physical property P, apparent consciousness does not (metaphysically) supervene on P.

A materialist might want to reject this argument by denying Premise 2 (the restricted conceivability-possibility principle). Whether the restricted principle is true is something we cannot settle here. Note, however, that it is surely much more plausible than the corresponding unrestricted principle, and it is the only principle that the argument for dualism really needs.

Another way the argument could be rejected is by denying the existence of such properties as apparent water and apparent consciousness.35 More generally, perhaps, while “natural” properties such as being water or being conscious do exist, “unnatural” properties do not, and appearance properties are unnatural in the relevant sense.36

To avoid this latter objection, a dualist may proceed to develop the argument differently, claiming that in the case of consciousness, there is no distinction between appearance and reality (Kripke, 1980). This would amount to the claim that the property of being conscious is identical to the property of appearing to be conscious. The conceivability argument then goes like this:

1) For any physical property P, it is conceivable that P is instantiated but apparent consciousness is not;

2) For any pair of properties F and G, such that F or G is an appearance property, if it is conceivable that F is instantiated when G is not, then it is (metaphysically) possible that F is instantiated when G is not; therefore,

3) For any physical property P, it is possible that P is instantiated when apparent consciousness is not; but,

4) If property F can be instantiated when property G is not, then G does not supervene on F; therefore,

5) For any physical property P, apparent consciousness does not supervene on P; but,

6) Consciousness = apparent consciousness; therefore,

7) For any physical property P, consciousness does not supervene on P.

Materialists may reject this argument by denying that there is no distinction between appearance and reality when it comes to consciousness (the sixth premise).

The debate over the plausibility of the various versions of the zombie argument continues. A full critical examination is impossible here. Let us move on, then, to consideration of the independent case against dualism.

The Case against Dualism

The main motivation to avoid dualism continues to be the one succinctly worded by Smart (1959, p. 143) almost a half-century ago: “It seems to me that science is increasingly giving us a viewpoint whereby organisms are able to be seen as physico-chemical mechanisms: it seems that even the behavior of man himself will one day be explicable in mechanistic terms.” It would be curious if consciousness stood out in nature as the only property that defied reductive explanation in microphysical terms. More principled arguments aside, this simple observation seems to be the chief motivating force behind naturalization projects that attempt to reductively explain consciousness and other recalcitrant phenomena.

As I noted above, against traditional dualists it was common to present the more methodological argument that they do not in fact propose any positive theory of consciousness, but instead rest content with arguing against existing materialist theories, and that this could not lead to real progress in the understanding of consciousness. Yet this charge cannot be made against Chalmers, who does propose a positive theory of consciousness.

Chalmers’ own theory is open to more substantial criticisms, however. In particular, it is arguably committed to epiphenomenalism about consciousness, the thesis that conscious states and events are causally inert. As Kim (1989a, b, 1992) has pointed out, it is difficult to find causal work for non-supervenient properties. Assuming that the physical realm is causally closed (i.e., that every instantiation of a physical property has as its cause the instantiation of another physical property), non-supervenient properties must either (i) have no causal effect on the physical realm or (ii) causally overdetermine the instantiation of certain physical properties.37 But because pervasive overdetermination can be ruled out as implausible, non-supervenient properties must be causally inert vis-à-vis the physical world. However, the notion that consciousness is causally inert, or epiphenomenal, is extremely counter-intuitive: we seem to ourselves to act on our conscious decisions all the time and at will.

In response to the threat of epiphenomenalism, Chalmers pursues a two-pronged approach.38 The first prong is to claim that epiphenomenalism is merely counter-intuitive, but does not face serious argumentative challenges. This is not particularly satisfying, however: all arguments must come to an end, and in most of philosophy, the end is bound to be a certain intuition or intuitively compelling claim. As intuitions go, the intuition that consciousness is not epiphenomenal is very strong.

The second prong is more interesting. Chalmers notes that physics characterizes the properties to which it adverts in purely relational terms – essentially, in terms of the laws of nature into which they enter. The resulting picture is a network of interrelated nodes, but the intrinsic character of the thus interrelated nodes remains opaque. It is a picture that gives us what Bertrand Russell once wittily called “the causal skeleton of the world.” Chalmers’ suggestion is that phenomenal properties may constitute the intrinsic properties of the entities whose relational properties are mapped out by physics. At least this is the case with intrinsic properties of obviously conscious entities. As for apparently inanimate entities, their intrinsic properties may be crucially similar to the phenomenal properties of conscious entities. They may be, as Chalmers puts it, “protophenomenal” properties.

Although intriguing, this suggestion has its problems. It is not clear that physics indeed gives us only the causal skeleton of the world. It is true that physics characterizes mass in terms of its causal relations to other properties. But it does not follow that the property thus characterized is nothing but a bundle of causal relations. More likely, the relational characterization of mass is what fixes the reference of the term “mass,” but the referent itself is nonetheless an intrinsic property. The bundle of causal relations is the reference-fixer, not the referent. On this view of things, although physics characterizes mass in causal terms, it construes mass not as the causing of effects E, but rather as the causer (or just the cause) of E. It construes mass as the relatum, not the relation.

Furthermore, if physics did present us with the causal skeleton of the world, then physical properties would turn out to be epiphenomenal (or nearly so). As Block (1990b) argued, functional properties – properties of having certain causes and effects – are ultimately inert, because an effect is always caused by its cause, not by its causing. So if mass were the causing of E, rather than the cause of E, then E would not be caused by mass. It would be caused, rather, by the protophenomenal property that satisfies the relational characterization attached to mass in physics.39 The upshot is that if mass were the causing of E, rather than the cause of E, mass would not have the causal powers we normally take it to have. More generally, if physical properties were nothing but bundles of causal relations, they would be themselves causally inert.40

Chalmers faces a dilemma, then: either he violates our strongly held intuitions regarding the causal efficacy of phenomenal properties, or he violates our strongly held intuitions regarding the causal efficacy of physical properties. Either way, half his world is epiphenomenal, as it were. In any event, as we saw above, the claim that physical properties are merely bundles of causal relations – which therefore call for the postulation of phenomenal and protophenomenal properties as the putative causal relata – is implausible.

Problems concerning the causal efficacy of phenomenal properties will attach to any account that portrays them as non-supervenient upon, or even as non-reducible to, physical properties. These problems are less likely to rear their heads for reductive accounts of consciousness. Let us turn, then, to an examination of the main reductive accounts discussed in the current literature.

Representationalism

According to the representational theory of consciousness – or for short, representationalism – the phenomenal properties of conscious experiences can be reductively explained in terms of the experiences’ representational properties.41 Thus, when I look up at the blue sky, what it is like for me to have my conscious experience of the sky is just a matter of my experience’s representation of the blue sky. The phenomenal character of my experience can be identified with (an aspect of) its representational content.42

This would be a theoretically happy result, since we have a fairly good notion as to how mental representation may itself be reductively explained in terms of informational and/or teleological relations between neurophysiological states of the brain and physical states of the environment.43 The reductive strategy here is two-stepped, then: first reduce phenomenal properties to representational properties, then reduce representational properties to informational and/or other physical properties of the brain.

Tye’s PANIC Theory

Not every mental representation is conscious. For this reason, a representational account of consciousness must pin down more specifically the kind of representation that would make a mental state conscious. The most worked-out story in this genre is probably Michael Tye’s (1992, 1995, 2000, 2002) “PANIC Theory.”44

The acronym “PANIC” stands for Poised, Abstract, Non-conceptual, Intentional Content. So for Tye, a mental representation qualifies as conscious when, and only when, its representational content is (a) intentional, (b) non-conceptual, (c) abstract, and (d) poised. What all these qualifiers mean is not particularly important, though the properties of non-conceptuality and poise are worth pausing to explicate.45

The content of a conscious experience is non-conceptual in that the experience can represent properties for which the subject lacks the concept. My conscious experience of the sky represents the sky not simply as being blue, but as being a very specific shade of blue, say blue17. And yet if I am presented a day later with two samples of very similar shades of blue, blue17 and blue18, I will be unable to recognize which shade of blue was the sky’s. This suggests that I lack the concept of blue17. If so, my experience’s representation of blue17 is non-conceptual.46


The property of poise is basically a functional role property: a content is poised when it is ready and available to make a direct impact on the formation of beliefs and desires. Importantly, Tye takes this to distinguish conscious representation from, say, blindsighted representations. A square can be represented both consciously and blindsightedly. But only the conscious representation is poised to make a direct impact on the beliefs that the subject subsequently forms.

PANIC theory is supposed to cover not only conscious perceptual experiences but also all manner of phenomenal experience: somatic, emotional, and so on. Thus, a toothache experience represents tissue damage in the relevant tooth, and does so intentionally, non-conceptually, abstractly, and with poise.47

The Master Argument for Representationalism

The main motivation for representationalism may seem purely theoretical: it holds the promise of a reductive explanation of consciousness in well-understood informational and/or teleological terms. Perhaps because of this, however, the argument that has been most influential in making representationalism popular is a non-theoretical argument, one that basically rests on a phenomenological observation. This is the observation of the so-called transparency of experience. It has been articulated in a particularly influential manner by Harman (1990), but goes back at least to Moore (1903).

Suppose you have a conscious experience of the blue sky. Your attention is focused on the sky. You then decide to turn your attention away from the sky and onto your experience of the sky. Now your attention is no longer focused on the sky, but rather on the experience thereof. What are you aware of? It seems that you are still aware of the blueness of the sky. Certainly you are not aware of some second blueness, which attaches to your experience rather than to the sky. You are not aware of any intermediary blue quality interposed between yourself and the sky.

It appears, then, that when you pay attention to your experience, the only thing you become aware of is which features of the external sky your experience represents. In other words, the only introspectively accessible properties of conscious experience are its representational properties.

The transparency of experience provides a straightforward argument for representationalism. The argument may be laid out as follows:

1) The only introspectively accessible properties of conscious experience are its representational properties;

2) The phenomenal character of conscious experience is given by its introspectively accessible properties; therefore,

3) The phenomenal character of conscious experience is given by its representational properties.

The first premise is the thesis of transparency; the second one is intended as a conceptual truth (about what we mean by “phenomenal”). The conclusion is representationalism.

Another version of the argument from transparency, one that Tye employs, centers on the idea that rejecting representationalism in the face of transparency would require one to commit to an “error theory.”48 This version may be formulated as follows:

1) The phenomenal properties of conscious experience seem to be representational properties;

2) It is unlikely that the phenomenal properties of conscious experience are radically different from what they seem to be; therefore,

3) It is likely that the phenomenal properties of conscious experience are representational properties.

Here the transparency thesis is again the first premise. The second premise is the claim that convicting experience of massive error is to be avoided. And the conclusion is representationalism.


The Case against Representationalism

Most of the arguments that have been marshaled against representationalism are arguments by counter-example. Scenarios of varying degrees of fancifulness are adduced, in which allegedly (i) a conscious experience has no representational properties, or (ii) two possible experiences with different phenomenal properties have the same representational properties, or (iii) inversely, two possible experiences with the same phenomenal properties have different representational properties. For want of space, I present only one representative scenario from each category.

Block (1996) argues that phosphene experiences are non-representational. These can be obtained by rubbing one’s eyes long enough so that when one opens them again, one “sees” various light bits floating about. Such experiences do not represent any external objects or features, according to Block.

In response, Tye (2000) claims that such experiences do represent – it is just that they misrepresent. They misrepresent there to be small objects with phosphorescent surfaces floating around the subject’s head.

A long-debated case in which phenomenal difference is accompanied by representational sameness is due to Peacocke (1983). Suppose you stand in the middle of a mostly empty road. All you can see in front of you are two trees. The two trees, A and B, have the same size and shape, but A is twice as far from you as B. Peacocke claims that, being aware that the two trees are the same size, you represent to yourself that they have the same properties. And yet B “takes up more of your visual field” than A, in a way that makes you experience the two trees differently. There is phenomenal difference without representational difference.

Various responses to this argument have been offered by representationalists. Perhaps the most popular is that although you represent the two trees to have the same size properties, you also represent them to have certain different properties – for example, B is represented to subtend a larger visual angle than A (DeBellis, 1991; Harman, 1990; Tye, 2000). To be sure, you do not necessarily possess the concept of subtending a visual angle. But recall that the content of experience can be construed as non-conceptual. So your experience can represent the two trees to subtend different visual angles without employing the concept of subtending a visual angle. Thus a representational difference is matched to the phenomenal difference.
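The appeal to subtended visual angle can be made quantitative with a standard bit of geometry (my addition, not from the chapter): an object of linear size $s$ viewed face-on at distance $d$ subtends

```latex
\[
  \theta \;=\; 2\arctan\!\left(\frac{s}{2d}\right),
  \qquad\text{so}\qquad
  \theta_A \;=\; 2\arctan\!\left(\frac{s}{4d}\right) \;\approx\; \tfrac{1}{2}\,\theta_B
  \quad\text{for small angles,}
\]
```

since tree A stands at twice tree B’s distance. The representational difference the representationalist appeals to is thus a perfectly objective, viewpoint-relative property of the scene.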

Perhaps the most prominent alleged counter-example is Block’s (1990a) Inverted Earth case. Inverted Earth is an imaginary planet just like Earth, except that every object there has the color complementary to the one it has here. We are to imagine that a subject is fitted with color-inverting lenses and shipped to Inverted Earth unbeknownst to her. The color inversions due to the lenses and to the world cancel each other out, so that her phenomenal experiences remain the same. But externalism about representational contents ensures that the representational content of her experiences eventually changes.49 Her bluish experiences now represent a yellow sky. When her sky experiences on Inverted Earth are compared to her earthly sky experiences, it appears that the two groups are phenomenally the same but representationally different.

This case is still being debated in the literature, but there are two representationalist strategies for accommodating it. One is to argue that the phenomenal character also changes over time on Inverted Earth (Harman, 1990); the other is to devise accounts of representational content that make the representational content of the subject’s experiences remain the same on Inverted Earth, externalism notwithstanding (Tye, 2000).50

There may be, however, a more principled difficulty for representationalism than the myriad counter-examples it faces.51 Representationalism seems to construe the phenomenal character of conscious experiences purely in terms of the sensuous qualities they involve. But arguably there is more to phenomenal character than sensuous quality. In particular, there seems to be a certain mineness, or for-me-ness, to them.


One way to put it is as follows (Kriegel, 2005a; Levine, 2001; Smith, 1986). When I have my conscious experience of the blue sky, there is a bluish way it is like for me to have my experience. A distinction can be drawn between two components of this “bluish way it is like for me”: the bluish component, which we may call qualitative character, and the for-me component, which we may call subjective character. We may construe phenomenal character as the compresence of qualitative and subjective character. This subjective character, or for-me-ness, is certainly an elusive phenomenon, but it is present in every conscious experience. Indeed, its presence seems to be a condition of any phenomenality: it is hard to make sense of the idea of a conscious experience that does not have this for-me-ness to it. If it did not have this for-me-ness, it would be a mere subpersonal state, a state that takes place in me but is not for me in the relevant sense. Such a subpersonal state seems not to qualify as a conscious experience.

The centrality of subjective character (as construed here) to consciousness is something that has been belabored in the phenomenological tradition (see Chapter 4; Zahavi, 1999). The concept of prereflective self-consciousness – or a form of self-awareness that does not require focused and explicit awareness of oneself and one’s current experience, but is rather built into that very experience – is one that figures centrally in almost all phenomenological accounts of consciousness.52 But it has been somewhat neglected in analytic philosophy of mind.53

The relative popularity of representationalism attests to this neglect. While a representationalist account of sensuous qualities – what we have called qualitative character – may turn out to win the day (if the alleged counter-examples can be overcome), it would not provide us with any perspective on subjective character.54 Therefore, even if representationalism turns out to be a satisfactory account of qualitative character, it is unlikely to be a satisfactory account of phenomenal consciousness proper.

Higher-Order Monitoring Theory

One theory of consciousness from analytic philosophy that can be interpreted as targeting subjective character is the higher-order monitoring theory (HOMT). According to HOMT, what makes a mental state conscious is the fact that the subject is aware of it in the right way. It is only when the subject is aware (in that way) of a mental state that the state becomes conscious.55

HOMT tends to anchor consciousness in the operation of a monitoring device. This device monitors and scans internal states and events and produces higher-order representations of some of them.56 When a mental state is represented by such a higher-order representation, it is conscious. So a mental state M of a subject S is conscious when, and only when, S has another mental state, M∗, such that M∗ is an appropriate representation of M. The fact that M∗ represents M guarantees that there is something it is like for S to have M.57
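HOMT’s core condition is thus a biconditional analysis, which can be put schematically (my formulation, not a quotation from any higher-order theorist):

```latex
\[
  \mathrm{Conscious}(M) \;\leftrightarrow\;
  \exists M^{*}\,\bigl[\,\mathrm{Has}(S, M^{*}) \,\wedge\, \mathrm{AptRep}(M^{*}\!,\, M)\,\bigr]
\]
```

where M is a mental state of subject S and AptRep(M∗, M) says that M∗ represents M in the appropriate way. Different versions of the theory are then different ways of cashing out the AptRep condition.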

Observe that, on this view, what confers conscious status on M is something outside M, namely, M∗. This is HOMT’s reductive strategy. Neither M nor M∗ is conscious in and of itself, independently of the other state. It is their coming together in the right way that yields consciousness.58

Versions of the HOMT differ mainly in how they construe the monitoring device and/or the representations it produces. The most seriously worked-out version is probably David Rosenthal’s (1986, 1990, 2002a, b). Let us take a closer look at his “higher-order thought” theory.

Rosenthal’s Higher-Order Thought Theory

According to Rosenthal, a mental state is conscious when its subject has a suitable higher-order thought about it.59 The higher-order state’s being a thought is supposed to rule out, primarily, its being a quasi-perceptual state.

There is a long tradition, hailing from Locke, of construing the monitoring device as analogous in essential respects to a sense organ (hence as being a sort of “inner sense”) and accordingly as producing mental states that are crucially similar to perceptual representations and that may to that extent be called “quasi-perceptual.” This sort of “higher-order perception theory” is championed today by Armstrong (1968, 1981) and Lycan (1987, 1996). Rosenthal believes that this is a mistake and that the higher-order states that confer consciousness are not analogous to perceptual representations.60 Rather, they are intellectual, or cognitive, states – that is, thoughts.

Another characteristic of thoughts – in addition to being non-perceptual – is their being assertoric. An assertoric state is one that has a thetic, or mind-to-world, direction of fit.61 This is to be contrasted with states (such as wanting, hoping, disapproving, etc.) that have primarily a telic, or world-to-mind, direction of fit.62 A third characteristic of thoughts – at least the kind suitable for conferring consciousness – is that they are occurrent mental states.63

Crucially, a suitable higher-order thought would also have to be non-inferential, in that it could not be the result of a conscious inference from the lower-order state (or from any other state, for that matter).64 To be sure, the thought is formed through some process of information processing, but that process must be automatic and unconscious. This is intended to reflect the immediacy, or at least felt immediacy, of our awareness of our conscious states.65 The fact that my experience of the sky has for-me-ness entails that I am somehow aware of its occurrence; but not any sort of awareness would do – very mediated forms of awareness cannot confer conscious status on their objects.

One last characteristic Rosenthal ascribes to the "suitable" higher-order representation is that it represents the lower-order state as a state of oneself. Its content must be, as this is sometimes put, de se content.66 So the content of the higher-order representation of my conscious experience of the sky is not simply something like "this bluish experience is occurring," but rather something like "I myself am having this bluish experience."67

It is worth noting that according to Rosenthal the second-order representation is normally an unconscious state. To be sure, it need not necessarily be: in the more introspective, or reflective, episodes of our conscious life, the second-order state becomes itself conscious. It is then accompanied by a third-order state, one that represents its occurrence in a suitable way. When I explicitly introspect and dwell on my conscious experience of the sky, there are three separate states I am in: the (first-order) experience, a (second-order) awareness of the experience, and a (third-order) representation of that awareness. When I stop introspecting and turn my attention back to the sky, however, the third-order state evaporates, and consequently the second-order state becomes unconscious again. In any event, at any one time the subject's highest-order state, the one that confers consciousness on the chain of lower-order states "below" it, is unconscious.68

In summary, Rosenthal's central thesis is that a mental state is conscious just in case the subject has a non-perceptual, non-inferential, assertoric, de se, occurrent representation of it. This account of consciousness is not intended as an account of introspective or reflective consciousness, but of regular, everyday consciousness.

The Master Argument for Higher-Order Monitoring Theory

The master argument for the higher-order monitoring approach to consciousness has been succinctly stated by Lycan (2001):

1) A mental state M of subject S is conscious when, and only when, S is aware of M in the appropriate way;

2) Awareness of X requires mental representation of X; therefore,

3) M is conscious when, and only when, S has a mental state M∗, such that M∗ represents M in the appropriate way.

Although the second premise is by no means trivial, it is the first premise that has been the bone of contention in the philosophical literature (see, e.g., Dretske, 1993).
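The deductive core of Lycan's argument can be made fully explicit. The following is a minimal sketch in Lean 4; the predicate names (`conscious`, `aware`, `represents`) are illustrative labels of mine, not the author's notation, and premise 2 is strengthened to a biconditional, since the "when" direction of the conclusion requires that appropriate representation suffice for awareness:

```lean
-- A sketch of Lycan's master argument for HOMT.
variable {State : Type}
variable (conscious aware : State → Prop)
variable (represents : State → State → Prop)

-- Premise 1: M is conscious iff S is (appropriately) aware of M.
-- Premise 2, read as a biconditional: (appropriate) awareness of M
-- just is (appropriate) representation of M by some state M∗. As
-- literally stated ("awareness requires representation"), premise 2
-- gives only the left-to-right direction.
theorem homt_master
    (p1 : ∀ m, conscious m ↔ aware m)
    (p2 : ∀ m, aware m ↔ ∃ mStar, represents mStar m) :
    ∀ m, conscious m ↔ ∃ mStar, represents mStar m :=
  fun m => (p1 m).trans (p2 m)
```

So formalized, the argument is a simple chaining of two biconditionals; all the philosophical weight falls on the premises themselves, which is why the debate centers on premise 1.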


One can defend the claim that conscious states are states we are aware of having simply as a piece of conceptual analysis – as a platitude reflecting the very meaning of the word "conscious" (Lycan, 1996). To my ear, this sounds right: a mental state of which the subject is completely unaware is a subpersonal, and therefore unconscious, state.

To some, however, this seems plainly false. When I have an experience of the sky, I am attending to the sky, they stress, not to myself and my internal goings-on. By consequence, I am aware of the sky, not of my experience of the sky. I am aware through my experience, not of my experience.

This objection seems to rely, however, on an unwarranted assimilation of awareness and attention. There is a distinction to be made between attentive awareness and inattentive awareness. If S attends to X and not to Y, it follows that S is not attentively aware of Y, but it does not follow that S is completely unaware of Y. For S may still be inattentively aware of Y.

Consider straightforward visual awareness. The distinction between foveal vision and peripheral vision means that our visual awareness at any one time has a periphery as well as a focal center. Right now, I am (visually) focally aware of my laptop, but also (visually) peripherally aware of an ashtray at the far corner of my desk. A similar distinction applies to perceptual awareness in other modalities: I am now (auditorily) focally aware of Duke Ellington's voice and (auditorily) peripherally aware of the air conditioner's hum overhead.

There is no reason to think that a similar distinction would not apply to higher-order awareness. In reflective moods I may be focally aware of my concurrent experiences and feelings, but on other occasions I am just peripherally aware of them. The former is an attentive form of second-order awareness, the latter an inattentive one. Again, from the fact that it is inattentive it would be fallacious to infer that it is no awareness at all.

When it is claimed that conscious states are states we are aware of, the claim is not that we are focally aware of every conscious state we are in. That is manifestly false: the focus of our attention is mostly on the outside world. The claim is rather that we are at least peripherally aware of every conscious state we are in.69 As long as M is conscious, S is aware, however dimly and inattentively, of M. Once S's awareness of M is extinguished altogether, M drops into the realm of the unconscious. This seems highly plausible on both conceptual and phenomenological grounds.70

The Case against Higher-Order Monitoring Theory

Several problems for the monitoring theory have been continuously debated in the philosophical literature. I focus here on what I take to be the main three.71

The first is the problem of animal and infant consciousness. It is intuitively plausible to suppose that cats, dogs, and human neonates are conscious, that is, they have conscious states; but it appears empirically implausible that they should have second-order representations (Lurz, 1999). The problem is particularly acute for Rosenthal's account, since it is unlikely that these creatures can have thoughts, and moreover of the complex form, "I myself am enjoying this milk."

There are two ways to respond to this objection. One is to deny that having such higher-order representations requires a level of sophistication of an order unlikely to be found in (say) cats. Thus, Rosenthal (2002b) claims that whereas adult human higher-order thoughts tend to be conceptually structured and employ a rich concept of self, these are not necessary features of such thoughts. There could be higher-order thoughts that are conceptually simple and employ a rudimentary concept of self, one that consists merely in the ability to distinguish oneself from anything that is not oneself. It may well turn out that worms, woodpeckers, or even day-old humans lack even this level of conceptual sophistication – in which case we would be required to deny them consciousness – but it is unlikely that cats, dogs, and year-old humans lack it.

The second possible line of response is to dismiss the intuition that animals, such as cats, dogs, and even monkeys, do in fact have conscious states. Thus, Carruthers (1998, 1999) claims that there is a significant amount of projection that takes place when we ascribe conscious states to, say, our pets. In reality there is very little evidence to suggest that they have not only perceptual and cognitive states but also conscious ones.

Both lines of response offer some hope to the defender of higher-order monitoring, but also implicate the theory in certain counter-intuitive and prima facie implausible claims. Whether these could somehow be neutralized, or accepted as outweighed by the theoretical benefits of HOMT, is something that is very much under debate.

Perhaps more disturbing is the problem of so-called targetless higher-order thoughts (or more generally, representations). When someone falsely believes that the almond tree in the backyard is blooming again, there are two ways he or she may get things wrong: (i) it may be that the backyard almond tree is not blooming, or (ii) it may be that there is no almond tree in the backyard (blooming or not). Let us call a false belief of type (ii) a targetless thought. HOMT gets into trouble when a subject has a targetless higher-order thought (Byrne, 1997).72 Suppose at a time t subject S thinks (in the suitable way) that she has a throbbing toothache, when in reality she has no toothache at all (throbbing or not). According to HOMT, what it is like for S at t is the way it is like to have a throbbing toothache, even though S has no toothache at t. In other words, if S has an M∗ that represents M when in reality there is no M,73 S will be under the impression that she is in a conscious state when in reality she is not. (She is not in a conscious state because M does not exist, and it is M that is supposed to bear the property of being conscious.) Moreover, on the assumption that a person is conscious at a time t only if she has at least one conscious state at t,74 this would entail that when a subject harbors a targetless higher-order misrepresentation, she is not conscious, even though it feels to her as though she is. This is a highly counter-intuitive consequence: we want to say that a person cannot be under the impression that she is conscious when she is not.

There are several ways higher-order monitoring theorists may respond to this objection. Let us briefly consider three possible responses.

First, they may claim that when M∗ is targetless, the property of being conscious, although not instantiated by M, is instantiated by M∗. But as we saw above, according to their view, M∗ is normally unconscious. So to say that M∗ instantiates the property of being conscious would be to say that it is, in the normal case, both conscious and not conscious – which is incoherent.75

Second, they may claim that the property of being conscious is, in reality, not a property of the discrete state M, but rather attaches itself to the compound of M and M∗.76 But this will not work either, because HOMT would then face the following dilemma. Either the compound state M + M∗ is a state we are aware of having, or it is not. If it is not, then HOMT is false, since it claims that conscious states are states we are aware of having. If it is, then according to the theory it must be represented by a third-order mental state, M∗∗, in which case the same problem would recur when M∗∗ is targetless.

Third, they may claim that there are no targetless higher-order representations. But even if this can be shown to be the actual case (and it is hard to imagine how this would be done), we can surely conceive of counterfactual situations in which targetless higher-order representations do occur.77

A third problem for the HOMT is its treatment of the epistemology of consciousness (Goldman, 1993b; Kriegel, 2003b). Our knowledge that we are in a conscious state is first-person knowledge, knowledge that is not based on inference from experimental, or theoretical, or third-personal evidence. But if HOMT were correct, what would make our conscious states conscious is (normally) the occurrence of some unconscious state (i.e., the higher-order representation), so in order to know that we are in a conscious state we would need to know of the occurrence of that unconscious state. But knowledge of unconscious states is necessarily theoretical and third-personal, since we have no direct acquaintance with our unconscious states.

Another way to put the argument is this. How does the defender of HOMT know that conscious states are states of which we are aware? It does not seem to be something she knows on the basis of experimentation and theorization. Rather, it seems to be intuitively compelling, something that she knows on the basis of first-person acquaintance with her conscious states. But if HOMT were correct, it would seem that that knowledge would have to be purely theoretical and third-personal. So construed, this "epistemic argument" against HOMT may be formulated as follows:

1) If HOMT were correct, our awareness of our conscious states would normally be an unconscious state; and,

2) We do not have non-theoretical, first-person knowledge of our unconscious states; therefore,

3) If HOMT were correct, we would not have non-theoretical, first-person knowledge of the fact that we are aware of our conscious states; but,

4) We do have non-theoretical, first-person knowledge of the fact that we are aware of our conscious states; therefore,

5) HOMT is incorrect.
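The validity of this argument can be exhibited in a simple formalization. The following Lean 4 sketch treats HOMT as an unanalyzed proposition and uses illustrative predicate names of my own ("is normally an unconscious state," "is something of which we have non-theoretical, first-person knowledge"); it simplifies by folding the intermediate step (3) into the derivation:

```lean
-- A sketch of the epistemic argument against HOMT.
variable {Item : Type}
variable (awarenessOfConsciousStates : Item)
variable (unconscious firstPersonKnown : Item → Prop)
variable (HOMT : Prop)

theorem epistemic_argument
    -- Premise 1: if HOMT is correct, our awareness of our conscious
    -- states is normally an unconscious state.
    (p1 : HOMT → unconscious awarenessOfConsciousStates)
    -- Premise 2: we have no non-theoretical, first-person knowledge
    -- of our unconscious states.
    (p2 : ∀ x, unconscious x → ¬ firstPersonKnown x)
    -- Premise 4: we do have such knowledge of that awareness.
    (p4 : firstPersonKnown awarenessOfConsciousStates) :
    -- Conclusion (5): HOMT is incorrect.
    ¬ HOMT :=
  fun h => p2 awarenessOfConsciousStates (p1 h) p4
```

Since the argument is valid so formalized, a defender of HOMT must reject one of the premises rather than the inference.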

The upshot of the argument is that the awareness of our conscious states must in the normal case be itself a conscious state. This is something that HOMT cannot allow, however, since within its framework it would lead to infinite regress. The problem is to reconcile the claim that conscious states are states we are aware of having with the notion that we have non-theoretical knowledge of this fact.

The Self-Representational Theory of Consciousness

One approach to consciousness that has a venerable tradition behind it, but has only very recently regained a modest degree of popularity, is what we may call the "self-representational theory." According to this view, mental states are conscious when, and only when, they represent their own occurrence (in the right way). Thus, my conscious experience of the blue sky represents both the sky and itself – and it is in virtue of representing itself that it is a conscious experience.

Historically, the most thorough development and elucidation of the self-representational theory is Brentano's (1874). Through his work, the view has had a significant influence in the phenomenological tradition. But apart from a couple of exceptions – Lehrer (1996, 1997) and Smith (1986, 1989) come to mind – the view had enjoyed virtually no traction in Anglo-American philosophy. Recently, however, versions of the view, and close variations on it, have been defended by a number of philosophers.78

Rather than focus on any one particular account of consciousness along these lines, I now survey the central contributions to the understanding of consciousness in terms of self-representation.

Varieties of Self-Representational Theory

Brentano held that every conscious state is intentionally directed at two things. It is primarily directed at whatever object it is about, and it is secondarily directed at itself. My bluish sky experience is directed primarily at the sky and secondarily at itself. In more modern terminology, a conscious state has two representational contents: an other-directed (primary) content and a self-directed (secondary) content. Thus, if S consciously fears that p, S's fear has two contents: the primary content is p, the secondary content is itself, the fear that p. The distinction between primary intentionality and secondary intentionality is presumably intended to capture the difference (discussed above) between attentive or focal awareness and inattentive or peripheral awareness.79

Caston (2002) offers an interesting gloss on this idea in terms of the type/token distinction. For Caston, S's conscious fear that p is a single token state that falls under two separate state types: the fear-that-p type and the awareness-of-fear-that-p type. The state has two contents, arguably, precisely in virtue of falling under two types.

Brook and Raymont (2006) stress that the self-representational content of the conscious state is not simply that the state occurs, but rather that it occurs within oneself – that it is one's own state. Just as Rosenthal construed the content of higher-order states as "I myself am having that state," so Brook and Raymont suggest that the full self-representational content of conscious states is something like "I myself am herewith having this very state."80

For Brentano and his followers, the self-directed element in conscious states is an aspect of their intentionality, or content. In David Woodruff Smith's (1986, 2004) "modal account," by contrast, the self-directed element is construed not as an aspect of the representational content, but rather as an aspect of the representational attitude (or mode). When S consciously fears that p, it is not in virtue of figuring in its own secondary content that the fear is conscious. Indeed, S's fear does not have a secondary content. Its only content is p. The "reflexive character" of the fear, as Smith puts it, is rather part of the attitude S takes toward p. Just as the attitudes toward p can vary from fear, hope, expectation, and so on, so they can vary between self-directed or "reflexive" fear and un-self-directed or "irreflexive" fear. S's fear that p is conscious, on this view, because S takes the attitude of self-directed fear toward p.81,82

One way in which the self-representational thesis can be relaxed to make a subtler claim is the following. Instead of claiming that a mental state M of a subject S is conscious just in case M represents itself, the thesis could be that M is conscious just in case S has an M∗ that is a representation of M and there is a constitutive, non-contingent relation between M and M∗.83 One constitutive relation is of course identity. So one version of this view would be that M is conscious just in case M is identical with M∗ – this is how Hossack (2002) formulates his thesis – and this seems to amount to the claim that M is conscious just in case it represents itself (constitutes a representation of itself). But the point is that there are other, weaker constitutive relations that fall short of full identity.

One such relation is the part-whole relation. Accordingly, one version of the view, the one defended by Gennaro (1996, 2006), holds that M∗ is a part of M; another version, apparently put forth by Kobes (1995), holds that M is part of M∗; and yet another version, Van Gulick's (2001, 2006), holds that M is conscious when it has two parts, one of which represents the other.

In Van Gulick's "higher-order global states theory," S's fear that p becomes conscious when the fear and S's awareness of the fear are somehow integrated into a single, unified state. This new state supersedes its original components, though, in a way that makes it a genuine unity, rather than a sum of two parts, one of which happens to represent the other. The result is a state that, if it does not represent itself, does something very close to representing itself.84

The Master Argument for the Self-Representational Theory

The basic argument for the self-representational approach to consciousness is that it is the only way to accommodate the notion that conscious states are states we are aware of without falling into the pitfalls of HOMT.

The argument can be organized, then, as a disjunctive syllogism that starts from the master argument for HOMT, but then goes beyond it:

1) A mental state M of subject S is conscious when, and only when, S is aware of M;

2) Awareness of X requires mental representation of X; therefore,

3) M is conscious when, and only when, S has a mental state M∗, such that M∗ represents M.

4) Either M∗ = M or M∗ ≠ M;


5) There are good reasons to think that it is not the case that M∗ ≠ M; therefore,

6) There are good reasons to think that it is the case that M∗ = M; therefore,

7) Plausibly, M is conscious when, and only when, M is self-representing.

The fourth premise could also be formulated as "either M∗ and M do not entertain a constitutive, non-contingent relation, or they do," with appropriate modifications in Premises 5 and 6 to suit. The conclusion of the relevantly modified argument would then be the thesis that M is conscious when, and only when, S has a mental state M∗, such that (i) M∗ represents M and (ii) there is a constitutive, non-contingent relation between M and M∗.
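The step from (3) through (5) to (7) can also be checked formally. The following Lean 4 sketch (predicate names are my own) renders premise 5 in its strongest form, as the claim that any state appropriately representing M is identical to M; the hedged "good reasons" and "plausibly" qualifiers of the original are thereby dropped, so this captures only the deductive skeleton:

```lean
-- A sketch of the disjunctive syllogism for the
-- self-representational theory.
variable {State : Type}
variable (conscious : State → Prop)
variable (represents : State → State → Prop)

theorem selfrep_master
    -- Step 3, inherited from the HOMT master argument.
    (p3 : ∀ m, conscious m ↔ ∃ mStar, represents mStar m)
    -- Premise 5, strengthened: the representing state is M itself.
    (p5 : ∀ m mStar, represents mStar m → mStar = m) :
    -- Step 7: M is conscious iff M represents itself.
    ∀ m, conscious m ↔ represents m m :=
  fun m =>
    ⟨fun hc =>
        let ⟨mStar, hr⟩ := (p3 m).mp hc
        p5 m mStar hr ▸ hr,
      fun hr => (p3 m).mpr ⟨m, hr⟩⟩
```

Note that the disjunction in premise 4 does no independent work here: once premise 5 identifies the representing state with M, the conclusion follows directly from step 3.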

The fallacy in the master argument for HOMT is the supposition that if S is aware of M, then S must be so aware in virtue of being in a mental state that is numerically different from M. This supposition is brought to the fore and rejected in the argument just sketched.

The case for the fifth premise consists in all the reasons to be suspicious of HOMT, as elaborated in the previous section, although it must also be shown that the same problems do not bedevil the self-representational theory as well.

Consider first the epistemic argument. We noted that HOMT fails to account for the non-theoretical, first-person knowledge we have of the fact that we are aware of our conscious states. This is because it construes this awareness as (normally) an unconscious state. The self-representational theory, by contrast, construes this awareness as a conscious state, since it construes the awareness as the same state, or part of the state, of which one is thereby aware. So the self-representational theory, unlike HOMT, can provide for the right epistemology of consciousness.

Consider next the problem of targetless higher-order representations. Recall, the problem ensues from the fact that M∗ could in principle misrepresent not only that M is F when in reality M is not F, but also that M is F when in reality there is no M at all. The same problem does not arise for self-representing states, however: although M could in principle misrepresent itself to be F when in reality it is not F, it could not possibly misrepresent itself to be F when in reality it does not exist at all. For if it did not exist it could not represent anything, itself included. Thus the problem of targetless higher-order representations has no bite against the self-representational theory.

These are already two major problems that gravely affect the plausibility of HOMT, but do not apply to the self-representational theory. They make a strong prima facie case for the fifth premise above. The fourth premise is a logical truism, and the first and second ones were defended above. So the argument appears to go through.

Problems for the Self-Representational Theory

One problem that does persist for the self-representational theory is the problem of animal consciousness. The ability to have self-representing states presumably requires all the conceptual sophistication that the ability to have higher-order monitoring states does (since the self-representational content of a conscious state is the same as the representational content that a higher-order state would have), and perhaps even greater sophistication.85

Another problem is the elucidation and viability of the notion of self-representation. What does it mean for a mental state to represent itself, and what sort of mechanism could subserve the production of self-representing states? There is something at least initially mysterious about the notion of a self-representing state that needs to be confronted.

In fact, one might worry that there are principled reasons why self-representation is incompatible with any known naturalist account of mental representation. These accounts construe mental representation as some sort of natural relation between brain states and world states. Natural relations, as opposed to conceptual or logical ones, are based on causality and causal processes. But causality is an anti-reflexive relation, that is, a relation that nothing can bear to itself. Thus no state can bring about its own occurrence or give rise to itself. The argument can be formulated as follows:

1) Mental representation involves a causal relation between the representation and the represented;

2) The causal relation is anti-reflexive; therefore,

3) No mental state can cause itself; and therefore,

4) No mental state can represent itself.

The basic idea is that there is no naturalist account of mental representation that could allow for self-representing mental representations.
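This anti-reflexivity argument is deductively valid, as a minimal formalization shows (a Lean 4 sketch; the relation names are illustrative labels of mine). Because the inference is sound given its premises, a defender of self-representation must resist premise 1 (that representation requires a causal relation to the represented) or premise 2 (that causation is anti-reflexive):

```lean
-- A sketch of the anti-reflexivity argument against
-- self-representation.
variable {State : Type}
variable (causes represents : State → State → Prop)

theorem no_self_representation
    -- Premise 1: a state's representing something involves being
    -- caused by it.
    (p1 : ∀ m n, represents m n → causes n m)
    -- Premise 2: causation is anti-reflexive; nothing causes itself.
    (p2 : ∀ m, ¬ causes m m) :
    -- Conclusion: no mental state represents itself.
    ∀ m, ¬ represents m m :=
  fun m hm => p2 m (p1 m m hm)
```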

Even more fundamentally, one may worry whether the appeal to self-representation really explains consciousness. Perhaps self-representation is a necessary condition for consciousness, but why think it is also a sufficient condition? A sentence such as "this very sentence contains six words" is self-representing, but surely there is nothing it is like to be that sentence.86

One may respond to this last point that what is required for consciousness is intrinsic or original self-representation, not derivative self-representation.87 Sentences and linguistic expressions do not have any representational content in and of themselves, independently of being interpreted. But plausibly, mental states do.88 The same goes for self-representational content: sentences and linguistic expressions may be derivatively self-representing, but only mental states can be non-derivatively self-representing. A more accurate statement of the self-representation theory is therefore this: A mental state M of a subject S is conscious when, and only when, M is non-derivatively self-representing.

Still, self-representing zombies are readily conceivable. It is quite easy to imagine unconscious mental states in our own cognitive system – say, states formed early on in visual processing – that represent themselves without thereby being conscious.89 Furthermore, it is easy to imagine a creature with no conscious awareness whatsoever who harbors mental states that represent themselves. Thus Chalmers' zombie argument can be run in a particularized version directed specifically against the self-representational theory.90

Conclusion: Directions for Future Research

Much of the philosophical discourse on consciousness is focused on the issue of reducibility. As we just saw, the zombie argument and other dualist arguments can be tailored to target any particular reductive account of consciousness. This debate holds great intrinsic importance, but it is important to see that progress toward a scientific explanation of consciousness can be made without attending to it.

All three reductive approaches to consciousness we considered – the representational, higher-order monitoring, and self-representational theories – can readily be refashioned as accounts not of consciousness itself, but of the emergence base (or causal basis) of consciousness. Instead of claiming that consciousness is (or is reducible to) physical structure P, the claim would be that consciousness emerges from (or is brought about by) P. To make progress toward the scientific explanation of consciousness, we should focus mainly on what the right physical structure is – what P is. Whether P is consciousness itself or only the emergence base of consciousness is something we can set aside for the purposes of scientific explanation. If it turns out that P is consciousness itself (as the reductivist holds), then we will have obtained a reductive explanation of consciousness; if it turns out that P is only the emergence base of consciousness (as the dualist holds), then we will have obtained a causal explanation of consciousness. But both kinds of explanation are bona fide scientific explanations.

In other words, philosophers could usefully reorganize their work on consciousness around a distinction between two separate issues or tasks. The first task is to devise a positive account of the physical (or more broadly, natural) correlate of consciousness, without prejudging whether it will constitute a reduction base or merely an emergence base. Work along these lines will involve modifying and refining the representational, higher-order monitoring, and self-representational theories and/or devising altogether novel positive accounts. The second task is to examine the a priori and a posteriori cases for reducibility. Work here will probably focus on the issue of how much can be read off of conceivability claims, as well as periodic reconsideration of the intuitive plausibility of such claims in light of newer and subtler positive accounts of consciousness.91

Another front along which progress can certainly be made is tightening the connection between the theoretical and experimental perspectives on consciousness. Ultimately, one hopes that experiments could be designed that would test well-defined empirical consequences of philosophical (or more generally, purely theoretical) models of consciousness. This would require philosophers to be willing to put forth certain empirical speculations, as wild as these may seem, based on their theories of consciousness, and experimental scientists to take interest in the intricacies of philosophical theories in an attempt to think up possible ways to test them.

All in all, progress in our understanding of consciousness and the outstanding methodological and substantive challenges it presents has been quite impressive over the past two decades. The central philosophical issues are today framed with a clarity and precision that allow a corresponding level of clarity and precision in our thinking about consciousness. Even more happily, there is no reason to suppose that this progress will come to a halt or slow down in the near future.92

Notes

1. More accurately, I present central aspects of the main account, the case in favor, and the case against. Obviously, space and other limitations do not allow me to present the full story on each of these approaches.

2. The distinction between creature consciousness and state consciousness is due to Rosenthal (1986).

3. Availability consciousness as construed here is very similar to the notion of access consciousness as defined by Block (1995). There are certain differences, however. Block defines access consciousness as the property a mental state has when it is poised for free use by the subject in her reasoning and action control. It may well be that a mental state is availability-conscious if and only if it is access-conscious. For a detailed discussion of the relation between phenomenal consciousness and access consciousness, see Kriegel (2006b).

4. It is debatable whether thoughts, beliefs, desires, and other cognitive states can at all be conscious in this sense. I remain silent on this issue here. For arguments that they can be conscious, see Goldman (1993a), Horgan and Tienson (2002), and Siewert (1998).

5. The terms “easy problems” and “hard problem” are intended as mere labels, not as descriptive. Thus it is not suggested here that understanding any of the functions of consciousness is at all easy in any significant sense. Any scientist who has devoted time to the study of consciousness knows how outstanding the problems in this field are. These terms are just a terminological device designed to bring out the fact that the problem of why there is something it feels like to undergo a conscious experience appears to be of a different order than the problems of mapping out the cognitive functions of consciousness.

6. This is so even if phenomenal consciousness does not turn out to have much of a functional significance in the ordinary cognitive life of a normal subject – as some (Libet, 1985; Velmans, 1992; Wegner, 2002) have indeed argued.

7. In the course of the discussion I avail myself of philosophical terminology that may not be familiar to the non-philosophically trained reader. However, I have tried to recognize all such instances and include an endnote that provides a standard explication of the terminology in question.

8. No major philosopher holds this view, to my knowledge.

philosophical theories of consciousness: contemporary western perspectives 55

9. Many of the key texts discussed in this chapter are conveniently collected in Block et al. (1997). Here, and in the rest of the chapter, I refer to the reprint in that volume.

10. This is what Churchland often discusses under the heading of the “plasticity of mind” (see especially Churchland, 1979).

11. It may not be perceiving those brain states as brain states. But it will nonetheless be a matter of perceiving the brain states.

12. The view – sometimes referred to as emergentism – that consciousness is caused by the brain, or causally emerges from brain activity, is often taken by scientists to be materialist enough. But philosophers, being interested in the ontology rather than the genealogy of consciousness, commonly take it to be a form of dualism. If consciousness cannot be shown to be itself material, but only caused by matter, then consciousness is itself immaterial, as the dualist claims. At the same time, the position implicit in scientists’ work is often that what is caused by physical causes in accordance with already known physical laws should be immediately considered physical. This position, which I have called elsewhere inclusive materialism (Kriegel, 2005b), is not unreasonable. But the present chapter is dedicated to philosophers’ theories of consciousness, so I set it aside.

13. It should be noted that McGinn himself has repeatedly claimed that his position is not dualist. Nonetheless, others have accused him of being committed to dualism (e.g., Brueckner and Berukhim, 2003). There is no doubt that McGinn does not intend to commit to dualism. In a way, his position is precisely that, because of our cognitive closure, we cannot even know whether materialism or dualism is true. Yet it is a fair criticism to suggest that McGinn is committed to dualism despite himself, because his argument for mysterianism would not go through unless dualism were true.

14. More generally, it is curious to hold, as McGinn does, that an organism’s concept-forming procedures are powerful enough to frame a problem without being powerful enough to frame the solution. To be sure, the wrong solution may be framed, but this would suggest not that the conceptual capabilities of the organism are at fault, but rather that the organism made the wrong turn somewhere in its reasoning. The natural thought is that if a conceptual scheme is powerful enough to frame a problem, it should be powerful enough to frame the solution. Whether the correct solution will actually be framed is of course anyone’s guess. But the problem cannot be a constitutive limitation on concept-formation mechanisms. (For a more detailed development of this line of critique, see Kriegel, 2004a.) There is a counterexample to this sort of claim, however. Certain problems that can be framed within the theory of rational numbers cannot be solved within it; the conceptual machinery of irrational numbers must be brought in to solve these problems. It might be claimed, however, that this sort of exception is limited to formal systems and does not apply to theories of the natural world. Whether this claim is plausible is something I do not adjudicate here.

15. Monism divides into two subgroups: materialist monism, according to which the only kind of stuff there is is matter, and idealist monism, according to which the stuff in question is some sort of mindstuff.

16. Idealism is not really considered a live option in current philosophical discussions, although it is defended by Foster (1982). I do not discuss it here.

17. Such coming-apart happens, for Descartes, upon the death of the physical body. We should note that Cartesian substance dualism drew much of its motivation from religious considerations, partly because it provided for the survival of the soul. The main difficulty historically associated with it is whether it can account for the causal interaction between the mind and the body.

18. So property dualism is compatible with substance monism. Unlike Descartes and other old-school dualists, modern dualists for the most part hold that there is only one kind of stuff, or substance, in the world – matter. But matter has two different kinds of properties – material and immaterial.

19. A kind of property F supervenes on a kind of property G with logical necessity – or, for short, logically supervenes on them – just in case two objects differing with respect to their F properties without differing with respect to their G properties would be in contravention of the laws of logic. A kind of property F supervenes on a kind of property G with metaphysical necessity – or, for short, metaphysically supervenes on them – just in case it is impossible for two objects to differ with respect to their F properties without differing with respect to their G properties. Philosophers debate whether there is a difference between the two (logical and metaphysical supervenience). That debate does not concern us here.

20. This stronger claim will require a stronger argument. The claim that phenomenal properties are not identical to physical properties could be established through the now familiar argument from multiple realizability (Putnam, 1967). But multiple realizability does not entail failure of supervenience. To obtain the latter, Chalmers will have to appeal to a different argument, as we will see in the next subsection.

21. As a consequence, phenomenal properties do supervene on physical properties with nomological necessity, even though they do not supervene with metaphysical or logical necessity. A kind of property F supervenes on a kind of property G with nomological (or natural) necessity – or, for short, nomologically supervenes on them – just in case two objects differing with respect to their F properties without differing with respect to their G properties would be in contravention of the laws of nature.
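The three grades of supervenience defined in notes 19 and 21 share a common schema and differ only in the strength of the modality involved. Schematically (a standard formalization from the literature, not the chapter’s own notation):

```latex
% F-properties supervene on G-properties with necessity of kind N
% (N = logical, metaphysical, or nomological) just in case:
\Box_{N}\,\forall x\,\forall y\,
  \bigl[\,\forall G\,(Gx \leftrightarrow Gy)
  \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy)\,\bigr]
```

That is, within any scenario permitted by the relevant modality, objects indiscernible in their G-properties must be indiscernible in their F-properties; note 21’s position grants the nomological reading of the box while denying the metaphysical and logical readings.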

22. So causal explanation is the sort of explanation one obtains by citing the cause of the explanandum. For discussions of the nature of causal explanation, see, e.g., Lewis (1993).

23. The latter will govern only the causal interaction among physical events. They will not cover causal interaction between physical and phenomenal, non-physical events. These will have to be covered by a special and new set of laws.

24. In Baars’ (1988, 1997) Global Workspace Theory, consciousness is reductively explained in terms of global availability. In a functionalist theory such as Dennett’s (1981, 1991), consciousness is reductively explained in terms of functional organization. Chalmers’ position is that neither theory can explain consciousness reductively, though both may figure as part of the causal explanation of it. These theories are not discussed in the present chapter, because they are fundamentally psychological (rather than philosophical) theories of consciousness.

25. A linguistic context is intensional if it disallows certain inferences, in particular existential generalization (the inference from “a is F” to “there is an x such that x is F”) and substitution of co-referential terms salva veritate (the inference from “a is F” and “a = b” to “b is F”). Epistemic contexts – contexts involving the ascription of knowledge – are intensional in this sense.
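The two inference patterns that note 25 says intensional contexts block can be displayed schematically (standard logical notation, used here only for illustration):

```latex
% Existential generalization (valid in extensional contexts):
Fa \;\vdash\; \exists x\,Fx
% Substitution of co-referential terms salva veritate:
Fa,\; a = b \;\vdash\; Fb
```

Inside an epistemic operator both patterns can fail: from “S knows that Fa” and “a = b” it does not follow that “S knows that Fb.”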

26. Another popular materialist response to these arguments is that what is being gained is not new knowledge, but rather new abilities (Lewis, 1990; Nemirow, 1990). Upon being released from her room, the Knowledge Argument’s protagonist does not acquire new knowledge, but rather a new set of abilities. And likewise, what we lack with respect to what it is like to be a bat is not any particular knowledge, but a certain ability – the ability to imagine what it is like to be a bat. But from the acquisition of a new ability one can surely not infer the existence of a new fact.

27. Materialists reason that because what it is like to see red is identical to a neurophysiological fact about the brain, and ex hypothesi the Knowledge Argument’s protagonist knows the latter fact, she already knows the former. So she knows the fact of what it is like to see red, but not as a fact about what it is like to see red. Instead, she knows the fact of what it is like to see red as a fact about the neurophysiology of the brain. What happens when she comes out of her room is that she comes to know the fact of what it is like to see red as a fact about what it is like to see red. That is, she learns in a new way a fact she already knew in another way. The same applies to knowledge of what it is like to be a bat: we may know all the facts about what it is like to be a bat, and still gain new knowledge about bats, but this new knowledge will present to us a fact we already know, in a way we do not yet know it.

28. It could be responded by the dualist that some pieces of knowledge are so different that the facts known thereby could not possibly turn out to be the same. Knowledge that the evening star is glowing and knowledge that the morning star is glowing are not such. But consider knowledge that justice is good and knowledge that banana is good. The dualist could argue that these are such different pieces of knowledge that it is impossible that the facts thereby known should turn out to be one and the same. The concepts of evening star and morning star are not different enough to exclude the possibility that they pick out the same thing, but the concepts of justice and banana are such that it cannot possibly be the case that justice should turn out to be the same thing as bananas.

29. The kind of possibility we are concerned with here, and in the following presentation of variations on this argument, is not practical possibility, or even a matter of consistency with the laws of nature. Rather, it is possibility in the widest possible sense – that of consistency with the laws of logic and the very essence of things. This is what philosophers refer to as metaphysical possibility.

30. The modal force of this supervenience claim is concordant with that of the claim in Premise 2; that is, that of metaphysical necessity.

31. The reason it is impossible is that there is no such thing as contingent identity, according to the official doctrine hailing from Kripke. Since all identity is necessary, and necessity is cashed out as truth in all possible worlds, it follows that when a = b in the actual world, a = b in all possible worlds; that is, a is necessarily identical to b.
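Note 31’s reasoning can be compressed into the standard modal-logic form of Kripke’s necessity-of-identity thesis (a textbook rendering, not the author’s own formulation):

```latex
% Necessity of identity:
a = b \;\rightarrow\; \Box\,(a = b)
% Contrapositively, mere possible distinctness entails actual distinctness:
\Diamond\,(a \neq b) \;\rightarrow\; a \neq b
```

The contrapositive is what the conceivability arguments exploit: if it is metaphysically possible that a conscious state and its candidate physical correlate come apart, they cannot be identical.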

32. The interpretation I provide is based on certain key passages in Chalmers (1996, pp. 131–134), but I cast the argument in terms that are mine, not Chalmers’.

33. I mean the property of apparent water to be more or less the same as the property philosophers often refer to as “watery stuff” (i.e., the property of being superficially (or to the naked eye) the same as water – clear, drinkable, liquid, etc.).

34. Chalmers (1996, p. 132) writes, “. . . the primary intension [of ‘consciousness’] determines a perfectly good property of objects in possible worlds. The property of being watery stuff [or apparent water] is a perfectly reasonable property, even though it is not the same as the property of being H2O. If we can show that there are possible worlds that are physically identical to ours but in which the property introduced by the primary intension is lacking, then dualism will follow [italics added].”

35. Our discussion so far has presupposed a “latitudinous” approach to properties, according to which there is a property that corresponds to every predicate we can come up with. (Thus, if we can come up with the predicate “is a six-headed space lizard or a flying cow,” then there is the property of being a six-headed space lizard or a flying cow. This does not mean, however, that the property is actually instantiated by any actual object.) But on a sparse conception of properties – one that rejects the latitudinous assumption – there may not be appearance properties at all.

36. The notion of a natural property is hard to pin down and is the subject of philosophical debate. The most straightforward way of understanding natural properties is as properties that figure in the ultimate laws of nature (Armstrong, 1978; Fodor, 1974).

37. That is, they would have their causal efficacy restricted to bringing about physical events and property-instantiations that already have independent sufficient causes (and that would therefore take place anyway, regardless of the non-supervenient properties). (This is the second option of the dilemma.)

38. This is the strategy in Chalmers (1996). Later on, Chalmers (2002a) embraces a three-pronged approach, the third prong consisting in accepting causal overdetermination.

39. When a cause C causes an effect E, C’s causing of E may have its own (mostly accidental) effects (e.g., it may surprise an observer who did not expect the causing to take place), but E is not one of them. This is because E is caused by C, not by C’s causing of E. Dretske (1988) distinguished between triggering causes and structuring causes, the latter being causes of certain causal relations (such as C’s causing of E), and offered an account of structuring causes. But this is an account of the causes of causal relations, not of their effects. To my knowledge, there is no account of the effects of causal relations, mainly because these seem to be chiefly accidental.

40. Or at least they would be nearly epiphenomenal, having no causal powers except perhaps to bring about some accidental effects of the sort pointed out in the previous endnote.

41. By “representational properties” is meant properties that the experience has in virtue of what it represents – not, it is important to stress, properties the experience has in virtue of what does the representing. In terms of the distinction between vehicle and content, representational properties are to be understood as content properties rather than vehicular properties. We can also make a distinction between two kinds of vehicular properties: those that are essential to the vehicling of the content and those that are not. (Block’s (1996) distinction between mental paint and mental latex (later, “mental oil”) is supposed to capture this distinction.) There is a sense in which a view according to which phenomenal properties are reductively accounted for in terms of vehicular properties essential to the vehicling is representational, but given the way the term “representationalism” is used in current discussions of consciousness, it does not qualify as representationalism. A view of this sort is defended, for instance, by Maloney (1989), but otherwise lacks a vast following. I do not discuss it here.

42. By the “phenomenal character” of a mental state at a time t, I mean the set of all phenomenal properties the state in question instantiates at t. By “representational content” I mean whatever the experience represents. (Experiences represent things, in that they have certain accuracy or veridicality conditions: conditions under which an experience would be said to get things right.)

43. See Dretske (1981, 1988) for the most thoroughly worked out reductive account of mental representation in informational and teleological terms. According to Dretske (1981), every event in the world generates a certain amount of information (in virtue of excluding the possibility that an incompatible event can take place). Some events also take place only when other events take place as well, and this is sometimes dictated by the laws of nature. Thus it may be a law of nature that dictates that an event type E1 is betokened only when event type E2 is betokened. When this is the case, E1 is said to be nomically dependent upon E2, and the tokening of E1 carries the information that E2 has been betokened. Or, more accurately, the tokening of E1 carries the information generated by the tokening of E2. Some brain states bear this sort of relation to world states: the former come into being, as a matter of law, only when the latter do (i.e., the former are nomically dependent upon the latter). Thus, a certain type of brain state may be tokened only when it rains. This brain state type would thus carry the information that it rains. An informational account of mental representation is based on this idea: that a brain state can represent the fact that it rains by carrying information about it, which it does in virtue of nomically depending on it.
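The nomic-dependence relation that note 43 describes can be put schematically (a rough reconstruction of the idea, not Dretske’s own formalism):

```latex
% E1 nomically depends on E2 iff the laws of nature L guarantee
% that E1 is tokened only when E2 is:
L \;\models\; \mathrm{Token}(E_1) \rightarrow \mathrm{Token}(E_2)
% In that case, a tokening of E1 carries the information that E2 is tokened.
```

On the informational account, a brain-state type represents rain by standing in just this relation to rainy world-states.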

44. Other representational theories can be found in Byrne (2001), Dretske (1995), Lurz (2003), Shoemaker (1994a, b, 1996, 2002), and Thau (2002). Some of these versions are importantly different from Tye’s, not only in detail but also in spirit. This is particularly so with regard to Shoemaker’s view (as well as Lurz’s). For a limited defense and elaboration of Shoemaker’s view, see Kriegel (2002a, b). In what way this defense is limited will become evident at the end of this section.

45. The properties of intentionality and abstractness are fairly straightforward. The former is a matter of intensionality; that is, the disallowing of existential generalizations and truth-preserving substitutions of co-referential terms. The latter is a matter of the features represented by experience not being concrete entities (this is intended to make sense of misrepresentation of the same features, in which case no concrete entity is being represented).

46. This line of thought can be resisted on a number of scores. First, it could be argued that I do have a short-lived concept of blue17, which I possess more or less for the duration of my experience. Second, it could be claimed that although I do not possess the descriptive concept “blue17,” I do possess the indexical concept “this shade of blue,” and that it is the latter concept that is deployed in my experience’s representational content. Be that as it may, the fact that conscious experiences can represent properties that the subject cannot recognize across relatively short stretches of time is significant enough. Even if we do not wish to treat them as non-conceptual, we must treat them at least as “sub-recognitional.” Tye’s modified claim would be that the representational content of experience is poised, abstract, sub-recognitional, intentional content.

47. To be sure, it does not represent the tissue damage as tissue damage, but it does represent the tissue damage. Since the representation is non-conceptual, it certainly cannot employ the concept of “tissue damage.”

48. An error theory is a theory that ascribes a widespread error to commonsense beliefs. The term was coined by J. L. Mackie (1977). Mackie argued that values and value judgments are subjective. Oversimplifying the dialectic, a problem for this view is that such a judgment as “murder is wrong” appears to be, and is commonly taken to be, objectively true. In response, Mackie embraced what he termed an error theory: that the common view of moral and value judgments is simply one huge mistake.

49. Externalism about representational content, or “content externalism” for short, is the thesis that the representational content of experiences, thoughts, and even spoken statements is partially determined by objects outside the subject’s head. Thus, if a person’s interactions with watery stuff happen to be interactions with H2O, and another person’s interactions with watery stuff happen to be interactions with a superficially similar stuff that is not composed of H2O, then even if the two persons cannot tell apart H2O and the other stuff and are unaware of the differences in the molecular composition of the watery stuff in their environment, the representational contents of their respective water thoughts (as well as water pronouncements and water experiences) are different (Putnam, 1975). Or so externalists claim.

50. Another option is to go internalist with respect to the representational content that determines the phenomenal properties of conscious experiences. With the recent advent of credible accounts of narrow content (Chalmers, 2002b; Segal, 2000), it is now a real option to claim that the phenomenal properties of experience are determined by experience’s narrow content (Kriegel, 2002a; Rey, 1998). However, it may turn out that this version of representationalism will not be as well supported by the transparency of experience.

51. For one such line of criticism, on which I do not elaborate here, see Kriegel (2002c).

52. Elsewhere, I construe this form of pre-reflective self-consciousness as what I call intransitive self-consciousness. Intransitive self-consciousness is to be contrasted with transitive self-consciousness. The latter is ascribed in reports of the form “I am self-conscious of my thinking that p,” whereas the former is ascribed in reports of the form “I am self-consciously thinking that p.” For details, see Kriegel (2003b, 2004b).

53. Part of this neglect is justified by the thesis that the for-me-ness of conscious experiences is an illusory phenomenon. For an argument for its psychological reality, see Kriegel (2004b).

54. There are versions of representationalism that may be better equipped to deal with the subjective character of experience. Thus, according to Shoemaker’s (2002) version, a mental state is conscious when it represents a subject-relative feature, such as the disposition to bring about certain internal states in the subject. It is possible that some kind of for-me-ness can be accounted for in this manner. It should be noted, however, that this is not one of the considerations that motivate Shoemaker to develop his theory the way he does.

55. Rosenthal prefers to put this idea as follows: conscious states are states we are conscious of. He then draws a distinction between consciousness and consciousness-of – intransitive and transitive consciousness (Rosenthal, 1986, 1990). To avoid unnecessary confusion, I state the same idea in terms of awareness-of, rather than consciousness-of. But the idea is the same. It is what Rosenthal sometimes calls the “transitivity principle” (e.g., Rosenthal, 2000): a mental state is intransitively conscious only if we are transitively conscious of it.

56. The representation is “higher-order” in the sense that it is a representation of a representation. In this sense, a first-order representation is a representation of something that is not itself a representation. Any other representation is higher-order.

57. More than that: according to Rosenthal (1990), for instance, the particular way it is like for S to have M is determined by the particular way M∗ represents M. Suppose S tastes an identical wine in 1980 and in 1990. During the 1980s, however, S had become a wine connoisseur. Consequently, wines she could not distinguish at all in 1980 strike her in 1990 as worlds apart. That is, during the eighties she acquired a myriad of concepts for very specific and subtle wine tastes. It is plausible to claim that what it is like for S to taste the wine in 1990 is different from what it was like for her to taste it in 1980 – even though the wines’ own flavors are identical. Arguably, the reason for the difference in what it is like to taste the wine is that the two wine-tasting experiences are accompanied by radically different higher-order representations of them. This suggests, then, that the higher-order representation not only determines that there is something it is like for S to have M, but also what it is like for S to have M.

58. I do not mean the term “yield” in a causal sense here. The higher-order monitoring theory does not claim that M∗’s representing of M somehow produces, or gives rise to, M’s being conscious. Rather, the claim is conceptual: M’s being conscious consists in, or is constituted by, M∗’s representing of M.

59. Other versions of the higher-order thought view can be found in Carruthers (1989, 1996), Dennett (1969, 1991), and Mellor (1978).

60. Rosenthal (1990, pp. 739–740) claims that it is essential to a perceptual state that it have a sensory quality, but the second-order representations do not have sensory qualities and are therefore non-perceptual. Van Gulick (2001) details a longer and more thorough list of features that are characteristic of perceptual states and considers which of them are likely to be shared by the higher-order representations. His conclusion is that some are and some are not.

61. The notion of direction of fit has its origins in the work of Anscombe (1957), but it has been developed in some detail and put to extensive work mainly by Searle (1983). The idea is that mental states divide into two main groups: the cognitive ones (paradigmatically, belief) and the conative ones (paradigmatically, desire). The former are such that they are supposed to make the mind fit the way the world is (thus “getting the facts right”), whereas the latter are such that they are supposed to make the world fit the way the mind is (a change in the world is what would satisfy them).

62. Kobes (1995) suggests a version of the higher-order monitoring theory in which the higher-order representation has essentially a telic direction of fit. But Rosenthal construes it as having only a thetic one.

63. Carruthers (1989, 1996, 2000), and probably also Dennett (1969, 1991), attempt to account for consciousness in terms of merely tacit or dispositional higher-order representations. But these would not do, according to Rosenthal. The reason for this is that a merely dispositional representation would not make the subject aware of her conscious state, but only disposed to be aware of it, whereas the central motivation behind the higher-order monitoring view is the fact that conscious states are states we are aware of having (Rosenthal, 1990, p. 742).

64. Earlier on, Rosenthal (1990) required that the higher-order thought be not only non-inferential but also non-observational. This latter requirement was later dropped (Rosenthal, 1993).

65. A person may come to believe that she is ashamed about something on the strength of her therapist’s evidence. And yet the shame state is not conscious. In terms of the terminology introduced in the introduction, the state may become availability-conscious, but not phenomenally conscious. This is why the immediacy of the awareness is so crucial. Although the person’s second-order belief constitutes an awareness of the shame state, it is not a non-inferential awareness, and therefore not an immediate awareness.

66. De se content is content that is of oneself, or more precisely, of oneself as oneself. Castañeda (1966), who introduced this term, also claimed that de se content is irreducible to any other kind of content. This latter claim is debatable and is not part of the official higher-order thought theory.

67. Rosenthal’s (1990, p. 742) argument for this requirement is the following. My awareness of my bluish experience is an awareness of that particular experience, not of the general type of experience it is. But it is impossible to represent a mental state as particular without representing in which subject it occurs. Therefore, the only way the higher-order thought could represent my experience in its particularity is if it represented it as occurring in me.

68. This is necessary to avert an infinite regress. If the higher-order state were itself conscious, it would have to be itself represented by a yet higher-order state (according to the theory), and so the hierarchy of states would go to infinity. This is problematic on two scores. First, it is empirically implausible, and perhaps impossible, that a subject should entertain an infinity of mental states whenever conscious. Second, if a mental state’s being conscious is explained in terms of another conscious state, the explanation is “empty,” inasmuch as it does not explain consciousness in terms of something other than consciousness.

69. This claim can be made on phenomenological grounds, instead of on the basis of conceptual analysis. For details, see Kriegel (2004b).

70. To repeat, the conceptual grounds are the fact that it seems to be a conceptual truth that conscious states are states we are aware of having. This seems to be somehow inherent in the very concept of consciousness.

71. There are other arguments that have been leveled against the higher-order monitoring theory, or specific versions thereof, which I do not have the space to examine. For arguments not discussed here, see Block (1995), Caston (2002), Dretske (1995), Guzeldere (1995), Kriegel (2006a), Levine (2001), Natsoulas (1993), Rey (1988), Seager (1999), and Zahavi and Parnas (1998).

72. The argument has also been made by Caston (2002), Levine (2001), and Seager (1999). For a version of the argument directed at higher-order perception theory (and appealing to higher-order misperceptions), see Neander (1998).

73. Note that M∗ does not merely misrepresent M to be F when in reality M is not F, but misrepresents M to be F when in reality there is no M at all.

74. This would be a particular version of the supposition we made at the very beginning of this chapter, by way of analyzing creature consciousness in terms of state consciousness.

75. Furthermore, if M∗ were normally conscious, the same problem would arise with the third-order representation of it (and if the third-order representation were normally conscious, the problem would arise with the fourth-order state). To avert infinite regress, the higher-order monitoring theorist must somewhere posit an unconscious state, and when she does, she will be unable to claim that that state instantiates the property of being conscious when it misrepresents.

76. This appears to be Rosenthal’s latest stance on the issue (in conversation).

77. There are surely other ways the higher-order monitoring theorist may try to handle the problem of targetless higher-order representations. But many of them are implausible, and all of them complicate the theory considerably. One of the initial attractions of the theory is its clarity and relative simplicity. Once it is modified along any of the lines sketched above, it becomes significantly less clear and simple. To that extent, it is considerably less attractive than it initially appears.

78. See Brook and Raymont (2006), Caston (2002), Hossack (2002), Kriegel (2003b), and Williford (2006). For the close variation, see Carruthers (2000, 2006), Gennaro (1996, 2002, 2006), Kobes (1995), Kriegel (2003a, 2005, 2006a), and Van Gulick (2001, 2004).

79. For fuller discussion of Brentano’s account, see Caston (2002), Kriegel (2003a), Smith (1986, 1989), Thomasson (2000), and Zahavi (1998, 2004).

80. So the self-representational content of conscious states is de se content. There are places where Brentano seems to hold something like this as well. See also Kriegel (2003a).

81. For more on the distinction between content and attitude (or mode), see Searle (1983). For a critique of Smith’s view, see Kriegel (2005a).

82. A similar account would be that conscious states are not conscious in virtue of standing in a certain relation to themselves, but this is because their secondary intentionality should be given an adverbial analysis. This is not to say that all intentionality must be treated adverbially. It may well be that the primary intentionality of conscious states is a matter of their standing in a certain informational or teleological relation to their primary objects. Thus, it need not be the case that S’s conscious fear that p involves S’s fearing p-ly rather than S’s standing in a fear relation to the fact that p. But it is the case that S’s awareness of her fear that p involves being aware fear-that-p-ly rather than standing in an awareness relation to the fear that p. To my knowledge, nobody holds this view.

83. A constitutive, non-contingent relation is a relation that two things do not just happen to entertain, but rather they would not be the things they are if they did not entertain those relations. Thus A’s relation to B is constitutive if bearing it to B is part of what constitutes A’s being what it is. Such a relation is necessary rather than contingent, since there is no possible world in which A does not bear it to B – for in such a world it would no longer be A.

84. Elsewhere, I have defended a view similar in key respects to Van Gulick’s – see Kriegel (2003a, 2005, 2006a).

85. Indeed, the problem may be even more pressing for a view such as the higher-order global states theory. For the latter requires not only the ability to generate higher-order contents, but also the ability to integrate those with the right lower-order contents.

86. For a more elaborate argument that self-representation may not be a sufficient condition for consciousness, one that could provide a reductive explanation of it, see Levine (2001, Ch. 6).

87. I am appealing here to a distinction defended, e.g., by Cummins (1979), Dretske (1988), and Searle (1992). Grice noted that some things that exhibit aboutness or meaningfulness, such as words, traffic signs, and arrows, do so only on the assumption that someone interprets them to have the sort of meaning they have. But these acts of interpretation are themselves contentful, or meaningful. So their own meaning must either be derived by further interpretative acts or be intrinsic to them and non-derivative. Grice’s claim was that thoughts and other mental states have an aboutness all their own, independently of any interpretation.

88. This is denied by Dennett (1987), who claims that all intentionality is derivative.

89. One might claim that such states are less clearly conceivable when their self-representational content is fully specified. Thus, if the content is of the form, “I myself am herewith having this very bluish experience,” it is less clearly the case that one can conceive of an unconscious state having this content.

90. The conceivability of unconscious self-representing states may not be proof of their possibility, but it is evidence of their possibility. It is therefore evidence against the self-representational theory.

91. The reductivist may claim that zombies with the same physical properties we have are conceivable only because we are not yet in a position to focus our mind on the right physical structure. As progress is made toward identification of the right physical structure, it will become harder and harder to conceive of a zombie exhibiting this structure but lacking all consciousness.

92. For comments on an earlier draft of this chapter, I would like to thank George Graham, David Jehle, Christopher Maloney, Amie Thomasson, and especially David Chalmers.

References

Armstrong, D. M. (1968). A materialist theory of the mind. New York: Humanities Press.

Armstrong, D. M. (1978). A theory of universals, Vol. 2. Cambridge: Cambridge University Press.

Armstrong, D. M. (1981). What is consciousness? In D. M. Armstrong (Ed.), The nature of mind (pp. 55–67). Ithaca, NY: Cornell University Press.

Anscombe, G. E. M. (1957). Intention. Oxford: Blackwell.

Baars, B. (1988). A cognitive theory of consciousness. New York: Cambridge University Press.

Baars, B. (1997). In the theater of consciousness: The workspace of the mind. Oxford: Oxford University Press.

Block, N. J. (1990a). Inverted earth. Philosophical Perspectives, 4, 52–79.

Block, N. J. (1990b). Can the mind change the world? In G. Boolos (Ed.), Meaning and method: Essays in honor of Hilary Putnam. New York: Cambridge University Press.

Block, N. J. (1995). On a confusion about the function of consciousness. Behavioral and Brain Sciences, 18, 227–247.

Block, N. J. (1996). Mental paint and mental latex. Philosophical Issues, 7, 19–50.

Block, N. J., Flanagan, O., & Guzeldere, G. (Eds.). (1997). The nature of consciousness: Philosophical debates. Cambridge, MA: MIT Press.

Brentano, F. (1973). Psychology from an empirical standpoint (O. Kraus, Ed.; A. C. Rancurello, D. B. Terrell, & L. L. McAlister, Trans.). London: Routledge and Kegan Paul. (Original work published 1874)

Brook, A., & Raymont, P. (2006). A unified theory of consciousness. Cambridge, MA: MIT Press.

Brueckner, A., & Berukhim, E. (2003). McGinn on consciousness and the mind-body problem. In Q. Smith & A. Jokic (Eds.), Consciousness: New philosophical perspectives. Oxford: Oxford University Press.

Byrne, A. (1997). Some like it HOT: Consciousness and higher-order thoughts. Philosophical Studies, 86, 103–129.

Byrne, A. (2001). Intentionalism defended. Philosophical Review, 110, 199–240.

Carruthers, P. (1989). Brute experience. Journal of Philosophy, 85, 258–269.

Carruthers, P. (1996). Language, thought, and consciousness. Cambridge: Cambridge University Press.


Carruthers, P. (1998). Natural theories of consciousness. European Journal of Philosophy, 6, 203–222.

Carruthers, P. (1999). Sympathy and subjectivity. Australasian Journal of Philosophy, 77, 465–482.

Carruthers, P. (2000). Phenomenal consciousness. New York: Cambridge University Press.

Carruthers, P. (2006). Conscious experience versus conscious thought. In U. Kriegel & K. Williford (Eds.), Consciousness and self-reference. Cambridge, MA: MIT Press.

Castaneda, H.-N. (1966). ‘He’: A study in the logic of self-consciousness. Ratio, 8, 130–157.

Caston, V. (2002). Aristotle on consciousness. Mind, 111, 751–815.

Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2, 200–219.

Chalmers, D. J. (1996). The conscious mind. Oxford: Oxford University Press.

Chalmers, D. J. (2002a). Consciousness and its place in nature. In D. J. Chalmers (Ed.), Philosophy of mind. Oxford: Oxford University Press.

Chalmers, D. J. (2002b). The components of content. In D. J. Chalmers (Ed.), Philosophy of mind. Oxford: Oxford University Press.

Churchland, P. M. (1979). Scientific realism and the plasticity of mind. New York: Cambridge University Press.

Churchland, P. M. (1985). Reduction, qualia, and the direct introspection of brain states. Journal of Philosophy, 82, 8–28.

Crick, F., & Koch, C. (1990). Towards a neurobiological theory of consciousness. Seminars in the Neurosciences, 2, 263–275.

Crick, F., & Koch, C. (2003). A framework for consciousness. Nature Neuroscience, 6, 119–126.

Cummins, R. (1979). Intention, meaning, and truth conditions. Philosophical Studies, 35, 345–360.

DeBellis, M. (1991). The representational content of musical experience. Philosophy and Phenomenological Research, 51, 303–324.

Dennett, D. C. (1969). Content and consciousness. London: Routledge.

Dennett, D. C. (1981). Towards a cognitive theory of consciousness. In D. C. Dennett (Ed.), Brainstorms. Brighton: Harvester.

Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press.

Dennett, D. C. (1991). Consciousness explained. Cambridge, MA: MIT Press.

Dennett, D. C. (1995). Darwin’s dangerous idea. New York: Simon and Schuster.

Dretske, F. I. (1981). Knowledge and the flow of information. Oxford: Clarendon.

Dretske, F. I. (1988). Explaining behavior. Cambridge, MA: MIT Press.

Dretske, F. I. (1993). Conscious experience. Mind, 102, 263–283.

Dretske, F. I. (1995). Naturalizing the mind. Cambridge, MA: MIT Press.

Fodor, J. A. (1974). Special sciences. Synthese, 28, 97–115.

Foster, J. (1982). The case for idealism. London: Routledge.

Gennaro, R. J. (1996). Consciousness and self-consciousness. Philadelphia: John Benjamins.

Gennaro, R. J. (2002). Jean-Paul Sartre and the HOT theory of consciousness. Canadian Journal of Philosophy, 32, 293–330.

Gennaro, R. J. (2006). Between pure self-referentialism and (extrinsic) HOT theory. In U. Kriegel & K. Williford (Eds.), Consciousness and self-reference. Cambridge, MA: MIT Press.

Goldman, A. (1993a). The psychology of folk psychology. Behavioral and Brain Sciences, 16, 15–28.

Goldman, A. (1993b). Consciousness, folk psychology, and cognitive science. Consciousness and Cognition, 2, 364–383.

Grice, P. (1957). Meaning. Philosophical Review, 66, 377–388.

Guzeldere, G. (1995). Is consciousness the perception of what passes in one’s own mind? In T. Metzinger (Ed.), Conscious experience. Paderborn: Schoeningh-Verlag.

Harman, G. (1990). The intrinsic quality of experience. Philosophical Perspectives, 4, 31–52.

Horgan, T., & Tienson, J. (2002). The intentionality of phenomenology and the phenomenology of intentionality. In D. J. Chalmers (Ed.), Philosophy of mind. Oxford: Oxford University Press.

Hossack, K. (2002). Self-knowledge and consciousness. Proceedings of the Aristotelian Society, 163–181.

Jackson, F. (1984). Epiphenomenal qualia. Philosophical Quarterly, 34, 147–152.


Kim, J. (1989a). The myth of nonreductive materialism. Proceedings and Addresses of the American Philosophical Association, 63, 31–47.

Kim, J. (1989b). Mechanism, purpose, and explanatory exclusion. Philosophical Perspectives, 3, 77–108.

Kim, J. (1992). Multiple realization and the metaphysics of reduction. Philosophy and Phenomenological Research, 52, 1–26.

Kobes, B. W. (1995). Telic higher-order thoughts and Moore’s paradox. Philosophical Perspectives, 9, 291–312.

Kriegel, U. (2002a). Phenomenal content. Erkenntnis, 57, 175–198.

Kriegel, U. (2002b). Emotional content. Consciousness and Emotion, 3, 213–230.

Kriegel, U. (2002c). PANIC theory and the prospects for a representational theory of phenomenal consciousness. Philosophical Psychology, 15, 55–64.

Kriegel, U. (2003a). Consciousness, higher-order content, and the individuation of vehicles. Synthese, 134, 477–504.

Kriegel, U. (2003b). Consciousness as intransitive self-consciousness: Two views and an argument. Canadian Journal of Philosophy, 33, 103–132.

Kriegel, U. (2004a). The new mysterianism and the thesis of cognitive closure. Acta Analytica, 18, 177–191.

Kriegel, U. (2004b). Consciousness and self-consciousness. The Monist, 87, 185–209.

Kriegel, U. (2005a). Naturalizing subjective character. Philosophy and Phenomenological Research, 71, 23–57.

Kriegel, U. (2005b). Review of Jeffrey Gray, Consciousness: Creeping up on the hard problem. Mind, 114, 417–421.

Kriegel, U. (2006a). The same-order monitoring theory of consciousness. In U. Kriegel & K. Williford (Eds.), Consciousness and self-reference. Cambridge, MA: MIT Press.

Kriegel, U. (2006b). The concept of consciousness in the cognitive sciences: Phenomenal consciousness, access consciousness, and scientific practice. In P. Thagard (Ed.), Handbook of the philosophy of psychology and cognitive science. Amsterdam: North-Holland.

Kriegel, U., & Williford, K. (Eds.). (2006). Self-representational approaches to consciousness. Cambridge, MA: MIT Press.

Kripke, S. (1980). The identity thesis. Reprinted in N. J. Block, O. Flanagan, & G. Guzeldere (Eds.) (1997), The nature of consciousness: Philosophical debates. Cambridge, MA: MIT Press.

Lehrer, K. (1996). Skepticism, lucid content, and the metamental loop. In A. Clark, J. Ezquerro, & J. M. Larrazabal (Eds.), Philosophy and cognitive science. Dordrecht: Kluwer.

Lehrer, K. (1997). Self-trust: A study of reason, knowledge, and autonomy. Oxford: Oxford University Press.

Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64, 354–361.

Levine, J. (2001). Purple haze: The puzzle of consciousness. Oxford: Oxford University Press.

Lewis, D. K. (1990). What experience teaches. In W. G. Lycan (Ed.), Mind and cognition. Oxford: Blackwell.

Lewis, D. K. (1993). Causal explanation. In D.-H. Ruben (Ed.), Explanation. Oxford: Oxford University Press.

Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8, 529–566.

Loar, B. (1990). Phenomenal states. Philosophical Perspectives, 4, 81–108.

Lurz, R. (1999). Animal consciousness. Journal of Philosophical Research, 24, 149–168.

Lurz, R. (2003). Neither HOT nor COLD: An alternative account of consciousness. Psyche, 9.

Lycan, W. G. (1987). Consciousness. Cambridge, MA: MIT Press.

Lycan, W. G. (1996). Consciousness and experience. Cambridge, MA: MIT Press.

Lycan, W. G. (2001). A simple argument for a higher-order representation theory of consciousness. Analysis, 61, 3–4.

McGinn, C. (1989). Can we solve the mind-body problem? Mind, 98, 349–366.

McGinn, C. (1995). Consciousness and space. Journal of Consciousness Studies, 2, 220–230.

McGinn, C. (1999). The mysterious flame. Cambridge, MA: MIT Press.

McGinn, C. (2004). Consciousness and its objects. Oxford: Oxford University Press.

Mackie, J. L. (1977). Ethics: Inventing right and wrong. New York: Penguin.


Maloney, J. C. (1989). The mundane matter of the mental language. New York: Cambridge University Press.

Mellor, D. H. (1978). Conscious belief. Proceedings of the Aristotelian Society, 78, 87–101.

Moore, G. E. (1903). The refutation of idealism. In G. E. Moore (Ed.), Philosophical papers. London: Routledge and Kegan Paul.

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.

Natsoulas, T. (1993). What is wrong with appendage theory of consciousness? Philosophical Psychology, 6, 137–154.

Neander, K. (1998). The division of phenomenal labor: A problem for representational theories of consciousness. Philosophical Perspectives, 12, 411–434.

Nemirow, L. (1990). Physicalism and the cognitive role of acquaintance. In W. G. Lycan (Ed.), Mind and cognition. Oxford: Blackwell.

Peacocke, C. (1983). Sense and content. Oxford: Clarendon.

Putnam, H. (1967). The nature of mental states. In D. M. Rosenthal (Ed.), The nature of mind. Oxford: Oxford University Press.

Putnam, H. (1975). The meaning of ‘meaning.’ In H. Putnam (Ed.), Mind, language, and reality. New York: Cambridge University Press.

Rey, G. (1988). A question about consciousness. In H. Otto & J. Tueidio (Eds.), Perspectives on mind. Norwell: Kluwer Academic Publishers.

Rey, G. (1998). A narrow representationalist account of qualitative experience. Philosophical Perspectives, 12, 435–457.

Rosenthal, D. M. (1986). Two concepts of consciousness. Philosophical Studies, 49, 329–359.

Rosenthal, D. M. (1990). A theory of consciousness. ZiF Technical Report 40, Bielefeld, Germany.

Rosenthal, D. M. (1993). Thinking that one thinks. In M. Davies & G. W. Humphreys (Eds.), Consciousness: Psychological and philosophical essays. Oxford: Blackwell.

Rosenthal, D. M. (2000). Consciousness and metacognition. In D. Sperber (Ed.), Metarepresentation. Oxford: Oxford University Press.

Rosenthal, D. M. (2002a). Explaining consciousness. In D. J. Chalmers (Ed.), Philosophy of mind. Oxford: Oxford University Press.

Rosenthal, D. M. (2002b). Consciousness and higher-order thoughts. In L. Nadel (Ed.), Macmillan encyclopedia of cognitive science. New York: Macmillan.

Seager, W. (1999). Theories of consciousness. London: Routledge.

Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. New York: Cambridge University Press.

Searle, J. R. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.

Segal, G. (2000). A slim book about narrow content. Cambridge, MA: MIT Press.

Shoemaker, S. (1994a). Phenomenal character. Noûs, 28, 21–38.

Shoemaker, S. (1994b). Self-knowledge and ‘inner sense.’ Lecture III: The phenomenal character of experience. Philosophy and Phenomenological Research, 54, 291–314.

Shoemaker, S. (1996). Colors, subjective reactions, and qualia. Philosophical Issues, 7, 55–66.

Shoemaker, S. (2002). Introspection and phenomenal character. In D. J. Chalmers (Ed.), Philosophy of mind. Oxford: Oxford University Press.

Siewert, C. P. (1998). The significance of consciousness. Princeton, NJ: Princeton University Press.

Smart, J. J. C. (1959). Sensations and brain processes. Philosophical Review, 68, 141–156.

Smith, D. W. (1986). The structure of (self-)consciousness. Topoi, 5, 149–156.

Smith, D. W. (1989). The circle of acquaintance. Dordrecht: Kluwer Academic Publishers.

Smith, D. W. (2004). Return to consciousness. In D. W. Smith (Ed.), Mind world. New York: Cambridge University Press.

Thau, M. (2002). Consciousness and cognition. Oxford: Oxford University Press.

Thomasson, A. L. (2000). After Brentano: A one-level theory of consciousness. European Journal of Philosophy, 8, 190–209.

Tye, M. (1986). The subjective qualities of experience. Mind, 95, 1–17.

Tye, M. (1992). Visual qualia and visual content. In T. Crane (Ed.), The contents of experience. New York: Cambridge University Press.

Tye, M. (1995). Ten problems of consciousness. Cambridge, MA: MIT Press.

Tye, M. (2000). Consciousness, color, and content. Cambridge, MA: MIT Press.

Tye, M. (2002). Visual qualia and visual content revisited. In D. J. Chalmers (Ed.), Philosophy of mind. Oxford: Oxford University Press.


Van Gulick, R. (1993). Understanding the phenomenal mind: Are we all just armadillos? Reprinted in N. J. Block, O. Flanagan, & G. Guzeldere (Eds.) (1997), The nature of consciousness: Philosophical debates. Cambridge, MA: MIT Press.

Van Gulick, R. (2001). Inward and upward – reflection, introspection, and self-awareness. Philosophical Topics, 28, 275–305.

Van Gulick, R. (2006). Mirror mirror – is that all? In U. Kriegel & K. Williford (Eds.), Consciousness and self-reference. Cambridge, MA: MIT Press.

Velmans, M. (1992). Is human information processing conscious? Behavioral and Brain Sciences, 14, 651–669.

Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.

Williford, K. W. (2006). The self-representational structure of consciousness. In U. Kriegel & K. Williford (Eds.), Consciousness and self-reference. Cambridge, MA: MIT Press.

Zahavi, D. (1998). Brentano and Husserl on self-awareness. Etudes Phenomenologiques, 27(8), 127–169.

Zahavi, D. (1999). Self-awareness and alterity. Evanston, IL: Northwestern University Press.

Zahavi, D. (2004). Back to Brentano? Journal of Consciousness Studies, 11, 66–87.

Zahavi, D., & Parnas, J. (1998). Phenomenal consciousness and self-awareness: A phenomenological critique of representational theory. Journal of Consciousness Studies, 5, 687–705.


CHAPTER 4

Philosophical Issues: Phenomenology

Evan Thompson and Dan Zahavi∗

Abstract

Current scientific research on consciousness aims to understand how consciousness arises from the workings of the brain and body, as well as the relations between conscious experience and cognitive processing. Clearly, to make progress in these areas, researchers cannot avoid a range of conceptual issues about the nature and structure of consciousness, such as the following: What is the relation between intentionality and consciousness? What is the relation between self-awareness and consciousness? What is the temporal structure of conscious experience? What is it like to imagine or visualize something, and how is this type of experience different from perception? How is bodily experience related to self-consciousness? Such issues have been addressed in detail in the philosophical tradition of phenomenology, inaugurated by Edmund Husserl (1859–1938) and developed by numerous other philosophers throughout the 20th century. This chapter provides an introduction to this tradition and its way of approaching issues about consciousness. We first discuss some features of phenomenological methodology and then present some of the most important, influential, and enduring phenomenological proposals about various aspects of consciousness. These aspects include intentionality, self-awareness and the first-person perspective, time-consciousness, embodiment, and intersubjectivity. We also highlight a few ways of linking phenomenology and cognitive science in order to suggest some directions that consciousness research could take in the years ahead.

∗ Order of authors was set alphabetically, and each author did equal work.

Introduction

Contemporary Continental perspectives on consciousness derive either wholly or in part from phenomenology, the philosophical tradition inaugurated by Edmund Husserl (1859–1938). This tradition stands as one of the dominant philosophical movements of the last century and includes major 20th-century European philosophers, notably Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty, as well as important North American and Asian exponents (Moran, 2000). Considering that virtually all of the leading figures in 20th-century German and French philosophy, including Adorno, Gadamer, Habermas, Derrida, and Foucault, have been influenced by phenomenology, and that phenomenology is both a decisive precondition and a constant interlocutor for a whole range of subsequent theories and approaches, including existentialism, hermeneutics, structuralism, deconstruction, and post-structuralism, phenomenology can be regarded as the cornerstone of what is often (but somewhat misleadingly) called Continental philosophy.

The phenomenological tradition, like any other philosophical tradition, spans many different positions and perspectives. This point also holds true for its treatments and analyses of consciousness. Like analytic philosophy, phenomenology offers not one but many accounts of consciousness. The following discussion, therefore, is by necessity selective. Husserl’s analyses are the main reference point, and the discussion focuses on what we believe to be some of the most important, influential, and enduring proposals about consciousness to have emerged from these analyses and their subsequent development in the phenomenological tradition.1

Furthermore, in recent years a new current of phenomenological philosophy has emerged in Europe and North America, one that goes back to the source of phenomenology in Husserl’s thought, but addresses issues of concern to contemporary analytic philosophy of mind, philosophy of psychology, and cognitive science (see Petitot, Varela, Pachoud, & Roy, 1999, and the new journal Phenomenology and the Cognitive Sciences). This important current of phenomenological research also informs our discussion.2 Accordingly, after introducing some features of the phenomenological method of investigation, we focus on the following topics relevant to cognitive science and the philosophy of mind: intentionality, self-awareness and the first-person perspective, time-consciousness, embodiment, and intersubjectivity.

Method

Phenomenology grows out of the recognition that we can adopt, in our own first-person case, different mental attitudes or stances toward the world, life, and experience. In everyday life we are usually straightforwardly immersed in various situations and projects, whether as specialists in one or another form of scientific, technical, or practical knowledge or as colleagues, friends, and members of families and communities. In addition to being directed toward more-or-less particular, ‘thematic’ matters, we are also overall directed toward the world as the unthematic horizon of all our activities (Husserl, 1970, p. 281). Husserl calls this attitude of being straightforwardly immersed in the world ‘the natural attitude’, and he thinks it is characterized by a kind of unreflective ‘positing’ of the world as something existing ‘out there’ more or less independently of us.

The ‘phenomenological attitude’, on the other hand, arises when we step back from the natural attitude, not to deny it, but to investigate the very experiences it comprises. If such an investigation is to be genuinely philosophical, then it must strive to be critical and not dogmatic, and therefore cannot take the naïve realism of the natural attitude for granted. Yet to deny this realistic attitude would be equally dogmatic. The realistic ‘positing’ of the natural attitude must rather be suspended, neutralized, or put to one side, so that it plays no role in the investigation. In this way, we can focus on the experiences that sustain and animate the natural attitude, but in an open and non-dogmatic manner. We can investigate experience in the natural attitude without being prejudiced by the natural attitude’s own unexamined view of things. This investigation should be critical and not dogmatic, shunning metaphysical and scientific prejudices. It should be guided by what is actually given to experience, rather than by what we expect to find given our theoretical commitments.

Yet how exactly is such an investigation to proceed? What exactly are we supposed to investigate? Husserl’s answer is deceptively simple: Our investigation should turn its attention toward the givenness or appearance of reality; that is, it should focus on the way in which reality is given to us in experience. We are to attend to the world strictly as it appears, the world as it is phenomenally manifest. Put another way, we should attend to the modes or ways in which things appear to us. We thereby attend to things strictly as correlates of our experience, and the focus of our investigation becomes the correlational structure of our subjectivity and the appearance or disclosure of the world.3

The philosophical procedure by which this correlational structure is investigated is known as the phenomenological reduction. ‘Reduction’ in this context does not mean replacing or eliminating one theory or model in favour of another taken to be more fundamental. It signifies rather a ‘leading back’ (re-ducere) or redirection of thought away from its unreflective and unexamined immersion in experience of the world to the way in which the world manifests itself to us. To redirect our interest in this way does not mean we doubt the things before us or somehow try to turn away from the world to look elsewhere. Things remain before us, but we envisage them in a new way; namely, strictly as they appear to us. Thus, everyday things available to our perception are not doubted or considered as illusions when they are ‘phenomenologically reduced’, but instead are envisaged and examined simply and precisely as perceived (and similarly for remembered things as remembered, imagined things as imagined, and so on). In other words, once we adopt the phenomenological attitude, we are interested not in what things are in themselves, in some naïve, mind-independent, or theory-independent sense, but rather in exactly how they appear, and thus as strict relational correlates of our experience.

The phenomenological reduction, in its full sense, is a rich mode of analysis comprising many steps. Two main ones are crucial. The first leads back from the natural attitude to the phenomenological attitude by neutralizing the realistic positing of the natural attitude and then orienting attention toward the disclosure or appearance of reality to us. The second leads from this phenomenological attitude to a more radical kind of philosophical attitude. Put another way, this step leads from phenomenology as an empirical, psychological attitude (phenomenological psychology) to phenomenology as a ‘transcendental’ philosophical attitude (transcendental phenomenology).

‘Transcendental’ is used here in its Kantian sense to mean an investigation concerned with the modes or ways in which objects are experienced and known, as well as the a priori conditions for the possibility of such experience and knowledge. Husserl casts these two aspects of transcendental inquiry in a specific form, which is clearly related to but nonetheless different from Kant’s (see Steinbock, 1995, pp. 12–15). Two points are important here. First, transcendental phenomenology focuses not on what things are, but on the ways in which things are given. For Husserl, this means focusing on phenomena (appearances) and the senses or meanings they have for us and asking how these meaningful phenomena are ‘constituted’. ‘Constitution’ does not mean fabrication or creation; the mind does not fabricate the world. To constitute, in the technical phenomenological sense, means to bring to awareness, to present, or to disclose. The mind brings things to awareness; it discloses and presents the world. Stated in a classical phenomenological way, the idea is that objects are disclosed or made available to experience in the ways they are thanks to how consciousness is structured. Things show up, as it were, having the features they do, because of how they are disclosed and brought to awareness, given the structure of consciousness.

P1: JzG0521857430c04 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 5, 2007 3:56

70 the cambridge handbook of consciousness

Such constitution is not apparent to us in everyday life, but requires systematic analysis to discern. Consider, for example, our experience of time. Our sense of the present moment as both simultaneously opening into the immediate future and slipping away into the immediate past depends on the formal structure of our consciousness of time. The present moment manifests as having temporal breadth, as a zone or span of actuality, instead of as an instantaneous flash, because of the way our consciousness is structured. Second, to address this constitutional problem of how meaningful phenomena are brought to awareness or disclosed, transcendental phenomenology tries to uncover the invariant formal principles by which experience necessarily operates in order to be constitutive. A fundamental example of this type of principle is the ‘retentional-protentional’ structure of time-consciousness, which we discuss in the later section, Temporality and Inner Time-Consciousness.

The purpose of the phenomenological reduction, therefore, contrary to many misunderstandings, is neither to exclude the world from consideration nor to commit one to some form of methodological solipsism. Rather, its purpose is to enable one to explore and describe the spatiotemporal world as it is given. For Husserl, the phenomenological reduction is meant as a way of maintaining this radical difference between philosophical reflection on phenomenality and other modes of thought.

Henceforth, we are no longer to consider the worldly object naïvely; rather, we are to focus on it precisely as a correlate of experience. If we restrict ourselves to that which shows itself (whether in straightforward perception or a scientific experiment), and if we focus specifically on that which tends to be ignored in daily life (because it is so familiar) – namely, on phenomenal manifestation as such, the sheer appearances of things – then we cannot avoid being led back (re-ducere) to subjectivity. Insofar as we are confronted with the appearance of an object – that is, with an object as presented, perceived, judged, or evaluated – we are led back to the intentional structures to which these modes of appearance are correlated. We are led to the intentional acts of presentation, perception, judgement, and evaluation and thereby to the subject (or subjects), in relation to whom the object as appearing must necessarily be understood.

Through the phenomenological attitude we thus become aware of the givenness of the object. Yet the aim is not simply to focus on the object exactly as it is given, but also on the subjective side of consciousness. We thereby become aware of our subjective accomplishments, specifically the kinds of intentionality that must be in play for anything to appear as it does. When we investigate appearing objects in this way, we also disclose ourselves as ‘datives of manifestation’ (Sokolowski, 2000), as those to whom objects appear.

As a procedure of working back, as it were, from the objects of experience, as given to perception, memory, imagination, and so on, to the acts whereby one is aware of these objects – acts of perceiving, remembering, imagining, and so on – the phenomenological reduction has to be performed in the first person. As with any such procedure, it is one thing to describe its general theoretical character and another to describe it pragmatically, the concrete steps by which it is carried out. The main methodical step crucial for the phenomenological reduction Husserl calls the epoché. This term comes originally from Greek skepticism, where it means to refrain from judgement, but Husserl adopted it as a term for the ‘suspension’, ‘neutralization’, or ‘bracketing’ of both our natural ‘positing’ attitude (see above) and our theoretical beliefs or assertions (scientific or philosophical) about ‘objective reality’. From a more concrete and situated first-person perspective, however, the epoché can be seen as a practiced mental gesture of shifting one’s attention to how the object appears and thus to one’s experiencing of the object: “Literally, the epoché corresponds to a gesture of suspension with regard to the habitual course of one’s thoughts, brought about by an interruption of their continuous flowing . . . As soon as a mental activity, a thought anchored to the perceived object alone, turns me away from the observation of the perceptual act to re-engage me in the perception of the object, I bracket it” (Depraz, 1999, pp. 97–98). The aim of this bracketing is to return one’s attention to the act of experiencing correlated to the object, thereby sustaining the phenomenological reduction: “in order that the reduction should always be a living act whose freshness is a function of its incessant renewal in me, and never a simple and sedimented habitual state, the reflective conversion [of attention] has to be operative at every instant and at the same time permanently sustained by the radical and vigilant gesture of the epoché” (Depraz, 1999, p. 100).

One can discern a certain ambivalence in the phenomenological tradition regarding the theoretical and practical or existential dimensions of the epoché. On the one hand, Husserl’s great concern was to establish phenomenology as a new philosophical foundation for science, and so the epoché in his hands served largely as a critical tool of theoretical reason.4 On the other hand, because Husserl’s theoretical project was based on a radical reappraisal of experience as the source of meaning and knowledge, it necessitated a constant return to the patient, analytic description of lived experience through the phenomenological reduction. This impulse generated a huge corpus of careful phenomenological analyses of numerous different dimensions and aspects of human experience – the perceptual experience of space (Husserl, 1997), kinesthesis and the experience of one’s own body (Husserl, 1989, 1997), time-consciousness (Husserl, 1991), affect (Husserl, 2001), judgement (Husserl, 1975), imagination and memory (Husserl, 2006), and intersubjectivity (Husserl, 1973), to name just a few. Nevertheless, the epoché as a practical procedure – as a situated practice carried out in the first person by the phenomenologist – has remained strangely neglected in the phenomenological literature, even by so-called existential phenomenologists such as Heidegger and Merleau-Ponty, who took up and then recast in their own ways the method of the phenomenological reduction (see Heidegger, 1982; Merleau-Ponty, 1962).

For this reason, one new current in phenomenology aims to develop more explicitly the pragmatics of the epoché as a ‘first-person method’ for investigating consciousness (Depraz, 1999; Depraz, Varela, & Vermersch, 2003; Varela & Shear, 1999). This pragmatic approach has also compared the epoché to first-person methods in other domains, such as contemplative practice (Depraz et al., 2003), and has explored the relevance of first-person methods for producing more refined first-person reports in experimental psychology and cognitive neuroscience (Lutz & Thompson, 2003; Varela, 1996). This latter endeavour is central to the research programme known as ‘neurophenomenology’, introduced by Francisco Varela (1996, 1999) and developed by other researchers (Lloyd, 2002, 2003; Lutz & Thompson, 2003; Rainville, 2005; Thompson, in press; Thompson, Lutz, & Cosmelli, 2005; see also Chapters 19 and 26).

Intentionality

Implicit in the foregoing treatment of phenomenological method is the phenomenological concept of intentionality. According to Husserlian phenomenology, consciousness is intentional, in the sense that it ‘aims toward’ or ‘intends’ something beyond itself. This sense of intentional should not be confused with the more familiar sense of having a purpose in mind when one acts, which is only one kind of intentionality in the phenomenological sense. Rather, intentionality is a generic term for the pointing-beyond-itself proper to consciousness (from the Latin intendere, which once referred to drawing a bow and aiming at a target).

Phenomenologists distinguish different types of intentionality. In a narrow sense, intentionality is defined as object-directedness. In a broader sense, which covers what Husserl (2001, p. 206) and Merleau-Ponty (1962, p. xviii) called ‘operative intentionality’ (see below), intentionality is defined as openness toward otherness (or alterity). In both cases, the emphasis is on denying that consciousness is self-enclosed.

Object-directedness characterizes almost all of our experiences, in the sense that in having them we are exactly conscious of something. We do not merely love, fear, see, or judge; we love, fear, see, or judge something. Regardless of whether we consider a perception, a thought, a judgement, a fantasy, a doubt, an expectation, a recollection, and so on, these diverse forms of consciousness are all characterized by the intending of an object. In other words, they cannot be analyzed properly without a look at their objective correlates; that is, the perceived, the doubted, the expected, and so forth. The converse is also true: The intentional object cannot be analyzed properly without a look at its subjective correlate, the intentional act. Neither the intentional object nor the mental act that intends it can be understood apart from the other.

Phenomenologists call this act-object relation the ‘correlational structure of intentionality’. ‘Correlational’ does not mean the constant conjunction of two terms that could be imagined to exist apart, but the necessary structural relation of mental act and intended object. Object-directed intentional experiences necessarily comprise these two inseparable poles. In Husserlian phenomenological language these two poles are known as the ‘noema’ (the object as experienced) and the ‘noesis’ (the mental act that intends the object).

There has been a huge amount of scholarly discussion about the proper way to interpret the Husserlian notion of the noema (see Drummond, 2003, for an overview). The discussion concerns the relation between the object-as-intended (the noema) and the object-that-is-intended (the object itself) – the wine bottle-as-perceived (as felt and seen) and the bottle itself. According to the representationalist interpretation, the noema is a type of representational entity, an ideal sense or meaning, that mediates the intentional relation between the mental act and the object. On this view, consciousness is directed toward the object by means of the noema and thus achieves its openness to the world only in virtue of the representational noema.

According to the rival non-representationalist interpretation, the noema is not any intermediate, representational entity; the noema is the object itself, but the object considered phenomenologically; that is, precisely as experienced. In other words, the object-as-intended is the object-that-is-intended, abstractly and phenomenologically considered; namely, in abstraction from the realistic positing of the natural attitude and strictly as experientially given. The noema is thus graspable only in a phenomenological or transcendental attitude. This view rejects the representationalism of the former view. Consciousness is intrinsically self-transcending and accordingly does not achieve reference to the world in virtue of intermediate ideal entities that bestow intentionality upon it. Experiences are intrinsically intentional (see Searle, 1983, for a comparable claim in the analytic tradition). Their being is constituted by being of something else. It would take us too far afield to review the twists and turns of this debate, so we simply state for the record that for a variety of reasons we think the representationalist interpretation of the noema is mistaken and the non-representationalist interpretation is correct (see Zahavi, 2003, 2004).

We have been considering object-directed intentionality, but many experiences are not object-directed – for example, feelings of pain and nausea, and moods, such as anxiety, depression, and boredom. Philosophers whose conception of intentionality is limited to object-directedness deny that such experiences are intentional (e.g., Searle, 1983). Phenomenologists, however, in distinguishing between intentionality as object-directedness and intentionality as openness, have a broader conception. It is true that pervasive moods, such as sadness, boredom, nostalgia, and anxiety, must be distinguished from intentional feelings, such as the desire for an apple or the admiration for a particular person. Nevertheless, moods are not without a reference to the world. They do not enclose us within ourselves, but are lived through as pervasive atmospheres that deeply influence the way the world is disclosed to us. Moods, such as curiosity, nervousness, or happiness, disclose our embeddedness in the world and articulate or modify our existential possibilities. As Heidegger argued, moods, rather than being merely attendant phenomena, are fundamental forms of disclosure: “Mood has always already disclosed being-in-the-world as a whole and first makes possible directing oneself toward something” (Heidegger, 1996, p. 129).

What about pain? Sartre’s classic analysis in Being and Nothingness (1956) is illuminating in this case. Imagine that you are sitting late at night trying to finish reading a book. You have been reading most of the day and your eyes hurt. How does this pain originally manifest itself? According to Sartre, not initially as a thematic object of reflection, but by influencing the way in which you perceive the world. You might become restless and irritated and have difficulties in focusing and concentrating. The words on the page might tremble and quiver. The pain is not yet apprehended as an intentional object, but that does not mean that it is either cognitively absent or unconscious. It is not yet reflected-upon as a psychic object, but given rather as a vision-in-pain, as an affective atmosphere that influences your intentional interaction with the world.

Another important part of the phenomenological account of intentionality is the distinction among signitive (linguistic), pictorial, and perceptual intentionalities (Husserl, 2000). I can talk about a withering oak, I can see a detailed drawing of the oak, and I can perceive the oak myself. These different ways to intend an object are not unrelated. According to Husserl, there is a strict hierarchical relation among them, in the sense that they can be ranked according to their ability to give us the object as directly, originally, and optimally as possible.

The object can be given more or less directly; that is, it can be more or less present. One can also speak of different epistemic levels. The lowest and most empty way in which the object can appear is in the signitive acts. These (linguistic) acts certainly have a reference, but apart from that the object is not given in any fleshed-out manner. The pictorial acts have a certain intuitive content, but like the signitive acts, they intend the object indirectly. Whereas the signitive acts intend the object via a contingent representation (a linguistic sign), the pictorial acts intend the object via a representation (picture) that bears a certain similarity or projective relation to the object. Only perception gives us the object directly. This is the only type of intention that presents the object in its bodily presence (leibhaftig).

Recollection and imagination are two other important forms of object-directed intentionality (empathy is a third: see the section, Intersubjectivity). These types are mediated; that is, they intend their objects by way of other, intermediate mental activities, rather than directly, as does perception. In recollection, I remember the withering oak (the object itself) by means of re-presenting (reproducing or re-enacting) a past perception of the oak. In imagination, I can either imagine the withering oak (the actual tree), or I can imagine a non-existent oak in the sense of freely fantasizing a different world. Either way, imagination involves re-presenting to myself a possible perceptual experience of the oak. Yet, in imagination, the assertoric or ‘positing’ character of this (re-presented) perceptual experience is said to be ‘neutralized’, for whereas an ordinary perceptual experience posits its object as actually there (regardless of whether the experience is veridical), imagination does not. In recollection, by contrast, this assertoric or positing feature of the experience is not neutralized, but remains in play, because the perception reproduced in the memory is represented as having actually occurred in the past. Husserl thus describes perception and recollection as positional (assertoric) acts, whereas imagination is non-positional (non-assertoric; see Husserl, 2006, and Bernet, Kern, & Marbach, 1993, for an overview).

We thus arrive at another crucial distinction, the distinction between intentional acts of presentation (Gegenwärtigung) and of re-presentation (Vergegenwärtigung). According to standard usage in the analytic philosophy of mind and cognitive science, the term ‘representation’ applies to any kind of mental state that has intentional content (‘intentional content’ and ‘representational content’ being used synonymously). In phenomenological parlance, on the other hand, ‘re-presentation’ applies only to those types of mental acts that refer to their objects by way of intermediate mental activity, as in remembrance, imagination (imaging and fantasy), and pictorial consciousness (looking at a picture). Perception, by contrast, is not re-presentational, but presentational, because the object-as-experienced (the intentional object or objective correlate of the act) is ‘bodily present’ or there ‘in flesh and blood’ (regardless of whether the perceptual experience turns out to be veridical or not).

Perceptual intentionality can be further differentiated into, on the one hand, a thematic, explicit, or focal object-directed mode of consciousness and, on the other hand, a non-reflective tacit sensibility, which constitutes our primary openness to the world. This tacit sensibility, called ‘operative [fungierende] intentionality’, functions prereflectively, anonymously, and passively, without being engaged in any explicit cognitive acquisition. In this context it is important to distinguish between activity and passivity. One can be actively taking a position in acts of comparing, differentiating, judging, valuing, wishing, and so on. As Husserl (2001) points out, however, whenever one is active, one is also passive, because to be active is to react to something that has affected one. Every kind of active position-taking presupposes a prior and passive being affected.

Following Husserl one step further in his analysis, we can distinguish between receptivity and affectivity. Receptivity is taken to be the first, lowest, and most primitive type of intentional activity; it consists in responding to or paying attention to that which is affecting us passively. Even receptivity understood as a mere ‘I notice’ presupposes a prior ‘affection’ (meaning one’s being affectively influenced or perturbed, not a feeling of fondness). Whatever becomes thematized (even as a mere noticing) must have been already affecting and stimulating one in an unheeded manner. Affectivity, however, is not a matter of being affected by an isolated, undifferentiated sense impression. If something is to affect us, impose itself on us, and arouse our attention, it must be sufficiently strong. It must be more conspicuous than its surroundings, and it must stand out in some way through contrast, heterogeneity, and difference. Thus, receptivity emerges from within a passively organized and structured field of affectivity.5

In summary, explicit, object-directed intentional experience arises against the background of a precognitive, operative intentionality, which involves a dynamic interplay of affectivity and receptivity and constitutes our most fundamental way of being open to the world.

Phenomenal Consciousness and Self-Awareness

In contemporary philosophy of mind the term ‘phenomenal consciousness’ refers to mental states that have a subjective and experiential character. In Nagel’s words, for a mental state to be (phenomenally) conscious is for there to be something it is like for the subject to be in that state (Nagel, 1979). Various notions are employed to describe the properties characteristic of phenomenal consciousness – qualia, sensational properties, phenomenal properties, and the subjective character of experience – and there is considerable debate about the relation between these properties and other properties of mental states, such as their representational content or their being cognitively accessible to thought and verbal report (‘access consciousness’). The examples used in these discussions are usually bodily sensations, such as pain, or perceptual experiences, such as the visual experience of colour. Much less frequently does one find discussion of the subjective character of emotion (feelings, affective valences, moods), to say nothing of memory, mental imagery, or thought.

According to Husserl, however, the phenomenal aspect of experience is not limited to sensory or even emotional states, but also characterizes conscious thought. In his Logical Investigations, Husserl (2000) argues that conscious thoughts have experiential qualities and that episodes of conscious thought are experiential episodes. Every intentional experience possesses two different, but inseparable ‘moments’ (i.e., dependent aspects or ingredients): (i) Every intentional experience is an experience of a specific type, be it an experience of judging, hoping, desiring, regretting, remembering, affirming, doubting, wondering, fearing, and so on. Husserl calls this aspect the intentional quality of the experience. (ii) Every intentional experience is also directed at or about something. He calls this aspect the intentional matter of the experience. Clearly, the same quality can be combined with different matters, and the same matter can be combined with different qualities. It is possible to doubt that ‘the inflation will continue’, doubt that ‘the election was fair’, or doubt that ‘one’s next book will be an international bestseller’, precisely as it is possible to deny that ‘the lily is white’, to judge that ‘the lily is white’, or to question whether ‘the lily is white’. Husserl’s distinction between the intentional matter and the intentional quality thus bears a certain resemblance to the contemporary distinction between propositional content and propositional attitudes (though it is important to emphasize that Husserl by no means took all intentional experiences to be propositional in nature; see Husserl, 1975).

Nevertheless – and this is the central point – Husserl considered these cognitive differences to be also experiential differences. Each of the different intentional qualities has its own phenomenal character. There is an experiential difference between affirming and denying that Hegel was the greatest of the German idealists, as there is an experiential difference between expecting and doubting that Denmark will win the 2010 FIFA World Cup. What it is like to be in one type of intentional state is different from what it is like to be in another type of intentional state. Similarly, each of the different intentional matters has its own phenomenal character. To put it differently, a change in the intentional matter will entail a change in what it is like to undergo the experience in question. (This does not entail, however, that two experiences differing in what it is like to undergo them cannot intend the same object, nor that two experiences alike in this respect must necessarily intend the same object.) These experiential differences, Husserl argues, are not simply sensory differences.6

In summary, every phenomenally conscious state, be it a perception, an emotion, a recollection, an abstract belief, and so forth, has a certain subjective character, a certain phenomenal quality, corresponding to what it is like to live through or undergo that state. This is what makes the mental state in question phenomenally conscious.

This experiential quality of conscious mental states, however, calls for further elucidation. Let us take perceptual experience as our starting point. Whereas the object of my perceptual experience is intersubjectively (publicly) accessible, in the sense that it can in principle be given to others in the same way it is given to me, the case is different with my perceptual experience itself. Whereas you and I can both perceive one and the same cherry, each of us has his or her own distinct perception of it, and we cannot share these perceptions, precisely as we cannot share each other’s pains. You might certainly realize that I am in pain and even empathize with me, but you cannot actually feel the pain the same way I do. This point can be formulated more precisely by saying that you have no access to the first-personal givenness of my experience. This first-personal quality of experience leads to the issue of self and self-awareness.

When one is directly and non-inferentially conscious of one’s own occurrent thoughts, perceptions, feelings, or pains, they are characterized by a first-personal givenness that immediately reveals them as one’s own. This first-personal givenness of experiential phenomena is not something incidental to their being, a mere varnish the experiences could lack without ceasing to be experiences. On the contrary, it is their first-personal givenness that makes the experiences subjective. To put it differently, their first-personal givenness entails a built-in self-reference, a primitive experiential self-referentiality. When I am aware of an occurrent pain, perception, or thought from the first-person perspective, the experience in question is given immediately and non-inferentially as mine. I do not first scrutinize a specific perception or feeling of pain and then identify it as mine.

Accordingly, self-awareness cannot be equated with reflective (explicit, thematic, introspective) self-awareness, as claimed by some philosophers and cognitive scientists. On the contrary, reflective self-awareness presupposes a prereflective (implicit, tacit) self-awareness. Self-awareness is not something that comes about only at the moment I realize that I am (say) perceiving the Empire State Building, or realize that I am the bearer of private mental states, or refer to myself using the first-person pronoun. Rather, it is legitimate to speak of a primitive but basic type of self-awareness whenever I am acquainted with an experience from a first-person perspective. If the experience in question, be it a feeling of joy, a burning thirst, or a perception of a sunset, is given in a first-personal mode of presentation to me, it is (at least tacitly) given as my experience and can therefore count as a case of self-awareness. To be aware of oneself is consequently not to apprehend a pure self apart from the experience, but to be acquainted with an experience in its first-personal mode of presentation; that is, from ‘within’. Thus, the subject or self referred to is not something standing opposed to, apart from, or beyond experience, but is rather a feature or function of its givenness. Or to phrase it differently, it is this first-personal givenness of the experience that constitutes the most basic form of selfhood (Zahavi, 1999, 2005).

In summary, any (object-directed) conscious experience, in addition to being of or about its intentional object, is prereflectively manifest to itself. To use another formulation, transitive phenomenal consciousness (consciousness-of) is also intransitive self-consciousness (see Chapter 3). Intransitive self-consciousness is a primitive form of self-consciousness in the sense that (i) it does not require any subsequent act of reflection or introspection, but occurs simultaneously with awareness of the object; (ii) it does not consist in forming a belief or making a judgement; and (iii) it is passive in the sense of being spontaneous and involuntary. According to some phenomenologists (e.g., Merleau-Ponty, 1962), this tacit self-awareness involves a form of non-objective bodily self-awareness, an awareness of one’s lived body (Leib) or embodied subjectivity, correlative to experience of the object (see the section, Embodiment and Perception). The roots of such prereflective bodily self-awareness sink to the passive and anonymous level of the interplay between receptivity and affectivity constitutive of ‘operative intentionality’ (see the section, Intentionality).

Phenomenology thus corroborates certain proposals about consciousness coming from neuroscience. Such theorists as Panksepp (1998a, b) and Damasio (1999) have argued that neuroscience needs to explain both how the brain enables us to experience the world outside us and how it “also creates a sense of self in the act of knowing . . . how each of us has a sense of ‘me’” (Parvizi & Damasio, 2001, pp. 136–137). In phenomenological terms, this second issue concerns the primitive sense of ‘I-ness’ belonging to consciousness, known as ‘ipseity’ (see also Chapter 19). As a number of cognitive scientists have emphasized, this core of self-awareness in consciousness is fundamentally linked to bodily processes of life-regulation, emotion, and affect, such that cognition and intentional action are emotive (Damasio, 1999; Freeman, 2000; Panksepp, 1998a, b). A promising line of collaboration between phenomenology and affective-cognitive neuroscience could therefore centre on the lived body as a way of deepening our understanding of subjectivity and consciousness (Thompson, 2007).

Temporality and Inner Time-Consciousness

Why must an investigation of consciousness inevitably confront the issue of time? There are many reasons, of course, but in this section we focus on two main ones. First, experiences do not occur in isolation. The stream of consciousness comprises an ensemble of experiences that are unified both at any given time (synchronically) and over time (diachronically); therefore, we need to account for this temporal unity and continuity. In addition, we are able not only to recollect earlier experiences and recognize them as our own but also to perceive enduring (i.e., temporally extended) objects and events; hence, we need to account for how consciousness must be structured for there to be such experiences of coherence and identity over time. Second, our present cognitive activities are shaped and influenced conjointly by both our past experiences and our future plans and expectations. Thus, if we are to do justice to the dynamic character of cognition, we cannot ignore the role of time.

In a phenomenological context, the term ‘temporality’ does not refer to objective, cosmic time, measured by an atomic clock, or to a merely subjective sense of the passage of time, although it is intimately related to the latter. Temporality, or ‘inner time-consciousness’, refers to the most fundamental, formal structure of the stream of consciousness (Husserl, 1991).

To introduce this idea, we can consider what phenomenologists call ‘syntheses of identity’ in the flow of experience. If I move around a tree to gain a fuller appreciation of it, then the tree’s different profiles – its front, sides, and back – do not appear as disjointed fragments, but as integrated features belonging to one and the same tree. The synthesis that is a precondition for this integration is temporal in nature. Thus, time-consciousness must be regarded as a formal condition of possibility for the perception of any object. Yet how must this experiential process be structured for identity or unity over time to be possible?

Phenomenological analyses point to the ‘width’ or ‘depth’ of the ‘living present’ of consciousness: Our experience of temporally enduring objects and events, as well as our experience of change and succession, would be impossible were we conscious only of that which is given in a punctual now and were our stream of consciousness composed of a series of isolated now-points, like a string of pearls. According to Husserl (1991), the basic unit of temporality is not a ‘knife-edge’ present, but a ‘duration-block’ (to borrow William James’s words; see James, 1981, p. 574), i.e., a temporal field that comprises all three temporal modes of present, past, and future. Just as there is no spatial object without a background, there is no experience without a temporal horizon. We cannot experience anything except against the background of what it succeeds and what we anticipate will succeed it. We can no more conceive of an experience empty of future than one empty of past.

Three technical terms describe this temporal form of consciousness. There is (i) a ‘primal impression’ narrowly directed toward the now-phase of the object. The primal impression never appears in isolation and is an abstract component that by itself cannot provide us with a perception of a temporal object. The primal impression is accompanied by (ii) a ‘retention’, which provides us with a consciousness of the just-elapsed phase of the object, and by (iii) a ‘protention’, which in a more-or-less indefinite way intends the phase of the object about to occur. The role of the protention is evident in our implicit and unreflective anticipation of what is about to happen as experience progresses. That such anticipation belongs to experience is illustrated by the fact that we would be surprised if (say) the wax figure suddenly moved or if the door we opened hid a stone wall. It makes sense to speak of surprise only in light of anticipation, and because we can always be surprised, we always have a horizon of anticipation. The concrete and full structure of all lived experience is thus primal impression-retention-protention. Although the specific experiential contents of this structure progressively change from moment to moment, at any given moment this threefold structure is present (synchronically) as a unified whole. This analysis provides an account of the notion of the specious present that improves on that found in William James, C. D. Broad, and others (see Gallagher, 1998).

It is important to distinguish retention and protention, which are structural features of any conscious act, from recollection and expectation, understood as specific types of mental acts. There is a clear difference between, on the one hand, retaining notes that have just sounded and protending notes about to sound while listening to a melody, and, on the other hand, remembering a past holiday or looking forward to the next vacation. Whereas recollection and expectation presuppose the work of retention and protention, protention and retention are intrinsic components of any occurrent experience one might have. Unlike recollection and expectation, they are passive (involuntary) and automatic processes that take place without our active or deliberate contribution. Finally, they are invariant structural features that make possible the temporal flow of consciousness as we know and experience it. In other words, they are a priori conditions of possibility of there being ‘syntheses of identity’ in experience at all.
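The threefold retention–impression–protention structure can be pictured as a sliding window over a stream of experiential contents. The following sketch is purely our own illustrative toy model, not anything proposed in the phenomenological literature; the function and variable names (`living_present`, `melody`, etc.) are invented for the illustration.

```python
# Toy model of the 'duration-block': at every moment of hearing a melody,
# experience comprises a retention (the just-elapsed phase), a primal
# impression (the now-phase), and a protention (the anticipated next
# phase). All names here are illustrative, not technical vocabulary.

def living_present(stream, i):
    """Return the threefold duration-block centred on position i."""
    return {
        "retention": stream[i - 1] if i > 0 else None,
        "primal_impression": stream[i],
        "protention": stream[i + 1] if i + 1 < len(stream) else None,
    }

melody = ["C", "D", "E", "F"]

block = living_present(melody, 2)
# The now-phase "E" is given together with the retained "D" and the
# protended "F" -- never as an isolated, knife-edge now-point.

# Surprise presupposes protention: only if what occurs conflicts with
# what was protended does a discrepancy (surprise) arise at all.
surprised = melody[3] != block["protention"]
```

The point of the sketch is only structural: the window always has three slots, even though its contents change from moment to moment.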

Husserl’s analysis of the structure of inner time-consciousness serves a double purpose. It is meant to explain not only how we can be aware of objects with temporal extension but also how we can be aware of our own stream of experiences. To put it differently, Husserl’s investigation is meant to explain not only how we can be aware of temporally extended units but also how consciousness unifies itself across time.

Like bodily self-awareness, temporality and time-consciousness are rich in potential for collaborative study by phenomenology and cognitive science. Work by Francisco Varela (1999) in particular has shown that phenomenological analyses of time-consciousness can be profitably linked to neurodynamical accounts of the brain processes associated with the temporal flow of conscious experience (see also Chapter 26 and Thompson, 2007). This linkage between phenomenology and neurodynamics is central to the research programme of neurophenomenology, mentioned above.

Embodiment and Perception

Conscious experience involves one’s body. Yet what exactly is the relationship between the two? It is obvious that we can perceive our own body by (say) visually inspecting our hands. It is less obvious that our bodily being constitutes our subjectivity and the correlative modes or ways in which objects are given to us.

The phenomenological approach to the role of the body in the constitution of subjective life is closely linked to the analysis of perception. Two basic points about perception are important here: (i) the intentional objects of perceptual experience are public spatiotemporal objects (not private mental images or sense-data); and (ii) such objects are always given only partially to perception and can never present themselves in their totality. On the one hand, perception purports to give us experience of public things, not private mental images. On the other hand, whatever we perceive is always perceived in certain ways and from a certain perspective. We see things, for instance, as having various spatial forms and visible qualities (lightness, colour, etc.), and we are able to distinguish between constancy and variation in appearance (the grass looks uniformly green, but the shaded part looks dark, whereas the part in direct sunlight looks light). We see only one profile of a thing at any given moment, yet we do not see things as mere facades, for we are aware of the presence of the other sides we do not see directly. We do not perceive things in isolation; we see them in contexts or situations, in which they relate to and depend on each other and on dimensions of the environment in multifarious ways.

These invariant characteristics of perception presuppose what phenomenologists call the lived body (Leib). Things are perceptually situated in virtue of the orientation they have to our perceiving and moving bodies. To listen to a string quartet by Schubert is to enjoy it from a certain perspective and standpoint, be it from the street, in the gallery, or in the first row. If something appears perspectivally, then the subject to whom it appears must be spatially related to it. To be spatially related to something requires that one be embodied. To say that we perceive only one profile of something while being aware of other possible profiles means that any profile we perceive points beyond itself to further possible profiles. Yet this reference of a given profile beyond itself is equally a reference to our ability to exchange this profile for another through our own free movement (tilting our head, manipulating an object in our hands, walking around something, etc.). Co-given with any profile and through any sequence of profiles is one’s lived body as the ‘zero point’ or absolute indexical ‘here’, in relation to which any appearing object is oriented. One’s lived body is not co-given as an intentional object, however, but as an implicit and practical ‘I can’ of movement and perception. We thus rejoin the point made earlier (in the section, Phenomenality and Self-Awareness) that any object-directed (transitive) intentional experience involves a non-object-directed (intransitive) self-awareness, here an intransitive bodily self-awareness. In short, every object-experience carries with it a tacit form of self-experience.

The role of bodily self-experience in perception can be phenomenologically described in much greater detail. One important topic is the role it plays in the constitution (i.e., the bringing to awareness or disclosure) of both objects and space for perception. Perspectival appearances of the object bear a certain relation to kinaesthetic situations of the body. When I watch a bird in flight, the bird is given in conjunction with a sensing of my eye and head movements; when I touch the computer keys, the keys are given in conjunction with a sensing of my finger movements. Husserl’s 1907 lectures on Thing and Space (Husserl, 1997) discuss how this relation between perception and kinaesthesis (including proprioception) is important for the constitution of objects and space. To perceive an object from a certain perspective is to be aware (tacitly or prereflectively) that there are other coexisting but absent profiles of the object. These absent profiles stand in certain sensorimotor relations to the present profile: They can be made present if one carries out certain movements. In other words, the profiles are correlated with kinaesthetic systems of possible bodily movements and positions. If one moves this way, then that aspect of the object becomes visible; if one moves that way, then this aspect becomes visible.

In Husserl’s terminology, every perspectival appearance is kinaesthetically motivated. In the simple case of a motionless object, for instance, if the kinaesthetic experience (K1) remains constant, then the perceptual appearance (A1) remains constant. If the kinaesthetic experience changes (K1 becomes K2), then the perceptual appearance changes in correlation with it (A1 becomes A2). There is thus an interdependency between kinaesthetic experiences and perceptual appearances: A given appearance (A1) is not always correlated with the same kinaesthetic experience (e.g., K1), but it must be correlated with some kinaesthetic experience or other. Turning now to the case of perceptual space, Husserl argues that different kinaesthetic systems of the body imply different perceptual achievements with regard to the constitution of space. One needs to distinguish among the oculomotor systems of one eye alone and of two eyes together, the cephalomotor system of head movements, and the system of the whole body as it moves towards, away from, and around things. These kinaesthetic systems are hierarchically ordered in relation to the visual field: The cephalomotor visual field contains a continuous multiplicity of oculomotor fields; the egocentric field of the body as a whole contains a continuous multiplicity of cephalomotor fields. This hierarchy also reflects a progressive disclosure of visual space: The eyes alone give only a two-dimensional continuum; head movements expand the field into a spherical plane at a fixed distance (like a planetarium); and movement of the body as a whole introduces distance, depth, and three-dimensional structure. It is the linkage between the kinaesthetic system of whole-body movements (approaching, retreating, and circling) and the corresponding perceptual manifold of profiles or perspectival appearances that fully discloses the three-dimensional space of visual perception.
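The K/A correlation can be loosely formalized as appearance being a function of kinaesthetic position. The sketch below is our own illustrative toy model under that assumption, not a formalization found in Husserl or the secondary literature; every name in it (`appearance`, `tree`, and so on) is invented for the illustration.

```python
# Toy model of kinaesthetic motivation: the appearance (A) an object
# presents is a function of the perceiver's kinaesthetic situation (K).
# Holding K constant holds A constant; changing K changes A in
# correlation with it. All names are illustrative only.

def appearance(profiles, k):
    """Return the profile visible from kinaesthetic position k."""
    return profiles[k % len(profiles)]

# Four profiles of a simple object, indexed by position around it.
tree = ["front", "right side", "back", "left side"]

k1 = 0
a1 = appearance(tree, k1)   # constant K -> constant A

k2 = 1                      # K1 becomes K2 ...
a2 = appearance(tree, k2)   # ... and so A1 becomes A2

# A given appearance is not tied to one unique kinaesthetic state
# (walking all the way around the object restores it), but it is
# always correlated with some kinaesthetic state or other.
same_again = appearance(tree, k1 + len(tree))
```

The design choice worth noting is the direction of dependence: appearances are indexed by kinaesthetic positions, not the other way round, which is the asymmetry the text describes.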

Insofar as the body functions as the zero-point for perception and action (i.e., considered in its function as a bodily first-person perspective), the body recedes from experience in favour of the world. My body supplies me with my perspective on the world, and thus is first and foremost not an object on which I have a perspective. In other words, bodily awareness in perception is not in the first instance a type of object-consciousness, but a type of non-transitive self-awareness (see the section, Phenomenality and Self-Awareness, and Chapter 3). Although one can certainly experience one’s body as an object (e.g., in a mirror), bodily self-awareness is more fundamentally an experience of one’s body as a unified subjective field of perception and action. A full account of bodily experience thus reveals the body’s double or ambiguous character as both a subjectively lived body (Leib) and a physical (spatiotemporal) objective body (Körper).

The phenomenological analyses of embodiment and perception summarized in this section are relevant to current trends in cognitive science. In recent years cognitive scientists have increasingly challenged the classical view that perception and action are separate systems. Although phenomenologists have long emphasized the constitutive role of motor action in perceptual experience, cognitive scientists often seem unaware of this important body of research (but see Rizzolatti et al., 1997, for an exception). For example, neuropsychologists Milner and Goodale write in their influential book, The Visual Brain in Action: “For most investigators, the study of vision is seen as an enterprise that can be conducted without any reference whatsoever to the relationship between visual inputs and motor outputs. This research tradition stems directly from phenomenological intuitions that regard vision purely as a perceptual phenomenon” (Milner & Goodale, 1995, p. 13). It can be seen from our discussion in this section, however, that it is important to distinguish between uncritical common-sensical intuitions and the critical examination of perceptual experience found in the phenomenological tradition. The intuitions Milner and Goodale target do not belong to phenomenology. On the contrary, Husserl’s and Merleau-Ponty’s analyses of the relation between perception and kinaesthesis clearly indicate that perception is also a motor phenomenon. Indeed, these analyses anticipate the so-called dynamic sensorimotor approach to perception (Hurley, 1998; Hurley & Noë, 2003; Noë, 2004; O’Regan & Noë, 2001). Rather than looking to the intrinsic properties of neural activity to understand perceptual experience, this approach looks to the dynamic sensorimotor relations among neural activity, the body, and the world. This approach has so far focused mainly on the phenomenal qualities of perceptual experience, but has yet to tackle the perceptual constitution of space, intransitive bodily self-awareness, or the relationship between perception and affectively motivated attention, all long-standing topics in phenomenology. Further development of the dynamic sensorimotor approach might therefore benefit from the integration of phenomenological analyses of embodiment and perception (see Thompson, 2007).

Intersubjectivity

For many philosophers, the issue of intersubjectivity is equated with the ‘problem of other minds’: How can one know the mental states of others, or even that there are any other minds at all (see Dancy, 1985, pp. 67–68)? One classical attempt to deal with this problem takes the form of trying to justify our belief in other minds on the basis of the following argument from analogy: The only mind I have direct access to is my own. My access to the mind of another is always mediated by my perception of the other’s bodily movements, which I interpret as intentional behaviour (i.e., as behaviour resulting from internal mental states). But what justifies me in this interpretation? How can the perception of another person’s bodily movements provide me with information about his mind, such that I am justified in viewing his movements as intentional behaviour? In my own case, I can observe that I have experiences when my body is causally influenced and that these experiences frequently bring about certain actions. I observe that other bodies are influenced and act in similar manners, and I therefore infer by analogy that the behaviour of foreign bodies is associated with experiences similar to those I have myself. Although this inference does not provide me with indubitable knowledge about others, it gives me reason to believe in their existence and to interpret their bodily movements as meaningful behaviour.

This way of conceptualizing self and other can also be discerned, to varying degrees, in certain approaches to social cognition in cognitive science. Thus, both certain versions of the ‘theory-theory’ (e.g., Gopnik, 1993) and the ‘simulation-theory’ (e.g., Goldman, 2000) have crucial features in common with the traditional argument from analogy. According to the theory-theory, normal human adults possess a common-sense or folk-psychological ‘theory of mind’ that they employ to explain and predict human behaviour. Advocates of the theory-theory consider this folk-psychological body of knowledge to be basically equivalent to a scientific theory: Mental states are unobservable entities (like quarks), and our attribution of them to each other involves causal-explanatory generalizations (comparable in form to those of empirical science) that relate mental states to each other and to observable behaviour. According to the simulation-theory, on the other hand, ‘mind-reading’ depends not on the possession of a tacit psychological theory, but on the ability to mentally ‘simulate’ another person – to use the resources of one’s own mind to create a model of another person and thereby identify with that person, projecting oneself imaginatively into his or her situation (Goldman, 2000). In either case, intersubjectivity is conceptualized as a cognitively mediated relation between two otherwise isolated subjects. Both theories take intersubjective understanding to be a matter of how one represents unobservable, inner mental states on the basis of outward behaviour (what they disagree about is the nature of the representations involved). Thus, both theories foster a conception of the mental as an inner realm essentially different from outward behaviour.

Phenomenologists do not frame the issue of intersubjectivity in this way, for they reject the presuppositions built into the problem of other minds (see Zahavi, 2001a, 2001b). Two presuppositions in particular are called into question. The first is that one’s own mind is given to one as a solitary and internal consciousness. The problem with this assumption is that our initial self-acquaintance is not with a purely internal, mental self, for we are embodied and experience our own exteriority, including our bodily presence to the other. The second assumption is that, in perceiving the other, all we ever have direct access to is the other’s bodily movements. The problem with this assumption is that what we directly perceive is intentional or meaningful behaviour – expression, gesture, and action – not mere physical movement that gets interpreted as intentional action as a result of inference. Thus, on the one hand, one’s own subjectivity is not disclosed to oneself as a purely internal phenomenon, and on the other hand, the other’s body is not disclosed as a purely external phenomenon. Put another way, both the traditional problem of other minds and certain cognitive-scientific conceptions of ‘mind-reading’ rest on a deeply problematic conception of the mind as essentially inner, the body as essentially outer, and intentional behaviour as arising from a purely contingent and causal connection between these two spheres.

Phenomenological treatments of intersubjectivity start from the recognition that, in the encounter with the other, one is faced neither with a mere body nor a hidden psyche, but with a unified whole. This unified whole is constituted by the expressive relation between mental states and behaviour, a relation that is stronger than that of a mere contingent, causal connection, but weaker than that of identity (for clearly not every mental state need be overtly expressed). In other words, expression must be more than simply a bridge supposed to close the gap between inner mental states and external bodily behaviour; it must be a direct manifestation of the subjective life of the mind (see Merleau-Ponty, 1962, Part One, Chapter 6). Thus, one aspect of the phenomenological problem of intersubjectivity is to understand how such manifestation is possible.

Phenomenologists insist that we need to begin from the recognition that the body of the other presents itself as radically different from any other physical entity, and accordingly that our perception of the other’s bodily presence is unlike our perception of physical things. The other is given in its bodily presence as a lived body according to a distinctive mode of consciousness called empathy (see Husserl, 1989; Stein, 1989). Empathy is a unique form of intentionality, in which one is directed towards the other’s lived experiences. Thus, any intentional act that discloses or presents the other’s subjectivity from the second-person perspective counts as empathy. Although empathy, so understood, is based on perception (of the other’s bodily presence) and can involve inference in difficult or problematic situations (where one has to work out how another person feels about something), it is not reducible to some additive combination of perception and inference.

The phenomenological conception of empathy thus stands opposed to any theory according to which our primary mode of understanding others is by perceiving their bodily behaviour and then inferring or hypothesizing that their behaviour is caused by experiences or inner mental states similar to those that apparently cause similar behaviour in us. Rather, in empathy, we experience the other directly as a person, as an intentional being whose bodily gestures and actions are expressive of his or her experiences or states of mind (for further discussion, see Thompson, 2001, 2005, 2007).

Phenomenological investigations of intersubjectivity go beyond intentional analyses of empathy, however, in a variety of ways (see Zahavi, 2001b). Another approach acknowledges the existence of empathy, but insists that our ability to encounter others cannot simply be taken as a brute fact. Rather, it is conditioned by a form of alterity (otherness) internal to the embodied self. When my left hand touches my right, or when I perceive another part of my body, I experience myself in a manner that anticipates both the way in which an other would experience me and the way in which I would experience an other. My bodily self-exploration thus permits me to confront my own exteriority. According to Husserl (1989), this experience is a crucial precondition for empathy: It is precisely the unique subject-object status of the body, the remarkable interplay between ipseity (I-ness) and alterity characterizing body-awareness, that provides me with the means of recognizing other embodied subjects.

Still another line of analysis goes one step further by denying that intersubjectivity can be reduced to any factual encounter between two individuals, such as the face-to-face encounter (see Zahavi, 2001a, b). Rather, such concrete encounters presuppose the existence of another, more fundamental form of intersubjectivity that is rooted a priori in the very relation between subjectivity and world. Heidegger’s (1996) way of making this point is to describe how one always lives in a world permeated by references to others and already furnished with meaning by others. Husserl (1973) and Merleau-Ponty (1962) focus on the public nature of perceptual objects. The subject is intentionally directed towards objects whose perspectival appearances bear witness to other possible subjects. My perceptual objects are not exhausted in their appearance for me; each object always possesses a horizon of coexisting profiles, which although momentarily inaccessible to me, could be perceived by other subjects. The perceptual object as such, through its perspectival givenness, refers, as it were, to other possible subjects, and is for that very reason already intersubjective. Consequently, prior to any concrete perceptual encounter with another subject, intersubjectivity is already present as co-subjectivity in the very structure of perception.

Finally, there is a deep relation between intersubjectivity so understood and objectivity. My experience of the world as objective is mediated by my experience of and interaction with other world-engaged subjects. Only insofar as I experience that others experience the same objects as myself do I really experience these objects as objective and real. To put this point in phenomenological language, the objectivity of the world is intersubjectively constituted (i.e., brought to awareness or disclosed). This is an idea not foreign to Anglo-American philosophy, as the following remark by Donald Davidson indicates: “A community of minds is the basis of knowledge; it provides the measure of all things. It makes no sense to question the adequacy of this measure, or to seek a more ultimate standard” (Davidson, 2001, p. 218).

Conclusion

Phenomenology and analytic philosophy are the two most influential philosophical movements of the twentieth century. Unfortunately, their relationship in the past was not one of fruitful cooperation and exchange, but ranged from disregard to outright hostility. To the extent that cognitive science (especially in North America) has been informed by analytic philosophy of mind, this attitude was at times perpetuated between phenomenology and cognitive science.

In recent years, however, this state of affairs has begun to change and is rapidly coming to seem outdated, as this volume itself indicates. Conferences on consciousness (such as the biannual ‘Towards a Science of Consciousness’ conference held in Tucson, Arizona, and the annual meetings of the Association for the Scientific Study of Consciousness) now routinely include colloquia informed by phenomenology alongside cognitive science and analytic philosophy. In 2001 there appeared a new journal, Phenomenology and the Cognitive Sciences. Other journals, such as Consciousness and Cognition and the Journal of Consciousness Studies, include articles integrating phenomenological, cognitive-scientific, and analytic approaches to consciousness. Given these developments, the prospects for cooperation and exchange among these traditions in the study of consciousness now look very promising. To this end, in this chapter we have called attention to a number of related areas in which there is significant potential for collaborative research – intentionality, self-awareness, temporality, embodiment and perception, and intersubjectivity. We have also sketched a few ways of linking phenomenology and cognitive science in these areas in order to suggest some directions such research could take in the years ahead.

Notes

1. For a recent discussion of the unity of the phenomenological tradition, see Zahavi (2006).

2. An important forerunner of the current interest in the relation between phenomenology and cognitive science is the work of Hubert Dreyfus (1982). Dreyfus has been a pioneer in bringing the phenomenological tradition into the heartland of cognitive science through his important critique of artificial intelligence (Dreyfus, 1991) and his groundbreaking studies on skillful knowledge and action (Dreyfus, 2002; Dreyfus & Dreyfus, 1982). Yet this work is also marked by a peculiar (mis)interpretation and use of Husserl. Dreyfus presents Husserl's phenomenology as a form of representationalism that anticipates cognitivist and computational theories of mind. He then rehearses Heidegger's criticisms of Husserl thus understood and deploys them against cognitivism and artificial intelligence. Dreyfus reads Husserl largely through a combination of Heidegger's interpretation and a particular analytic philosophical (Fregean) reconstruction of one aspect of Husserl's thought (the representationalist interpretation of the noema). Thus, Husserlian phenomenology as Dreyfus presents it to the cognitive science and analytic philosophy of mind community is a problematic interpretive construct and should not be taken at face value. For a while Dreyfus's interpretation functioned in this community as a received view of Husserl's thought and its relation to cognitive science. This interpretation has since been seriously challenged by a number of Husserl scholars and philosophers (see Zahavi, 2003, 2004, for further discussion; see also Thompson, 2007). These studies have argued that (i) Husserl does not subscribe to a representational theory of mind; (ii) Husserl is not a methodological solipsist (see the section on methodology); (iii) Husserl does not assimilate all intentionality to object-directed intentionality (see the section on intentionality); (iv) Husserl does not treat the 'background' of object-directed intentional experiences as simply a set of beliefs understood as mental representations (see the section on intentionality); and (v) Husserl does not try to analyze the 'life-world' into a set of sedimented background assumptions or hypotheses (equivalent to a system of frames in artificial intelligence). In summary, although Dreyfus is to be credited for bringing Husserl into the purview of cognitive science, it is important to go beyond his interpretation and to re-evaluate Husserl's relationship to cognitive science on the basis of a thorough assessment of his life's work. This re-evaluation is already underway (see Petitot, Varela, Pachoud & Roy, 1999) and can be seen as part of a broader reappropriation of phenomenology in contemporary thought.

3. Does Husserl thereby succumb to the so-called philosophical myth of the given? This is a difficult and complicated question, and space prevents us from addressing it here. There is not one but several different notions of the 'given' in philosophy, and Husserl's thought developed considerably over the course of his life, such that he held different views at different times regarding what might be meant by it. Suffice it to say that it is a mistake to label Husserl as a philosopher of the given in the sense originally targeted by Wilfrid Sellars, for two main reasons: First, the given in the phenomenological sense is not non-intentional sense-data, but the phenomenal world as disclosed by consciousness. Second, the phenomenality of the world is not understandable apart from the constitutive relation that consciousness bears to it. For recent discussions of some of these issues see Botero (1999) and Roy (2003).

4. This sense of the epoche is well put by the noted North American and Indian phenomenologist, J. N. Mohanty (1989, pp. 12–13): "I need not emphasize how relevant and, in fact, necessary is the method of phenomenological epoche for the very possibility of genuine description in philosophy. It was Husserl's genius that he both revitalized the descriptive method for philosophy and brought to the forefront the method of epoche, without which one cannot really get down to the job. The preconceptions have to be placed within brackets, beliefs suspended, before philosophy can begin to confront phenomena as phenomena. This again is not an instantaneous act of suspending belief in the world or of directing one's glance towards the phenomena as phenomena, but involves a strenuous effort at recognizing preconceptions as preconceptions, at unraveling sedimented interpretations, at getting at presuppositions which may pretend to be self-evident truths, and through such processes aiming asymptotically at the prereflective experience."

5. We can discern here another reason for not interpreting Husserl as a philosopher who relies on any simple or straightforward notion of an uninterpreted given in experience: Passive affection is not the reception of simple and unanalyzable sense impressions, but has a field structure.

6. When we think a certain thought, the thinking will often be accompanied by a non-vocalized utterance or aural imagery of the very string of words used to express the thought. At the same time, the thought will also frequently evoke certain mental images. It could be argued that the phenomenal qualities encountered in abstract thought are constituted by such imagery. Husserl makes clear in his Logical Investigations, however, that this attempt to deny that thought has any distinct phenomenality beyond such sensorial and imagistic phenomenality is problematic. There is a marked difference between what it is like to imagine aurally a certain string of meaningless noise, and what it is like to imagine aurally the very same string while understanding and meaning something by it (Husserl, 2000, I., pp. 193–194, II., p. 105). Because the phenomenality of the sensory content is the same in both cases, the phenomenal difference must be located elsewhere, namely, in the thinking itself. The case of homonyms and synonyms also demonstrates that the phenomenality of thinking and the phenomenality of aural imagery can vary independently of each other. As for the attempt to identify the phenomenal quality of thought with the phenomenal quality of visualization, a similar argument can be employed. Two different thoughts, say, 'Paris is the capital of France' and 'Parisians regularly consume baguettes', might be accompanied by the same visualization of baguettes, but what it is like to think the two thoughts remains different. Having demonstrated this much, Husserl then proceeds to criticize the view according to which the imagery actually constitutes the very meaning of the thought – that to understand what is being thought is to have the appropriate 'mental image' before one's inner eye (Husserl, 2000, I., pp. 206–209). The arguments he employs bear a striking resemblance to some of the ideas that were subsequently used by Wittgenstein (1999) in his Philosophical Investigations: (i) From time to time, the thoughts we are thinking, for instance 'every algebraic equation of uneven grade has at least one real root', will in fact not be accompanied by any imagery whatsoever. If the meaning were actually located in the 'mental images', the thoughts in question would be meaningless, but this is not the case. (ii) Frequently, our thoughts, for instance 'the horrors of World War I had a decisive impact on post-war painting', will in fact evoke certain visualizations, but visualizations of quite unrelated matters. To suggest that the meanings of the thoughts are to be located in such images is absurd. (iii) Furthermore, the fact that the meaning of a thought can remain the same although the accompanying imagery varies also precludes any straightforward identification. (iv) An absurd thought, like the thought of a square circle, is not meaningless, but cannot be accompanied by a matching image (a visualization of a square circle being impossible in principle). (v) Finally, referring to Descartes' famous example in the Meditations, Husserl points out that we can easily distinguish thoughts like 'a chiliagon is a many-sided polygon' and 'a myriagon is a many-sided polygon', although the imagery that accompanies both thoughts might be indistinguishable. Thus, as Husserl concludes, although imagery might function as an aid to the understanding, it is not what is understood; it does not constitute the meaning of the thought (Husserl, 2000, I., p. 208).

References

Bernet, R., Kern, I., & Marbach, E. (1993). An introduction to Husserlian phenomenology. Evanston, IL: Northwestern University Press.

Botero, J.-J. (1999). The immediately given as ground and background. In J. Petitot, F. J. Varela, B. Pachoud, & J.-M. Roy (Eds.), Naturalizing phenomenology: Issues in contemporary phenomenology and cognitive science (pp. 440–463). Stanford, CA: Stanford University Press.

Damasio, A. R. (1999). The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt Brace.

Dancy, J. (1985). An introduction to contemporary epistemology. Oxford: Basil Blackwell.

Davidson, D. (2001). Subjective, intersubjective, objective. Oxford: Oxford University Press.

Depraz, N. (1999). The phenomenological reduction as praxis. Journal of Consciousness Studies, 6, 95–110.

Depraz, N., Varela, F. J., & Vermersch, P. (2003). On becoming aware: A pragmatics of experiencing. Amsterdam and Philadelphia: John Benjamins Press.

Dreyfus, H. (1982). Introduction. In H. Dreyfus & H. Hall (Eds.), Husserl, intentionality and cognitive science. Cambridge, MA: MIT Press.

Dreyfus, H. (1991). What computers still can't do. Cambridge, MA: MIT Press.

Dreyfus, H. (2002). Intelligence without representation – Merleau-Ponty's critique of mental representation. Phenomenology and the Cognitive Sciences, 1, 367–383.

Dreyfus, H., & Dreyfus, S. (1982). Mind over machine. New York: Free Press.

Drummond, J. J. (2003). The structure of intentionality. In D. Welton (Ed.), The new Husserl: A critical reader (pp. 65–92). Bloomington: Indiana University Press.

Freeman, W. J. (2000). Emotion is essential to all intentional behaviors. In M. D. Lewis & I. Granic (Eds.), Emotion, development, and self-organization: Dynamic systems approaches to emotional development (pp. 209–235). Cambridge: Cambridge University Press.

Gallagher, S. (1998). The inordinance of time. Evanston, IL: Northwestern University Press.

Goldman, A. I. (2000). Folk psychology and mental concepts. Protosociology, 14, 4–25.

Gopnik, A. (1993). How we know our minds: The illusion of first-person knowledge of intentionality. Behavioral and Brain Sciences, 16, 1–14.

Heidegger, M. (1982). The basic problems of phenomenology (A. Hofstadter, Trans.). Bloomington, IN: Indiana University Press.

Heidegger, M. (1996). Being and time (J. Stambaugh, Trans.). Albany, NY: State University of New York Press.

Hurley, S. L. (1998). Consciousness in action. Cambridge, MA: Harvard University Press.

Hurley, S. L., & Noe, A. (2003). Neural plasticity and consciousness. Biology and Philosophy, 18, 131–168.

Husserl, E. (1970). The crisis of European sciences and transcendental phenomenology (D. Carr, Trans.). Evanston, IL: Northwestern University Press.

Husserl, E. (1973). Zur Phanomenologie der Intersubjektivitat, Dritter Teil: 1929–1935. Husserliana (Vol. 15). The Hague: Martinus Nijhoff.

Husserl, E. (1975). Experience and judgment (J. S. Churchill, Trans.). Evanston, IL: Northwestern University Press.

Husserl, E. (1987). Aufsatze und Vortrage (1911–1921). Husserliana XXV. Dordrecht: Martinus Nijhoff.

Husserl, E. (1989). Ideas pertaining to a pure phenomenology and to a phenomenological philosophy, second book (R. Rojcewicz & A. Schuwer, Trans.). Dordrecht: Kluwer Academic Publishers.

Husserl, E. (1991). On the phenomenology of the consciousness of internal time (1893–1917) (J. B. Brough, Trans.). Dordrecht: Kluwer Academic Publishers.

Husserl, E. (1997). Thing and space: Lectures of 1907 (R. Rojcewicz, Trans.). Dordrecht: Kluwer Academic Publishers.

Husserl, E. (2000). Logical investigations I–II (J. N. Findlay, Trans.). London: Routledge Press.

Husserl, E. (2001). Analyses concerning passive and active synthesis: Lectures on transcendental logic (A. J. Steinbock, Trans.). Dordrecht: Kluwer Academic Publishers.

Husserl, E. (2006). Phantasy, image consciousness, and memory (1898–1925) (J. B. Brough, Trans.). Dordrecht, The Netherlands: Springer.

James, W. (1981). The principles of psychology. Cambridge, MA: Harvard University Press.

Lloyd, D. (2002). Functional MRI and the study of human consciousness. Journal of Cognitive Neuroscience, 14, 818–831.

Lloyd, D. (2003). Radiant cool: A novel theory of consciousness. Cambridge, MA: MIT Press.

Lutz, A., & Thompson, E. (2003). Neurophenomenology: Integrating subjective experience and brain dynamics in the neuroscience of consciousness. Journal of Consciousness Studies, 10, 31–52.

Merleau-Ponty, M. (1962). Phenomenology of perception (C. Smith, Trans.). London: Routledge Press.

Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. New York: Oxford University Press.

Mohanty, J. N. (1989). Transcendental phenomenology. Oxford: Basil Blackwell.

Moran, D. (2000). Introduction to phenomenology. London: Routledge Press.

Nagel, T. (1979). What is it like to be a bat? In T. Nagel (Ed.), Mortal questions (pp. 165–180). New York: Cambridge University Press.

Noe, A. (2004). Action in perception. Cambridge, MA: MIT Press.

O'Regan, J. K., & Noe, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24, 939–1031.

Panksepp, J. (1998a). Affective neuroscience: The foundations of human and animal emotions. Oxford: Oxford University Press.

Panksepp, J. (1998b). The periconscious substrates of consciousness: Affective states and the evolutionary origins of self. Journal of Consciousness Studies, 5, 566–582.

Parvizi, J., & Damasio, A. (2001). Consciousness and the brainstem. Cognition, 79, 135–159.

Petitot, J., Varela, F. J., Pachoud, B., & Roy, J.-M. (Eds.). (1999). Naturalizing phenomenology: Issues in contemporary phenomenology and cognitive science. Stanford, CA: Stanford University Press.

Rainville, P. (2005). Neurophenomenologie des etats et des contenus de conscience dans l'hypnose et l'analgesie hypnotique. Theologique, 12, 15–38.

Rizzolatti, G. L., Fadiga, L., Fogassi, L., & Gallese, V. (1997). The space around us. Science, 277, 190–191.

Roy, J.-M. (2003). Phenomenological claims and the myth of the given. In E. Thompson (Ed.), The problem of consciousness: New essays in phenomenological philosophy of mind. Canadian Journal of Philosophy, Suppl. Vol. 29 (pp. 1–32). Calgary, AB: University of Alberta Press.

Sartre, J.-P. (1956). Being and nothingness (H. Barnes, Trans.). New York: Philosophical Library.

Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge University Press.

Sokolowski, R. (2000). An introduction to phenomenology. Cambridge: Cambridge University Press.

Stein, E. (1989). On the problem of empathy (W. Stein, Trans.). Washington, DC: ICS Publications.

Steinbock, A. (1995). Home and beyond: Generative phenomenology after Husserl. Evanston, IL: Northwestern University Press.

Thompson, E. (2001). Empathy and consciousness. In E. Thompson (Ed.), Between ourselves: Second-person issues in the study of consciousness (pp. 1–32). Thorverton, UK: Imprint Academic.

Thompson, E. (2005). Empathy and human experience. In J. D. Proctor (Ed.), Science, religion, and the human experience (pp. 261–285). New York: Oxford University Press.

Thompson, E. (2007). Mind in life: Biology, phenomenology, and the sciences of mind. Cambridge, MA: Harvard University Press.

Thompson, E., Lutz, A., & Cosmelli, D. (2005). Neurophenomenology: An introduction for neurophilosophers. In A. Brook & K. Akins (Eds.), Cognition and the brain: The philosophy and neuroscience movement (pp. 40–97). New York: Cambridge University Press.

Varela, F. J. (1996). Neurophenomenology: A methodological remedy for the hard problem. Journal of Consciousness Studies, 3, 330–350.

Varela, F. J. (1999). The specious present: A neurophenomenology of time consciousness. In J. Petitot, F. J. Varela, B. Pachoud, & J.-M. Roy (Eds.), Naturalizing phenomenology: Issues in contemporary phenomenology and cognitive science (pp. 266–314). Stanford, CA: Stanford University Press.

Varela, F. J., & Shear, J. (1999). The view from within: First-person approaches to the study of consciousness. Thorverton, UK: Imprint Academic.

Wittgenstein, L. (1999). Philosophical investigations (3rd ed.; G. E. M. Anscombe, Trans.). Englewood Cliffs, NJ: Prentice Hall.

Zahavi, D. (1999). Self-awareness and alterity: A phenomenological investigation. Evanston, IL: Northwestern University Press.

Zahavi, D. (2001a). Husserl and transcendental intersubjectivity. Athens, OH: Ohio University Press.

Zahavi, D. (2001b). Beyond empathy: Phenomenological approaches to intersubjectivity. Journal of Consciousness Studies, 8, 151–167.

Zahavi, D. (2003). Husserl's phenomenology. Stanford, CA: Stanford University Press.

Zahavi, D. (2004). Husserl's noema and the internalism-externalism debate. Inquiry, 47, 42–66.

Zahavi, D. (2005). Subjectivity and selfhood: Investigating the first-person perspective. Cambridge, MA: MIT Press.

Zahavi, D. (2006). The phenomenological tradition. In D. Moran (Ed.), Routledge companion to twentieth-century philosophy. London: Routledge.


Chapter 5

Asian Perspectives: Indian Theories of Mind

Georges Dreyfus and Evan Thompson

Abstract

This chapter examines Indian views of the mind and consciousness, with particular focus on the Indian Buddhist tradition. To contextualize Buddhist views of the mind, we first provide a brief presentation of some of the most important Hindu views, particularly those of the Sam. khya school. Whereas this school assumes the existence of a real transcendent self, the Buddhist view is that mental activity and consciousness function on their own without such a self. We focus on the phenomenological and epistemological aspects of this no-self view of the mind. We first discuss the Buddhist Abhidharma and its analysis of the mind in terms of awareness and mental factors. The Abhidharma is mainly phenomenological; it does not present an epistemological analysis of the structure of mental states and the way they relate to their objects. To cover this topic we turn to Dharmakirti, one of the main Buddhist epistemologists, who offers a comprehensive view of the types of cognition and their relation to their objects.

Introduction

In discussing Asian views of mind and consciousness, we must start from the realization that this topic presents insurmountable challenges. The diversity of Asian cultures from China to India to Iran is so great that it is impossible to find coherent ways to discuss the mental concepts of these cultures over and above listing these conceptions and noting their differences. Hence, rather than chart a territory that hopelessly extends our capacities, we have chosen to examine Indian views of the mind, with a special focus on the Indian Buddhist tradition, which can be traced back to the first centuries after the life of Siddhartha Gautama, the Buddha (566–483 bce), and which continued to develop in India through the 7th and 8th centuries ce. This approach allows us to present a more grounded and coherent view of the mind as conceived in the Indian philosophical tradition and to indicate some areas of interest that this tradition offers to cognitive scientists and philosophers of mind.


In talking about the mind, it is important to define the term, for it is far from unambiguous. In most Indian traditions, the mind is neither a brain structure nor a mechanism for treating information. Rather, mind is conceived as a complex cognitive process consisting of a succession of related mental states. These states are at least in principle phenomenologically available; that is, they can be observed by attending to the way in which we experience feeling, perceiving, thinking, remembering, and so on. Indian thinkers describe these mental states as cognizing (jna) or being aware (buddh) of their object. Thus, the mind is broadly conceived by traditional Indian thinkers as constituted by a series of mental states that cognize their objects.

This general agreement breaks down quickly, however, when we turn to a more detailed analysis of the nature and structure of the mind, a topic on which various schools entertain vastly different views. Some of these disagreements relate to the ontological status of mental states and the way they relate to other phenomena, particularly physical ones. Such disagreements are related to well-known ideas in the Western tradition, particularly the mind-body dualism that has concerned Western philosophy since Descartes. But many of the views entertained by Indian thinkers are not easily mapped in Western terms, as we see in this chapter.

Most Indian thinkers do not consider the ontological status of mental states to be a particularly difficult question, for most of them accept that there is an extra-physical reality. Among all the schools, only the Materialist, the Carvaka, reduces the mental to physical events. For its proponents, mental states do not have any autonomous ontological status and can be completely reduced to physical processes. They are just properties of the body, much like the inebriating property of beer is a property of beer. Most other thinkers reject this view forcefully and argue that the mind can neither be eliminated nor reduced to the material. Their endorsement of an extra-physical reality does not, however, necessarily amount to a classical mind-body dualism (of the sort found in Descartes' Meditations or Plato's Phaedo). Moreover, although they agree in rejecting the materialist view, they strongly disagree in their presentations of the mind.

In this chapter, we focus mostly on the Buddhist tradition, exploring some of its views of the mind. One of the most salient features of this tradition is that its accounts of the mind and consciousness do not posit the existence of a self. According to this tradition, there is no self, and mental activity cannot be understood properly as long as one believes in a self. The Hindu tradition, by contrast, maintains that mental life does involve a permanent self. Thus, to contextualize Buddhist views of the mind, we begin with a brief presentation of some of the most important Hindu views. We then present the Buddhist Abhidharma and its analysis of the mind in terms of awareness and mental factors. Traditionally, the Abhidharma makes up one of the 'three baskets' into which Buddhists divide their scriptures – Sutra or sayings of the Buddha, Vinaya or monastic discipline, and Abhidharma, which systematizes Buddhist teachings in the form of detailed analyses of experience. In examining the Abhidharma, we examine the ways in which this tradition analyzes the different functions of the mind without positing the existence of a self. These analyses are in certain ways reminiscent of those in cognitive science that aim to account for cognitive processing without invoking a homunculus or 'little man' inside the head who oversees the workings of the mind (or merely passively witnesses the results; see Varela, Thompson, & Rosch, 1991, for further discussion of this parallel). The Abhidharma, however, is phenomenological; its concern is to discern how the mind works as evidenced by experience (but especially by mentally disciplined and refined contemplative experience). Although thus it is also epistemological, the Abhidharma does not present any developed epistemological analysis of the structure of mental states and the way they relate to their objects so as to produce knowledge. To cover this topic we turn to Dharmakirti (c. 600 ce), one of the main Buddhist epistemologists, who offers a comprehensive view of the types of cognition and their relation to their objects.

The phenomenological analyses contained in the Abhidharma and the epistemological analyses of Dharmakirti offer significant resources for cognitive scientists and philosophers of mind in their efforts to gain a better understanding of consciousness. These analyses also constitute the theoretical framework for the ways in which the Buddhist tradition conceives of meditation and mental training, both with regard to the phenomenology of contemplative mental states and the epistemology of the types of knowledge that these states are said to provide. Given the increasing scientific interest in the physiological correlates and effects of meditation and their relation to consciousness (see Chapter 19), it is important for the scientific community to appreciate the phenomenological and philosophical precision with which these states are conceptualized in the Buddhist tradition.

Self and Mental States: A Sam. khya View

One of the most important views of the mind in the Hindu tradition is found in the Sam. khya school. Traditionally this school is said to have been founded by the philosopher Kapila, a legendary figure who may have lived as early as the 7th century bce, but the earliest Sam. khya text we possess dates from the 3rd century ce. The Sam. khya tradition is one of the six classical schools of Hindu philosophy (Nyaya, Vaisesika, Sam. khya, Yoga, Purva Mimamsa, and Vedanta). Its influence extends to the other schools, particularly the Vedanta school, which later became especially important in the development of Hindu thought. The Sam. khya was in fact less a school proper than a way of thinking based on the categorization of reality. It was crucial in the formation of Indian philosophical thinking before and after the start of the Common Era, and hence it is unsurprising that its view of the mind has been largely adopted in the Hindu tradition and beyond.1

The Sam. khya approach rests on a dualistic metaphysics built on the opposition between material primordial nature (pradhana) or materiality (prakr.ti) and a spiritual self (atman) or person (purus.a).2 Nature is the universal material substratum out of which all phenomena other than the self emerge and evolve. These phenomena, which make up the world of diversity, are physical transformations of the three qualities (gun. a) that compose primordial nature. These three qualities are sattva (transparency, buoyancy), rajas (energy, activity), and tamas (inertia, obstruction). They are principles or forces, rather than building blocks. All material phenomena, including the intellect and organs of perception, are understood to be made up of a combination of these three principles. The one principle not included in this constant process of transformation is the self, which is permanent, non-material, and conscious or aware. The self is also described as the conscious presence that witnesses the transformations of nature, but does not participate in them. As such it is passive, though it witnesses the experiences deriving from the transformations of the world of diversity.3

Although the Sam. khya analysis of mind is dualistic, it does not fit within classical mind-body dualism. For the Sam. khya, the mind involves a non-material spiritual element, namely the self. The self, however, is not the same as the mind. Rather, the self is the mere presence to or pure witnessing of the mental activities involved in the ordinary awareness of objects. This pure witnessing, untainted by the diversity of the material world, is not sufficient for mental activities, for mental activities are representational or semantic and require more than passive mirroring. Mental activity is the apprehension of an object, and this activity requires active engagement with objects and the formation of ideas and concepts necessary for purposeful action in the world. The self cannot account for such activity, however, because it is changeless and hence passive. To account for our cognitive activities, we therefore need other elements that participate in the world of diversity. Because any element that participates in the world of change must emerge out of primordial materiality and hence be material, it follows that the analysis of mental states cannot be limited to their spiritual dimension (the self), but must also involve material elements. Hence, for the Sam. khya, mental activity requires the cooperation of the two fundamental types of substance that make up the universe, passive consciousness and material nature.

Having described the Sam. khya metaphysics, we can now sketch its influential analysis of mental activity.4 This analysis starts with buddhi, which is usually translated as 'the intellect' and is the ability to distinguish and experience objects. This ability provides the prereflective and presubjective ground out of which determined mental states and their objects arise; it is also the locus of all the fundamental predispositions that lead to these experiences. The intellect emerges out of primordial matter and therefore is active, unlike the non-material and passive self. The self is described metaphorically as a light, for it passively illuminates objects, making it possible for the intellect to distinguish them. The intellect operates in a representational way by taking on the form of what is known. This representational ability works in two directions – toward the conscious and uninvolved self and toward the objects. The intellect, thanks to its quality of clarity and translucence (sattva), takes on the form of the self by reflecting it. As a result, it seems as if the self experiences the diversity of objects, when it is actually the intellect that undergoes these experiences, the self being the mere witness of them. This ability of the intellect to usurp the function of consciousness helps the intellect in its apprehension of objects, for by itself the intellect is active but unconscious. Awareness of objects arises only when the intellect takes on the light of the self and reflects it on objects, much like pictures are created when light is projected onto a film. In this way, the intellect becomes able to take on the form of the object and thus to discern it.

The intellect's reflecting the self and taking on the form of an object are not, however, sufficient to fully determine experience. To become fully cognitive, experience requires the formation of subjective and objective poles. Experience needs to be the experience of a particular individual apprehending a particular object. The formation of the subjective pole is the function of the 'ego-sense' (ahaṃkāra), the sense of individual subjectivity or selfhood tied to embodiment. This sense colors most of our experiences, which involve a sense of being a subject opposed to an object. The determination of the objective pole, on the other hand, is the function of 'mentation' (manas), which oversees the senses and whose special function is discrimination. This function allows mentation to serve as an intermediary between the intellect and the senses. Mentation organizes sensory impressions and objects and integrates them into a temporal framework created by memories and expectations. In this way, our experience of objects in the world is created.

Although the dualistic metaphysics associated with this view was rejected in the history of Indian philosophy, the Sāṃkhya model of the mind was taken over by other Hindu schools. It serves as a foundation of the philosopher Patañjali's (c. 2nd century bce) Yoga view of mind, which is similar to the Sāṃkhya.5 The Yoga view also rests on the opposition between passive self and active mental activities (citta), a rubric under which intellect, ego-sense, and mentation are grouped. Similarly, Śaṅkara (788–820 ce), who savaged the dualism of the Sāṃkhya, took over its model of the mind in his Advaita Vedānta, emphasizing the contrast between the transcendence of the self and the mental activities of the 'inner sense' (antaḥkaraṇa) belonging to the person.6 Hence, the Sāṃkhya view can be taken as representative of the Hindu view of the mind, especially in its emphasis on the difference between a passive witnessing consciousness and mental activity.

According to this view, as we have seen, mental events come about through the conjunction of two heterogeneous factors – a transcendent self and a diversity of mental activities. It is a basic presupposition of the Hindu tradition that mental life involves a permanent self. Yet because mental life also undeniably involves change, it cannot be reduced to this single, motionless factor of the self; hence the need for the complicated analysis briefly summarized here. This tension in accounts of the mind and consciousness between identity and change, unity and diversity, is of course also prevalent throughout Western philosophy and persists in cognitive science. We turn now to the Buddhist tradition, which presents a different perspective on this issue.

The Abhidharma Tradition and Its View of the Mind

The Buddhist tradition is based on the opposite view of no-self (anātman). For the Buddhists, there is no self, and hence mental activity is not in the service of such an entity, but rather functions on its own. In short, for the Buddhists there is no self that is aware of the experiences one undergoes or the thoughts one has. Rather, the thoughts themselves are the thinker, and the experiences the experiencer.

How, then, do Buddhists explain the complexities of the mind? How do they explain mental regularities if there is no central controller to oversee the whole process?

For an answer, we turn to the Abhidharma, one of the oldest Buddhist traditions, which can be traced back to the first centuries after the Buddha (566–483 bce). First elaborated as lists,7 the Abhidharma contains the earlier texts in which Buddhist concepts were developed and hence is the source of most philosophical developments in Indian Buddhism. But the Abhidharma is not limited to this role as a source of Buddhist philosophical development. It remained a vital focus of Buddhist thought and kept evolving, at least until the 7th or 8th century ce. In this chapter, we focus on two Indian thinkers from the 4th or 5th century ce, Asaṅga and Vasubandhu, and ignore the diversity of opinions and debates that has animated this tradition.

The object of the Abhidharma is to analyze both the realm of sentient experience and the world given in such experience into their components in language that avoids the postulation of a unified subject. This analysis concerns the whole range of phenomena, from material phenomena to nirvāṇa (the state of enlightenment, understood as the direct realization of the nature of reality, including especially the lack of any essential self and the consequent liberation from suffering). For example, there are elaborate discussions of the four primary and four secondary elements that make up matter (see de la Vallée Poussin, 1971, I: 22). There are also lengthy treatments of the nature, scope, and types of soteriological practices prescribed by the Buddhist tradition, a central focus of the Abhidharma. But a large part of the Abhidharmic discourse focuses on the analysis of mental phenomena and their various components. It is this part of the Abhidharma that we examine in this chapter.

In considering experience, the Abhidharma proceeds in a rather characteristic way that may be disconcerting for newcomers, but reflects its historical origin as mnemonic lists of elements abstracted from the Buddha's discourses. For each type of phenomenon considered, the Abhidharma analyzes it into its basic elements (dharma), lists these elements, and groups them into the appropriate categories (examples are given below). The study of the Abhidharma thus often revolves around the consideration of series of extended lists.

In elaborating such lists of components of experience and the world given in experience, the Abhidharma follows the central tenets of Buddhist philosophy, in particular the twin ideas of non-substantiality and dependent origination. According to this philosophy, the phenomena given in experience are not unitary and stable substances, but complex and fleeting formations of basic elements that arise in dependence on complex causal nexuses. Such non-substantiality is particularly true of the person, who is not a substantial self, but a changing construct dependent on complex configurations of mental and material components. This analysis, which is diametrically opposed to the Sāṃkhya view, is not just limited to the person, but is applied to other objects.

All composite things are thus analyzed as being constituted of more basic elements. Moreover, and this point is crucial, these basic elements should not be thought of as reified or stable entities, but as dynamically related momentary events instantaneously coming into and going out of existence. Thus, when the Abhidharma analyzes matter as being made up of basic components, it thinks of those components not as stable particles or little grains of matter, but rather as fleeting material events, coming into and going out of existence depending on causes and conditions. Similarly, the mind is analyzed into its basic components; namely, the basic types of events that make up the complex phenomenon we call 'mind'.

This Abhidharmic analysis is not just philosophical but also has practical import. Its aim is to support the soteriological practices that the Buddhist tradition recommends. The lists of material and mental events are used by practitioners to inform and enhance their practices. For example, the list of mental factors we examine shortly is a precious aid to various types of meditation, providing a clear idea of which factors need to be developed and which are to be eliminated. In this way, the Abhidharma functions not just as the source of Buddhist philosophy but also informs and supports the practices central to this tradition.

In the Abhidharma, the mind is conceived as a complex cognitive process consisting of a succession of related momentary mental states. These states are phenomenologically available, at least in principle: They can be observed by turning inwardly and attending to the way we feel, perceive, think, remember, and so on. When we do so, we notice a variety of states of awareness, and we also notice that these states change rapidly. It is these mental states arising in quick succession that the Abhidharma identifies as being the basic elements of the mind.

It should be clear from this preliminary characterization that in elaborating a theory of the mind the Abhidharma relies primarily on what we would call a first-person approach. It is by looking directly at experience that we gain an understanding of mind, not by studying it as an object and attending to its external manifestations. This approach of the Abhidharma is not unlike that of such Western thinkers as James, Brentano, and Husserl, who all agree that the study of the mind must be based on attention to experience (see Chapter 4). This approach is well captured by James's famous claim that in the study of the mind, "Introspective Observation is what we have to rely on first and foremost and always" (James, 1981, p. 185).

As James himself recognizes, however, first-person observation of the mind, although it might seem a straightforward enterprise, is not a simple affair and raises numerous questions. What does it mean to observe the mind? Who observes? What is being observed? Is the observation direct or mediated? In addition to these difficult epistemological issues (some of which we take up in the next section), there are also questions about the reliability of observation. We are all able, to varying degrees, to observe our own minds, but it is clear that our capacities to do so differ. Whose observations are to be considered reliable? This question is significant for the Abhidharmists, who may include in their data not only ordinary observations but also the observations of trained meditators. This inclusion of observation based on contemplative mental training and meditative experience marks an important difference between the Abhidharma and James, as well as other Western phenomenologists. Nevertheless, the degree to which meditative experience is relevant to Buddhist theories of the mind is not a straightforward matter, as we see shortly.

The comparison between the Abhidharma and James goes further, however, than their reliance on an introspective method. They also share some substantive similarities, the most important of which is perhaps the idea of the stream of consciousness.

For the Abhidharma, mental states do not arise in isolation from each other. Rather, each state arises in dependence on preceding moments and gives rise to further moments, thus forming a mental stream or continuum (saṃtāna, rgyud), much like James's 'stream of thought'. This metaphor is also found in the Buddhist tradition, in which the Buddha is portrayed as saying, "The river never stops: there is no moment, no minute, no hour when the river stops: in the same way, the flux of thought" (de la Vallée Poussin, 1991, p. 69, translation from the French by Dreyfus).

Unsurprisingly, there are also significant differences between James and the Abhidharma. One difference of interest to contemporary research is the issue of whether mental states arise in continuity or not (see Varela, Thompson, & Rosch, 1991, pp. 72–79). James's view is well known: "Consciousness does not appear to itself chopped up in bits" (James, 1981, p. 233). Although the content of consciousness changes, we experience these changes as smooth and continuous, without any apparent break. The Abhidharma disagrees, arguing that although the mind is rapidly changing, its transformation is discontinuous. It is only to the untrained observer that the mind appears to flow continuously. According to the Abhidharma, a deeper observation reveals that the stream of consciousness is made up of moments of awareness, moments that can be introspectively individuated and described.

Several Abhidharma texts even offer measurements of this moment, measurements one would expect to be based on empirical observation. Yet such claims are problematic, for different Abhidharma traditions make claims that at times are strikingly at odds with one another. For example, the Mahāvibhāṣā, an important text from the first centuries of the Common Era, states that there are 120 basic moments in an instant. The text further illustrates the duration of an instant by equating it to the time needed by an average spinner to grab a thread. Not at all, argues another text: This measurement is too coarse. A moment is the 64th part of the time necessary to click one's fingers or blink an eye (see de la Vallée Poussin, 1991, pp. 70–71). Although these measurements differ, one could argue that given the imprecision of premodern measurement, there is a rough agreement between these accounts, which present a moment of awareness as lasting for about 1/100th of a second. This is already significantly faster than psychophysical and electrophysiological estimates of the duration of a moment of awareness as being on the order of 250 milliseconds or a quarter of a second (see Pöppel, 1988; Varela, Thompson, & Rosch, 1991, pp. 72–79). But consider the claim made by a Theravāda Abhidharma text that "in the time it takes for lightning to flash or the eyes to blink, billions of mind-moments can elapse" (Bodhi, 1993, p. 156). The time scale in this account, which is standard in the Theravāda tradition, is faster by many orders of magnitude.
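Purely as an illustrative aside, the arithmetic behind this comparison can be made concrete. The duration of a finger-snap or blink is not given in the texts; the 0.5-second figure below is an assumption chosen only to show why "a 64th part of a snap" lands near the 1/100th-of-a-second scale, and how far the Theravāda "billions per blink" claim departs from it.

```python
# Illustrative arithmetic only. The duration of a finger snap or eye
# blink is NOT given in the chapter; 0.5 s is an assumed round figure.
snap_duration_s = 0.5

# One Abhidharma claim: a moment is the 64th part of a snap/blink.
moment_s = snap_duration_s / 64
print(f"one moment ~ {moment_s * 1000:.1f} ms")  # ~7.8 ms, roughly 1/100 s

# Psychophysical estimate cited in the text: ~250 ms per moment of awareness.
ratio = 0.250 / moment_s
print(f"cited lab estimate is ~{ratio:.0f}x longer")  # ~32x

# Theravada claim: 'billions' of mind-moments per blink implies
# sub-nanosecond moments, many orders of magnitude faster.
theravada_moment_s = snap_duration_s / 1e9
print(f"a Theravada moment would be <= {theravada_moment_s:.1e} s")
```

On these assumptions, the two premodern measurements indeed agree to within an order of magnitude of each other and of the text's 1/100th-second figure, while the Theravāda account differs from them by roughly seven orders of magnitude.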

This dramatic discrepancy alerts us to some of the difficulties of accounts based on observation. Whom are we to believe? On which tradition should we rely? Moreover, we cannot but wonder about the sources of these differences. Do they derive from the observations of meditators, or are they the results of theoretical elaborations? It is hard to come to a definitive conclusion, but it seems reasonable to believe that these accounts are not simply empirical observations, but largely theoretical discussions, perhaps supplemented by observation reports. Hence one must be cautious and not assume that these texts reflect empirical findings. Although some may, they are mostly theoretical elaborations, which cannot be taken at face value, but require critical interpretation. Finally, another Abhidharma text seems to muddy the waters further by claiming that the measure of a moment is beyond the understanding of ordinary beings. Only enlightened beings can measure the duration of a moment (de la Vallée Poussin, 1991, p. 73). Thus it is not surprising that we are left wondering!

According to the Abhidharma, the mental episodes that compose a stream of consciousness take as their objects either real or fictional entities. This object-directed character of mind has been called 'intentionality' by Western philosophers, such as Brentano and Husserl. Brentano claimed that intentionality is an essential feature of consciousness and proposed it as a criterion of the mental. All acts of awareness are directed toward or refer to an object, regardless of whether this object is existent or not. We cannot think, wish, or dread unless our mind is directed toward something thought about, wished for, or dreaded, which thus appears to the mind. Therefore, to be aware is for something to appear to the mind. The Abhidharma seems to share this view, holding that every moment of cognition relates to particular objects, and hence it assumes that intentionality and consciousness are inseparable.8

The Abhidharma also holds that this stream of consciousness is not material. It is associated with the body during this lifetime, but will come to exist in dependence on other bodies after the death of this body. It is crucial to recognize, however, that the immaterial stream of consciousness is not a soul in the Platonic or Cartesian sense, but an impersonal series of mental events. Buddhist philosophers do not believe in an ontology of substances – that reality comprises the existence of independent entities that are the subjects of attributes or properties. Rather, they argue that reality is made up of events consisting of a succession of moments. Thus, mind and matter are not substances, but evanescent events, and mental and material events interact in a constantly ongoing and fluctuating process. Moreover, Buddhist philosophers partake of the general Indian reluctance to separate the mental and the material. Hence they do not hold that the divide between the material and mental spheres is absolute. Nevertheless, for the Buddhists, in contrast to the Sāṃkhya, there is a sharp divide between the mental, which is intentional and conscious, and other elements. In this respect, Buddhists are perhaps the closest among Indian philosophers to a classical mind-body dualism.

The Abhidharma, however, does not stop at a view of the mind as a succession of mental states, but goes much further in its analysis, breaking down each mental state into its components. According to the Abhidharma schema, which is to our knowledge unique, each mental state is analyzed as having two aspects: (i) the primary factor of awareness (citta), whose function is to be aware of the object, and (ii) mental factors (caitasika), whose function is to qualify this awareness by determining its qualitative nature as pleasant or unpleasant, focused or unfocused, calm or agitated, positive or negative, and so on. The philosopher Vasubandhu (c. 4th or 5th century ce), one of the great Abhidharmists, explains this distinction between awareness and mental factors as follows:

Cognition or awareness apprehends the thing itself, and just that; mental factors or dharmas associated with cognition such as sensation, etc., apprehend special characteristics, special conditions (de la Vallée Poussin, 1971, I: 30).9

The basic insight is that mental states have two types of cognitive functions – (1) awareness and (2) cognitive and affective engagement and characterization. The mental state is aware of an object. For example, the sense of smell is aware of a sweet object. But mental states are not just states of awareness. They are not passive mirrors in which objects are reflected. Rather, they actively engage their objects, apprehending them as pleasant or unpleasant, approaching them with particular intentions, and so forth. For example, a gustatory cognition of a sweet object is not just aware of the sweet taste but also apprehends the object as pleasant, distinguishes certain qualities such as its texture, and so on. It also categorizes the object as being (say) one's favorite Swiss chocolate. Such characterization of the object is the function of the mental factors. We now describe this distinction between the primary factor of awareness and mental factors in more detail.

The Primary Factor of Awareness

The primary factor of awareness (citta) is also described as vijñāna, a term often translated as consciousness or cognitive awareness. It is the aspect of the mental state that is aware of the object. It is the very activity of cognizing the object, not an instrument in the service of an agent or self (which, as we have seen, the Buddhist philosophers argue is nonexistent). This awareness merely discerns the object, as in the above example where one apprehends the taste of what turns out to be one's favorite Swiss chocolate. Thus Vasubandhu speaks of awareness as the "bare apprehension of each object" (de la Vallée Poussin, 1971, I: 30).

In most Abhidharma systems, there are six types of awareness: five born from the five physical senses (sight, hearing, smell, taste, and touch) and mental cognition. Each type of sensory cognition is produced in dependence on a sensory basis (one of the five physical senses) and an object. This awareness arises momentarily and ceases immediately, to be replaced by another moment of awareness, and so on. The sixth type of awareness is mental. It is considered a sense by the Abhidharma, like the five physical senses, though there are disagreements about its basis (see Guenther, 1976, pp. 20–30).

Some Abhidharma texts, such as Asaṅga's (Rahula, 1980), argue that these six types of consciousness do not exhaust all the possible forms of awareness. To this list Asaṅga adds two types of awareness: the store-consciousness (ālaya-vijñāna, kun gzhi rnam shes) and afflictive mentation (kliṣṭa-manas, nyon yid; Rahula, 1980, p. 17).10 The idea of a store-consciousness is based on a distinction between the six types of awareness, which are all described as manifest cognitive awareness (pravṛtti-vijñāna, 'jug shes), and a more continuous and less manifest form of awareness, which is the store-consciousness. This awareness is invoked to answer the following objection: If there is no self and the mind is just a succession of mental states, then how can there be any continuity in our mental life? Asaṅga's answer is that there is a more continuous form of consciousness, which is still momentary, but exists at all times. Because it is subliminal, we usually do not notice it. It is only in special circumstances, such as fainting, that its presence can be noticed or at least inferred. This consciousness contains all the basic habits, tendencies, and propensities (including those that persist from one life to the next) accumulated by the individual. It thus provides a greater degree of continuity than manifest cognitive awareness on its own.

The store-consciousness is mistaken by the afflictive mentation as being a self. In this way one's core inborn sense of self is formed. From a Buddhist point of view, however, this sense of self is fundamentally mistaken. It is a mental imposition of unity where there is in fact only the arising of a multiplicity of interrelated physical and mental events. The sense of control belonging to one's sense of self is thus largely illusory. There is really nobody in charge of the physical and mental processes, which arise according to their own causes and conditions, not our whims. The mind is not ruled by a central unit, but by competing factors whose strength varies according to circumstances.

Thus Asaṅga, allegedly Vasubandhu's half-brother, posits as many as eight types of consciousness, a doctrine usually associated with a particular Buddhist school, the Yogācāra. This school contains many interesting insights, without which there is no complete understanding of the depth of Buddhist views of the mind, but there is not space to discuss these insights here. Let us simply point out that there are some interesting similarities between the Yogācāra and the Sāṃkhya views. The store-consciousness, in acting as the holder of all the potentialities accumulated by an individual, is not unlike the intellect (buddhi), whereas the afflictive mentation seems similar to the ego-sense (ahaṃkāra). Furthermore, mental cognition does not seem too different from mentation (manas). These similarities indicate the reach of the Sāṃkhya model, even in a tradition whose basic outlook is radically different.

Mental Factors

Mental states are not just states of awareness; they also actively engage their objects, qualifying them as pleasant or unpleasant, approaching them with a particular attitude, and so on. Mental factors, which are aspects of the mental state that characterize the object of awareness, account for this engagement. In other words, whereas consciousness makes known the mere presence of the object, mental factors make known the particulars of the content of awareness, defining the characteristics and special conditions of its object. They qualify the apprehension of the object as being pleasant or unpleasant, attentive or distracted, peaceful or agitated, and so forth.

The translation of these elements of the mind (caitasika) as factors is meant to capture the range of meanings that the Abhidharma associates with this term. The relation between cognitive awareness and mental factors is complex. At times the Abhidharma construes this relation diachronically as being causal and functional. Factors cause the mind to apprehend objects in particular ways. At other times, the Abhidharma seems to emphasize a synchronic perspective in which cognitive awareness and mental factors coexist and cooperate in the same cognitive task.11

In accordance with its procedure, the Abhidharma studies mental factors by listing them, establishing the ways in which they arise and cease, and grouping them in the appropriate categories. Each Abhidharma tradition has a slightly different list. Here we follow a list of 51 mental factors distributed in 6 groups.12 The mental typology presented in this list has a number of interesting features in relation to more familiar Western philosophical and scientific typologies:

- Five omnipresent factors: feeling, discernment, intention, attention, and contact
- Five determining factors: aspiration, appreciation, mindfulness, concentration, and intelligence
- Four variable factors: sleep, regret, investigation, and analysis
- Eleven virtuous factors: confidence/faith, self-regarding shame, other-regarding shame, joyful effort, pliability, conscientiousness, detachment, non-hatred (lovingkindness), wisdom, equanimity, and non-harmfulness (compassion)
- Six root-afflictions: attachment, anger, ignorance, pride, negative doubt, and mistaken view
- Twenty branch-afflictions: belligerence, vengefulness, concealment, spite, jealousy, avarice, pretense, dissimulation, self-satisfaction, cruelty, self-regarding shamelessness, other-regarding shamelessness, inconsideration, mental dullness, distraction, excitement, lack of confidence/faith, laziness, lack of conscientiousness, and forgetfulness

The nature of this complex typology becomes clearer when one realizes that these six groups can be further reduced to three. The first three groups contain all the neutral factors. They are the factors that can be present in any mental state, whether positive or negative. Hence these factors are neither positive nor negative in and of themselves. The next three groups are different. These factors are ethically determined. The eleven virtuous factors are positive in that they do not compel us toward attitudes that lead to suffering. They leave us undisturbed, open to encounter reality with a more relaxed and freer outlook. The twenty-six afflictive factors, on the other hand, disturb the mind, creating frustration and restlessness. They are the main obstacles to the life of the good as understood by the Buddhist tradition. The very presence of these factors marks the mental state as virtuous or afflictive. Thus it is clear that the Abhidharma typology is explicitly ethical.

This presentation also offers interesting insights concerning the cognitive functions of the mind. In particular, the analysis of the five omnipresent factors – feeling, discernment, intention, attention, and contact – shows some of the complexities of Abhidharmic thinking. These five are described as omnipresent because they are present in every mental state. Even in a subliminal state such as the store-consciousness these five factors are present. The other factors are not necessary for the performance of the most minimal cognitive task (the apprehension of an object, however dimly and indistinctly). Hence they are not present in all mental states, but only in some.

One striking feature of this list is the pre-eminent place of feeling (vedanā, tshor ba) as the first of the factors. This emphasis reflects the fundamental outlook of the tradition, which views humans as being first and foremost sentient. But it also reflects a distinctive view of the cognitive realm that emphasizes the role of spontaneous value attribution. For the Abhidharma, a mental state is not only aware of an object but at the same time it also evaluates this object. This evaluation is the function of the feeling tone that accompanies the awareness and experiences the object as either pleasant, unpleasant, or neutral. This factor is central in determining our reactions to the events we encounter because, for the most part, we do not perceive an object and then feel good or bad about it on the basis of considered judgments. Rather, evaluation is already built into our experiences. We may use reflection to come to more objective judgments, but those mostly operate as correctives to our spontaneous evaluations.

Feeling is not the only important factor, however. A mental state involves not only awareness and feeling but also discernment (saṃjñā, 'du shes; also often translated as perception or recognition). This factor involves the mind's ability to identify the object by distinguishing it from other objects. This concept of discernment presents some difficulties, however. In its most elaborate form, discernment is based on our semiotic ability to make distinctions, mostly through linguistic signs. But for the Abhidharma, the mind's ability to identify objects is not limited to linguistic distinctions, however important they may be. Infants and non-human animals are understood to have the ability to make distinctions, although they do not use symbolic thinking. Are these prelinguistic cognitions nevertheless semiotic? Do they involve non-linguistic signs, or do they make distinctions without the use of signs? It seems plausible to argue that some of these states involve non-linguistic signs, as in the case of visual cognitions that distinguish objects on the basis of visual cues. For the Abhidharma, however, this question strikes deeper, because several meditative states in the Buddhist tradition are described as signless (animitta, mtshan med).13 Can the mind in these states identify its object without making distinctions? Or is it the case that even in signless states the mind still makes distinctions, although they are not linguistic or even conceptual? In a short chapter such as this one, we cannot delve into this issue, despite its relevance to the dialogue between Buddhism and the sciences of mind.

Other factors are also significant. Intention (cetanā, sems pa) is a central and omnipresent factor, which determines the moral (not ethical) character of the mental state. Every mental state approaches its object with an intention, a motivation that may or may not be evident to the person. This intention determines the moral nature of the mental state: whether it is virtuous, non-virtuous, or neutral. This factor is associated with the accomplishment of a goal and hence is also thought of as a focus of organization for the other factors.

Also important is the role of attention (manasikāra, yid la byed pa), another of the five omnipresent factors. It is the ability of the mind to be directed to an object. A contemporary commentator explains attention this way: “Attention is the mental factor responsible for the mind’s advertence to the object, by virtue of which the object is made present to consciousness. Its characteristic is the conducting of the associated mental states [i.e., factors] to the object. Its function is to yoke the associated mental states [i.e., factors] to the object” (Bodhi, 1993, p. 81). Every mental state has at least a minimal amount of focus on its object; hence attention is an omnipresent factor.

Attention needs to be distinguished from two other related factors. The first is concentration (samādhi, ting nge ’dzin), the ability of the mind to dwell on its object single-pointedly. The second is mindfulness (smṛti, dran pa, also translated as recollection), which is the mind’s ability to keep the object in focus without forgetting it, being distracted, wobbling, or floating away from the object.


P1: JzG0521857430c05 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 5 , 2007 5 :25

100 the cambridge handbook of consciousness

Neither ability is present in every mental state. Concentration differs from attention in that it involves the ability of the mind not just to attend to an object but also to sustain this attention over a period of time. Similarly, mindfulness is more than the simple attending to the object. It involves the capacity of the mind to hold the object in its focus, preventing it from slipping away in forgetfulness. Hence both factors, which are vital to the practice of Buddhist meditation (see Chapter 19), are included among the determining factors. They are present only when the object is apprehended with some degree of clarity and sustained focus.

The factors discussed so far are mainly cognitive, but the Abhidharma list also includes mental factors we would describe as emotions. Consider the ethically determined factors, starting with the eleven virtuous ones: confidence/faith, self-regarding shame, other-regarding shame, joyful effort, pliability, conscientiousness, detachment, non-hatred (lovingkindness), wisdom, equanimity, and non-harmfulness (compassion).

We would describe several of these factors, such as lovingkindness and compassion, as emotions. These two factors belong to what we would characterize as the affective domain, although here they are understood not with regard to their affectivity but rather in relation to their ethical character.14 Hence they are grouped with other factors, such as wisdom and conscientiousness, that are more cognitive than affective. For the Abhidharma, all these factors belong together: They are all positive in that they promote well-being and freedom from the inner compulsions that lead to suffering.

The afflictive factors, on the other hand, are precisely those that lead to suffering. They are by far the most numerous group and are clearly a major focus of this typology:

• Six root-afflictions: attachment, anger, ignorance, pride, negative doubt, and mistaken view.

• Twenty branch-afflictions: belligerence, vengefulness, concealment, spite, jealousy, avarice, pretense, dissimulation, self-satisfaction, cruelty, self-regarding shamelessness, other-regarding shamelessness, inconsideration, mental dullness, distraction, excitement, lack of confidence/faith, laziness, lack of conscientiousness, and forgetfulness.

Here again we notice that this list contains factors that look quite different. Some factors, such as ignorance, are clearly cognitive, whereas others, such as anger and jealousy, are more affective. They are grouped together because they are afflictive: They trouble the mind, making it restless and agitated. They also compel and bind the mind, preventing one from developing more positive attitudes. This afflictive character may be obvious in the case of attachment and jealousy, which directly lead to dissatisfaction, frustration, and restlessness. Ignorance – that is, our innate and mistaken sense of self – is less obviously afflictive, but its role is nonetheless central here, because it brings about the other, more obviously afflictive factors.

asian perspectives: indian theories of mind 101

Although there are many elements in the typology of mental factors that we can identify as emotions (anger, pride, jealousy, lovingkindness, and compassion), there is no category that maps onto our notion of emotion. Most of the positive factors are not what we would call emotions, and although most of the negative factors are affective, not all are. Hence it is clear that the Abhidharma does not recognize emotion as a distinct category of its mental typology. There is no Abhidharma category that can be used to translate our concept of emotion, and, similarly, our concept of emotion is difficult to use to translate the Abhidharma terminology. Rather than opposing rational and irrational elements of the psyche, or cognitive and emotive systems of the mind (or brain), the Abhidharma emphasizes the distinction between virtuous and afflictive mental factors. Thus, our familiar Western distinction between cognition and emotion simply does not map onto the Abhidharma typology. Although the cognition/emotion distinction has recently been called into question by some scientists (see Chapter 29 and Damasio, 1995), it remains central to most of contemporary cognitive science and philosophy of mind. The Abhidharma typology offers a different approach, one in which mental factors are categorized according to their ethical character. This typology could prove fruitful for psychologists and social and affective neuroscientists interested in studying the biobehavioral components of human well-being (see Goleman, 2003).

The analyses of mental factors we have reviewed indicate the complexity, sophistication, and uniqueness of the Abhidharma mental typology. For this reason, the Abhidharma is often called, somewhat misleadingly, ‘Buddhist psychology’.15 Yet the Abhidharma analysis does not answer all the questions raised by the Buddhist view of the mind as lacking a real self. In particular, it leaves out the issue of the cognitive or epistemic structure of the mental states that make up the stream of consciousness. To examine this issue, we turn to another Indian Buddhist tradition, the logico-epistemological tradition of Dignāga and Dharmakīrti (see Dreyfus, 1997; Dunne, 2004).

Buddhist Epistemology

This tradition was started by Dignāga around 500 CE and was expanded significantly more than a century later by Dharmakīrti, the focus of our analysis. Its contribution was the explicit formulation of a complete Buddhist logical and epistemological system. The importance of this system in India can be seen in the continuous references to it by later Buddhist thinkers and the numerous attacks it received from orthodox Hindu thinkers. It gradually came to dominate the Indian Buddhist tradition, even eclipsing the Abhidharma as the prime focus of intellectual creativity.

The concern of this tradition is the nature of knowledge. In the Indian context, this issue is formulated as the question: What is the nature of valid cognition (pramāṇa) and what are its types? Hindu thinkers tend to present a realist theory, which liberally allows a diversity of instruments of valid cognition. For example, the Sāṃkhya asserts that there are three types of valid sources of knowledge: perception (pratyakṣa), inference (anumāna), and verbal testimony (śabda). The Nyāya, perhaps the most important Hindu logico-epistemological tradition, added a fourth type of valid cognition, analogy (upamāna). This fourfold typology provided the most authoritative epistemological typology in India. Buddhist epistemology, however, rejects these typologies and offers a more restrictive view, limiting knowledge to inference and perception. It is in its examination of inference as a source of knowledge that the Buddhist tradition analyzes reasoning, in particular the conditions necessary for the formation of sound reasons and all their possible types. Hence this tradition is often described, also somewhat misleadingly, as ‘Buddhist logic’.16

The interpretation of the word pramāṇa is itself a topic of debate among Buddhist and Hindu thinkers. For the latter, this word, in accordance with its grammatical form, refers to the ‘means of valid cognition’. This understanding also accords with the basic view of these schools that knowledge is owned by a subject, the self, to whom knowledge is ultimately conveyed. For example, the Nyāya asserts that knowledge is a quality of the self: It is only when I become conscious of something that I can be said to know it. This view is energetically rejected by Dharmakīrti, who follows the classical Buddhist line that there is no knowing self, only knowledge. Hence, pramāṇa should not be taken in an instrumental sense, but as referring to the knowledge-event, the word itself being then interpreted as meaning ‘valid cognition’. This type of cognition is in turn defined as cognition that is non-deceptive (avisaṃvādi-jñāna):

Valid cognition is that cognition [that is] non-deceptive (avisaṃvādi). Non-deceptiveness [consists] in the readiness [for the object] to perform a function (Dharmakīrti, Commentary on Valid Cognition II: 1, translated by Dreyfus, in Miyasaka, 1971–2).

This statement emphasizes that pramāṇa is not the instrument that a knowing self uses to know things. There is no separate knowing subject, but just knowledge, which is pramāṇa. According to this account, a cognition is valid if, and only if, it is non-deceptive. Dharmakīrti in turn interprets non-deceptiveness as consisting of an object’s readiness to perform a function that relates to the way it is cognized. For example, the non-deceptiveness of a fire is its disposition to burn, and the non-deceptiveness of its perception is its apprehension of the fire as burning. This perception is non-deceptive because it corresponds in practice to the object’s own causal dispositions, contrary to an apprehension of the fire as cold.

The scope of the discussion of pramāṇa, however, is not limited to the analysis of knowledge; it constitutes a veritable philosophical method for investigating other philosophical and even metaphysical topics. All pronouncements about the world and our ways of knowing it must rest on some attested forms of knowledge, such as perception and inference, if they are to be taken seriously. No one can simply claim truth; one must be able to establish one’s statements by pinning down their epistemic supports. The advantage of this method is that it provides intertraditional standards of validation and allows the development of a relatively neutral framework within which philosophical and metaphysical claims can be assessed, without regard to religious or ideological backgrounds. This procedure differs from the Abhidharmic approach, which presupposes Buddhist ideas and vocabulary.

In analyzing the mind, Dharmakīrti starts from the same view of mind as the Abhidharma: Mind is made up of momentary mental states that arise in quick succession. Each moment of consciousness comes to be and disappears instantaneously, making a place for other moments of awareness. Moreover, each moment apprehends the object that appears to it and in the process reveals the object that is apprehended. In this way, each mental state cognizes its object. But as an epistemologist, Dharmakīrti investigates issues left out by the Abhidharma, tackling questions that are central to any philosophical exploration of the mind. In this chapter, we examine some of these questions. First, we consider Dharmakīrti’s analysis of the nature of cognitive events. We examine his view of the mind as apprehending representations of external objects, rather than the objects themselves, and the consequences that this view has for the issue of whether the mind is inherently reflexive (self-revealing and self-aware). We also examine Dharmakīrti’s theory of perception, as well as some of his views on the nature of conceptuality and its relation to language. Finally, we revisit the issue of intentionality, showing the complexity of this notion and attempting to disentangle its several possible meanings within the context of a Buddhist account of the mental.

The Reflexive Nature of Mental Events

We commonly assume that we have unproblematic access to our environment through our senses. Even casual first-person investigation shows, however, that such access may well not be the case. There are cases of perceptual illusion, and even when we are not deceived, the perceptions of individuals vary greatly. Hence philosophy cannot take for granted the common-sense view of perceptual knowledge. Many Western philosophers have argued that our perceptual knowledge goes well beyond the sensible experiences that give rise to it. Although this claim is debatable, we cannot assume without examination that we understand the way in which cognition apprehends its objects.

In thinking about the nature of cognition, Dharmakīrti relies crucially on the concept of aspect (ākāra), a notion that goes back to the Sāṃkhya but has been accepted by several other schools. The idea behind this position, which is called in Indian philosophy sākāravāda (‘assertion of aspect’), is that cognition does not apprehend its object nakedly, but rather through an aspect, which is the reflection or imprint left by the object on the mind. For example, a visual sense consciousness does not directly perceive a blue color, but captures the likeness of blue as imprinted on cognition. Thus, to be aware of an object does not mean apprehending this object directly, but having a mental state that has the form of this object and being cognizant of this form. The aspect is the cognitive form or epistemic factor that allows us to distinguish mental episodes and differentiate among our experiences. Without aspects, we could not distinguish, for instance, a perception of blue from a perception of yellow, for we do not perceive either color directly. The role of the aspect is thus crucial in Dharmakīrti’s system, for it explains a key feature of consciousness: Consciousness is not the bare seeing that direct realism and common sense suppose, but rather the apprehension of an aspect that represents the object in the field of consciousness. The aspect is not external to consciousness. It is not only the form under which an external object presents itself to consciousness but also the form that consciousness assumes when it perceives its object. Thus an aspect is both a representation of an object in consciousness and the consciousness that sees this representation.

The implication of this analysis is that perception is inherently reflexive. Awareness takes on the form of an object and reveals that form by assuming it. Thus, in the process of revealing external things, cognition reveals itself. This view of cognition as ‘self-luminous’ (svayaṃ prakāśa) and self-presencing is not unique to Dignāga, its first Buddhist propounder, or to Dharmakīrti, his follower. It is also accepted by other thinkers, particularly the Hindu Vedāntins, who identify consciousness with the self and describe it as being ‘only known to itself’ (svayaṃvedya) and ‘self-effulgent’ (svayamprabhā; see Gupta, 1998, 2003; Mayeda, 1979/1992, pp. 22, 44). For Dignāga and Dharmakīrti, however, the inherently reflexive character of consciousness is a consequence not of its transcendent and pure nature, but of its consisting in the beholding of an internal representation. From one side, consciousness has an externally oriented feature, called the objective aspect (grāhyākāra). This feature is the form that a mental state assumes under the influence of an external object. The second side is the internal knowledge of our own mental states. It is called the subjective aspect (grāhakākāra), the feature that ensures that we are aware of the objective aspect, the representation of the object. These two parts do not exist separately. Rather, each mental state consists of both and hence is necessarily reflexive (aware of itself in being aware of its object).

The necessary reflexivity of consciousness is understood by Dharmakīrti and his followers as a particular type of perception called self-cognition (svasaṃvedana). Self-cognition can be compared to what Western philosophers call apperception; namely, the knowledge that we have of our own mental states. It is important to keep in mind, however, that apperception does not imply a second and separate cognition directed toward a given mental state of which one is thereby aware. For Dharmakīrti, apperception is not introspective or reflective, for it does not take inner mental states as its objects. It is instead the self-cognizing factor inherent in every mental episode, which provides us with a non-thematic awareness of our mental states. For Dharmakīrti, reflexivity is a necessary consequence of his analysis of perception, according to which a subjective aspect beholds an objective aspect that represents the external object within the field of consciousness. Self-cognition is nothing over and above this beholding.

Self-cognition is the intuitive presence that we feel we have toward our own mental episodes. We may not be fully aware of all the aspects and implications of our experiences, but we do seem to keep track of them. Tibetan scholars express this idea by saying that there is no person whose mental states are completely hidden to him- or herself. This limited self-presence is due not to a metaphysical self, but to self-cognition. Because apperception does not rely on reasoning, it is taken to be a form of perception.


Apperception does not constitute, however, a separate reflective or introspective cognition. Otherwise, the charge that the notion of apperception opens an infinite regress would be hard to avoid.

Dharmakīrti’s ideas are not unlike those of Western philosophers who have argued that consciousness implies self-consciousness (see Chapters 3 and 4). Such philosophers include (despite their otherwise vast differences) Aristotle, Descartes, Locke, Kant, Husserl, and Sartre (see Wider, 1997, pp. 7–39). According to Locke, a person is conscious of his or her own mental states. He defines consciousness as “the perception of what passes in a man’s mind” (Essay Concerning Human Understanding II: ii, 19). Leibniz, in his New Essays Concerning Human Understanding (II: i, 19), criticizes Locke, pointing out that this view leads to an infinite regress, for if every cognitive act implies self-awareness, self-knowledge must also be accompanied by another awareness, and so on ad infinitum. This regress arises, however, only if knowledge of one’s mental states is assumed to be distinct from knowledge of external objects. This assumption is precisely what Dharmakīrti denies. A consciousness is aware of itself in a non-dual way that does not involve the presence of a separate awareness of consciousness. The cognizing person simply knows that he or she cognizes without the intervention of a separate perception of the cognition. This knowledge is the function of apperception, which thus provides an element of certainty with respect to our mental states. Apperception does not necessarily validate these states, however. For example, one can take oneself to be seeing water without knowing whether that seeing is veridical. In this case, one knows that one has an experience, but one does not know that one knows. The determination of the validity of a cognition is not internal or intrinsic to that cognition, but is to be established by practical investigation.

Several arguments are presented by Dharmakīrti to establish the reflexive nature of consciousness.17 One of his main arguments concerns the nature of suffering and happiness as it reveals the deeper nature of mental states. For Dharmakīrti, as for the Abhidharma, suffering and happiness are not external to consciousness, but integral to our awareness of external objects. Our perceptions arise with a certain feeling-tone, be it pleasant, unpleasant, or neutral; this feeling-tone is a function of the presence of the mental factor of feeling as described by the Abhidharma. This feeling needs to be noticed, however; otherwise we would not be aware of how the apprehension of the object feels. Because this noticing cannot be the function of another mental state without incurring the problem of an infinite regress, it must be the mental state apprehending the external object that becomes aware at the same time of the feeling. This conclusion indicates, for Dharmakīrti, the dual nature of mental states. In a single mental state, two aspects can be distinguished: (1) the objective aspect, the representation of the external object in consciousness, and (2) the subjective aspect, the apprehension of this appearance, or self-cognition.

For Dharmakīrti, a mental state thus has two functions: It apprehends an external object (ālambana) and beholds itself. The apprehension of an external object is not direct, but results from the causal influence of the object, which induces cognition to experience (anubhava) the object’s representation. Hence, mind does not experience an external object, but beholds an internal representation that stands for an external object. Cognition cannot be reduced to a process of direct observation; it involves a holding of an inner representation. This beholding is not, however, an apprehension in the usual sense of the word, for the two aspects of a single mental episode are not separate. It is an ‘intimate’ contact, a direct experiencing of the mental state by itself through which we become aware of our mental states at the same time as we perceive things.

Theory of Perception

This view of cognition as bearing only indirectly on external objects has obvious consequences for the theory of perception. The theory of perception is an important element of Dharmakīrti’s epistemology, for we have access to external reality first and foremost through perception, the primary valid cognition. But this access is not as unproblematic as one might think. Although it might seem commonsensical that perception results from our encounter with the world, in reality consciousness does not directly cognize the object, but only indirectly cognizes it. For Dharmakīrti, as we have seen, the mind has direct access only to the representational aspect caused by the object; the object itself remains inaccessible to consciousness. The similarity between object and aspect – and hence between object and consciousness, the aspect being the cognitive form of the object that stands for the object in the field of consciousness – is the crucial element in this causal theory of perception. This similarity ensures that perception is not locked up in its own appearances, as conceptions are. Consciousness is not in direct contact with the external world, but only with an internal impression caused by the external object. Hence the external object remains hidden, though not completely.

When pressed by these problems, Dharmakīrti sometimes shifts between the views of two different Buddhist philosophical schools, using one perspective to bypass problems that arise in the other. These two views are the Sautrāntika theory of perception, which is representationalist in the ways just described, and the Yogācāra theory, which is idealist and denies that there is anything outside of consciousness. Following Dignāga’s example and his strategy of ascending scales of philosophical analysis, Dharmakīrti holds that the Yogācāra theory is truer and hence higher on the scale of analysis. This theory denies that there are any external objects over and above the direct objects of perception. Thus its view of perception is phenomenalist: It reduces external objects to interpreted mental data, but such data are no longer taken to stand for external objects (because it is now held that nothing exists outside of consciousness). This theory, however, is counter-intuitive, and so Dharmakīrti refers to it only occasionally, preferring to argue on the basis of the commonsensical assumption that external objects exist. His theory of perception thus has a peculiar two-tiered structure, in which he presupposes the existence of external objects, which he then ultimately rejects to propound a form of idealism.

Of these two tiers, the one Dharmakīrti most often refers to is the Sautrāntika representationalist theory of perception. According to this view, consciousness does not have direct access to external objects, but grasps objects via the intermediary of an aspect caused by, and similar to, an external object. He sometimes replaces this view with a Yogācāra view, which holds that internal impressions are produced not by external objects but by internal tendencies. This shift into full-blown idealism allows Dharmakīrti to bypass the difficulties involved in explaining the relation between internal perceptions and external objects: Because there are no external objects, the problem of the relation between internal impressions and external objects does not arise. At this level, his philosophy of perception can be described as phenomenalist, for it holds that there is no external object outside of aspects.

Another major feature of Dharmakīrti’s account is his sharp separation between perception and conception, a separation enshrined in his definition of perception as the cognition that is unmistaken (abhrānta) and free from conceptions (kalpanāpoḍha) (Commentary on Valid Cognition, III: 300cd). Because perception is unmistaken and conception is mistaken, perception must be free from conception. This analysis of perception differs sharply from the dominant account in India, the epistemological realism of the Nyāya school and its assertion of the existence of a determinate (savikalpaka) form of perception. For the Nyāya, perception does not stop with the simple taking in of sensory stimuli, but also involves the ability to categorize this input. Although we may start with a first moment of indeterminate perception, in which we merely take in external reality, we do not stop there but go on to formulate perceptual judgments. Moreover, and this is the crux of the question, these judgments are for the Nyāya fully perceptual. They are not mistaken conceptual overlays, but true reflections of reality.

This commonsensical view of perception is not acceptable to Dharmakīrti, for it leads to an unenviable choice: either accept the reality of the abstract entities necessary for the articulation of the content of perception, or reject the possibility of an unmistaken cognition. Because neither possibility is acceptable to Dharmakīrti, he holds that perception can only be non-conceptual. There is no determinate perception, for the judgments induced by perception are not perceptual, but are just conceptual superimpositions. They do not reflect the individual reality of phenomena, but instead address their general characteristics. Because those characteristics are only constructs, the cognitions that conceive them cannot be true reflections of reality. Hence, for perception to be undistorted in a universe of particulars, it must be totally free from conceptual elaborations. This position implies a radical separation between perception, which merely holds the object as it is in the perceptual ken, and the interpretation of this object, which introduces conceptual constructs into the cognitive process.

This requirement that perception be non-conceptual is the cornerstone of the Buddhist theory of perception. But it creates problems for Dharmakīrti. It would seem that, given his privileging of perception, he should hold an empiricist view, according to which perception boils down to a bare encounter with reality and knowledge is given to the senses. Dharmakīrti should hold the view that the aspects through which we come to perceive reality are fully representational like Locke’s ideas, that they stand for external objects, and that their apprehension is in and of itself cognitive. Dharmakīrti’s view of perception, however, is more complex, for he shares with Sellars (1956) the recognition that knowledge, even at the perceptual level, does not boil down to an encounter with reality, but requires active categorization. We do not know things by sensing them, for perception does not deliver articulated objects, but only impressions, which by themselves are not forms of knowledge but become so only when they are integrated within our categorical schemes. For example, when we are hit on the head, we first have an impression. We just have a sensation of pain, which is not by itself cognitive. This sensation becomes cognitive when it becomes integrated into a conceptual scheme, in which it is explained as being an impact on a certain part of our body due to certain causes. It is only then that the impression of being hit becomes fully intentional. Prior to this cognitive integration, the impression, or, to speak Dharmakīrti’s language, the aspect, does not yet represent anything in the full sense of the word. It becomes so only when interpreted conceptually.

This view of perception agrees with Dharmakīrti’s analysis of the validity of cognitions, which consists in their being ‘non-deceptive’, a term interpreted in practical terms. Cognitions are valid if, and only if, they have the ability to lead us toward successful practical actions. In the case of perception, however, practical validity is not as straightforward as one might think. Achieving practical purposes depends on correctly describing the objects we encounter. It is not enough to see an object that is blue; we must also see it as being blue. To be non-deceptive, a cognition depends on the appropriate identification of the object as being this or that. Perceptions, however, do not identify their objects, for they are not conceptual. They cannot categorize their objects, but only hold them without any determination. Categorization requires conceptual thought in the form of a judgment. Such a judgment subsumes its object under an appropriate universal, thereby making it part of the practical world where we deal with long-lasting entities that we conceive of as parts of a determined order of things. For example, we sense a blue object that we categorize as blue. The perceptual aspect (the blue aspect) is not yet a representation in the full sense of the word, because its apprehension, the perception of blue, is not yet cognitive. It is only when it is interpreted by a conception that the aspect becomes a full-fledged intentional object standing for an external object. Hence, Dharmakīrti’s account of perception leads us to realize the importance of categorical interpretation in the formation of perceptual knowledge, a position that is not without problems for his system, given his emphasis on the primacy and non-conceptuality of perception. Nevertheless, the merit of this analysis is that it disentangles the processes through which we come to know the world, explaining the role of perception as a way of contacting the world while emphasizing the role of conceptual categorization in the formation of practical knowledge.

Thought and Language

In examining thought (kalpanā), Dharmakīrti postulates a close association with language. In fact, the two can be considered equivalent from an epistemological point of view. Language signifies through conceptual mediation in the same way that thought conceives of things. The relation between the two also goes the other way: We do not first understand things independently of linguistic signs and then communicate this understanding to others. Dharmakīrti recognizes a cognitive import to language; through language we identify the particular things we encounter, and in this way we integrate the object into the meaningful world we have constructed. The cognitive import of language is particularly obvious in the acquisition of more complex concepts. In these cases, it is clear that there is nothing in experience that could possibly give rise to these concepts without language. Without linguistic signs thought cannot keep track of things to any degree of complexity. Dharmakīrti also notes that we usually remember things by recollecting the words associated with those things. Thus concepts and words mutually depend on each other.

This close connection between thought and language, inherited from Dignāga, differentiates Dharmakīrti from classical empiricists, such as Locke, and modern sense-data theorists, who believe in what Sellars (1956) describes as the 'myth of the given'. Locke, for example, holds that concepts and words are linked through association. The word 'tree' acquires its meaning by becoming connected with the idea tree, which is the mental image of a tree. Hence for Locke the representation of the tree is not formed through language, but is given to sensation (Dharmakīrti's perception). We understand a tree as a tree through mere acquaintance with its representation without recourse to concepts. Dharmakīrti's philosophy is quite different, for it emphasizes the constitutive and constructive nature of language. This conception of language is well captured by one of Dharmakīrti's definitions of thought:

Conceptual cognition is that consciousness in which representation (literally, appearance) is fit to be associated with words (Ascertainment of Valid Cognition 40:6–7, in Vetter, 1966).

Thought identifies its object by associating the representation of the object with a word. When we conceive of an object we do not apprehend it directly, but through the mediation of its aspect. Mediation through an aspect also occurs with perception, but here the process of mediation is different. In the case of perception there is a direct causal connection between the object and its representation, but no such link exists for thought. There is no direct causal link between the object and thought, but rather an extended process of mediation in which linguistic signs figure prominently.

For Dharmakīrti, the starting point of this process is our encounter with a variety of objects that we experience as being similar or different. We construct concepts in association with linguistic signs to capture this sense of experienced similarity and difference. This linguistic association creates a more precise concept in which the representations are made to stand for a commonality that the objects are assumed to possess. For example, we see a variety of trees and apprehend a similarity between these objects. At this level, our mental representations have yet to yield a concept of tree. The concept of tree is formed when we connect our representations with a socially formed and communicated sign and assume that they stand for a treeness that we take individual trees to share. In this way experiences give rise to mental representations, which are transformed into concepts by association with a linguistic sign. The formation of a concept consists of the assumption that mental representations stand for an agreed-upon imagined commonality. Thus concepts come to be through the conjunction of the experience of real objects and the social process of language acquisition. Concept formation is connected to reality, albeit in a mediated and highly indirect way.

But concept formation is also mistaken, according to this view. A concept is based on the association of a mental representation with a term that enables the representation to stand for a property assumed to be shared by various individuals. In Dharmakīrti's nominalist world of individuals, however, things do not share a common property; rather, the property is projected onto them. The property is manufactured when a representation is made to stand for an assumed commonality, which a variety of individuals are mistakenly taken to instantiate. Hence this property is not real; it is merely a pseudo-entity superimposed (adhyāropa) on individual realities. This property is also not reducible to a general term. In other words, the commonality that we project onto things does not reside in using the same term to designate discrete individuals. Upon analyzing the notion of sameness of terms, we realize that identifying individual terms as being the same presupposes the concept of sameness of meaning, in relation to which the individual terms can be identified. Thus commonality is not due simply to a term, but requires the formation of concepts on the basis of the mistaken imputation of commonality onto discrete individuals.

What does it mean, however, for a concept to be based on an assumed commonality? Here Dharmakīrti's theory must be placed within its proper context, the apoha or exclusion theory of language, which was created by Dignāga. This complex topic is beyond the scope of this chapter. Suffice it to say that the apoha theory is a way to explain how language signifies in a world of individuals. Linguistic meaning poses a particularly acute problem for Dignāga and Dharmakīrti, for they are committed to a connotationist view of language, in which sense has primacy over reference. Such a view, however, is difficult to hold in a nominalist ontology that disallows abstract entities, such as meaning.18

The apoha theory tries to solve this conundrum by arguing that language does not describe reality positively through universals, but negatively by exclusion. Language is primarily meaningful, but this does not mean that there are real senses. Rather, we posit agreed-upon fictions that we construct for the sake of categorizing the world according to our purposes. Thus 'cow' does not describe Bessie through the mediation of a real universal (cowness), but by excluding a particular (Bessie) from the class of non-cow. Matilal describes Dignāga's view this way:

Each name, as Dignāga understands, dichotomizes the universe into two: those to which the name can be applied and those to which it cannot be applied. The function of a name is to exclude the object from the class of those objects to which it cannot be applied. One might say that the function of a name is to locate the object outside of the class of those to which it cannot be applied (Matilal, 1971, p. 45).

Although linguistic form suggests that we subsume an individual under a property, analysis reveals that words merely exclude objects from being included in a class to which they do not belong. The function of a name is to locate negatively an object within a conceptual sphere. The impression that words positively capture the nature of objects is misleading.

This theory was immediately attacked by Hindu thinkers, such as Kumārila and Uddyotakara, who raised strong objections. One of them was that this theory is counterintuitive, because we do not perceive ourselves to eliminate non-cows when we conceive of cows. Dharmakīrti's theory of concept formation is in many ways an attempt to answer these attacks. It argues that the apoha theory is not psychological, but epistemological. In conceiving of objects we do not directly eliminate other objects, but instead rely on a representation that is made to stand in for an assumed commonality shared by several particulars. It is this fictional commonality that is the result of an exclusion. There is nothing over and above particulars, which are categorized on the basis of their being excluded from what they are not. The concept that has been formed in an essentially negative way is projected onto real things. In the process of making judgments such as 'this is a tree', the real differences that exist between the different trees come to be ignored and the similarities are reified into a common universal property, which is nothing but a socially agreed-upon fiction.

The eliminative nature of thought and language is psychologically revealed when we examine the learning process. The word 'cow', for instance, is not learned only through a definition, but by a process of elimination. We can give a definition of 'cow', but the definition works only if its elements are known already. For example, we can define cows as animals having dewlaps, horns, and so on (the traditional definition of 'cow' in Indian philosophy). But how do we know what counts as a dewlap? Not just by pointing to the neck of a cow, but by eliminating the cases that do not fit. In this way, we establish a dichotomy between those animals that fit, and other animals or things that do not, and on the basis of this negative dichotomy we construct a fictive property, cowness. This construction is not groundless, however, but proceeds through an indirect causal connection with reality. Concepts are not formed a priori, but elaborated as a result of experiences. Dharmakīrti's solution to the problem of thought and meaning is thus to argue that in a world bereft of real abstract entities (properties), there are only constructed intensional (linguistic) pseudo-entities, but that this construction is based on experience; that is, perception.

This grounding in perception ensures that, although conception is mistaken in the way reviewed above, it is neither baseless nor random and hence can lead to the formation of concepts that will be attuned to the causal capacities of particulars.

Dharmakīrti and Abhidharma: Intentionality Revisited

Dharmakīrti's analysis has in certain respects a great deal of continuity with the Abhidharma. Both view the mind as constituted by a succession of mental states in accordance with their ontological commitments, which privilege the particular over the general. Reality is made up of a plurality of elements (here moments of awareness), and generality, when it is not a figment of our imagination, is at best the result of aggregation. This emphasis on the particular derives from the central tenets of the Buddhist tradition; namely, non-substantiality and dependent origination. In Dharmakīrti's epistemological approach, this emphasis expresses itself in valuing perception over conception, and in the problematic but necessary cooperation between the two forms of cognition. We do not come to know things by merely coming across them, but by integrating them into our conceptual schemes on the basis of our experiences.

One question raised by this analysis concerns intentionality. The Abhidharma tradition had assumed all along that cognitions were intentional, but did not provide a systematic analysis of intentionality. Dharmakīrti fills this gap, analyzing the way in which various types of cognition bear on their objects. But because he makes a sharp distinction between perception and conception, his analysis does not yield a single concept of intentionality, but on the contrary leads us to realize that this central notion may have to be understood in multiple ways. The cognitive process starts with our encounter with the world through perceptions, but this encounter is not enough to bring about knowledge. Only when we are able to integrate the objects delivered through the senses into our categorical schemes can we be said to know them in the full sense of the word. Hence, if we understand intentionality as cognitive – that is, as pertaining to knowledge – we may well have to agree with Dharmakīrti that perception is not in and of itself fully intentional. Only when perception is coordinated with conception does it become intentional; hence it can be said to be intentional only in a derived sense of the word. Perception is not in and of itself cognitive, but only inasmuch as it has the ability to induce conceptual interpretations of its objects. This does not mean, however, that perception is completely blank or purely passive. It has an intentional function, that of delivering impressions that we take in and organize through our conceptual schemes. Hence, perception can be said to have a phenomenal intentionality, which may be revealed in certain forms of meditative experiences.

Dharmakīrti alludes to such experiences when he describes a form of meditation, in which we empty our mind without closing it completely to the external world (Commentary on Valid Cognition III: 123–5, in Miyasaka 1971–2). In this state of liminal awareness, things appear to us but we do not identify them. We merely let them be. When we come out of this stage, the usual conceptual flow returns, and with it the conceptualization that allows us to identify things as being this or that. This experience shows, Dharmakīrti argues, that identification is not perceptual, but is due to conceptualization. In such a state, perception takes place but not conceptualization. Hence, perception is a non-conceptual sensing onto which interpretations are added.

Due to the speed of the mental process, the untrained person cannot differentiate conceptual from non-conceptual cognitions. It is only on special occasions, such as in some form of meditation, that a clear differentiation can be made. There, the flow of thought gradually subsides, and we reach a state in which there is a bare sensing of things. In this state, what we call shapes and colors are seen barely (i.e., as they are delivered to our senses without the adjunctions of conceptual interpretations). When one gradually emerges from such a non-conceptual state, the flow of thoughts gradually reappears, and we are able to make judgments about what we saw during our meditation. One is then also able to make a clear differentiation between the products of thoughts and the bare delivery of the senses and to distinguish cognitive from phenomenal intentionality.

The analysis of intentionality, however, may have to go even further to account for all the forms of cognition known to Buddhist traditions. We alluded above to the Abhidharmic idea of a store-consciousness, a subliminal form of cognition that supports all the propensities, habits, and tendencies of a person. Although such a store-consciousness is usually asserted by the Yogācāra to support their idealist view, it is known to other traditions under other names and hence has to be taken seriously within a Buddhist account of the mind, regardless of the particular views that are associated with it. But given the particularities of this form of consciousness, its integration within a Buddhist view of the mind is not without problems. The difficulties come from the fact that the store-consciousness does not seem to have cognitive or even phenomenal intentionality. Because it does not capture any feature, it cannot be said to know its object, like conceptions. Because it is subliminal, it is difficult to attribute to it a phenomenal content able to induce categorization, like perceptions. How then can it be intentional?

To respond to this question would necessitate an analysis that goes well beyond the purview of this chapter. Several avenues are open to us. We could argue that the store-consciousness is not intentional and hence that intentionality is not the defining characteristic of the mental, but only of certain forms of cognitions. We would then be faced with the task of explaining the nature of the mental in a way that does not presuppose intentionality. Or we could extend the concept of intentionality, arguing that the store-consciousness is not intentional in the usual cognitive or phenomenal senses of the word, but rather that its intentionality consists in its having a dispositional ability to generate more explicit cognitive states. Some Western phenomenologists, notably Husserl and Merleau-Ponty, distinguish 'object-directed intentionality' from 'operative intentionality' (see Chapter 4). Whereas the former is what we usually mean by intentionality, the latter is a non-reflective tacit sensibility, a spontaneous and involuntary level that makes us ready to respond cognitively and affectively to the world, though it is not by itself explicitly cognitive. This most basic form of intentionality is important in explaining our openness to the world. It also seems an interesting avenue for exploring the cognitive nature of the store-consciousness.

Conclusion

We can now see the richness and the complexities of the Indian Buddhist analyses of the nature of the mind and consciousness. The Abhidharma provides the basis of these analyses, with its view of the mind as a stream of moments of consciousness and its distinction between the primary factor of awareness and mental factors. This tradition also emphasizes the intentional nature of consciousness, the ability of consciousness to be about something else. As we have seen, however, this concept is far from self-evident and needs further philosophical clarification. This clarification is one of the important tasks of Dharmakīrti's philosophy. In accomplishing this task, Dharmakīrti critically explores the variety of human cognitions, distinguishing the conceptual from the perceptual modes of cognition and emphasizing the constructed nature of the former and its close connection with language. Yet, as we have also seen, this philosophy is not always able to account for all the insights of the Abhidharma, particularly those concerning the deeper layers of consciousness.

When we look at the Indian Buddhist tradition, we should not look for a unified and seamless view of the mind. Like any other significant tradition, Indian Buddhist philosophy of mind is plural and animated by debates, questions, and tensions. This rich tradition has a great deal to offer contemporary mind science and philosophy, including rich phenomenological investigations of various aspects of human cognition and exploration of various levels and types of meditative consciousness. This tradition also shows, however, that it would be naïve to take these investigations of consciousness as being objectively given or established. Rather, they are accounts of experience that are often intertwined with doctrinal formulations and hence are open to critique, revision, and challenge, like any other human interpretation. Indeed, these formulations need to be taken seriously and examined with the kind of critical spirit and rigorous philosophical thinking exhibited by Dharmakīrti. Only then can we do justice to the insights of this tradition.

Glossary

Sāṃkhya

Pradhāna: primordial nature or prakṛti, materiality. The primordial substance out of which the diversity of phenomena arises. It is composed of three qualities (guṇa): sattva (transparency, buoyancy), rajas (energy, activity), and tamas (inertia, obstruction). They are the principles or forces whose combination produces mental and material phenomena.

Ātman: spiritual self or puruṣa, person. The non-material spiritual element that merely witnesses the mental activities involved in the ordinary awareness of objects.

Buddhi: usually translated as 'the intellect'. It has the ability to distinguish and experience objects. This ability provides the prereflective and presubjective ground out of which determined mental states and their objects arise. It is also the locus of all the fundamental predispositions that lead to these experiences.

Ahaṃkāra: egoity or ego-sense. This is the sense of individual subjectivity or selfhood tied to embodiment, which gives rise to the subjective pole of cognition.


Manas: mentation. It oversees the senses and discriminates between objects. By serving as an intermediary between the intellect and the senses, mentation organizes sensory impressions and objects and integrates them into a temporal framework created by memories and expectations.

Citta: mental activities or antaḥkaraṇa, internal organ. This is the grouping of buddhi, ahaṃkāra, and manas.

Pramāṇa: instrument of valid cognition of the self. The Sāṃkhya recognizes three such instruments: perception, inference, and testimony. The Nyāya adds a fourth one, analogy.

Buddhist

Citta: primary factor of awareness or vijñāna, consciousness. It is the aspect of the mental state that is aware of the object, or the bare apprehension of the object. It is the awareness that merely discerns the object, the activity of cognizing the object.

Caitasika: mental factor. Mental factors are aspects of the mental state that characterize the object of awareness and account for its engagement. In other words, whereas consciousness makes known the mere presence of the object, mental factors make known the particulars of the content of awareness, defining the characteristics and special conditions of its object.

Ālaya-vijñāna: store-consciousness. This continuously present subliminal consciousness is posited by some of the Yogācāra thinkers to provide a sense of continuity in the person over time. It is the repository of all the basic habits, tendencies, and propensities (including those that persist from one life to the next) accumulated by the individual.

Bhavaṅga citta: life-constituent consciousness. Although this consciousness is not said to be always present and arises only during the moments where there is no manifest mental activity, it also provides a sense of continuity for the Theravāda school, which asserts its existence.

Kliṣṭa-manas: afflictive mentation. This is the inborn sense of self that arises from the apprehension of the store-consciousness as being a self. From a Buddhist point of view, however, this sense of self is fundamentally mistaken. It is a mental imposition of unity where there is in fact only the arising of a multiplicity of interrelated physical and mental events.

Pramāṇa: valid cognition. Not the instrument of a self but the knowledge-event itself. There are only two types of valid cognition admissible in Buddhist epistemology: pratyakṣa, perception, and anumāna, inference.

Svasaṃvedana: self-cognition. This is the limited but intuitive presence that we feel we have toward our own mental episodes, which is due not to the presence of a metaphysical self but to the non-thematic reflexive knowledge that we have of our own mental states. Because self-cognition does not rely on reasoning, it is taken to be a form of perception. It does not constitute, however, a separate reflective or introspective cognition. Otherwise, the charge that the notion of apperception opens an infinite regress would be hard to avoid.

Notes

1. Presenting the Sāṃkhya view in a few lines is problematic given its evolution over a long period of time, an evolution shaped by the addition of numerous refinements and new analyses in response to the critiques of Buddhists and Vedāntins. For a quick summary, see Mahalingam (1977). For a more detailed examination, see Larson and Bhattacharya (1987).

2. Contrary to Vedānta, the Sāṃkhya holds that there are many individual selves rather than a universal ground of being such as Brahman.


3. The notion of a pure and passive 'witness consciousness' is a central element of many Hindu views about consciousness (see Gupta, 1998, 2003).

4. For a thoughtful discussion of this view of the mind, see Schweizer (1993).

5. Numerous translations of Patañjali's Yoga Sūtras are available in English.

6. For discussion of the Advaita Vedānta view of consciousness, see Gupta (2003, Chapter 5). For a philosophical overview of Advaita Vedānta, see Deutsch (1969).

7. For a glimpse of the origins of the Abhidharma, see Gethin (1992).

8. For Husserl, by contrast, not all consciousness is intentional in the sense of being object-directed. See Chapter 4 and the final section of this chapter.

9. All quotations from this work are translated from the French by G. Dreyfus.

10. See Rahula (1980, p. 17). Although the Theravāda Abhidharma does not recognize a distinct store-consciousness, its concept of bhavaṅga citta, the life-constituent consciousness, is similar. For a view of the complexities of the bhavaṅga, see Waldron (2003, pp. 81–87).

11. They are then said to be conjoined (sampayutta, mtshungs ldan), in that they are simultaneous and have the same sensory basis, the same object, the same aspect or way of apprehending this object, and the same substance (the fact that there can be only one representative of a type of consciousness and mental factor at the same time). See Waldron (2003, p. 205).

12. This list, which is standard in the Tibetan tradition, is a compilation based on Asaṅga's Abhidharma-samuccaya. It is not, however, Asaṅga's own list, which contains 52 items (Rahula 1980, p. 7). For further discussion, see Napper (1980) and Rabten (1978/1992). For the lists of some of the other traditions, see Bodhi (1993, pp. 76–79) and de la Vallée Poussin (1971, II: 150–178).

13. Although some of these states may be soteriologically significant and involve the ability to transcend duality, not all need be. The practice of concentration can involve signless meditative states, and so too does the practice of some of the so-called formless meditative states.

14. For a discussion of whether compassion and lovingkindness, seen from a Buddhist point of view, are emotions, see Dreyfus (2002).

15. For a brief but thoughtful discussion of the idea of Buddhism as a psychology, see Gomez (2004).

16. For discussion of the characteristics of Indian logic, see Matilal (1985) and Barlingay (1975). On Buddhist logic, see Kajiyama (1966). For an analysis of Dharmakīrti's philosophy, see Dreyfus (1997) and Dunne (2004).

17. For a detailed treatment of Dharmakīrti's arguments and their further elaboration in the Tibetan tradition, see Dreyfus (1997, pp. 338–341, 400–415).

18. For more on this difficult topic, see Dreyfus (1997) and Dunne (2004).

References

Barlingay, S. S. (1975). A modern introduction to Indian logic. Delhi: National.

Bodhi, B. (Ed.). (1993). A comprehensive manual of Abhidharma. Seattle, WA: Buddhist Publication Society.

Damasio, A. (1995). Descartes' error: Emotion, reason and the human brain. New York: Harper Perennial.

de la Vallée Poussin, L. (1971). L'Abhidharmakośa de Vasubandhu. Bruxelles: Institut Belge des Hautes Études Chinoises.

de la Vallée Poussin, L. (1991). Notes sur le moment ou kṣaṇa des bouddhistes. In H. S. Prasad (Ed.), Essays on time. Delhi: Sri Satguru.

Deutsch, E. (1969). Advaita Vedānta: A philosophical reconstruction. Honolulu: University Press of Hawaii.

Dreyfus, G. (1997). Recognizing reality: Dharmakīrti's philosophy and its Tibetan interpretations. Albany, NY: State University of New York Press.

Dreyfus, G. (2002). Is compassion an emotion? A cross-cultural exploration of mental typologies. In R. Davidson & A. Harrington (Eds.), Visions of compassion: Western scientists and Tibetan Buddhists examine human nature (pp. 31–45). Oxford: Oxford University Press.

Dunne, J. D. (2004). Foundations of Dharmakīrti's philosophy. Boston: Wisdom.

Gethin, R. (1992). The Matrikas: Memorization, mindfulness and the list. In J. Gyatso (Ed.), In the mirror of memory (pp. 149–172). Albany, NY: State University of New York Press.

Goleman, D. (2003). Destructive emotions: A scientific dialogue with the Dalai Lama. New York: Bantam.

Gomez, L. (2004). Psychology. In R. Buswell (Ed.), Encyclopedia of Buddhism (pp. 678–692). New York: MacMillan.

Guenther, H. (1976). Philosophy and psychology in the Abhidharma. Berkeley, CA: Shambala Press.

Gupta, B. (1998). The disinterested witness: A fragment of Advaita Vedānta phenomenology. Evanston, IL: Northwestern University Press.

Gupta, B. (2003). Cit: Consciousness. New Delhi: Oxford University Press.

James, W. (1981). Principles of psychology. Cambridge, MA: Harvard University Press.

Kajiyama, Y. (1966). Introduction to Buddhist logic. Kyoto: Kyoto University.

Larson, J., & Bhattacharya, R. S. (1987). Encyclopedia of Indian philosophies: Sāṃkhya, a dualist tradition in Indian philosophy. Delhi: Motilal.

Mahalingam, I. (1977). Sāṃkhya-Yoga. In B. Carr & I. Mahalingam (Eds.), Companion encyclopedia of Asian philosophy. London: Routledge Press.

Matilal, B. K. (1971). Epistemology, logic, and grammar in Indian philosophical analysis. The Hague: Mouton.

Matilal, B. K. (1985). Logic, language, and reality. Delhi: Motilal Banarsidass.

Mayeda, S. (1992). A thousand teachings: The Upadeśasāhasrī. Albany, NY: State University of New York Press. Original work published 1979.

Miyasaka, Y. (Ed.). (1971–2). Pramāṇavārttika-kārikā. Acta Indologica 2.

Napper, E. (1980). Mind in Tibetan Buddhism. Ithaca, NY: Snow Lion.

Pöppel, E. (1988). Mindworks: Time and conscious experience. Boston: Harcourt Brace Jovanovich.

Rabten, G. (1992). The mind and its functions. Mt. Pelerin: Rabten Choeling. Original work published 1978.

Rahula, W. (1980). Le Compendium de la Super-Doctrine d'Asaṅga. Paris: École Française d'Extrême-Orient.

Schweizer, P. (1993). Mind/consciousness dualism in Sāṃkhya-Yoga philosophy. Philosophy and Phenomenological Research, 53, 845–859.

Sellars, W. (1956). Empiricism and the philosophy of mind. In H. Feigl & M. Scriven (Eds.), Minnesota studies in the philosophy of science. Vol. 1: The foundations of science and the concepts of psychology and psychoanalysis (pp. 253–329). Minneapolis, MN: University of Minnesota Press.

Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. Cambridge, MA: MIT Press.

Vetter, T. (1966). Dharmakīrti's Pramāṇaviniścayaḥ 1. Kapitel: Pratyakṣam. Vienna: Österreichische Akademie der Wissenschaften.

Waldron, W. (2003). The Buddhist unconscious. London: Routledge Press.

Wider, K. (1997). The bodily nature of consciousness: Sartre and contemporary philosophy of mind. Ithaca, NY: Cornell University Press.

Page 133: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: JzZ0521857430c06 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 22 , 2007 11:22

B. Computational Approaches to Consciousness


CHAPTER 6

Artificial Intelligence and Consciousness

Drew McDermott

Abstract

Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing. However, there have been proposals for how consciousness would be accounted for in a complete computational theory of the mind from such theorists as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis, Sloman, and Smith. One can extract from these speculations a sketch of a theoretical synthesis, according to which consciousness is the property a system has by virtue of modeling itself as having sensations and making free decisions. Critics such as Harnad and Searle have not succeeded in demolishing a priori this or any other computational theory, but no such theory can be verified or refuted until and unless AI is successful in finding computational solutions to difficult problems, such as vision, language, and locomotion.

Introduction

Computationalism is the theory that the human brain is essentially a computer, although presumably not a stored-program, digital computer like the kind Intel makes. Artificial intelligence (AI) is a field of computer science that explores computational models of problem solving, where the problems to be solved are of the complexity of those solved by human beings. An AI researcher need not be a computationalist because he or she might believe that computers can do things that brains do non-computationally. However, most AI researchers are computationalists to some extent, even if they think digital computers and brains-as-computers compute things in different ways. When it comes to the problem of phenomenal consciousness, however, the AI researchers who care about the problem and believe that AI can solve it are a tiny minority, as shown in this chapter. Nonetheless, because I count myself in that minority, I do my best here to survey the work of its members and defend a version of the theory that I think represents that work fairly well.

Perhaps calling computationalism a theory is not exactly right here. One might prefer calling it a working hypothesis, assumption, or dogma. The evidence for computationalism is not overwhelming, and some even believe it has been refuted by a priori arguments or empirical evidence. But, in some form or other, the computationalist hypothesis underlies modern research in cognitive psychology, linguistics, and some kinds of neuroscience. That is, there would not be much point in considering formal or computational models of mind if it turned out that most of what the brain does is not computation at all, but, say, some quantum-mechanical manipulation (Penrose, 1989). Computationalism has proven to be a fertile working hypothesis, although those who reject it typically think of the fertility as similar to that of fungi or of pod people from outer space.

Some computationalist researchers believe that the brain is nothing more than a computer. Many others are more cautious and distinguish between modules that are quite likely to be purely computational (e.g., the vision system) and others that are less likely to be so, such as the modules, or principles of brain organization, that are responsible for creativity or for romantic love. There is no need, in their view, to require that absolutely everything be explained in terms of computation. The brain could do some things computationally and other things by different means, but if the parts or aspects of the brain that are responsible for these various tasks are more or less decoupled, we could gain significant insight into the pieces that computational models are good for and could then leave the other pieces to some other disciplines, such as philosophy and theology.1

Perhaps the aspect of the brain that is most likely to be exempt from the computationalist hypothesis is its ability to produce consciousness; that is, to experience things. There are many different meanings of the word “conscious,” but I am talking here about the “Hard Problem” (Chalmers, 1996), the problem of explaining how it is that a physical system can have vivid experiences with seemingly intrinsic “qualities,” such as the redness of a tomato or the spiciness of a taco. These qualities usually go by their Latin name, qualia. We all know what we are talking about when we talk about sensations, but they are notoriously undefinable. We all learn to attach a label such as “spicy” to certain tastes, but we really have no idea whether the sensation of spiciness to me is the same as the sensation of spiciness to you.

Perhaps tacos produce my “sourness” in you, and lemons produce my “spiciness” in you.2 We would never know, because you have learned to associate the label “sour” with the quale of the experience you have when you eat lemons, which just happens to be very similar to the quale of the experience I have when I eat tacos. We can’t just tell each other what these qualia are like; the best we can do is talk about comparisons. But we agree on such questions as, Do tacos taste more like Szechuan chicken or more like lemons? I focus on this problem because other aspects of consciousness raise no special problem for computationalism, as opposed to cognitive science generally.

The purpose of consciousness, from an evolutionary perspective, is often held to have something to do with the allocation and organization of scarce cognitive resources. For a mental entity to be conscious is for it to be held in some globally accessible area (Baars, 1988, 1997). AI has made contributions to this idea, in the form of specific ideas about how this global access works, going under names such as the “blackboard model” (Hayes-Roth, 1985) or “agenda-based control” (Currie & Tate, 1991). One can evaluate these proposals by measuring how well they work or how well they match human behavior. But there does not seem to be any philosophical problem associated with them.
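The global-access idea can be made concrete with a toy control loop (a purely illustrative sketch of the blackboard/global-workspace style of architecture; the specialist names, salience values, and contents here are my own invention, not any system from the literature): specialist processes post bids for access to a single globally readable workspace, the highest-salience bid wins, and the winning content is "broadcast" so that every specialist can react to it on the next cycle.

```python
# Toy global-workspace ("blackboard") loop: one winning content per cycle is
# made globally available to all specialist modules. Names are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Bid:
    source: str       # which specialist produced this content
    content: str      # the proposed workspace content
    salience: float   # how urgently it wants global access

# A specialist inspects the current workspace and may post a bid.
Specialist = Callable[[Optional[str]], Optional[Bid]]

def vision(ws: Optional[str]) -> Optional[Bid]:
    # Posts its percept once, while nothing is on the workspace yet.
    if ws is None:
        return Bid("vision", "red blob at left", 0.9)
    return None

def planner(ws: Optional[str]) -> Optional[Bid]:
    # Reacts only after the percept has been globally broadcast.
    if ws and "red" in ws:
        return Bid("planner", "reach toward the red blob", 0.7)
    return None

def workspace_cycle(specialists: List[Specialist], steps: int) -> List[str]:
    """Run the bid/broadcast loop; return the sequence of broadcasts."""
    workspace: Optional[str] = None
    history: List[str] = []
    for _ in range(steps):
        bids = [b for s in specialists if (b := s(workspace)) is not None]
        if not bids:
            break
        winner = max(bids, key=lambda b: b.salience)
        workspace = winner.content          # the "global broadcast"
        history.append(f"{winner.source}: {winner.content}")
    return history

print(workspace_cycle([vision, planner], steps=2))
# → ['vision: red blob at left', 'planner: reach toward the red blob']
```

The design point this sketch makes is the one in the text: "conscious" content is just whatever wins the competition for the globally accessible area; nothing philosophically mysterious appears anywhere in the loop.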

For phenomenal consciousness, the situation is very different. Computationalism seems to have nothing to say about it, simply because computers do not have experiences.


I can build an elaborate digital climate-control system for my house, which keeps its occupants at a comfortable temperature, but the climate-control system never feels overheated or chilly. Various physical mechanisms implement its temperature sensors in various rooms. These sensors produce signals that go to units that compute whether to turn on the furnace or the air conditioner. The result of these computations causes switches to close so that the furnace or air conditioner does actually change state. We can see the whole path from temperature sensing to turning off the furnace. Every step can be seen to be one of a series of straightforward physical events. Nowhere are you tempted to invoke conscious sensation as an effect or element of the causal chain.

This is the prima facie case against computationalism, and a solid one it seems to be. The rest of this chapter is an attempt to dismantle it.

An Informal Survey

Although one might expect AI researchers to adopt a computationalist position on most issues, they tend to shy away from questions about consciousness. AI has often been accused of being over-hyped, and the only way to avoid the accusation, apparently, is to be so boring that journalists stay away from you. As the field has matured and as a flock of technical problems have become its focus, it has become easier to bore journalists. The last thing most serious researchers want is to be quoted on the subject of computation and consciousness.

To get some kind of indication of what positions researchers take on this issue, I conducted an informal survey of Fellows of the American Association for Artificial Intelligence (AAAI) in the summer of 2003. I sent the following e-mail to all of them:

Most of the time AI researchers don’t concern themselves with philosophical questions, as a matter of methodology and perhaps also opinion about what is ultimately at stake. However, I would like to find out how the leaders of our field view the following problem: Create a computer or program that has “phenomenal consciousness,” that is, the ability to experience things. By “experience” here I mean “qualitative experience,” the kind in which the things one senses seem to have a definite but indescribable quality, the canonical example being “looking red” as opposed to “looking green.” Anyway, please choose from the following possible resolutions of this problem:

1. The problem is just too uninteresting compared to other challenges
2. The problem is too ill defined to be interesting; or, the problem is only apparent, and requires no solution
3. It’s an interesting problem, but AI has nothing to say about it
4. AI researchers may eventually solve it, but will require new ideas
5. AI researchers will probably solve it, using existing ideas
6. AI’s current ideas provide at least the outline of a solution
7. My answer is not in the list above. Here it is: . . .

Of course, I don’t mean to exclude other branches of cognitive science; when I say “AI” I mean “AI, in conjunction with other relevant disciplines.” However, if you think neuroscientists will figure out phenomenal consciousness, and that their solution will entail that anything not made out of neurons cannot possibly be conscious, then choose option 3. Because this topic is of passionate interest to a minority, and quickly becomes annoying to many others, please direct all follow-up discussion to [email protected]. Directions for subscribing to this mailing list are as follows: . . .

Thanks for your time and attention.

Of the approximately 207 living Fellows, I received responses from 34. The results are as indicated in Table 6.1.

Table 6.1. Results of survey of AAAI fellows

  1   Problem uninteresting      3%
  2a  Ill-defined               11%  (2a + 2b: 19%)
  2b  Only apparent              8%
  3   AI silent                  7%
  4   Requires new ideas        32%
  5   AI will solve it as is     3%
  6   Solution in sight         15%
  7   None of the above         21%

Percentages indicate fraction of the 34 who responded.

Of those who chose category 7 (None of the above) as their answer, here are some of the reasons why:

- “Developing an understanding of the basis for conscious experience is a central, long-term challenge for AI and related disciplines. It’s unclear at the present time whether new ideas will be needed. . . . ”

- “If two brains have isomorphic computation then the ‘qualia’ must be the same. Qualia must be just another aspect of computation – whatever we say of qualia must be a property of the computation viewed as computation.”

- “There are two possible ways (at least) of solving the problem of phenomenal consciousness, ‘explaining what consciousness is’ and ‘explaining consciousness away.’ It sounds like you are looking for a solution of the first type, but I believe the ultimate solution will be of the second type.”

- “The problem is ill-defined, and always will be, but this does not make it uninteresting. AI will play a major role in solving it.”

If Table 6.1 seems to indicate no particular pattern, just remember that what the data show is that the overwhelming majority (173 of 207) refused to answer the question at all. Obviously, this was not a scientific survey, and the fact that its target group contained a disproportionate number of Americans perhaps biased it in some way. Furthermore, the detailed responses to my questions indicated that respondents understood the terms used in many different ways. But if 84% of AAAI Fellows don’t want to answer, we can infer that the questions are pretty far from those that normally interest them. Even the 34 who answered include very few optimists (if we lump categories 5 and 6 together), although about the same number (categories 1 and 2) thought the problem didn’t really need to be solved. Still, the outright pessimists (category 3) were definitely in the minority.
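The headline non-response figure quoted above is easy to recompute from the two counts the survey reports (207 living Fellows, 34 replies); a two-line check:

```python
# Recompute the survey's non-response figures from the counts given in the text.
fellows = 207        # living AAAI Fellows polled
respondents = 34     # replies received

non_response = fellows - respondents
print(non_response)                         # 173 Fellows did not answer
print(round(100 * non_response / fellows))  # 84 (percent), as quoted
```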

Research on Computational Models of Consciousness

In view of the shyness about consciousness shown by serious AI researchers, it is not surprising that detailed proposals about phenomenal consciousness from this group should be few and far between.

Moore/Turing Inevitability

One class of proposals can be dealt with fairly quickly. Hans Moravec, in a series of books (1988, 1999), and Raymond Kurzweil (1999) have more or less assumed that continuing progress in the development of faster, more capable computers will cause computers to equal and then surpass humans in intelligence and that computer consciousness will be an inevitable consequence. The only argument offered is that the computers will talk as though they are conscious; what more could we ask?

I believe a careful statement of the argument might go like this:

1. Computers are getting more and more powerful.

2. This growing power allows computers to do tasks that would have been considered infeasible just a few years ago. It is reasonable to suppose, therefore, that many things we think of as infeasible will eventually be done by computers.

3. Pick a set of abilities such that if a system had them we would deal with it as we would a person. The ability to carry on a conversation must be in the set, but we can imagine lots of other abilities as well: skill in chess, agility in motion, visual perspicacity, and so forth. If we had a talking robot that could play poker well, we would treat it the same way we treated any real human seated at the same table.

4. We would feel an overwhelming impulse to attribute consciousness to such a robot. If it acted sad when losing money or made whimpering sounds when it was damaged, we would respond as we would to a human who was sad or in pain.

5. This kind of overwhelming impulse is our only evidence that a creature is conscious. In particular, it is the only real way we can tell that people are conscious. Therefore, our evidence that the robot was conscious would be as good as one could have. Therefore the robot would be conscious, or be conscious for all intents and purposes.

I call this the “Moore/Turing inevitability” argument because it relies both on Moore’s Law (Moore, 1965), which predicts exponential progress in the power of computers, and on a prediction about how well future programs will do on the “Turing Test,” proposed by Alan Turing (1950) as a tool for rating the intelligence of a computer.3 Turing thought all questions about the actual intelligence (and presumably degree of consciousness) of a computer were too vague or mysterious to answer. He suggested a behaviorist alternative. Let the computer carry on a conversation over a teletype line (or via an instant-messaging system, we would say today). If a savvy human judge could not distinguish the computer’s conversational abilities from those of a real person at a rate better than chance, then we would have some measure of the computer’s intelligence. We could use this measure instead of insisting on measuring the computer’s real intelligence, or actual consciousness. This argument has a certain appeal. It certainly seems that if technology brings us robots that we cannot help treating as conscious, then in the argument about whether they really are conscious the burden of proof will shift, in the public mind, to the party-poopers who deny that they are. But so what? You can’t win an argument by imagining a world in which you’ve won it and declaring it inevitable.

The anti-computationalists can make several plausible objections to the behavioral-inevitability argument:

- Just because computers have made impressive strides does not mean that they will eventually be able to carry out any task we set them. In particular, progress in carrying on conversations has been dismal.4

- Even if a computer could carry on a conversation, that would not tell us anything about whether it really was conscious.

- Overwhelming impulses are not good indicators for whether something is true. The majority of people have an overwhelming impulse to believe that there is such a thing as luck, so that a lucky person has a greater chance of winning at roulette than an unlucky person. The whole gambling industry is based on exploiting the fact that this absurd theory is so widely believed.

I come back to the second of these objections in the section on Turing’s test. The others I am inclined to agree with.

Hofstadter, Minsky, and McCarthy

Douglas Hofstadter touches on the problem of consciousness in many of his writings, especially the material he contributed to Hofstadter and Dennett (1981). Most of what he writes seems to be intended to stimulate or tantalize one’s thinking about the problem. For example, in Hofstadter (1979) there is a chapter (reprinted in Hofstadter & Dennett, 1981) in which characters talk to an anthill. The anthill is able to carry on a conversation because the ants that compose it play roughly the role neurons play in a brain. Putting the discussion in the form of a vignette allows for playful digressions on various subjects. For example, the anthill offers the anteater (one of the discussants) some of its ants, which makes vivid the possibility that “neurons” could implement a negotiation that ends in their own demise.

It seems clear from reading this story that Hofstadter believes that the anthill is conscious, and therefore one could use integrated circuits rather than ants to achieve the same end. But most of the details are left out. In this as in other works, it’s as if he wants to invent a new, playful style of argumentation, in which concepts are broken up and tossed together into so many configurations that the original questions one might have asked get shunted aside. If you’re already convinced by the computational story, then this conceptual play is delightful. If you’re a skeptic, I expect it can get a bit irritating.

I put Marvin Minsky in this category as well, which perhaps should be called “Those who don’t take consciousness very seriously as a problem.” He wrote a paper in 1968 (Minsky, 1968b) that introduced the concept of self-model, which I argue is central to the computational theory of consciousness.

To an observer B, an object A∗ is a model of an object A to the extent that B can use A∗ to answer questions that interest him about A. . . . If A is the world, questions for A are experiments. A∗ is a good model of A, in B’s view, to the extent that A∗’s answers agree with those of A, on the whole, with respect to the questions important to B. When a man M answers questions about the world, then (taking on ourselves the role of B) we attribute this ability to some internal mechanism W∗ inside M.

This part is presumably uncontroversial. But what is interesting is that W∗, however it appears, will include a model of M himself, M∗. In principle, M∗ will contain a model of W∗, which we can call W∗∗. M can use W∗∗ to answer questions about the way he (M) models the world. One would think that M∗∗ (the model of M∗ in W∗∗) would be used to answer questions about the way M models himself, but Minsky has a somewhat different take: M∗∗ is used to answer general questions about himself. Ordinary questions about himself (e.g., how tall he is) are answered by M∗, but very broad questions about his nature (e.g., what kind of a thing he is, etc.) are answered, if at all, by descriptive statements made by M∗∗ about M∗.

Now, the key point is that the accuracy of M∗ and M∗∗ need not be perfect.

A man’s model of the world has a distinctly bipartite structure: One part is concerned with matters of mechanical, geometrical, physical character, while the other is associated with things like goals, meanings, social matters, and the like. This division of W∗ carries through the representations of many things in W∗, especially to M itself. Hence, a man’s model of himself is bipartite, one part concerning his body as a physical object and the other accounting for his social and psychological experience.

This is why dualism is so compelling. In particular, Minsky accounts for free will by supposing that it develops from a “strong primitive defense mechanism” to resist or deny compulsion.

If one asks how one’s mind works, he notices areas where it is (perhaps incorrectly) understood, that is, where one recognizes rules. One sees other areas where he lacks rules. One could fill this in by postulating chance or random activity. But this too, by another route, exposes the self to the . . . indignity of remote control. We resolve this unpleasant form of M∗∗ by postulating a third part, embodying a will or spirit or conscious agent. But there is no structure in this part; one can say nothing meaningful about it, because whenever a regularity is observed, its representation is transferred to the deterministic rule region. The will model is thus not formed from a legitimate need for a place to store definite information about one’s self; it has the singular character of being forced into the model, willy-nilly, by formal but essentially content-free ideas of what the model must contain.

One can quibble with the details, but the conceptual framework offers a whole new way of thinking about consciousness by showing that introspection is mediated by models. There is no way for us to penetrate through them or shake them off, so we must simply live with any “distortion” they introduce. I put “distortion” in quotes because it is too strong a word. The concepts we use to describe our mental lives were developed over centuries by people who all shared the same kind of mental model. The distortions are built in. For instance, there is no independent notion of “free will” beyond what we observe by means of our self-models. We cannot even say that free will is a dispensable illusion, because we have no way of getting rid of it and living to tell the tale.


Minsky’s insight is that to answer many questions about consciousness we should focus more on the models we use to answer the questions than on the questions themselves.
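Minsky’s regress of models (M holds W∗, which contains M∗, which contains W∗∗, whose self-model is M∗∗) can be sketched as a simple data structure. This is purely illustrative: the class, the routing-by-walking-the-regress scheme, and the sample questions and answers are my own, not Minsky’s; the point is only that ordinary questions about oneself bottom out in M∗, while very broad questions bottom out, if anywhere, in M∗∗.

```python
# Illustrative sketch of Minsky's (1968b) nesting of models. The facts and
# routing scheme are invented for the example; only the M/W*/M*/W**/M**
# structure comes from the text.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Model:
    name: str
    facts: Dict[str, str] = field(default_factory=dict)
    submodel: Optional["Model"] = None  # the next model in the regress

# Build the regress from the inside out: M** inside W** inside M* inside W*.
m_ss = Model("M**", {"what kind of thing am I": "an agent with free will"})
w_ss = Model("W**", {"how do I model the world": "with rules and images"},
             submodel=m_ss)
m_s  = Model("M*",  {"how tall am I": "180 cm"}, submodel=w_ss)
w_s  = Model("W*",  {"what color is the sky": "blue"}, submodel=m_s)

def ask(model: Optional[Model], question: str) -> Optional[str]:
    """Walk down the regress until some model can answer the question."""
    while model is not None:
        if question in model.facts:
            return model.facts[question]
        model = model.submodel
    return None

print(ask(w_s, "how tall am I"))            # ordinary self-question: M* answers
print(ask(w_s, "what kind of thing am I"))  # broad self-question: M** answers
```

Note how the broad question passes unanswered through W∗ and M∗ before reaching M∗∗, mirroring Minsky’s claim that the "will or spirit" part of the self-model is structure-poor and answers only the most general questions.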

Unfortunately, in that short paper, and in his later book The Society of Mind (Minsky, 1986), Minsky throws off many interesting ideas, but refuses to go into the depth that many of them deserve. He has a lot to say about consciousness in passing, such as how Freudian phenomena might arise out of the “society” of subpersonal modules that he takes the human mind to be. But there is no solid proposal to argue for or against.

John McCarthy has written a lot on what he usually calls “self-awareness” (McCarthy, 1995b). However, his papers are mostly focused on robots’ problem-solving capacities and how they would be enhanced by the ability to introspect. An important example is the ability of a robot to infer that it doesn’t know something (such as whether the Pope is currently sitting or lying down). This may be self-awareness, but the word “awareness” here is used in a sense that is quite separate from the notion of phenomenal consciousness that is our concern here.

McCarthy (1995a) specifically addresses the issue of “zombies,” philosophers’ term for hypothetical beings who behave exactly as we do but do not experience anything. This paper is a reply to an article by Todd Moody (1994) on zombies. He lists some introspective capacities it would be good to give to a robot (“ . . . Observing its goal structure and forming sentences about it. . . . Observing how it arrived at its current beliefs. . . . ”). Then he concludes abruptly:

Moody isn’t consistent in his description of zombies. On page 1 they behave like humans. On page 3 they express puzzlement about human consciousness. Wouldn’t a real Moody zombie behave as though it understood as much about consciousness as Moody does?

I tend to agree with McCarthy that the idea of a zombie is worthless, in spite of its initial plausibility. Quoting Moody:

Given any functional [=, more or less, computational] description of cognition, as detailed and complete as one can imagine, it will still make sense to suppose that there could be insentient beings that exemplify that description. That is, it is possible that there could be a behaviourally indiscernible but insentient simulacrum of a human cognizer: a zombie.

The plausibility of this picture is that it does indeed seem that an intricate diagram of the hardware and software of a robot would leave consciousness out, just as with the computer-controlled heating system described in the introduction to this chapter. One could print the system description on rose-colored paper to indicate that the system was conscious, but the color of the paper would play no role in what it actually did. The problem is that in imagining a zombie one tends at first to forget that the zombie would say exactly the same things non-zombies say about their experiences. It would be very hard to convince a zombie that it lacked experience, which means, as far as I can see, that we might be zombies, at which point the whole idea collapses.

Almost everyone who thinks the idea is coherent sooner or later slips up the way Moody does: They let the zombie figure out that it is a zombie by noticing that it has no experience. By hypothesis, this is something zombies cannot do. Moody’s paper is remarkable only in how obvious the slip-up in it is.

Consider, for example, the phenomenon of dreaming. Could there be a cognate concept in zombie-English? How might we explain dreaming to them? We could say that dreams are things that we experience while asleep, but the zombies would not be able to make sense[z] of this.5

Of course, zombies would talk about their dreams (or dreams[z]?) exactly as we do; consult the intricate system diagram to verify this.

McCarthy’s three-sentence reply is just about what Moody’s paper deserves. But meanwhile philosophers such as Chalmers (1996) have written weighty tomes based on the assumption that zombies make sense. McCarthy is not interested in refuting them.


Similarly, McCarthy (1990b) discusses when it is legitimate to ascribe mental properties to robots. In some ways his treatment is more formal than that of Dennett, which I discuss below. But he never builds on this theory to ask the key question: Is there more to your having a mental state than having that state ascribed to you?

Daniel Dennett

Daniel Dennett is not a researcher in artificial intelligence, but a philosopher of mind and essayist in cognitive science. Nonetheless, he is sympathetic to the AI project and bases his philosophy on computational premises to a great degree. The models of mind that he has proposed can be considered to be sketches of a computational model and therefore constitute one of the most ambitious and detailed proposals for how AI might account for consciousness.

Dennett’s (1969) Ph.D. dissertation proposed a model for a conscious system. It contains the sort of block diagram that has since become a standard feature of the theories of such psychologists as Bernard Baars (1988, 1997), although the central working arena is designed to account for introspection more than for problem-solving ability.

In later work, Dennett has not built upon this model, but, in a sense, has been rebuilding it from the ground up. The result has been a long series of papers and books, rich with insights about consciousness, free will, and intentionality. Their very richness makes it hard to extract a brisk theoretical statement, but that is my aim.

Dennett has one overriding methodological principle, to be distrustful of introspection. This position immediately puts him at odds with such philosophers as Nagel, Searle, and McGinn, for whom the first-person point of view is the alpha and omega of consciousness. On his side Dennett has many anecdotes and experimental data that show how wildly inaccurate introspection can be, but his view does leave him open to the charge that he is ruling out all the competitors to his theory from the start. From a computationalist’s vantage point, this is all to the good. It’s clear that any computationalist theory must eventually explain the mechanism of the first-person view in terms of third-person components. The third person is that which you and I discuss and therefore must be observable by you and me, and by other interested parties, in the same way. In other words, the term “third-person data” is just another way of saying “scientific data.” If there is to be a scientific explanation of the first person, it will surely seem more like an “explaining away” than a true explanation. An account of how yonder piece of meat or machinery is conscious will almost certainly invoke the idea of the machinery playing a trick on itself, the result of which is for it to have a strong belief that it has a special first-person viewpoint.

One of Dennett's special skills is using vivid images to buttress his case. He invented the phrase "Cartesian Theater" to describe the hypothetical place in the brain where the self becomes aware of things. He observes that belief in the Cartesian Theater is deep-seated and keeps popping up in philosophical and psychological writings, as well as in common-sense musings. We all know that there is a lot going on in the brain that is preconscious or subconscious. What happens when a train of events becomes conscious? According to the view Dennett is ridiculing, to bring it to consciousness is to show it on the screen in the Cartesian Theater. When presented this way, the idea does seem silly, if for no other reason than that there is no plausible homunculus to put in the audience. What's interesting is how hard it is to shake this image. Just about all theorists of phenomenal consciousness at some point distinguish between "ordinary" and "conscious" events by making the latter be accessible to . . . what, exactly? The system as a whole? Its self-monitoring modules? One must tread very carefully to keep from describing the agent with special access as the good old transcendental self, sitting alone in the Cartesian Theater.

P1: JzZ0521857430c06 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 22, 2007 11:22

artificial intelligence and consciousness 125

To demolish the Cartesian Theater, Dennett uses the tool of discovering or inventing situations in which belief in it leads to absurd conclusions. Many of these situations are experiments set up by psychology researchers. Most famous are the experiments by Libet (1985), whose object was to determine exactly when a decision to make a motion was made. What emerged from the experiments was that at the point where subjects think they have made the decision, the neural activity preparatory to the motion has already been in progress for hundreds of milliseconds. Trying to make sense of these results using the homuncular models leads to absurdities. (Perhaps the choice causes effects in the person's past?) But it is easy to explain them if you make a more inclusive picture of what's going on in a subject's brain. Libet and others tended to assume that giving a subject a button to push when the decision had been made provided a direct route to . . . that pause again . . . the subject's self, perhaps? Or perhaps the guy in the theater? Dennett points out that the neural apparatus required to push the button is part of the overall brain system. Up to a certain resolution, it makes sense to ask someone, "When did you decide to do X?" But it makes no sense to try to tease off a subsystem of the brain and ask it the same question, primarily because there is no subsystem that embodies the "will" of the whole system.

Having demolished most of the traditional model of consciousness, Dennett's next goal is to construct a new one, and here he becomes more controversial, and in places more obscure. A key component is human language. It is difficult to think about human consciousness without pondering the ability of a normal human adult to say what they are thinking. There are two possible views about why it should be the case that we can introspect so easily. One is that we evolved from animals that can introspect, so naturally when language evolved one of the topics it was used on was the contents of our introspections. The other is that language plays a more central role than that; without language, we would not be conscious at all, at least full-bloodedly. Dennett's view is the second. He has little to say about animal consciousness, and what he does say is disparaging.

Language, for Dennett, is very important, but not because it is spoken by the homunculus in the Cartesian Theater. If you leave it out, who is speaking? Dennett's answer is certainly bold: In a sense, the language speaks itself. We take it for granted that speaking feels like it emanates from our "transcendental self" or, less politely, from the one-person audience in the Theater. Whether or not that view is correct now, it almost certainly was not correct when language began. In its original form, language was an information-transmission device used by apes whose consciousness, if similar to ours in any real respect, would be about the same as a chimpanzee's today. Messages expressed linguistically would be heard by one person and, for one reason or another, be passed to others. The messages' chance of being passed would depend, very roughly, on how useful their recipients found them.

The same mechanism has been in operation ever since. Ideas (or simple patterns unworthy of the name "idea" – advertising jingles, for instance) tend to proliferate in proportion to how much they help those who adopt them or in proportion to how well they tend to stifle competing ideas – not unlike what genes do. Dennett adopts Dawkins' (1976) term meme to denote a linguistic pattern conceived of in this way. One key meme is the idea of talking to oneself; when it first popped up, it meant literally talking out loud and listening to what was said. Although nowadays we tend to view talking to oneself as a possible symptom of insanity, we have forgotten that it gives our brain a whole new channel for its parts to communicate with each other. If an idea – a pattern of activity in the brain – can reach the linguistic apparatus, it gets translated into a new form, and, as it is heard, gets translated back into a somewhat different pattern than the one that started the chain of events. Creatures that start to behave this way start to think of themselves in a new light, as someone to talk to or listen to. Self-modeling, according to Dennett (and Jaynes, 1976), starts as modeling this person to whom we are talking. There is nothing special about this kind of model; it is as crude as most of the models we make. But memes for self-modeling have been some of the most successful in the history (and prehistory) of humankind. To a great degree, they make us what we are by giving us a model of who we are that we then live up to. Every child must recapitulate the story Dennett tells, as he or she absorbs from parents and peers all the ways to think of oneself, as a being with free will, sensations, and a still small voice inside.

The theory has one striking feature: It assumes that consciousness is based on language and not vice versa. For that matter, it tends to assume that for consciousness to come to be, there must be in place a substantial infrastructure of perceptual, motor, and intellectual skills. There may be some linguistic abilities that depend on consciousness, but the basic ability must exist before and independent of consciousness.

This conclusion may be fairly easy to accept for the more syntactic aspects of language, but it is contrary to the intuitions of many when it comes to semantics. Knowing what a sentence means requires knowing how the sentence relates to the world. If I am told, "There is a lion on the other side of that bush," I have to understand that "that bush" refers to a particular object in view, I have to know how phrases like "other side of" work, and I have to understand what "a lion" means so that I have a grasp of just what I'm expecting to confront. Furthermore, it's hard to see how I could know what these words and phrases meant without knowing that I know what they mean.

Meditating in this way on how meaning works, the late 19th-century philosopher Franz Brentano developed the notion of intentionality, the power that mental representations seem to have of pointing to – "being about" – things outside of, and arbitrarily far from, the mind or brain containing those representations. The ability of someone to warn me about that lion depends on that person's sure-footed ability to reason about that animal over there, as well as on our shared knowledge about the species Panthera leo. Brentano and many philosophers since have argued that intentionality is at bottom a property only of mental representations. There seem to be many kinds of "aboutness" in the world; for instance, there are books about lions, but items such as books can be about a topic only if they are created by humans using language and writing systems to capture thoughts about that topic. Books are said to have derived intentionality, whereas people have original or intrinsic intentionality.

Computers seem to be textbook cases of physical items whose intentionality, if any, is derived. If one sees a curve plotted on a computer's screen, the surest way to find out what it's about is to ask the person who used some program to create it. In fact, that's the only way. Digital computers are syntactic engines par excellence. Even if there is an interpretation to be placed on every step of a computation, this interpretation plays no role in what the computer does. Each step is produced purely by operations dependent on the formal structure of its inputs and prior state at that step. If you use TurboTax to compute your income taxes, then the numbers being manipulated represent real-world quantities, and the number you get at the end represents what you actually do owe to the tax authorities. Nonetheless, TurboTax is just applying formulas to the numbers. It "has no idea" what they mean.
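The "syntactic engine" point can be put in a few lines of code. This is a made-up toy, not an account of any real tax program: the same formal operation supports two incompatible interpretations, and neither interpretation plays any role in producing the output.

```python
# A purely syntactic step: the function manipulates numbers by formula,
# with no access to what the numbers are "about".
def apply_schedule(amount):
    """Multiply by 3 and subtract 100 -- a formal rule, nothing more."""
    return amount * 3 - 100

result = apply_schedule(200)

# Two incompatible interpretations of the very same computation.
# Neither figures anywhere in how `result` was produced.
as_tax = f"tax owed on an income of 200: {result}"
as_game = f"points scored after a 200-coin bonus: {result}"

print(as_tax)
print(as_game)
```

The computation neither knows nor cares which reading is the intended one; that is exactly the sense in which it "has no idea" what the numbers mean.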

This intuition is what Dennett wants to defeat, as should every other researcher who expects a theory of consciousness based on AI. There is really no alternative. If you believe that people are capable of original intentionality and computers are not, then you must believe that something will be missing from any computer program that tries to simulate humans. That means that human consciousness is fundamentally different from machine consciousness, which means that a theory of consciousness based on AI is radically incomplete.

Dennett's approach to the required demolition job on intrinsic intentionality is to focus on the prelinguistic, non-introspective case. In a way, this is changing the subject fairly radically. In the introspective set-up, we are talking about elements or aspects of the mind that we are routinely acquainted with, such as words and images. In the non-introspective case, it's not clear that those elements or aspects are present at all. What's left to talk about if we're not talking about words, "images," or "thoughts"? We will have to shift to talking about neurons, chips, firing rates, bits, pointers, and other "subpersonal" entities and events. It's not clear at all whether these things are even capable of exhibiting intentionality. Nonetheless, showing that they are is a key tactic in Dennett's attack on the problem of consciousness (see especially Appendix A of Dennett, 1991b). If we can define what it is for subpersonal entities to be intentional, we can then build on that notion and recover the phenomenal entities we (thought we) started with. "Original" intentionality will turn out to be a secondary consequence of what I call impersonal intentionality.

Dennett's approach to the problem is to call attention to what he calls the intentional stance, a way of looking at systems in which we impute beliefs and goals to them simply because there's no better way to explain what they're doing. For example, if you're observing a good computer chess program in action, and its opponent has left himself vulnerable to an obvious attack, then one feels confident that the program will embark on that attack. This confidence is not based on any detailed knowledge of the program's actual code. Even someone who knows the program well won't bother trying to do a tedious simulation to make a prediction that the attack will occur, but will base their prediction on the fact that the program almost never misses an opportunity of that kind. If you refuse to treat the program as though it had goals, you will be able to say very little about how it works. The intentional stance applies to the innards of the program as well. If a data structure is used by the program to make decisions about some situation or object S, and the decisions it makes are well explained by assuming that one state of the data structure means that P is true of S, and that another means P′, then those states do mean P and P′.

It is perhaps unfortunate that Dennett has chosen to express his theory this way, because it is easy to take him as saying that all intentionality is observer-relative. This would be almost as bad as maintaining a distinction between original and derived intentionality, because it would make it hard to see how the process of intentionality attribution could ever get started. Presumably my intuition that I am an intentional system is indubitable, but on what could it be based? It seems absurd to think that this opinion is based on what others tell me, but it seems equally absurd that I could be my own observer. Presumably to be an observer you have to be an intentional system (at least, if your observations are to be about anything). Can I bootstrap my way into intentionality somehow? If so, how do I tell the successful bootstrappers from the unsuccessful ones? A computer program with an infinite loop, endlessly printing, "I am an intentional system because I predict, by taking the intentional stance, that I will continue to print this sentence out," would not actually be claiming anything, let alone something true.

Of course, Dennett does not mean for intentionality to be observer-relative, even though many readers think he does. (To take an example at random from the Internet, the online Philosopher's Magazine, in its "Philosopher of the Month" column in April 2003 (Douglas & Saunders, 2003), writes, "Dennett suggests that intentionality is not so much an intrinsic feature of agents, rather, it is more a way of looking at agents.") Dennett has defended himself from this misinterpretation more than once (Dennett, 1991a). I come back to this issue in my attempt at a synthesis in the section, "A Synthetic Summary."

Perlis and Sloman

The researchers in this section, although they work on hard-headed problems in artificial intelligence, do take philosophical problems seriously and have contributed substantial ideas to the development of the computational model of consciousness.

Donald Perlis's papers build a case that consciousness is ultimately based on self-consciousness, but I believe he is using the term "self-consciousness" in a misleading and unnecessary way. Let's start with his paper (Perlis, 1994), which I think lays out a very important idea. He asks, Why do we need a dichotomy between appearance and reality? The answer is, Because they could disagree (i.e., because I could be wrong about what I think I perceive). For an organism to be able to reason explicitly about this difference, it must be able to represent both X (an object in the world) and quote-X, the representation of X in the organism itself. The latter is the "symbol," the former the "symboled." To my mind the most important consequence of this observation is that it must be possible for an information-processing system to get two kinds of information out of its X-recognizer: signals meaning "there's an X" and signals meaning "there's a signal meaning 'there's an X.'"
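The requirement can be sketched in a few lines. This is hypothetical code with invented names, not Perlis's formalism: a recognizer that emits both the object-level signal and a signal about that signal, so that the two can later be compared.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    """An object-level report: 'there's an X'."""
    kind: str              # what was recognized, e.g. "lion"

@dataclass(frozen=True)
class MetaSignal:
    """A report about a report: 'there's a signal meaning there's an X'."""
    about: Signal

def recognize(percept):
    """Emit the two kinds of information the text says a system needs:
    the world-directed signal (the symboled side) and the signal about
    that signal (the symbol side)."""
    s = Signal(kind=percept)
    return s, MetaSignal(about=s)

signal, meta = recognize("lion")
print(signal)   # Signal(kind='lion')
print(meta)     # MetaSignal(about=Signal(kind='lion'))
```

With both kinds of signal available, the system has the raw material to represent a disagreement between appearance (what `meta` reports a signal as meaning) and reality (whatever is actually out there).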

Perlis takes a somewhat different tack. He believes there can be no notion of appearance without the notion of appearance to someone. So the self-model cannot get started without some prior notion of self to model.

When we are conscious of X, we are also conscious of X in relation to ourselves: It is here, or there, or seen from a certain angle, or thought about this way and then that. Indeed, without a self model, it is not clear to me intuitively what it means to see or feel something: it seems to me that a point of view is needed, a place from which the scene is viewed or felt, defining the place occupied by the viewer. Without something along these lines, I think that a "neuronal box" would indeed "confuse" symbol and symboled: to it there is no external reality, and it has no way to "think" (consider alternatives) at all. Thus I disagree [with Crick] that self-consciousness is a special case of consciousness: I suspect that it is the most basic form of all.

Perlis continues to elaborate this idea in later publications. For example, "Consciousness is the function or process that allows a system to distinguish itself from the rest of the world. . . . To feel pain or have a vivid experience requires a self" (Perlis, 1997) (italics in original). I have trouble following his arguments, which often depend on thought experiments, such as imagining cases where one is conscious but not of anything, or of as little as possible. The problem is that introspective thought experiments are just not a very accurate tool. One may perhaps conclude that Perlis, although housed in a Computer Science department, is not a thoroughgoing computationalist at all. As he says, "I conjecture that we may find in the brain special amazing structures that facilitate true self-referential processes, and constitute a primitive, bare or ur-awareness, an 'I.' I will call this the amazing-structures-and-processes paradigm" (Perlis, 1997) (italics in original). It is not clear how amazing the "amazing" structures will be, but perhaps they will not be computational.

Aaron Sloman has written prolifically about philosophy and computation, although his interests range far beyond our topic here. In fact, although he has been interested in conscious control, both philosophically and as a strategy for organizing complex software, he has tended to shy away from the topic of phenomenal consciousness. His book The Computer Revolution in Philosophy (Sloman, 1978) has almost nothing to say about the subject, and in many other writings the main point he has to make is that the concept of consciousness covers a lot of different processes, which should be sorted out before hard questions can be answered. However, in a few of his papers he has confronted the issue of qualia, notably Sloman and Chrisley (2003). I think the following is exactly right:

Now suppose that an agent A . . . uses a self-organising process to develop concepts for categorising its own internal virtual machine states as sensed by internal monitors. . . . If such a concept C is applied by A to one of its internal states, then the only way C can have meaning for A is in relation to the set of concepts of which it is a member, which in turn derives only from the history of the self-organising process in A. These concepts have what (Campbell, 1994) refers to as 'causal indexicality'. This can be contrasted with what happens when A interacts with other agents in such a way as to develop a common language for referring to features of external objects. Thus A could use 'red' either as expressing a private, causally indexical, concept referring to features of A's own virtual-machine states, or as expressing a shared concept referring to a visible property of the surfaces of objects. This means that if two agents A and B have each developed concepts in this way, then if A uses its causally indexical concept Ca to think the thought 'I am having experience Ca', and B uses its causally indexical concept Cb to think the thought 'I am having experience Cb', the two thoughts are intrinsically private and incommunicable, even if A and B actually have exactly the same architecture and have had identical histories leading to the formation of structurally identical sets of concepts. A can wonder: 'Does B have an experience described by a concept related to B as my concept Ca is related to me?' But A cannot wonder 'Does B have experiences of type Ca', for it makes no sense for the concept Ca to be applied outside the context for which it was developed, namely one in which A's internal sensors classify internal states. They cannot classify states of B.

This idea suggests that the point I casually assumed at the beginning of this chapter, that two people might wonder if they experienced the same thing when they ate tacos, is actually incoherent. Our feeling that the meaning is clear is due to the twist our self-models give to introspections of the kind Sloman and Chrisley are talking about. The internal representation of the quale of redness is purely local to A's brain, but the self-model says quite the opposite – that objects with the color are recognizable by A because they have that quale. The quale is made into an objective entity that might attach itself to other experiences, such as my encounters with blue things or B's experiences of red things.
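Sloman and Chrisley's point can be caricatured in a deliberately crude, invented sketch: each agent's concepts are indices into its own self-organizing history, so even when two agents with identical architectures and histories end up with structurally identical concepts, the identity of the bare labels carries no shared content.

```python
class Agent:
    """Develops concepts by categorising its own internal states.
    A 'concept' here is just an index into this agent's private history
    of clusters; it has no application outside that history."""
    def __init__(self, name):
        self.name = name
        self._clusters = []    # centroids over this agent's internal states

    def categorize(self, internal_state):
        """Return a causally indexical concept: an index meaningful only
        relative to this agent's own self-organising history."""
        for i, c in enumerate(self._clusters):
            if abs(c - internal_state) < 0.5:
                return i
        self._clusters.append(internal_state)
        return len(self._clusters) - 1

a, b = Agent("A"), Agent("B")
# Identical architectures, identical histories...
ca = a.categorize(3.0)
cb = b.categorize(3.0)
# ...and the bare labels happen to coincide. But ca is defined only in
# relation to A's clusters; asking whether B "has an experience of type
# ca" applies A's index outside the only context in which it means
# anything.
print(ca == cb)
```

The equality of `ca` and `cb` as integers is an artifact of the encoding, not a fact about shared experience, which is the incoherence the text diagnoses in the taco question.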

Brian Cantwell Smith

The last body of research to be examined in this survey is that of Brian Cantwell Smith. It is hard to dispute that he is a computationalist, but he is also an antireductionist, which places him in a unique category. Although it is clear in reading his work that he considers consciousness to be a crucial topic, he has been working up to it very carefully. His early work (Smith, 1984) was on "reflection" in programming languages; that is, how and why a program written in a language could have access to information about its own subroutines and data structures. One might conjecture that reflection might play a key role in a system's maintaining a self-model and thereby being conscious. But since that early work Smith has moved steadily away from straightforward computational topics and toward foundational philosophical ones. Each of his papers seems to take tinier steps from first principles than the ones that have gone before, so as to presuppose as little as humanly possible. Nonetheless, they often express remarkable insight. His paper (Smith, 2002) on the "Foundations of Computing" is a gem. (I also recommend Sloman (2002) from the same collection [Scheutz, 2002].)

One thing both Smith and Sloman argue is that Turing machines are misleading as ideal vehicles for computationalism, which is a point often missed by philosophers. For example, Wilkes (1990) says that " . . . computers (as distinct from robots) produce at best only linguistic and exclusively 'cognitive' – programmable – 'behaviour': the emphasis is on internal psychological processes, the cognitive 'inner' rather than on action, emotion, motivation, and sensory experience." Perhaps I've misunderstood him, but it's very hard to see how this can be true, given that all interesting robots are controlled by digital computers. Furthermore, when computers and software are studied isolated from their physical environments, it's often for purely tactical reasons (from budget or personnel limitations, or to avoid endangering bystanders). If we go all the way back to Winograd's (1972) SHRDLU system, we find a simulated robot playing the role of conversationalist, not because Winograd thought real robots were irrelevant, but precisely because he was thinking of a long-term project in which an actual robot would be used. As Smith (2002) says,

In one way or another, no matter what construal [of formality] they pledge allegiance to, just about everyone thinks that computers are formal. . . . But since the outset, I have not believed that this is necessarily so. . . . Rather, what computers are . . . is neither more nor less than the full-fledged social construction and development of intentional artifacts. (Emphasis in original.)

The point he is trying to make (and it can be hard to find a succinct quote in Smith's papers) is that computers are always connected to the world, whether they are robots or not, and therefore the meaning their symbols possess is more determined by those connections than by what a formal theory might say they mean. One might want to rule that the transducers that connect them to the world are non-computational (cf. Harnad, 1990), but there is no principled way to draw a boundary between the two parts, because ultimately a computer is physical parts banging against other physical parts. As Sloman puts it,

. . . The view of computers as somehow essentially a form of Turing machine . . . is simply mistaken. . . . [The] mathematical notion of computation . . . is not the primary motivation for the construction or use of computers, nor is it particularly helpful in understanding how computers work or how to use them (Sloman, 2002).

The point Smith makes in the paper cited above is elaborated into an entire book, On the Origin of Objects (Smith, 1995). The problem the book addresses is the basic ontology of physical objects. The problem is urgent, according to Smith, because the basic concept of intentionality is that a symbol S stands for an object X, but we have no prior concept of what objects or symbols are. A geologist might see a glacier on a mountain, but is there some objective reason why the glacier is an object (and the group of stones suspended in it is not)? Smith believes that all object categories are to some extent carved out by subjects (i.e., by information-processing systems like us and maybe someday by robots as well).

The problem with this point of view is that it is hard to bootstrap oneself out of what Smith calls the Criterion of Ultimate Concreteness: "No naturalistically palatable theory of intentionality – of mind, computation, semantics, ontology, objectivity – can presume the identity or existence of any individual object whatsoever" (1995, p. 184). He tries valiantly to derive subjects and objects from prior . . . umm . . . "entities" called s-regions and o-regions, but it is hard to see how he succeeds. In spite of its length of 420 pages, the book claims to arrive at no more than a starting point for a complete rethinking of physics, metaphysics, and everything else.

Most people will have a hard time following Smith's inquiry, not least because few people agree on his opening premise, that everyday ontology is broken and needs to be fixed. I actually do agree with that, but I think the problem is much worse than Smith does. Unlike him, I am reductionist enough to believe that physics is the science of "all there is"; so how do objects emerge from a primordial superposition of wave functions? Fortunately, I think this is a problem for everyone and has nothing to do with the problem of intentionality.6 If computationalists are willing to grant that there's a glacier over there, anyone should be willing to consider the computational theory of how systems refer to glaciers.

A Synthetic Summary

In spite of the diffidence of most AI researchers on this topic, I believe that there is a dominant position on phenomenal consciousness among computationalists; it is dominant in the sense that among the small population of those who are willing to take a clear position, this is more or less the position they take. In this section I try to sketch that position, pointing out the similarities and differences from the positions sketched in the preceding section.

The idea in a nutshell is that phenomenal consciousness is the property a computational system X has if X models itself as experiencing things. To understand it, I need to explain the following three things:

1. what a computational system is;
2. how such a system can exhibit intentionality; and
3. that to be conscious is to model oneself as having experiences.

The Notion of Computational System

Before we computationalists can really get started, we run into the objection that the word "computer" doesn't denote the right kind of thing to play an explanatory role in a theory of any natural phenomenon. A computer, so the objection goes, is an object that people7 use to compute things. Without people to assign meanings to its inputs and outputs, a computer is just an overly complex electronic kaleidoscope, generating a lot of pseudo-random patterns. We may interpret the output of a computer as a prediction about tomorrow's weather, but there's no other sense in which the computer is predicting anything. A chess computer outputs a syntactically legal expression that we can take to be its next move, but the computer doesn't actually intend to make that move. It doesn't intend anything. It doesn't care whether the move is actually made. Even if it's displaying the move on a screen or using a robot arm to pick up a piece and move it, these outputs are just meaningless pixel values or drive-motor torques until people supply the meaning.

In my opinion, the apparent difficulty of supplying an objective definition of syntax and especially semantics is the most serious objection to the computational theory of psychology, and in particular to a computational explanation of phenomenal consciousness. To overcome it, we need to come up with a theory of computation (and eventually semantics) that is observer-independent.

There are two prongs to this attack, one syntactic and the other semantic. The syntactic prong is the claim that even the symbols we attribute to computers are observer-relative. We point to a register in the computer's memory and claim that it contains a number. The critic then says that the mapping of states that causes this state to encode "55,000" is entirely arbitrary; there are an infinite number of ways of interpreting the state of the register, none of which is the "real" one in any sense. Therefore, all we can talk about is the intended one. A notorious example given by John Searle (1992) exemplifies this kind of attack; he claims that the wall of his office could be considered to be a computer under the right encoding of its states.

The semantic prong is the observation, discussed in the sections on Daniel Dennett and on Brian Cantwell Smith, that even after we’ve agreed that the register state encodes “55,000,” there is no objective sense in which this figure stands for “Jeanne D’Eau’s 2003 income in euros.” If Jeanne D’Eau is using the EuroTax software package to compute her income tax, then such semantic statements are nothing but a convention adopted by her and the people who wrote EuroTax. In other words, the only intentionality exhibited by the program is derived intentionality.

To avoid these objections, we have to be careful about how we state our claims. I have space for only a cursory overview here; see McDermott (2001) for a more detailed treatment. First, the idea of computer is prior to the idea of symbol. A basic computer is any physical system whose subsequent states are predictable given its prior states. By “state” I mean “partial state,” so that the system can be in more than one state at a time. An encoding is a mapping from partial physical states to some syntactic domain (e.g., numerals). To view a system as a computer, we need two encodings, one for inputs and one for outputs. It computes f(x) with respect to a pair 〈I, O〉 of encodings if and only if putting it into the partial state encoding x under I causes it to go into a partial state encoding f(x) under O.
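The definition can be made concrete with a toy example. The following Python sketch is my own illustration (the “physical system,” its dynamics, and both encodings are invented): a trivial deterministic system is viewed as a basic computer under a pair of encodings 〈I, O〉, and swapping the output encoding shows that what it computes is encoding-relative without being observer-arbitrary.

```python
# A toy "physical system": its subsequent (partial) states are
# predictable from its prior states.  States are pairs of voltage levels.
def physics(state):
    a, b = state
    return (a + b, b)   # deterministic evolution

# Input encoding I: map the numeral x onto a partial state.
def encode_input(x):
    return (x, 1)

# Output encoding O: read a numeral off a partial state.
def decode_output(state):
    return state[0]

# Under the pair <I, O>, the system computes f(x) = x + 1: putting it
# into the partial state encoding x under I causes it to go into a
# partial state encoding f(x) under O.
def run_as_computer(x):
    return decode_output(physics(encode_input(x)))

assert run_as_computer(41) == 42

# Encoding-relativity: under a different output encoding, the very same
# physical process computes a different function.  Like velocity, the
# answer is frame-relative, but not thereby observer-arbitrary.
def decode_output_scaled(state):
    return state[0] * 10

assert decode_output_scaled(physics(encode_input(4))) == 50
```

The point of the sketch is that once the encodings are fixed, which function the system computes is a matter of physical fact, not of anyone's say-so.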

A memory element under an encoding E is a physical system that, when placed into a state s such that E(s) = x, tends to remain in the set of states {s: E(s) = x} for a while.

A computer is then a group of basic computers and memory elements viewed under a consistent encoding scheme, meaning merely that if changes of component 1’s state cause component 2’s state to change, then the encoding of 1’s outputs is the same as the encoding of 2’s inputs. Symbol sites then appear as alternative possible stable regions of state space, and symbol tokens as chains of symbol sites such that the occupier of a site is caused by the presence of the occupier of its predecessor site. Space does not allow me to discuss all the details here, but the point is clear: The notions of computer and symbol are not observer-relative. Of course, they are encoding-relative, but then velocity is “reference-frame-relative.” The encoding is purely syntactic, or even presyntactic, because we have said nothing about what syntax an encoded value has, if any. We could go on to say more about syntax, but one has the feeling that the whole problem is a practical joke played by philosophers on naive AI researchers. (“Let’s see how much time we can get them to waste defining ‘computer’ for us, until they catch on.”) I direct you to McDermott (2001) for more of my theory of syntax. The important issue is semantics, to which we now turn.

One last remark: The definitions above are not intended to distinguish digital from analog computers, or serial from parallel ones. They are broad enough to include anything anyone might ever construe as a computational system. In particular, they allow neural nets (Rumelhart et al., 1986), both natural and artificial, to count as computers. Many observers of AI (Churchland, 1986, 1988; Wilkes, 1990) believe that there is an unbridgeable chasm between some classical, digital, traditional AI and a revolutionary, analog, connectionist alternative. The former is the realm of von Neumann machines, the latter the realm of artificial neural networks – “massively parallel” networks of simple processors (meant to mimic neurons), which can be trained to learn different categories of sensory data (Rumelhart et al., 1986). The “chasm” between the two is less observable in practice than you might infer from the literature. AI researchers are omnivorous consumers of algorithmic techniques and think of neural nets as one of them – entirely properly, in my opinion. I return to this subject in the section, “Symbol Grounding.”

Intentionality of Computational Systems

I have described Dennett’s idea of the “intentional stance,” in which an observer explains a system’s behavior by invoking such intentional categories as beliefs and goals. Dennett is completely correct that there is such a stance. The problem is that we sometimes adopt it inappropriately. People used to think thunderstorms were out to get them, and a sign on my wife’s printer says, “Warning! This machine is subject to breakdown during periods of critical need.” What could it possibly mean to say that a machine demonstrates real intentionality when it is so easy to indulge in a mistaken or merely metaphorical “intentional stance”?

Let’s consider an example. Suppose someone has a cat that shows up in the kitchen at the time it is usually fed, meowing and behaving in other ways that tend to attract the attention of the people who usually feed it. Contrast that with the case of a robot that, whenever its battery is low, moves along a black trail painted on the floor that leads to the place where it gets recharged, and, when it is over a large black cross that has been painted at the end of the trail, emits a series of beeps that tend to attract the attention of the people who usually recharge it. Some people might refuse to attribute intentionality to either the cat or the robot and treat as purely metaphorical such comments as, “It’s trying to get to the kitchen [or recharging area],” or “It wants someone to feed [or recharge] it.” They might take this position, or argue that it’s tenable, on the grounds that we have no reason to suppose that either the cat or the robot has mental states, and hence nothing with the kind of “intrinsic aboutness” that people exhibit. High catologists8 are sure cats do have mental states, but the skeptic will view this as just another example of someone falling into the metaphorical pit of “as-if” intentionality.


I believe, though, that even hard-headed low catologists think the cat is truly intentional, albeit in the impersonal way discussed in the section on Daniel Dennett. They would argue that if you could open up its brain you would find neural structures that “referred to” the kitchen or the path to it, in the sense that those structures became active in ways appropriate to the cat’s needs: They were involved in steering the cat to the kitchen and stopping it when it got there. A similar account would tie the meowing behavior to the event of getting food, mediated by some neural states. We would then feel justified in saying that some of the neural states and structures denoted the kitchen, or the event of being fed.

The question is, are the ascriptions of impersonal intentionality so derived arbitrary, or are they objectively true? It’s difficult to make either choice. It feels silly saying that something is arbitrary if it takes considerable effort to figure it out, and if one is confident that if others independently undertook the same project they would reach essentially the same result. But it also feels odd to say that something is objectively true if it is inherently invisible. Nowhere in the cat will you find labels that say “This means X,” nor little threads that tie neural structures to objects in the world. One might want to say that the cat is an intentional system because there was evolutionary pressure in favor of creatures whose innards were tied via “virtual threads” to their surroundings. I don’t like dragging evolution in because it’s more of a question stopper than a question answerer. I prefer the conclusion that the reluctance to classify intentionality as objectively real simply reveals an overly narrow conception of objective reality.

A couple of analogies should help.

code breaking

A code breaker is sure he or she has cracked a code when the message turns into meaningful natural-language text. That’s because there are an enormous number of possible messages and an enormous number of possible ciphers, out of which there is (almost certainly) only one combination of natural-language text and simple cipher that produces the encrypted message.

Unfortunately for this example, it involves interpreting the actions of people. So even if there is no observer-relativity from the cryptanalyst’s point of view, the intentionality in a message is “derived” according to skeptics about the possible authentic intentionality of physical systems.

geology

A geologist strives to find the best explanation for how various columns and strata of rock managed to place themselves in the positions in which they are found. A good explanation is a series of not-improbable events that would have transformed a plausible initial configuration of rocks into what we see today.

In this case, there is no observer-relativity, because there was an actual sequence of events that led to the current rock configuration. If two geologists have a profound disagreement about the history of a rock formation, they cannot both be right (as they might be if disagreeing about the beauty of a mountain range). Our normal expectation is that any two geologists will tend to agree on at least the broad outline of an explanation of a rock formation and that as more data are gathered the areas of agreement will grow.

These examples are cases where, even though internal harmoniousness is how we judge explanations, what we get in the end is an explanation that is true, independent of the harmoniousness. All we need to do is allow for this to be true even though, in the case of intentionality, even a time machine or mind reader would not give us an independent source of evidence. To help us accept this possibility, consider the fact that geologists can never actually get the entire story right. What they are looking at is a huge structure of rock with a detailed microhistory that ultimately accounts for the position of every pebble. What they produce in the end is a coarse-grained history that talks only about large intrusions, sedimentary layers, and such. Nonetheless we say that it is objectively true, even though the objects it speaks of don’t even exist unless the account is true. It explains how a particular “intrusion” got to be there, but if geological theory isn’t more or less correct, there might not be such a thing as an intrusion; the objects might be parsed in a totally different way.

If processes and structures inside a cat’s brain exhibit objectively real impersonal intentionality, then it’s hard not to accept the same conclusion about the robot trying to get recharged. It might not navigate the way the cat does – for instance, it might have no notion of a place it’s going to, as opposed to the path that gets it there – but we see the same fit with its environment among the symbol structures in its hardware or data. In the case of the robot the hardware and software were designed, and so we have the extra option of asking the designers what the entities inside the robot were supposed to denote. But it will often happen that there is conflict between what the designers intended and what actually occurs, and what actually occurs wins. The designers don’t get to say, “This boolean variable means that the robot is going through a door,” unless the variable’s being true tends to occur if and only if the robot is between two door jambs. If the variable is correlated with something else instead, then that’s what it actually means. It’s appropriate to describe what the roboticists are doing as debugging the robot so that its actual intentionality matches their intent. The alternative would be to describe the robot as “deranged” in the sense that it continuously acts in ways that are bizarre given what its data structures mean.
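The point that actual correlation trumps designer intent can itself be given a crude computational reading. In this hypothetical Python fragment (the log, the variable names, and the agreement measure are all invented for illustration), we ask which world condition a boolean variable in the robot's software in fact tracks:

```python
# Invented "log" of moments in the robot's run.  Each record is
# (door_flag, between_door_jambs, near_charger).
log = [
    (True,  False, True),
    (True,  False, True),
    (False, True,  False),
    (True,  False, True),
    (False, True,  False),
    (False, True,  False),
]

def agreement(records, flag_index, condition_index):
    """Fraction of moments at which the variable and the world
    condition agree -- a crude measure of what the variable tracks."""
    return sum(r[flag_index] == r[condition_index] for r in records) / len(records)

# The flag fails to track the intended condition (the door jambs),
# but tracks the charger perfectly:
assert agreement(log, 0, 1) == 0.0
assert agreement(log, 0, 2) == 1.0
```

On the view sketched in the text, this `door_flag` actually means "near the charger," whatever its designers intended; their job is to debug until meaning and intent coincide.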

Two other remarks are in order. What the symbols in a system mean is dependent on the system’s environment. If a cat is moved to a house that is so similar to the one it’s familiar with that the cat is fooled, then the structures inside it that used to refer to the kitchen of house 1 now refer to the kitchen of house 2. And so forth; and there will of course be cases in which the denotation of a symbol breaks down, leaving no coherent story about what it denotes, just as in the geological case an event of a type unknown to geology, but large enough to cause large-scale effects, will go unhypothesized, and some parts of geologists’ attempts to make sense of what they see will be too incoherent to be true or false or even to refer to anything.

The other remark is that the sheer size of the symbolic systems inside people’s heads might make the impersonal intentionality story irrelevant. We don’t, of course, know much about the symbol systems used by human brains, whether there is a “language of thought” (Fodor, 1975) or some sort of connectionist soup, but clearly we can have beliefs that are orders of magnitude more complex than those of a cat or a robot (year-2006 model). If you walk to work, but at the end of the day absentmindedly head for the parking lot to retrieve your car, what you will believe once you get there has the content, “My car is not here.” Does this belief correspond to a symbol structure in the brain whose pieces include symbol tokens for “my car,” “here,” and “not”? We don’t know. But if anything like that picture is accurate, then assigning a meaning to symbols such as “not” is considerably more difficult than assigning a meaning to the symbols a cat or robot might use to denote “the kitchen.” Nonetheless, the same basic story can still be told: that the symbols mean what the most harmonious interpretation says they mean. This story allows us to assign arbitrarily abstract meanings to symbols like “not”; the price we pay is that for now all we have is an IOU for a holistic theory of the meanings inside our heads.

Modeling Oneself as Conscious

I have spent a lot of time discussing intentionality because once we can establish the concept of an impersonal level of meaning in brains and computers, we can introduce the idea of a self-model, a device that a robot or a person can use to answer questions about how it interacts with the world. This idea was introduced by Minsky almost forty years ago (Minsky, 1968a) and has since been explored by many others, including Sloman (Sloman & Chrisley, 2003), McDermott (2001), and Dennett (1991b). As I have mentioned, Dennett mixes this idea with the concept of meme, but self-models don’t need to be made out of memes.

We start with Minsky’s observation that complex organisms use models of their environments to predict what will happen and decide how to act. In the case of humans, model making is taken for granted by psychologists (Johnson-Laird, 1983); no one really knows what other animals’ capacities for using mental models are. A mental model is some sort of internal representation of part of the organism’s surroundings that can be inspected, or even “run” in some way, so that features of the model can then be transformed back into inferred or predicted features of the world. For example, suppose you’re planning to go grocery shopping, the skies are threatening rain, and you’re trying to decide whether to take an umbrella. You enumerate the situations where the umbrella might be useful and think about whether on balance it will be useful enough to justify having to keep track of it. One such situation is the time when you emerge from the store with a cartload of groceries to put in the car. Will the umbrella keep you or your groceries dry?9

This definition is general (and vague) enough to cover non-computational models, but the computationalist framework provides an obvious and attractive approach to theorizing about mental models. In this framework, a model is an internal computer set up to simulate something. The organism initializes it, lets it run for a while, reads off its state, and interprets the state as a set of inferences that then guide behavior. In the umbrella example, one might imagine a physical simulation, at some level of resolution, of a person pushing a cart and holding an umbrella while rain falls.
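The initialize-run-read-off cycle can be caricatured in a few lines of Python. This is my own toy illustration, not anything from the text: the costs and probabilities are invented weights, and the "simulation" is absurdly coarse, but the control structure is the one just described.

```python
def simulate_trip(raining, take_umbrella):
    """Crude internal model of the shopping trip; returns a predicted
    cost (wetness plus hassle), with made-up weights."""
    # One hand steers the cart, so the umbrella helps only partially.
    wetness = (1 if take_umbrella else 3) if raining else 0
    hassle = 1 if take_umbrella else 0   # keeping track of the umbrella
    return wetness + hassle

def decide(p_rain):
    """Initialize the model under each action and weather outcome,
    run it, read off the predicted costs, and act on the comparison."""
    def expected_cost(take):
        return (p_rain * simulate_trip(True, take)
                + (1 - p_rain) * simulate_trip(False, take))
    return "take umbrella" if expected_cost(True) < expected_cost(False) else "leave it"

assert decide(0.9) == "take umbrella"
assert decide(0.2) == "leave it"
```

The inferences read off the model (predicted costs) are what guide behavior; nothing in the loop requires the model to be anything but another internal computer.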

A mental model used by an agent A to decide what to do must include A itself, simply because any situation A finds itself in will have A as one of its participants. If I am on a sinking ship, and trying to pick a lifeboat to jump into, predicting the number of people on the lifeboat must not omit the “+1” required to include me. This seemingly minor principle has far-reaching consequences because many of A’s beliefs about itself will stem from the way its internal surrogates participate in mental models. We call the beliefs about a particular surrogate a self-model, but usually for simplicity I refer to the self-model, as if all those beliefs are pulled together into a single “database.” Let me state up front that the way things really work is likely to be much more complex and messy. Let me also declare that the self-model is not a Cartesian point of transcendence where the self can gaze at itself. It is a resource accessible to the brain at various points for several different purposes.

We can distinguish between exterior and interior self-models. The former refers to the agent considered as a physical object, something with mass that might sink a lifeboat. The latter refers to the agent considered as an information-processing system. To be concrete, let’s look at a self-model that arises in connection with the use of any-time algorithms to solve time-dependent planning problems (Boddy & Dean, 1989). An any-time algorithm is one that can be thought of as an asynchronous process that starts with a rough approximation to the desired answer and gradually improves it; it can be stopped at any time, and the quality of the result it returns depends on how much run time it was given. We can apply this idea to planning robot behavior, in situations where the objective is to minimize the total time required to solve the problem, which is equal to the time tP to find a plan P plus the time tE(P) to execute P.

If the planner is an any-time algorithm, then the quality of the plan it returns improves with tP. We write P(tP) to indicate that the plan found is a function of the time allotted to finding it. Because quality is execution time, we can refine that statement and say that tE(P(tP)) decreases as tP increases. Therefore, to optimize

tP + tE(P(tP))

we must find the smallest tP such that the time gained by planning Δt longer than that would probably improve tE by less than Δt. The only way to find that optimal tP is to have an approximate model of how fast tE(P(tP)) changes as a function of tP. Such a model would no doubt reflect the law of diminishing returns, so that finding the optimal tP is an easy one-dimensional optimization problem. The important point for us is that this model is a model of the planning component of the robot, and so counts as an interior self-model.
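The trade-off is easy to sketch in Python. The decay curve and constants below are invented stand-ins for what a real robot would have to learn empirically; the point is only that the self-model makes the optimization a trivial one-dimensional search.

```python
import math

def predicted_execution_time(t_plan):
    """Interior self-model of the planner: more planning yields a
    better plan, with diminishing returns (assumed here to be an
    exponential decay toward a floor of 20 time units)."""
    return 20.0 + 80.0 * math.exp(-t_plan / 10.0)

def total_time(t_plan):
    """The quantity to minimize: tP + tE(P(tP))."""
    return t_plan + predicted_execution_time(t_plan)

def optimal_planning_time(step=0.01, horizon=200.0):
    """Stop planning once another increment of planning time buys back
    less than that increment in predicted execution time."""
    t = 0.0
    while t < horizon:
        gain = predicted_execution_time(t) - predicted_execution_time(t + step)
        if gain < step:   # marginal return fell below marginal cost
            return t
        t += step
    return t

t_star = optimal_planning_time()
# Analytically, the marginal gain rate 8*exp(-t/10) falls to 1 at
# t = 10*ln(8) ≈ 20.79, which the search recovers to within its step size.
assert abs(t_star - 10 * math.log(8)) < 0.1
```

Note that `predicted_execution_time` is a model of the robot's own planning component, not of its environment; that is what makes it an interior self-model, however unmysterious the code looks.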

Let me make sure my point is clear: Interior self-models are no big deal. Any algorithm that outputs an estimate of something plus an error range incorporates one. The mere presence of a self-model does not provide us some kind of mystical reflection zone where we can make consciousness pop out as an “emergent” phenomenon. This point is often misunderstood by critics of AI (Block, 1997; Rey, 1997) who attribute to computationalists the idea that consciousness is nothing but the ability to model oneself. In so doing, they tend to muddy the water further by saying that computationalists confuse consciousness with self-consciousness. I hope in what follows I can make these waters a bit clearer.

Today’s information-processing systems are not very smart. They tend to work in narrow domains and outperform humans only in certain areas, such as chess and numerical computation, in which clear formal ground rules are laid out in advance. A robot that can walk into a room, spy a chessboard, and ask if anyone wants to play is still far in the future. This state of affairs raises a huge obstacle for those who believe that consciousness is built on top of intelligence, rather than vice versa, that obstacle being that everything we say is hypothetical. It’s easy to counter the computationalist argument. Just say, “I think you’re wrong about intelligence preceding consciousness, but even if you’re right I doubt that computers will ever reach the level of intelligence required.”

To which I reply, Okay. But let’s suppose they do reach that level. We avoid begging any questions by using my hypothetical chess-playing robot as a concrete example. We can imagine it being able to locomote, see chessboards, and engage in simple conversations: “Want to play?” “Later.” “I’ll be back.” We start by assuming that it is not conscious and then think about what it would gain by having interior self-models of a certain class. The starting assumption, that it isn’t conscious, should be uncontroversial.

One thing such a robot might need is a way to handle perceptual errors. Suppose that it has a subroutine for recognizing chessboards and chess pieces.10 For serious play only Staunton chess pieces are allowed, but you can buy a chessboard with pieces of almost any shape; I have no doubt that Disney sells a set with Mickey and Minnie Mouse as king and queen. Our robot, we suppose, can correct for scale, lighting, and other variations of the appearance of Staunton pieces, but just can’t “parse” other kinds of pieces. It could also be fooled by objects that only appeared to be Staunton chess pieces.

Now suppose that the robot contained some modules for improving its performance. It might be difficult to calibrate the perceptual systems of our chess-playing robots at the factory, especially because different owners will use them in different situations. So we suppose that after a perceptual failure a module we will call the perception tuner will try to diagnose the problem and change the parameters of the perceptual system to avoid it in the future.

The perception tuner must have access to the inputs and outputs of the chess recognition system and, of course, access to parameters that it can change to improve the system’s performance. It must have a self-model that tells it how to change the parameters to reduce the likelihood of errors. (The “backpropagation” algorithm used in neural nets (Rumelhart et al., 1986) is an example.) What I want to call attention to is that the perception tuner interprets the outputs of the perceptual system in a rather different way from the decision-making system. The decision-making system interprets them (to oversimplify) as being about the environment; the tuning system interprets them as being about the perceptual system. For the decision maker, the output “Pawn at x, y, z” means that there is a pawn at a certain place. For the tuner, it means that the perceptual system says there is a pawn; in other words, that there appears to be a pawn.
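A perception tuner of this kind can be caricatured in Python. Everything below is an invented stand-in (a one-parameter height-threshold recognizer and a crude boundary-moving update in place of a real calibration procedure such as backpropagation), but it makes the two modes of interpretation concrete: the decision maker consumes the recognizer's report as a fact about the board, while the tuner consumes the same report as a fact about the recognizer.

```python
def recognize(piece_height_mm, threshold):
    """Grossly simplified recognizer: classify a blob as 'pawn' or
    'king' by height alone, relative to a tunable threshold."""
    return "pawn" if piece_height_mm < threshold else "king"

class PerceptionTuner:
    def __init__(self, threshold=60.0, margin=1.0):
        self.threshold = threshold   # the parameter it may change
        self.margin = margin

    def record_error(self, piece_height_mm, was_reported, truth):
        """Introspective access: treat the report as information about
        the recognizer ('it SAID king') rather than about the world,
        and move the decision boundary past the misclassified example."""
        if was_reported == truth:
            return
        if truth == "pawn":          # reported a king for an actual pawn
            self.threshold = piece_height_mm + self.margin
        else:                        # reported a pawn for an actual king
            self.threshold = piece_height_mm - self.margin

tuner = PerceptionTuner(threshold=60.0)
# An oversized 70 mm pawn gets reported as a king...
assert recognize(70, tuner.threshold) == "king"
# ...the tuner records the appearance/reality mismatch and adjusts:
tuner.record_error(70, was_reported="king", truth="pawn")
assert recognize(70, tuner.threshold) == "pawn"
```

The tuner's datum is exactly the kind described next in the text: a designator of the misinterpreted input, how it was interpreted, and how it should have been.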

Here is where the computationalist analysis of intentionality steps in. We don’t need to believe that either the decision maker or the tuner literally “thinks” that a symbol structure at a certain point means a particular thing. The symbol structure S means X if there is a harmonious overall interpretation of the states of the robot in which S means X. The perceptual-tuner scenario suggests that we can distinguish two sorts of access to a subsystem: normal access and introspective access. The former refers to the flow of information that the subsystem extracts from the world (Dretske, 1981). The latter refers to the flow of information it produces about the normal flow.11 For our robot, normal access gives it information about chess pieces; introspective access gives it information about . . . what, exactly? A datum produced by the tuner would consist of a designator of some part of the perceptual field that was misinterpreted, plus information about how it was interpreted and how it should have been. We can think of this as being information about “appearance” vs. “reality.”

The next step in our story is to suppose that our robot has “episodic” memories; that is, memories of particular events that occurred to it. (Psychologists draw distinctions between these memories and other kinds, such as learned skills [e.g., the memory of how to ride a bicycle] and abstract knowledge [e.g., the memory that France is next to Germany], sometimes called semantic memory.) We take episodic memory for granted, but presumably flatworms do without it; there must be a reason why it evolved in some primates. One possibility is that it’s a means to keep track of events whose significance is initially unknown. If something bad or good happens to an organism, it might want to retrieve past occasions when something similar happened and try to see a pattern. It’s hard to say why the expense of maintaining a complex “database” would be paid back in terms of reproductive success, especially given how wrong-headed people can be about explaining patterns of events. But perhaps all that is required is enough paranoia to avoid too many false negatives in predicting catastrophes.

The final step is to suppose that the robot can ask fairly general questions about the operation of its perceptual and decision-making systems. Actually, this ability is closely tied to the ability to store episodic memories. To remember something one must have a notation to express it. Remembering a motor skill might require storing a few dozen numerical parameters (e.g., weights in neural networks, plus some sequencing information). If this is correct, then, as argued above, learning a skill means nudging these parameters toward optimal values. Because this notation is so lean, it won’t support recording the episodes during which the skill was enhanced. You may remember your golf lessons, but those memories are independent of the “memories,” encoded as numerical parameters, that manifest themselves as an improved putt. Trying to think of a notation in which to record an arbitrary episode is like trying to think of a formal notation to capture the content of a Tolstoy novel. It’s not even clear what it would mean to record an episode. How much detail would there be? Would it always have to be from the point of view of the creature that recorded it? Such questions get us quickly into the realm of Knowledge Representation and the Language of Thought (Fodor, 1975). For that matter, we are quickly led to the topic of ordinary human language, because the ability to recall an episode seems closely related to the abilities to tell about it and to ask about it. We are far from understanding how language, knowledge representation, and episodic memory work, but it seems clear that the mechanisms are tightly connected, and all have to do with what sorts of questions the self-model can answer. This clump of mysteries accounts for why Dennett’s (1991b) meme-based theory is so attractive. He makes a fairly concrete proposal that language came first and that the evolution of the self-model was driven by the evolution of language.


Having waved our hands a bit, we can get back to discussing the ability of humans, and presumably other intelligent creatures, to ask questions about how they work. We will just assume that these questions are asked using an internal notation reminiscent of human language, and then answered using a Minskyesque self-model. The key observation is that the self-model need not be completely accurate or, rather, that there is a certain flexibility in what counts as an accurate answer, because what it says can’t be contradicted by other sources of information. If all people’s self-models say they have free will, then free will can’t be anything but whatever it is all people think they have. It becomes difficult to deny that we have free will, because there’s no content to the claim that we have it over and above what the chorus of self-models declare.12

Phenomenal experience now emerges as the self-model’s answer to the question, What happens when I perceive something? The answer, in terms of appearance, reality, and error, is accurate up to a point. It’s when we get to qualia that the model ends the explanation with a just-so story. It gives more useful answers on such questions as whether it’s easier to confuse green and yellow than green and red, or what to do when senses conflict, or what conditions make errors more or less likely. But to questions such as, How do I know this is red in the first place?, it gives an answer designed to stop inquiry. The answer is that red has this quality (please focus attention on the red object), which is intrinsically different from the analogous quality for green objects (now focus over here, if you don’t mind). Because red is “intrinsically like . . . this,” there is no further question to ask. Nor should there be. I can take steps to improve my classification of objects by color, but there’s nothing I can do to improve my ability to tell red from green (or, more plausibly, to tell two shades of red apart) once I’ve obtained optimal lighting and viewing conditions.13

The computationalist theory of phenomenal consciousness thus ends up looking like a spoil-sport’s explanation of a magic trick. It comes down to this: “Don’t look over there! The key move is over here, where you weren’t looking!”14 Phenomenal consciousness is not part of the mechanism of perception, but part of the mechanism of introspection about perception.

It is easy to think that this theory is similar to Perlis’s model of self-consciousness as ultimately fundamental, and many philosophers have misread it that way. That’s why the term “self-consciousness” is so misleading. Ordinarily what we mean by it is consciousness of self. But the self-model theory of consciousness aims to explain all phenomenal consciousness in terms of subpersonal modeling by an organism R of R’s own perceptual system. Consciousness of self is just a particular sort of phenomenal consciousness, so the theory aims to explain it in terms of modeling by R of R’s own perceptual system in the act of perceiving R. In these last two sentences the word “self” does not appear except as part of the definiendum, not as part of the definiens. Whatever the self is, it is not lying around waiting to be perceived; the act of modeling it defines what it is to a great extent. There is nothing mystical going on here. When R’s only view of R is R∗, in Minsky’s terminology, then it is no surprise if terms occur in R∗ whose meaning depends at least partly on how R∗ fits into everything else R is doing, and in particular on how (the natural-language equivalents of those) terms are used by a community of organisms to which R belongs.

I think the hardest part of this theory to accept is that perception is normally not mediated, or even accompanied, by qualia. In the introduction to this chapter, I invited readers to cast their eyes over a complex climate-control system and observe the absence of sensation. We can do the same exercise with the brain, with the same result. It just doesn't need sensations to do its job. But if you ask it, it will claim it does. A quale exists only when you look for it.

P1: JzZ0521857430c06 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 22, 2007 11:22

artificial intelligence and consciousness 139

Throughout this section, I have tried to stay close to what I think is a consensus position on a computational theory of phenomenal consciousness. But I have to admit that the endpoint to which I think we are driven is one that many otherwise fervent computationalists are reluctant to accept. There is no alternative conclusion on the horizon, just a wish for one, as in this quote from Perlis (1997):

. . . Perhaps bare consciousness is in and of itself a self-distinguishing process, a process that takes note of itself. If so, it could still be considered a quale, the ur-quale, what it's like to be a bare subject. . . . What might this be? That is unclear. . . .

Perlis believes that a conscious system needs to be "strongly self-referring," in that its modeling of self is modeled in the very modeling, or something like that. "Why do we need a self-contained self, where referring stops? Negotiating one's way in a complex world is a tough business. . . ." He sketches a scenario in which Ralph, a robot, needs a new arm:

Suppose the new arm is needed within 24 hours. He cannot allow his decision-making about the best and quickest way to order the arm get in his way, i.e., he must not allow it to run on and on. He can use meta-reasoning to watch his reasoning so it does not use too much time, but then what is to watch his meta-reasoning? . . . He must budget his time. Yet the budgeting is another time-drain, so he must pay attention to that too, and so on in an infinite regress . . . Somehow he must regard [all these modules] as himself, one (complex) system reasoning about itself, including that very observation. He must strongly self-refer: he must refer to that very referring so that its own time-passage can be taken into account. (Emphasis in original.)

It appears to me that two contrary intuitions are colliding here. One is the hard-headed computationalist belief that self-modeling is all you need for consciousness; the other is the nagging feeling that self-modeling alone can't quite get us all the way. Yet when he tries to find an example, he winds up with a mystical version of the work by Boddy and Dean (1989) that I cited above as a prosaic example of self-modeling. It seems clear to me that the only reason Perlis needs the virtus dormitiva of "strong self-reference" is that the problem-solving system he's imagining is not an ordinary computer program, but a transcendental self-contemplating mind – something not really divided into modules at all, but actively dividing itself into time-shared virtual modules as it shifts its attention from one aspect of its problem to another, then to a meta-layer, a meta-meta-layer, and so forth. If you bite the bullet and accept that all this meta-stuff, if it exists at all, exists only in the system's self-model, then the need for strong self-reference, and the "ur-quale," goes away, much like the ether in the theory of electromagnetism. So I believe, but I admit that most AI researchers who take a position probably share Perlis's reluctance to let that ether go.
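The deflationary reading can be made concrete. In the sketch below (my own construction, not Perlis's; Ralph's ordering options and their scores are invented for the purpose), the "watcher" and the "watcher of the watcher" collapse into the same one-line deadline check, so the regress never gets started:

```python
import time

def reason_with_budget(options, evaluate, budget_s):
    """Return the best option found before the deadline expires."""
    deadline = time.monotonic() + budget_s
    best, best_score = None, float("-inf")
    for opt in options:
        # The "meta-reasoning" that watches the reasoning is just this test;
        # it is so cheap that no meta-meta-watcher is ever needed.
        if time.monotonic() >= deadline:
            break
        score = evaluate(opt)
        if score > best_score:
            best, best_score = opt, score
    return best

# Hypothetical ways Ralph might order his new arm, scored by desirability:
choices = {"overnight": 0.9, "courier": 0.7, "surface mail": 0.1}
print(reason_with_budget(choices, choices.get, budget_s=0.05))  # → overnight
```

The layers of meta-reasoning exist only in a description of this loop; the mechanism itself is a single clock comparison, which is the point about the meta-stuff living in the self-model rather than in the machinery.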

The Critics

AI has always generated a lot of controversy. The typical pattern is that some piece of research captures the public's imagination, as amplified by journalists, then the actual results don't fit those public expectations, and finally someone comes along to chalk up one more failure of AI research. Meanwhile, often enough the research does succeed, not on the goals hallucinated by the popular press, but on those the researchers actually had in mind, so that the AI community continues to gain confidence that it is on the right track. Criticism of AI models of consciousness doesn't fit this pattern. As I observed at the outset, almost no one in the field is "working on" consciousness, and certainly there's no one trying to write a conscious program. It is seldom that a journalist can make a breathless report about a robot that will actually have experiences!15

Nonetheless, there has been an outpouring of papers and books arguing that mechanical consciousness is impossible and that suggestions to the contrary are wasteful of research dollars and possibly even dangerously dehumanizing. The field of "artificial consciousness" (AC) is practically defined by writers who deny that such a thing is possible. Much more has been written by AC skeptics than by those who think it is possible. In this section I discuss some of those criticisms and refute them as best I can.

Due to space limitations, I try to focus on critiques that are specifically directed at computational models of consciousness, as opposed to general critiques of materialist explanation. For example, I pass over Jackson's (1982) story about "Mary, the color scientist," who learns what red looks like. There are interesting things to say about it, which I say in McDermott (2001), but Jackson's critique is not directed at, and doesn't mention, computationalism in particular. I also pass over the vast literature on "inverted spectrum" problems, which is a somewhat more complex version of the sour/spicy taco problem.

Another class of critiques that I omit are those whose aim is to show that computers can never achieve human-level intelligence. As discussed in the section on research on computational models of consciousness, I concede that if computers can't be intelligent then they can't be conscious either. But our focus here is on consciousness, so the critics I try to counter are those who specifically argue that computers will never be conscious, even if they might exhibit intelligent behavior. One important group of arguments this leaves out are those based on Gödel's proof that Peano arithmetic is incomplete (Nagel & Newman, 1958; Penrose, 1989, 1994). These arguments are intended to show a limitation in the abilities of computers to reason, not specifically a limitation on their ability to experience things; in fact, the connection between the two is too tenuous to justify talking about the topic in detail.

Turing’s Test

Let's start where the field started: with Turing's Test (Turing, 1950). As described earlier, it consists of a judge trying to distinguish a computer from a person by carrying on typed conversations with both. If the judge gets it wrong about 50% of the time, then the computer passes the test.

Turing's Test is not necessarily relevant to the computational theory of consciousness. Few of the theorists discussed in this chapter have invoked it as a methodological tool. Where it comes in is when reliance on it is attributed to computationalists. A critic will take the computationalist's focus on the third-person point of view as an endorsement of behaviorism and then jump to Turing's Test as the canonical behaviorist tool for deciding whether an entity is conscious. That first step, from "third-person" to "behaviorist," is illegitimate. It is, in fact, somewhat ludicrous to accuse someone of being a behaviorist who is so eager to open an animal up (metaphorically, that is) and stuff its head with intricate block diagrams. All the "third-personist" is trying to do is stick to scientifically, that is, publicly, available facts. This attempt is biased against the first-person view, and that bias pays off by eventually giving us an explanation of the first person.

So there is no particular reason for a computationalist to defend the Turing Test. It doesn't particularly help develop theoretical proposals, and it gets in the way of thinking about intelligent systems that obviously can't pass the test. Nonetheless, an objection to computationalism raised in the section "Moore/Turing Inevitability" does require an answer. That was the objection that even if a computer could pass the Turing Test, this achievement wouldn't provide any evidence that it actually was conscious. I disagree with this objection on grounds that should be clear at this point: To be conscious is to model one's mental life in terms of things like sensations and free decisions. It would be hard to have an intelligent robot that wasn't conscious in this sense, because everywhere the robot went it would have to deal with its own presence and its own decision making, and so it would have to have models of its behavior and its thought processes. Conversing with it would be a good way of finding out how it thought about itself; that is, what its self-models were like.
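A minimal illustration of what it means for a robot to "deal with its own presence" in its model (my own toy example; the robot, its world, and its vocabulary are all invented for the purpose) is a world model that contains an entry for the agent itself, described alongside everything else, including a record of its own decision making:

```python
# Hypothetical world model: the robot appears in its own inventory of the
# world, with model-talk about its last decision and the basis for it.
world_model = {
    "door_7":  {"kind": "obstacle", "position": (4, 0)},
    "charger": {"kind": "resource", "position": (0, 3)},
    "self":    {"kind": "robot", "position": (2, 2),
                "last_decision": "head to charger",
                "decision_basis": "battery model predicted 10 min left"},
}

def explain_yourself(model):
    """Answer a conversational question by consulting the self entry."""
    me = model["self"]
    return f"I chose to {me['last_decision']} because {me['decision_basis']}."

print(explain_yourself(world_model))
```

Conversing with such a system would indeed reveal its self-model, since its answers about itself are read off that model and nothing else.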

Keep in mind, however, that the Turing Test is not likely to be the standard method to check for the presence of consciousness in a computer system, if we ever need a standard method. A robot's self-model, and hence its consciousness, could be quite different from ours in respects that are impossible to predict given how far we are from having intelligent robots. It is also just barely possible that a computer not connected to a robot could be intelligent with only a very simple self-model. Suppose the computer's job was to control the traffic, waste management, and electric grid of a city. It might be quite intelligent, but hardly conscious in a way we could recognize, simply because it wouldn't be present in the situations it modeled the way we are. It probably couldn't pass the Turing Test either.

Somewhere in this thicket of possibilities there might be an artificial intelligence with an alien form of consciousness that could pretend to be conscious on our terms while knowing full well that it wasn't. It could then pass the Turing Test, wine-tasting division, by faking it. All this shows is that there is a slight possibility that the Turing Test could be good at detecting intelligence and not so good at detecting consciousness. This shouldn't give much comfort to those who think that the Turing Test systematically distracts us from the first-person viewpoint. If someone ever builds a machine that passes it, it will certainly exhibit intentionality and intelligence and almost certainly be conscious. There's a remote chance that human-style consciousness can be faked, but no chance that intelligence can be.16

The Chinese Room

One of the most notorious arguments in the debate about computational consciousness is Searle's (1980) "Chinese Room" argument. It's very simple. Suppose we hire Searle (who speaks no Chinese) to implement a computer program for reading stories in Chinese and then answering questions about those stories. Searle reads each line of the program and does what it says. He executes the program about a million times slower than an actual CPU would, but if we don't mind the slow motion we could carry on a perfectly coherent conversation with him.

Searle goes on:

Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment.

1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. . . .

2. As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding.

It's hard to see what this argument has to do with consciousness. The connection is somewhat indirect. Recall that in the section "Intentionality of Computational Systems," I made sure to discuss "impersonal" intentionality, the kind a system has by virtue of being a computer whose symbol structures are causally connected to the environment so as to denote objects and states of affairs in that environment. Searle absolutely refuses to grant that there is any such thing as impersonal or subpersonal intentionality (Searle, 1992). The paradigm case of any mental state is always the conscious mental state, and he is willing to stretch mental concepts only far enough to cover unconscious mental states that could have been conscious (repressed desires, for instance). Hence there is no understanding of Chinese unless it is accompanied by a conscious awareness or feeling of understanding.

If Searle's stricture were agreed upon, then all research in cognitive science would cease immediately, because it routinely assumes the existence of non-conscious symbol processing to explain the results of experiments.17


Searle seems to have left an escape clause, the notion of "weak AI":

I find it useful to distinguish what I will call 'strong' AI from 'weak' or 'cautious' AI. . . . According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states (Searle, 1980).

Many people have adopted this terminology, viewing the supposed weak version of AI as a safe harbor in which to hide from criticism. In my opinion, the concept of weak AI is incoherent. Suppose someone writes a program to simulate a hurricane, to use a common image. The numbers in the simulation denote actual or hypothetical air pressures, wind velocities, and the like. The simulation embodies differential equations that are held to be more or less true statements about how wind velocities affect air pressures and vice versa, and similarly for all the other variables involved. Now think about "computer simulations of human cognitive capacities" (Searle's phrase). What are the analogues of the wind velocities and air pressures in this case? When we use the simulations to "formulate and test hypotheses," what are the hypotheses about? They might be about membrane voltages and currents in neurons, but of course they aren't, because neurons are "too small." We would have to simulate an awful lot of them, and we don't really know how they're connected, and the simulation would just give us a huge chunk of predicted membrane currents anyway. So no one does that. Instead, they run simulations at a much higher level, at which symbols and data structures emerge. This is true even for neural-net researchers, whose models are much, much smaller than the real thing, so that each connection weight represents an abstract summary of a huge collection of real weights. What, then, is the ontological status of these symbols and data structures? If we believe that these symbols and the computational processes over them are really present in the brain, and really explain what the brain does, then we are back to strong AI. But if we don't believe that, then why the hell are we simulating them? By analogy, let us compare strong vs. weak computational meteorology. The former is based on the belief that wind velocities and air pressures really have something to do with how hurricanes behave. The latter allows us to build "powerful tools" that perform "computer simulations of [hurricanes' physical] capacities" and "formulate and test hypotheses" about . . . something other than wind velocities and air pressures?

Please note that I am not saying that all cognitive scientists are committed to a computationalist account of consciousness. I'm just saying that they're committed to a computationalist account of whatever it is they're studying. If someone believes that the EPAM model (Feigenbaum & Simon, 1984) accounts for human errors in memorizing lists of nonsense syllables, they have to believe that structures isomorphic to the discrimination trees in EPAM are actually to be found in human brains. If someone believes that there is no computationalist account of intelligence, then they must also believe that a useful computer simulation of intelligence must simulate something other than symbol manipulation, perhaps ectoplasm secretions. In other words, given our lack of any non-computational account of the workings of the mind, they must believe it to be pointless to engage in simulating intelligence at all at this stage of the development of the subject.

There remains one opportunity for confusion. No one believes that a simulation of a hurricane could blow your house off the beach. Why should we expect a simulation of a conscious mind to be conscious (or expect a simulation of a mind to be a mind)? Well, we need not expect that, exactly. If a simulation of a mind is disconnected from an environment, then it would remain a mere simulation.


However, once the connection is made properly, we confront the fact that a sufficiently detailed simulation of computation C is computation C. This is a property of formal systems generally. As Haugeland (1985) observes, the difference between a game like tennis and a game like chess is that the former involves moving a physical object, the ball, through space, whereas the latter involves jumping from one legal board position to the next, and legal board positions are not physical entities. In tennis, one must hit a ball with certain prescribed physical properties using a tennis racket, which must also satisfy certain physical requirements. Chess requires only that the state of the game be represented with enough detail to capture the positions of all the pieces.18 One can use any 8 × 8 array as a board, and any collection of objects as pieces, provided they are isomorphic to the standard board and pieces. One can even use computer data structures. So a detailed simulation of a good chess player is a good chess player, provided it is connected by some channel, encoded however you like, between its computations and an actual opponent with whom it is alternating moves. Whereas for a simulation of a tennis player to be a tennis player, it would have to be connected to a robot capable of tracking and hitting tennis balls.
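The point about isomorphic encodings can be shown directly. In this sketch (my own; the position and the predicates are invented for illustration), the same "does the rook attack the king?" question gets the same answer whether the board is an array of characters or of integers, because the computation only sees the formal structure:

```python
def rook_attacks_king(board, is_rook, is_king, is_empty):
    """True if the rook and king share a rank or file with nothing between.
    The board can be ANY 8x8 array; only the predicates matter."""
    pos = {}
    for r in range(8):
        for c in range(8):
            if is_rook(board[r][c]):
                pos["rook"] = (r, c)
            elif is_king(board[r][c]):
                pos["king"] = (r, c)
    (r1, c1), (r2, c2) = pos["rook"], pos["king"]
    if r1 == r2:
        between = [board[r1][c] for c in range(min(c1, c2) + 1, max(c1, c2))]
    elif c1 == c2:
        between = [board[r][c1] for r in range(min(r1, r2) + 1, max(r1, r2))]
    else:
        return False
    return all(is_empty(x) for x in between)

# One position, two encodings related by an isomorphism:
bs = [["."] * 8 for _ in range(8)]
bs[0][0], bs[0][7] = "R", "k"          # rook at a1, king at h1
code = {".": 0, "R": 1, "k": 2}        # the isomorphism
bi = [[code[x] for x in row] for row in bs]

ans_strings = rook_attacks_king(bs, "R".__eq__, "k".__eq__, ".".__eq__)
ans_ints = rook_attacks_king(bi, (1).__eq__, (2).__eq__, (0).__eq__)
print(ans_strings, ans_ints)  # → True True
```

Swap in any other collection of objects for the pieces and the answer is unchanged, which is exactly why a detailed simulation of a chess player, suitably connected to an opponent, just is a chess player.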

This property carries over to the simulation of any other process that is essentially computational. So, if it happens that consciousness is a computational phenomenon, then a sufficiently faithful simulation of a conscious system would be a conscious system, provided it was connected to the environment in the appropriate way. This point is especially clear if the computations in question are somewhat modularizable, as might be the case for a system's self-model. The difference between a non-conscious tennis player and a conscious one might involve connections among its internal computational modules, and not the connections from there to its cameras and motors. There would then be no difference between the "consciousness module" and a detailed simulation of that "module"; they would be interchangeable, provided that they didn't differ too much in speed, size, and energy consumption. I use scare quotes here because I doubt that things will turn out to be that tidy. Nonetheless, no matter how the wires work out, the point is that nothing other than computation need be involved in consciousness, which is what Strong AI boils down to. Weak AI boils down to a sort of "cargo cult" whose rituals involve simulations of things someone only guesses might be important in some way.

Now that I've clarified the stakes, let's look at Searle's argument. It is ridiculously easy to refute. When he says that "the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding," he may be right about the second claim (depending on how literally you interpret "explains"), but he is completely wrong about the first claim, that the programmed computer understands something. As McCarthy says, "The Chinese Room Argument can be refuted in one sentence: Searle confuses the mental qualities of one computational process, himself for example, with those of another process that the first process might be interpreting, a process that understands Chinese, for example" (McCarthy, 2000). Searle's slightly awkward phrase "the programmed computer" gives the game away. Computers and software continually break our historically founded understanding of the identity of objects across time. Any computer user has (too often) had the experience of not knowing "whom" they're talking to when talking to their program. Listen to a layperson try to sort out the contributions to their current state of frustration made by the e-mail delivery program, the e-mail reading program, and the e-mail server. When you run a program you usually then talk to it. If you run two programs at once you switch back and forth between talking to one and talking to the other.19 The phrase "programmed computer" makes it sound as if programming it changes it into something you can talk to. The only reason to use such an odd phrase is that in the story Searle himself plays the role of the programmed computer, the entity that doesn't understand. By pointing at the "human CPU" and shouting loudly, he hopes to distract us from the abstract entity that is brought into existence by executing the story-understanding program.

We can state McCarthy's argument vividly by supposing that two CPUs are involved, as they might well be. The story-understanding program might be run on one for a while, then on the other, and so forth, as dictated by the internal economics of the operating system. Do AI researchers imagine that the ability to "understand" jumps back and forth between the two CPUs? If we replace the two CPUs by two people, does Strong AI predict that the ability to understand Chinese will jump back and forth between the two people (McDermott, 2001)? Of course not.

Symbol Grounding

In both of the preceding sections, it sometimes seems as if intentionality is the real issue, or what Harnad (1990, 2001) calls the symbol-grounding problem. The problem arises from the idea of a disembodied computer living in a realm of pure syntax, which we discussed in the section on Brian Cantwell Smith. Suppose that such a computer ran a simulation of the battle of Waterloo. That is, we intend it to simulate that battle, but for all we know there might be another encoding of its states that would make it be a simulation of coffee prices in Ecuador.20 What connects the symbols to the things they denote? In other words, what grounds the symbols?

This problem underlies some people's concerns about the Turing Test and the Chinese Room because the words in the Turing Test conversation might be considered to be ungrounded and therefore meaningless (Davidson, 1990), and the program and data structures being manipulated by the human CPU John Searle seem also to be disconnected from anything that could give them meaning.

As should be clear from the discussion in the section "Intentionality of Computational Systems," symbols get their meanings by being causally connected to the world. Harnad doesn't disagree with this, but he thinks that the connection must take the special form of neural networks, natural or artificial.21 The inputs to the networks must be sensory transducers. The outputs are neurons that settle into different stable patterns of activation depending on how the transducers are stimulated. The possible stable patterns and the way they classify inputs are learned over time as the network is trained by its owner's encounters with its surroundings.

How does the hybrid system find the invariant features of the sensory projection that make it possible to categorize and identify objects correctly? Connectionism, with its general pattern-learning capability, seems to be one natural candidate (though there may well be others): Icons, paired with feedback indicating their names, could be processed by a connectionist network that learns to identify icons correctly from the sample of confusable alternatives it has encountered, by dynamically adjusting the weights of the features and feature combinations that are reliably associated with the names in a way that (provisionally) resolves the confusion. It thereby reduces the icons to the invariant (confusion-resolving) features of the category to which they are assigned. The net result is that the grounding of the name to the objects that give rise to their sensory projections and their icons would be provided by neural networks (Harnad, 1990).
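A drastically scaled-down version of the mechanism Harnad describes can be written in a few lines. The sketch below (my own, not Harnad's model; the "icons" and their two transducer features are invented) trains a single perceptron to adjust feature weights until two confusable categories are reliably separated by their names:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights separating two named categories of feature vectors.
    samples: list of (feature_vector, label) with label in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Weight update: strengthen features reliably tied to the name.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical "icons": (redness, greenness) transducer readings,
# paired with feedback naming the category (1 = "red", 0 = "green").
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.1, 0.9), 0), ((0.2, 0.7), 0)]
w, b = train_perceptron(data)
print([classify(w, b, x) for x, _ in data])  # → [1, 1, 0, 0]
```

The learned weights are a cartoon of the "invariant (confusion-resolving) features": after training, the name is tied to whatever in the sensory projection reliably distinguishes the categories.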

The symbol-grounding problem, if it is a problem, requires no urgent solution, as far as I can see. I think it stems from a basic misunderstanding about what computationalism is and what the alternatives are. According to Harnad, "The predominant approach to cognitive modeling is still what has come to be called 'computationalism' . . . , the hypothesis that cognition is computation. The more recent rival approach is 'connectionism' . . . , the hypothesis that cognition is a dynamic pattern of connections and activations in a 'neural net'" (Harnad, 2001). Put this way, it seems clear that neural nets would be welcome under computationalism's "big tent," but Harnad spurns the invitation by imposing a series of fresh requirements. By "computation" he means "symbolic computation," which consists of syntactic operations on "symbol tokens." Analog computation is ruled out. Symbolic computation doesn't depend on the medium in which it is implemented, just so long as it is implemented somehow (because the syntactic categories of the symbol tokens will be unchanged). And last, but certainly not least, "the symbols and symbol manipulations in a symbol system [must be] systematically interpretable (Fodor & Pylyshyn, 1988): They can be assigned a semantics, and they mean something (e.g., numbers, words, sentences, chess moves, planetary motions, etc.)." The alternative is "trivial" computation, which produces "uninterpretable formal gibberish."

As I argue in McDermott (2001), these requirements have seldom been met by what most people call "computational" systems. The average computer programmer knows nothing about formal semantics or systematic interpretability. Indeed, in my experience it is quite difficult to teach a programmer about formal systems and semantics. One must scrape away layers of prior conditioning about how to "talk" to computers. Furthermore, as I write in the section "The Notion of a Computational System," few AI practitioners refuse to mix and match connectionist and symbolic programs. One must be careful about how one interprets what they say about their practice. Clancey (1999), in arguing for a connectionist architecture, describes the previous tradition as modeling the brain as a "wet" computer similar in important respects to the "dry" computers we use as models. He argues that we should replace it with a particular connectionist architecture. As an example of the change this would bring, he says (p. 30), "Cognitive models have traditionally treated procedural memory, including inference rules ('if X then Y'), as if human memory is just computer random-access memory. . . ." He proposes to "explore the hypothesis that a sequential association, such as an inference rule . . . , is a temporal relation of activation, such that if X implies Y," what is recorded is a "relation . . . of temporal activation, such that when X is presently active, Y is a categorization that is potentially active next" (p. 31). But he remains a committed computationalist through this seemingly discontinuous change. For instance, in discussing how the new paradigm would actually work, he writes, "The discussion of [insert detailed proposal here] illustrates how the discipline of implementing a process in a computer representation forces distinctions to be rediscovered and brings into question consistency of the theory" (p. 44).

The moral is that we must be careful to distinguish between two ways in which computers are used in psychological modeling: as implementation platform and as metaphor. The digital-computer metaphor might shed light on why we have a single stream of consciousness (∼ von Neumann instruction stream?), why we can only remember 7 ± 2 things (∼ size of our register set?), and why we have trouble with deep center-embedded sentences like "The boy the man the dog bit spanked laughed" (∼ stack overflow?). The metaphor may have had some potential in the 1950s, when cognitive science was just getting underway, but it's pretty much run out of steam at this point. Clancey is correct to point out how the metaphor may have affected cognitive science in ways that seemed too harmless to notice, but that in retrospect are hard to justify. For instance, the program counter in a computer makes pursuing a rigid list of tasks easy. If we help ourselves to a program counter in implementing a cognitive model, we may have begged an important question about how sequentiality is achieved in a parallel system like the brain.

What I argue is that the essence of computationalism is to believe that (a) brains are essentially computers and (b) digital computers can simulate them in all important respects, even if they aren’t digital at all. Because a simulation of a computation is a computation, the “digitality” of the digital computer cancels out. If symbol grounding is explained by some very special properties of a massively parallel neural network of a particular sort, then if that net can be simulated in real time on a cluster of parallel workstations, the cluster becomes a virtual neural net, which grounds symbols as well as a “real” one would.
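The cancellation argument can be illustrated with a minimal sketch (a toy network of my own invention, not a model from this chapter): a “parallel” synchronous update that a serial loop computes exactly, so the simulation performs the very computation it simulates.

```python
# Toy binary network: conceptually all units update at once ("in parallel"),
# but a serial loop computes exactly the same synchronous next state,
# because each unit reads only the OLD state vector.

def synchronous_step(state, weights, threshold=0.0):
    """One parallel update of all units, computed serially."""
    n = len(state)
    next_state = [0] * n
    for i in range(n):  # serial traversal over units
        net_input = sum(weights[i][j] * state[j] for j in range(n))
        next_state[i] = 1 if net_input > threshold else 0
    return next_state

weights = [[0, 1, -1],
           [1, 0, 1],
           [-1, 1, 0]]
print(synchronous_step([1, 0, 1], weights))  # [0, 1, 0]
```

Whatever input-output function the “real” parallel net computes, the serial simulation computes it too; that is all the cancellation argument needs.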

Perhaps this is the place to mention the paper by O’Brien and Opie (1999) that presents a “connectionist theory of phenomenal experience.” The theory makes a basic assumption: that a digital simulation of a conscious connectionist system would not be conscious. It is very hard to see how this could be true. It’s the zombie hypothesis, raised from the dead one more time. The “real” neural net is conscious, but the simulated one, in spite of operating in exactly the same way (plus or minus a little noise), would be experience-less – another zombie lives.

Conclusions

The contribution of artificial intelligence to consciousness studies has been slender so far, because almost everyone in the field would rather work on better defined, less controversial problems. Nonetheless, there do seem to be common themes running through the work of AI researchers that touches on phenomenal consciousness. Consciousness stems from the structure of the self-models that intelligent systems use to reason about themselves. A creature’s models of itself are like models of other systems, except for some characteristic indeterminacy about what counts as accuracy. To explain how an information-processing system can have a model of something, there must be a prior notion of intentionality that explains why and how symbols inside the system can refer to things. This theory of impersonal intentionality is based on the existence of harmonious match-ups between the states of the system and states of the world. The meanings of symbol structures are what the match-ups say they are.

Having established that a system’s model of that very system is a non-vacuous idea, the next step is to show that the model almost certainly will contain ways of thinking about how the system’s senses work. The difference between appearance and reality arises at this point, and allows the system to reason about its errors in order to reduce the chance of making them. But the self-model also serves to set boundaries to the questions that it can answer. The idea of a sensory quale arises as a useful way of cutting off useless introspection about how things are ultimately perceived and categorized.

Beyond this point it is hard to find consensus between those who believe that the just-so story the self-model tells its owner is all you need to explain phenomenal consciousness, and those who think that something more is needed. Frustratingly, we won’t be able to create systems and test hypotheses against them in the foreseeable future, because real progress on creating conscious programs awaits further developments in enhancing the intelligence of robots. There is no guarantee that AI will ever achieve the requisite level of intelligence, in which case this chapter has been pretty much wasted effort.

There are plenty of critics who don’t want to wait to see how well AI succeeds, because they think they have arguments that can shoot down the concept of machine consciousness or rule out certain forms of it, right now. We examined three such arguments: the accusation that AI is behaviorist on the subject of consciousness, the “Chinese Room” argument, and the symbol-grounding problem. In each case the basic computationalist working hypothesis survived intact: that the embodied brain is an “embedded” computer and that a reasonably accurate simulation of it would have whatever mental properties it has, including phenomenal consciousness.

Notes

1. I would be tempted to say there is a spectrum from “weak” to “strong” computationalism to reflect the different stances on these issues, but the terms “weak” and “strong” have been used by John Searle (1980) in a quite different way. See the section on the “Chinese room.”


2. I am taking this possibility seriously for now because everyone will recognize the issue and its relationship to the nature of qualia. But I follow Sloman & Chrisley (2003) in believing that cross-personal comparison of qualia makes no sense. See the section on Perlis and Sloman and McDermott (2001).

3. Turing actually proposed a somewhat different test. See Davidson (1990) for discussion. Nowadays this version is the one everyone works with.

4. The Loebner Prize is awarded every year to the writer of a program that appears “most human” to a panel of judges. You can see how close the programs are getting to fooling anyone by going to its Web site, http://www.loebner.net/Prizef/loebner-prize.html.

5. The “[z]” is used to flag zombie words whose meanings must not be confused with normal human concepts.

6. Even more fortunate, perhaps, is the fact that few will grant that foundational ontology is a problem in the first place. Those who think elementary particles invented us, rather than vice versa, are in the minority.

7. Or intelligent aliens, but this is an irrelevant variation on the theme.

8. By analogy with Christology in Christian theology, which ranges from high to low depending on how superhuman one believes Jesus to be.

9. For some readers this example will elicit fairly detailed visual images of shopping carts and umbrellas, and for those readers it’s plausible that the images are part of the mental-model machinery. But even people without much visual imagery can still have mental models and might still use them to reason about grocery shopping.

10. I have two reasons for positing a chessboard-recognition subroutine instead of a general-purpose vision system that recognizes chessboards and chess pieces in terms of more “primitive” elements: (1) many roboticists prefer to work with specialized perceptual systems, and (2) the qualia-like entities we will predict will be different in content from human qualia, which reduces the chances of jumping to conclusions about them.

11. Of course, what we’d like to be able to say here is that normal access is the access it was designed to support, and for most purposes that’s what we will say, even when evolution is the “designer.” But such basic concepts can’t depend on historical events that are arbitrarily far in the past.

12. For the complete story on free will, see McDermott (2001, Chapter 3). I referred to Minsky’s rather different theory above; McCarthy champions his own version in McCarthy & Hayes (1969).

13. One may view it as a bug that a concept, qualia, whose function is to end introspective questioning, has stimulated so much conversation! Perhaps if human evolution goes on long enough, natural selection will eliminate those of us who persist in talking about such things, especially while crossing busy streets.

14. Cf. Wittgenstein (1953): “The decisive movement in the conjuring trick has been made, and it was the very one we thought quite innocent.”

15. One occasionally hears news reports about attempts to build an artificial nose. When I hear such a report, I picture a device that measures concentrations of substances in the air. But perhaps the average person imagines a device that “smells things,” so that, for example, the smell of a rotten egg would be unpleasant for it. In any case, these news reports seem not to have engendered much controversy so far.

16. I realize that many people, for instance Robert Kirk (1994), believe that in principle something as simple as a lookup table could simulate intelligence. I don’t have space here to refute this point of view, except to note that in addition to the fact that the table would be larger than the known universe and take a trillion years to build, a computer carrying on a conversation by consulting it would not be able to answer a question about what time it is.

17. There is a popular belief that there is such a thing as “nonsymbolic” or “subsymbolic” cognitive science, as practiced by those who study artificial neural nets. As I mentioned in the section “The Notion of Computational System,” this distinction is usually unimportant, and the present context is an example. The goal of neural-net researchers is to explain conscious thought in terms of unconscious computational events in neurons, and as far as Searle is concerned, this is just the same fallacy all over again (Searle, 1990).


18. And a couple of other bits of information, such as whether each player still has castling as an option.

19. Technically I mean “process” here, not “program.” McCarthy’s terminology is more accurate, but I’m trying to be intelligible to technical innocents.

20. I believe these particular examples (Waterloo and Ecuador) were invented by someone other than me, but I have been unable to find the reference.

21. The fact that these are called “connectionist” is a mere pun in this context – I hope.

References

Baars, B. J. (1988). A cognitive theory of consciousness. New York: Guilford Press.

Baars, B. J. (1997). In the theater of consciousness: The workspace of the mind. New York: Oxford University Press.

Block, N. (1997). On a confusion about a function of consciousness. In N. Block, O. Flanagan, & G. Guzeldere (Eds.), The nature of consciousness: Philosophical debates (pp. 375–415). Cambridge, MA: MIT Press.

Block, N., Flanagan, O., & Guzeldere, G. (Eds.) (1997). The nature of consciousness: Philosophical debates. Cambridge, MA: MIT Press.

Boddy, M., & Dean, T. (1989). Solving time-dependent planning problems. Proceedings of the 11th International Joint Conference on Artificial Intelligence, 979–984.

Campbell, J. (1994). Past, space and self. Cambridge, MA: MIT Press.

Chalmers, D. (1996). The conscious mind: In search of a fundamental theory. New York: Oxford University Press.

Churchland, P. (1986). Neurophilosophy: Toward a unified science of the mind-brain. Cambridge, MA: MIT Press.

Churchland, P. (1988). Matter and consciousness: A contemporary introduction to the philosophy of mind. Cambridge, MA: MIT Press.

Clancey, W. J. (1999). Conceptual coordination: How the mind orders experience in time. Mahwah, NJ: Erlbaum.

Currie, K., & Tate, A. (1991). O-Plan: The open planning architecture. Artificial Intelligence, 52(1), 49–86.

Davidson, D. (1990). Turing’s test. In K. M. Said, W. Newton-Smith, R. Viale, & K. Wilkes (Eds.), Modelling the mind (pp. 1–11). Oxford: Clarendon Press.

Dawkins, R. (1976). The selfish gene. Oxford: Oxford University Press.

Dennett, D. C. (1969). Content and consciousness. London: Routledge & Kegan Paul.

Dennett, D. C. (1991a). Real patterns. Journal of Philosophy, 88, 27–51.

Dennett, D. C. (1991b). Consciousness explained. Boston: Little, Brown.

Douglas, G., & Saunders, S. (2003). Dan Dennett: Philosopher of the month. TPM Online: The Philosophers’ Magazine on the internet. Retrieved August 2, 2005, from http://www.philosophers.co.uk/cafe/philapr2003.htm.

Dretske, F. I. (1981). Knowledge and the flow of information. Cambridge, MA: MIT Press.

Feigenbaum, E. A., & Simon, H. A. (1984). EPAM-like models of recognition and learning. Cognitive Science, 8(4), 305–336.

Fodor, J. (1975). The language of thought. New York: Thomas Y. Crowell.

Fodor, J., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis. In S. Pinker & J. Mehler (Eds.), Connections and symbols (pp. 3–72). Cambridge, MA: MIT Press.

Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.

Harnad, S. (2001). Grounding symbols in the analog world with neural nets – a hybrid model. Psycoloquy. Retrieved August 2, 2005, from http://psycprints.ecs.soton.ac.uk/archive/00000163/.

Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.

Hayes-Roth, B. (1985). A blackboard architecture for control. Artificial Intelligence, 26(3), 251–321.

Hofstadter, D. R. (1979). Gödel, Escher, Bach: An eternal golden braid. New York: Basic Books.

Hofstadter, D. R., & Dennett, D. C. (1981). The mind’s I: Fantasies and reflections on self and soul. New York: Basic Books.

Jackson, F. (1982). Epiphenomenal qualia. Philosophical Quarterly, 32, 127–136.

Jaynes, J. (1976). The origins of consciousness in the breakdown of the bicameral mind. Boston: Houghton Mifflin.

Johnson-Laird, P. N. (1983). Mental models. Cambridge, MA: Harvard University Press.

Kirk, R. (1994). Raw feeling: A philosophical account of the essence of consciousness. Oxford: Oxford University Press.

Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. New York: Penguin Books.

Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8, 529–566.

McCarthy, J. (1990b). Ascribing mental qualities to machines. In J. McCarthy (Ed.), Formalizing common sense. Norwood, NJ: Ablex.

McCarthy, J. (1995a). Todd Moody’s zombies. Journal of Consciousness Studies. Retrieved June 26, 2006, from http://www-formal.stanford.edu/jmc/zombie/zombie.html.

McCarthy, J. (1995b). Making robots conscious of their mental states. Proceedings of the Machine Intelligence Workshop. Retrieved August 2, 2005, from http://www-formal.stanford.edu/jmc/consciousness.html.

McCarthy, J. (2000). John Searle’s Chinese room argument. Retrieved August 2, 2005, from http://www-formal.stanford.edu/jmc/chinese.html.

McCarthy, J., & Hayes, P. (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & D. Michie (Eds.), Machine intelligence 4 (pp. 463–502). Edinburgh: Edinburgh University Press.

McDermott, D. (2001). Mind and mechanism. Cambridge, MA: MIT Press.

Meltzer, B., & Michie, D. (Eds.). (1969). Machine intelligence 4. Edinburgh: Edinburgh University Press.

Minsky, M. (Ed.). (1968a). Semantic information processing. Cambridge, MA: MIT Press.

Minsky, M. (1968b). Matter, mind, and models. In M. Minsky (Ed.), Semantic information processing (pp. 425–432). Cambridge, MA: MIT Press.

Minsky, M. (1986). The society of mind. New York: Simon and Schuster.

Moody, T. C. (1994). Conversations with zombies. Journal of Consciousness Studies, 1(2), 196–200.

Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics, 38(8), 114–117.

Moravec, H. P. (1988, Summer). Sensor fusion in certainty grids for mobile robots. AI Magazine, 9, 61–74.

Moravec, H. (1999). Robot: Mere machine to transcendent mind. New York: Oxford University Press.

Nagel, E., & Newman, J. R. (1958). Gödel’s proof. New York: New York University Press.

O’Brien, G., & Opie, J. (1999). A connectionist theory of phenomenal experience. Behavioral and Brain Sciences, 22, 127–148.

Penrose, R. (1989). The emperor’s new mind: Concerning computers, minds, and the laws of physics. New York: Oxford University Press.

Penrose, R. (1994). Shadows of the mind: A search for the missing science of consciousness. New York: Oxford University Press.

Perlis, D. (1994). An error-theory of consciousness. Unpublished material, University of Maryland Computer Science, CS-TR-3324, College Park, MD.

Perlis, D. (1997). Consciousness as self-function. Journal of Consciousness Studies, 4(5/6), 509–525.

Pinker, S., & Mehler, J. (1988). Connections and symbols. Cambridge, MA: MIT Press.

Rey, G. (1997). A question about consciousness. In N. Block, O. Flanagan, & G. Guzeldere (Eds.), The nature of consciousness: Philosophical debates (pp. 461–482). Cambridge, MA: MIT Press.

Rumelhart, D. E., McClelland, J. L., & the PDP Research Group (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press.

Said, K. M., Newton-Smith, W., Viale, R., & Wilkes, K. (1990). Modelling the mind. Oxford: Clarendon Press.

Scheutz, M. (Ed.) (2002). Computationalism: New directions. Cambridge, MA: MIT Press.

Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3, 417–424.

Searle, J. R. (1990). Is the brain’s mind a computer program? Scientific American, 262, 26–31.

Searle, J. R. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.

Sloman, A. (1978). The computer revolution in philosophy. Hassocks, Sussex: Harvester Press.

Sloman, A. (2002). The irrelevance of Turing machines to artificial intelligence. In M. Scheutz (Ed.), Computationalism: New directions (pp. 87–127). Cambridge, MA: MIT Press.

Sloman, A., & Chrisley, R. (2003). Virtual machines and consciousness. Journal of Consciousness Studies, 10(4–5), 6–45.

Smith, B. C. (1984). Reflection and semantics in Lisp. Proceedings of the Conference on Principles of Programming Languages, 11, 23–35.

Smith, B. C. (1995). On the origins of objects. Cambridge, MA: MIT Press.

Smith, B. C. (2002). The foundations of computing. In M. Scheutz (Ed.), Computationalism: New directions (pp. 23–58). Cambridge, MA: MIT Press.

Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

Wilkes, K. (1990). Modelling the mind. In K. M. Said, W. Newton-Smith, R. Viale, & K. Wilkes (Eds.), Modelling the mind (pp. 63–82). Oxford: Clarendon Press.

Winograd, T. (1972). Understanding natural language. New York: Academic Press.

Wittgenstein, L. (1953). Philosophical investigations. New York: Macmillan.


C H A P T E R 7

Computational Models of Consciousness: A Taxonomy and Some Examples

Ron Sun and Stan Franklin

Abstract

This chapter aims to provide an overview of existing computational (mechanistic) models of cognition in relation to the study of consciousness, on the basis of psychological and philosophical theories and data. It examines various mechanistic explanations of consciousness in existing computational cognitive models. Serving as an example for the discussions, a computational model of the conscious/unconscious interaction, utilizing the representational difference explanation of consciousness, is described briefly. As a further example, a software agent model that captures another explanation of consciousness (the access explanation of consciousness) is also described. The discussions serve to highlight various possibilities in developing computational models of consciousness and in providing computational explanations of conscious and unconscious cognitive processes.

Introduction

In this chapter, we aim to present a short survey and a brief evaluation of existing computational (mechanistic) models of cognition in relation to the study of consciousness. The survey focuses on their explanations of the difference between conscious and unconscious cognitive processes on the basis of psychological and philosophical theories and data, as well as potential practical applications.

Given the plethora of models, theories, and data, we try to provide in this chapter an overall (and thus necessarily sketchy) examination of computational models of consciousness in relation to the available psychological data and theories, as well as the existing philosophical accounts. We come to some tentative conclusions as to what a plausible computational account should be like, synthesizing various operationalized psychological notions related to consciousness.



We begin by examining some foundational issues concerning computational approaches toward consciousness. Then, various existing models and their explanations of the conscious/unconscious distinction are presented. After examining a particular model embodying a two-system approach, we look at one embodying a unified (one-system) approach and then at a few additional models.

Computational Explanations of Consciousness

Work in the area of computational modeling of consciousness generally assumes the sufficiency and the necessity of mechanistic explanations. By mechanistic explanation, we mean any concrete computational processes, in the broadest sense of the term “computation.” In general, computation is a broad term that can be used to denote any process that can be realized on generic computing devices, such as Turing machines (or even beyond, if there is such a possibility). Thus, mechanistic explanations may utilize, in addition to standard computational notions, a variety of other conceptual constructs ranging, for example, from chaotic dynamics (Freeman, 1995), to “Darwinian” competition (Edelman, 1989), and to quantum mechanics (Penrose, 1994). (We leave out the issue of complexity for now.)

In terms of the sufficiency of mechanistic explanations, a general working hypothesis is succinctly expressed by the following statement (Jackendoff, 1987):

Hypothesis of computational sufficiency: every phenomenological distinction is caused by/supported by/projected from a corresponding computational distinction.

For the lack of a clearly better alternative, this hypothesis remains a viable working hypothesis in the area of computational models of consciousness, despite various criticisms (e.g., Damasio, 1994; Edelman, 1989; Freeman, 1995; Penrose, 1994; Searle, 1980).

On the other hand, the necessity of mechanistic explanations, according to the foregoing definition of mechanistic processes, should be intuitively obvious to anyone who is not a dualist. If one accepts the universality of computation, then computation, in its broadest sense, can be expected to include the necessary conditions for consciousness.

On the basis of such intuition, we need to provide an explanation of the computational/mechanistic basis of consciousness that answers the following questions. What kind of mechanism leads to conscious processes, and what kind of mechanism leads to unconscious processes? What is the functional role of conscious processes (Baars, 1988, 2002; Sun, 1999a, b)? What is the functional role of unconscious processes? There have been many such explanations in computational or mechanistic terms. These computational or mechanistic explanations are highly relevant to the science of consciousness, as they provide useful theoretical frameworks for further empirical work.

Another issue we need to address before we move on to details of computational work is the relation between biological/physiological models and computational models in general. The problem with biologically centered studies of consciousness in general is that the gap between phenomenology and physiology/biology is so great that something else may be needed to bridge it. Otherwise, if we rush directly into complex neurophysiological thickets (Edelman, 1989; Crick & Koch, 1990; Damasio et al., 1990; LeDoux, 1992), we may lose sight of the forests. Computation, in its broadest sense, can serve to bridge the gap. It provides an intermediate level of explanation in terms of processes, mechanisms, and functions and helps determine how various aspects of conscious and unconscious processes should figure into the architecture of the mind (Anderson & Lebiere, 1998; Sun, 2002). It is possible that an intermediate level between phenomenology and physiology/neurobiology might be more apt to capture fundamental characteristics of consciousness (Coward & Sun, 2004). This notion of an intermediate level of explanation has been variously expounded recently; for example, in terms of virtual machines by Sloman and Chrisley (2003).

Different Computational Accounts of Consciousness

Existing computational explanations of the conscious/unconscious distinction may be categorized based on the following different emphases: (1) differences in knowledge organization (e.g., the SN+PS view, to be detailed later), (2) differences in knowledge-processing mechanisms (e.g., the PS+SN view), (3) differences in knowledge content (e.g., the episode+activation view), (4) differences in knowledge representation (e.g., the localist+distributed view), or (5) different processing modes of the same system (e.g., the attractor view or the threshold view).

Contrary to some critics, the debate among these differing views is not analogous to a debate between algebraists and geometers in physics (which would be irrelevant). It is more analogous to the wave vs. particle debate in physics concerning the nature of light, which was truly substantive. Let us discuss some of the better known views concerning computational accounts of the conscious/unconscious distinction one by one.

First of all, some explanations are based on recognizing that there are two separate systems in the mind. The difference between the two systems can be explained in terms of differences in either knowledge organization, knowledge-processing mechanisms, knowledge content, or knowledge representation:

• The SN+PS view: an instance of the explanations based on differences in knowledge organization. As originally proposed by Anderson (1983) in his ACT* model, there are two types of knowledge: Declarative knowledge is represented by semantic networks (SN), and it is consciously accessible, whereas procedural knowledge is represented by rules in a production system (PS), and it is inaccessible. The difference lies in the two different ways of organizing knowledge – whether in an action-centered way (procedural knowledge) or in an action-independent way (declarative knowledge). Computationally, both types of knowledge are represented symbolically (using either symbolic semantic networks or symbolic production rules).1 The semantic networks use parallel spreading activation (Collins & Loftus, 1975) to activate relevant nodes, and the production rules compete for control through parallel matching and firing. The models embodying this view have been used for modeling a variety of psychological tasks, especially skill learning tasks (Anderson, 1983; Anderson & Lebiere, 1998).

• The PS+SN view: an instance of the explanations based on differences in knowledge-processing mechanisms. As proposed by Hunt and Lansman (1986), the “deliberate” computational process of production matching and firing in a production system (PS), which is serial in this case, is assumed to be a conscious process, whereas the spreading activation computation (Collins & Loftus, 1975) in semantic networks (SN), which is massively parallel, is assumed to be an unconscious process. The model based on this view has been used to model controlled and automatic processing data in the attention-performance literature (Hunt & Lansman, 1986). Note that this view is the exact opposite of the view advocated by Anderson (1983), in terms of the roles of the two computational mechanisms involved. Note also that the emphasis in this view is on the processing difference of the two mechanisms, serial vs. parallel, and not on knowledge organization.

• The algorithm+instance view: another instance of the explanations based on differences in knowledge-processing mechanisms. As proposed by Logan (1988) and also by Stanley et al. (1989), the computation involved in retrieval and use of instances of past experience is considered to be unconscious (Stanley et al., 1989) or automatic (Logan, 1988), whereas the use of “algorithms” involves conscious awareness. Here the term “algorithm” is not clearly defined and apparently refers to computation more complex than instance retrieval/use. Computationally, it was suggested that the use of an algorithm is under tight control and carried out in a serial, step-by-step way, whereas instances can be retrieved in parallel and effortlessly (Logan, 1988). The emphasis here is again on the differences in processing mechanisms. This view is also similar to the view advocated by Neal and Hesketh (1997), which emphasizes the unconscious influence of what they called episodic memory. Note that the views of Logan (1988), Stanley et al. (1989), and Neal and Hesketh (1997) are the exact opposite of the view advocated by Anderson (1983) and Bower (1996), in which instances/episodes are consciously accessed rather than unconsciously accessed.

• The episode+activation view: an instance of the explanations based on differences in knowledge content. As proposed by Bower (1996), unconscious processes are based on activation propagation through strengths or weights (e.g., in a connectionist fashion) between different nodes representing perceptual or conceptual primitives, whereas conscious processes are based on explicit episodic memory of past episodes. What is emphasized in this view is the rich spatial-temporal context in episodic memory (i.e., the ad hoc associations with contextual information, acquired on a one-shot basis), which is termed type-2 associations as opposed to regular type-1 associations (which are based on semantic relatedness). This emphasis somewhat distinguishes this view from other views concerning instances/episodes (Logan, 1988; Neal & Hesketh, 1997; Stanley et al., 1989).2 The reliance on memory of specific events in this view bears some resemblance to some neurobiologically motivated views that rely on the interplay of various memory systems, such as those advocated by Taylor (1997) and McClelland et al. (1995).

• The localist+distributed representation view: an instance of the explanations based on differences in knowledge representation. As proposed by Sun (1994, 2002), different representational forms used in different components may be used to explain the qualitative difference between conscious and unconscious processes. One type of representation is symbolic or localist, in which one distinct entity (e.g., a node in a connectionist model) represents a concept. The other type of representation is distributed, in which a non-exclusive set of entities (e.g., a set of nodes in a connectionist model) are used for representing one concept, and the representations of different concepts overlap each other; in other words, a concept is represented as a pattern of activations over a set of entities (e.g., a set of nodes). Conceptual structures (e.g., rules) can be implemented in the localist/symbolic system in a straightforward way by connections between relevant entities. In distributed representations, such structures (including rules) are diffusely duplicated in a way consistent with the meanings of the structures (Sun, 1994), which captures unconscious performance. There may be various connections between corresponding representations across the two systems. (A system embodying this view, CLARION, is described later.)

In contrast to these two-system views, there exist theoretical views that insist on the unitary nature of the conscious and the unconscious. That is, they hold that conscious and unconscious processes are different manifestations of the same underlying system. The difference between conscious and unconscious processes lies in the different processing modes for conscious versus unconscious information within the same


P1: JzG0521857430c07 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 27, 2007 20:23

computational models of consciousness: a taxonomy and some examples 155

system. There are several possibilities in this regard:

• The threshold view: As proposed by various researchers, including Bowers et al. (1990), the difference between conscious and unconscious processes can be explained by the difference between activations of mental representations above a certain threshold and activations of such representations below that threshold. When activations reach the threshold level, an individual becomes aware of the content of the activated representations; otherwise, although the activated representations may influence behavior, they will not be accessible consciously.
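A minimal sketch of the threshold view (the threshold value and the representation names are arbitrary): only supra-threshold activations enter awareness, yet sub-threshold activations still contribute to behavior.

```python
# Threshold-gated awareness over a set of activated representations.
THRESHOLD = 0.5

def conscious_contents(activations, threshold=THRESHOLD):
    """Return the representations whose activation reaches threshold."""
    return {rep for rep, act in activations.items() if act >= threshold}

def behavioral_bias(activations):
    """All activations, above or below threshold, influence behavior."""
    return sum(activations.values())

acts = {"word_seen": 0.8, "prime": 0.3}
aware = conscious_contents(acts)   # only "word_seen" is reportable
bias = behavioral_bias(acts)       # the sub-threshold prime still acts
```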

• The chunking view: As in the models described by Servan-Schreiber and Anderson (1987) and by Rosenbloom et al. (1993), a chunk is considered a unitary representation whose internal working is opaque (although its input/output are accessible). A chunk can be a production rule (as in Rosenbloom et al., 1993) or a short sequence of perceptual-motor elements (as in Servan-Schreiber & Anderson, 1987). Because of the lack of transparency of the internal working of a chunk, it is equated with implicit learning (Servan-Schreiber & Anderson, 1987) or automaticity (Rosenbloom et al., 1993). According to this view, the difference between conscious and unconscious processes is the difference between using multiple (simple) chunks (involving some consciousness) and using one (complex) chunk (involving no consciousness).
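The chunking contrast can be illustrated schematically (the chunks here are toy arithmetic steps, not production rules): composing several simple chunks leaves intermediate results accessible, whereas one compiled chunk exposes only its input and output.

```python
# Multiple simple chunks vs. one opaque compiled chunk.

def compose_chunks(chunks, x, trace):
    """Apply simple chunks in sequence, logging each visible step."""
    for name, f in chunks:
        x = f(x)
        trace.append((name, x))   # intermediate results are accessible
    return x

small = [("double", lambda v: v * 2), ("inc", lambda v: v + 1)]

def compiled_chunk(v):
    """One complex chunk: only input and output are accessible."""
    return v * 2 + 1

trace = []
y_stepwise = compose_chunks(small, 3, trace)  # interior steps visible
y_compiled = compiled_chunk(3)                # no trace available
```

Both routes compute the same function; only the stepwise route leaves a reportable record of its interior.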

• The attractor view: As suggested by the model of Mathis and Mozer (1996), being in a stable attractor of a dynamical system (a neural network in particular) leads to consciousness. The distinction between conscious and unconscious processes is reduced to the distinction between being in a stable attractor and being in a transient state. O'Brien and Opie (1998) proposed an essentially similar view. This view may be generalized to a general coherence view, in which the emphasis is placed on the role of internal consistency in producing consciousness. There has been support for this possibility from neuroscience, for example, in terms of a coherent "thalamocortical core" (Edelman & Tononi, 2000).
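The attractor view can be illustrated with a toy Hopfield-style network (assumed weights and a synchronous update rule, not Mathis and Mozer's actual model): from a given start state, the dynamics either settle into a stable attractor or remain transient.

```python
# Toy attractor dynamics: settle into a stored pattern or stay transient.

def sign(v):
    return 1 if v >= 0 else -1

def settle(weights, state, max_steps=20):
    """Synchronously update until the state stops changing."""
    for _ in range(max_steps):
        new = [sign(sum(w * s for w, s in zip(row, state))) for row in weights]
        if new == state:
            return new, True        # stable attractor reached
        state = new
    return state, False             # still transient

# Weights storing the pattern [1, -1, 1] via a Hebbian outer product
# (zero diagonal, as is standard for Hopfield networks).
p = [1, -1, 1]
W = [[p[i] * p[j] if i != j else 0 for j in range(3)] for i in range(3)]

final, stable = settle(W, [1, 1, 1])   # settles onto the stored pattern
```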

• The access view: As suggested by Baars (1988), consciousness is believed to help mobilize and integrate mental functions that are otherwise disparate and independent. Thus, consciousness is aimed at solving the relevance problem: finding the exact internal resources needed to deal with the current situation. Some evidence has accumulated for this view (Baars, 2002). A computational implementation of Baars' theory in the form of IDA (a running software agent system; Franklin et al., 1998) is described in detail later. See also Coward and Sun (2004).

The coexistence of these various views of consciousness seems quite analogous to the parable of the blind men and the elephant. Each view captures some aspect of the truth about consciousness, but the portion of the truth captured is limited by the view itself. None seems to capture the whole picture.

In the next two sections, we look into some details of two representative computational models, exemplifying the two-system and one-system views, respectively. The models illustrate what a plausible computational model of consciousness should be like, synthesizing various psychological notions and relating to various available psychological theories.

A Model Adopting the Representational Difference View

Let us look into the representational difference view as embodied in the cognitive architecture CLARION (which stands for Connectionist Learning with Rule Induction ON-line; Sun, 1997, 2002, 2003), as an example of the two-system views for explaining consciousness.


156 the cambridge handbook of consciousness

Figure 7.1. The CLARION model. [Schematic: the top level carries explicit representation and explicit knowledge, acquired by an explicit learning process; the bottom level carries implicit representation and implicit knowledge, acquired by an implicit learning process.]

The important premises of the subsequent discussion are the direct accessibility of conscious processes and the direct inaccessibility of unconscious processes. Conscious processes should be directly accessible – that is, directly verbally expressible – without involving intermediate interpretive or transformational steps, which is a requirement prescribed and/or accepted by many theoreticians (see, e.g., Clark, 1992; Hadley, 1995).3 Unconscious processes should be, in contrast, inaccessible directly (but they might be accessed indirectly through some

  Dimension             Bottom level                          Top level
  Cognitive phenomena   implicit learning, implicit memory,   explicit learning, explicit memory,
                        automatic processing, intuition       explicit reasoning, controlled processing
  Source of knowledge   trial-and-error; assimilation of      external sources; extraction from
                        explicit knowledge                    the bottom level
  Representation        distributed (micro)features           localist conceptual units
  Operation             similarity-based                      explicit symbol manipulation
  Characteristics       more context sensitive, fuzzy;        more crisp, precise;
                        less selective; more complex          more selective; simpler

Figure 7.2. Comparisons of the two levels of the CLARION architecture.

interpretive processes), thus exhibiting different psychological properties (see, e.g., Berry & Broadbent, 1988; Reber, 1989; more discussion later).

An example model in this regard is CLARION, which is a two-level model that uses localist and distributed representations in the two levels, respectively, and learns using two different methods in the two levels, respectively. In developing the model, four criteria were hypothesized (see Sun, 1994) on the basis of the aforementioned considerations: (1) direct accessibility of conscious processes; (2) direct inaccessibility of unconscious processes; and furthermore, (3) linkages from localist concepts to distributed features: once a localist concept is activated, its corresponding distributed representations (features) are also activated, as assumed in most cognitive models, ranging from Tversky (1977) to Sun (1995);4 and (4) linkages from distributed features to localist concepts: under appropriate circumstances, once some or most of the distributed features of a concept are activated, the localist concept itself can be activated to "cover" these features (roughly corresponding to categorization; Smith & Medin, 1981).
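Criteria (3) and (4) amount to bidirectional linkages between the two kinds of representation, which can be sketched as follows (the concept, its features, and the coverage threshold are hypothetical, not CLARION's actual encoding):

```python
# Top-down and bottom-up linkages between a localist concept node
# and its distributed feature nodes.
CONCEPT_FEATURES = {"bird": {"wings", "beak", "feathers"}}

def top_down(concept):
    """Criterion (3): concept node -> its distributed feature nodes."""
    return CONCEPT_FEATURES.get(concept, set())

def bottom_up(active_features, threshold=0.6):
    """Criterion (4): enough active features re-activate the covering
    concept (roughly, categorization)."""
    winners = []
    for concept, feats in CONCEPT_FEATURES.items():
        if len(active_features & feats) / len(feats) >= threshold:
            winners.append(concept)
    return winners

feats = top_down("bird")              # all three features light up
cats = bottom_up({"wings", "beak"})   # 2/3 of features suffice here
```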

The direct inaccessibility of unconscious knowledge can best be captured by a "subsymbolic" distributed representation such as that provided by a backpropagation network (Rumelhart et al., 1986), because representational units in a distributed representation


Figure 7.3. The implementation of CLARION. ACS denotes the action-centered subsystem, NACS the non-action-centered subsystem, MS the motivational subsystem, and MCS the metacognitive subsystem. The top level contains localist encoding of concepts and rules. The bottom level contains multiple (modular) connectionist networks for capturing unconscious processes. The interaction of the two levels and the information flows are indicated with arrows.

are capable of accomplishing tasks but are generally not directly interpretable (see Rumelhart et al., 1986; Sun, 1994). In contrast, conscious knowledge can be captured in computational modeling by a symbolic or localist representation (Clark & Karmiloff-Smith, 1993; Sun & Bookman, 1994), in which each unit has a clear conceptual meaning/interpretation (i.e., a semantic label). This captures the property of conscious processes being directly accessible and manipulable (Smolensky, 1988; Sun, 1994). This difference in representation leads to a two-level structure whereby each level uses one type of representation (Sun, 1994, 1995, 1997; Sun et al., 1996, 1998, 2001). The bottom level is based on distributed representation, whereas the top level is based on localist/symbolic representation. For learning, the bottom level uses gradual weight tuning, whereas the top level uses explicit, one-shot hypothesis-testing learning, in correspondence with the representational characteristics of the two levels. There are various connections across the two levels for exerting mutual influences. See Figure 7.1 for an abstract sketch of the model. The different characteristics of the two levels are summarized in Figure 7.2.

Let us look into some implementational details of CLARION. Note that the details of the model have been described extensively in a series of previous papers, including Sun (1997, 2002, 2003), Sun and Peterson (1998), and Sun et al. (1998, 2001). The model has a dual representational structure – implicit and explicit representations in two separate "levels" (Hadley, 1995; Seger,


1994). Essentially, it is a dual-process theory of mind (Chaiken & Trope, 1999). It also consists of a number of functional subsystems, including the action-centered subsystem, the non-action-centered subsystem, the metacognitive subsystem, and the motivational subsystem (see Figure 7.3).

Let us first focus on the action-centered subsystem of CLARION. In this subsystem, the two levels interact by cooperating in actions, through a combination of the action recommendations from the two levels, as well as by cooperating in learning through a bottom-up and a top-down process (to be discussed below). Action and learning in the action-centered subsystem may be described as follows:

1. Observe the current state x.
2. Compute in the bottom level the "values" of x associated with each of all the possible actions ai: Q(x, a1), Q(x, a2), . . . , Q(x, an) (to be explained below).
3. Find out all the possible actions (b1, b2, . . . , bm) at the top level, based on the input x (sent up from the bottom level) and the rules in place.
4. Compare or combine the values of the ai's with those of the bj's (sent down from the top level), and choose an appropriate action b.
5. Perform the action b, and observe the next state y and (possibly) the reinforcement r.
6. Update Q-values at the bottom level in accordance with the Q-Learning-Backpropagation algorithm (to be explained later).
7. Update the rule network at the top level using the Rule-Extraction-Refinement algorithm (to be explained later).
8. Go back to Step 1.

In the bottom level of the action-centered subsystem, implicit reactive routines are learned. A Q-value is an evaluation of the "quality" of an action in a given state: Q(x, a) indicates how desirable action a is in state x (which consists of some sensory input). The agent may choose an action in any state based on Q-values (for example, by choosing the action with the highest Q-value). To acquire the Q-values, one may use the Q-learning algorithm (Watkins, 1989), a reinforcement learning algorithm. It basically compares the values of successive actions and adjusts an evaluation function on that basis. It thereby develops reactive sequential behaviors.
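The Q-learning update just described can be sketched in tabular form (CLARION's bottom level actually implements it with backpropagation networks rather than a lookup table; the states, actions, and learning parameters here are illustrative):

```python
# Watkins' Q-learning update:
#   Q(x,a) <- Q(x,a) + alpha * (r + gamma * max_b Q(y,b) - Q(x,a))

def q_update(Q, x, a, r, y, actions, alpha=0.5, gamma=0.9):
    """Adjust the evaluation of action a in state x from one experience."""
    best_next = max(Q.get((y, b), 0.0) for b in actions)
    old = Q.get((x, a), 0.0)
    Q[(x, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
actions = ["left", "right"]
# One experience: in state 0, action "right" led to state 1 with reward 1.
q_update(Q, 0, "right", 1.0, 1, actions)
```

Repeated over many experiences, such updates propagate value backward along successful action sequences, yielding the reactive sequential behaviors described above.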

The bottom level of the action-centered subsystem is modular; that is, a number of small neural networks coexist, each of which is adapted to specific modalities, tasks, or groups of input stimuli. This coincides with the modularity claim (Baars, 1988; Cosmides & Tooby, 1994; Edelman, 1987; Fodor, 1983; Hirschfield & Gelman, 1994; Karmiloff-Smith, 1986) that much processing in the human mind is done by limited, (to some extent) encapsulated, specialized processors that are highly efficient. Some of these modules are formed evolutionarily; that is, they are given a priori to agents, reflecting their hardwired instincts and propensities (Hirschfield & Gelman, 1994). Others can be learned through interacting with the world (computationally, through various decomposition methods; e.g., Sun & Peterson, 1999).

In the top level of the action-centered subsystem, explicit conceptual knowledge is captured in the form of rules. Symbolic/localist representations are used. See Sun (2003) for further details of the encoding (they are not directly relevant here).

Humans are clearly able to learn implicit knowledge through trial and error, without necessarily utilizing a priori explicit knowledge (Seger, 1994). On top of that, explicit knowledge can be acquired, also from ongoing experience in the world, possibly through the mediation of implicit knowledge (i.e., bottom-up learning; see Karmiloff-Smith, 1986; Stanley et al., 1989; Sun, 1997, 2002; Willingham et al., 1989). The basic process of bottom-up learning is as follows (Sun, 2002). If an action decided by the bottom level is successful, the agent extracts a rule that corresponds to the action selected by the bottom level and adds the rule to the top level. Then, in subsequent interaction with the world, the agent verifies the extracted rule by considering the


outcome of applying the rule: If the outcome is not successful, the rule should be made more specific and exclusive of the current case; if the outcome is successful, the agent may try to generalize the rule to make it more universal (e.g., Michalski, 1983). The details of the bottom-up learning algorithm (the Rule-Extraction-Refinement algorithm) can be found in Sun and Peterson (1998). After rules have been learned, a variety of explicit reasoning methods may be used. Learning explicit conceptual representations at the top level can also enhance the learning of implicit reactive routines (reinforcement learning) at the bottom level.
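The extract-verify-refine cycle can be rendered schematically (a loose sketch, not the actual Rule-Extraction-Refinement algorithm of Sun and Peterson, 1998; the condition format and the two refinement operators are simplified placeholders):

```python
# Schematic bottom-up rule learning: extract on success, then
# generalize verified rules and specialize failed ones.
rules = []  # each rule: [set of conditions, action]

def extract(features, action):
    """After a successful bottom-level action, add a rule covering it."""
    rules.append([set(features), action])

def refine(i, success):
    """Generalize a verified rule; specialize one whose outcome failed."""
    cond, _action = rules[i]
    if success and len(cond) > 1:
        cond.remove(sorted(cond)[-1])      # more universal: drop a condition
    elif not success:
        cond.add("excludes_current_case")  # more specific: placeholder condition

extract({"obstacle_left", "target_ahead", "speed_low"}, "move_forward")
refine(0, success=True)    # verification succeeded: generalize
refine(0, success=False)   # later application failed: specialize
```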

Although CLARION can learn even when no a priori or externally provided knowledge is available, it can make use of such knowledge when it is available (cf. Anderson, 1983; Schneider & Oliver, 1991). To deal with instructed learning, externally provided knowledge (in the form of explicit conceptual structures such as rules, plans, routines, categories, and so on) should (1) be combined with autonomously generated conceptual structures at the top level (i.e., internalization) and (2) be assimilated into implicit reactive routines at the bottom level (i.e., assimilation). This process is known as top-down learning. See Sun (2003) for further details.

The non-action-centered subsystem represents general knowledge about the world, which is equivalent to the notion of semantic memory (as in, e.g., Quillian, 1968). It may be used for performing various kinds of retrieval and inference. It is under the control of the action-centered subsystem (through the actions of that subsystem). At the bottom level, associative memory networks encode non-action-centered implicit knowledge. Associations are formed by mapping an input to an output. The regular backpropagation learning algorithm can be used to establish such associations between pairs of inputs and outputs (Rumelhart et al., 1986).

At the top level of the non-action-centered subsystem, on the other hand, a general knowledge store encodes explicit non-action-centered knowledge (Sun, 1994). In this network, chunks are specified through dimensional values. A node is set up at the top level to represent a chunk. The chunk node (a symbolic representation) connects to its corresponding features (dimension-value pairs) represented as nodes in the bottom level (which form a distributed representation). Additionally, links between chunks at the top level encode explicit associations between pairs of chunks, known as associative rules. Explicit associative rules may be formed (i.e., learned) in a variety of ways (Sun, 2003).

On top of associative rules, similarity-based reasoning may be employed in the non-action-centered subsystem. During reasoning, a known (given or inferred) chunk may be automatically compared with another chunk. If the similarity between them is sufficiently high, the latter chunk is inferred (see Sun, 2003, for details). Similarity-based and rule-based reasoning can be intermixed, and as a result of this mixing, complex patterns of reasoning emerge. As shown by Sun (1994), different sequences of mixed similarity-based and rule-based reasoning capture essential patterns of human everyday (mundane, commonsense) reasoning.
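Mixing the two kinds of inference can be sketched as follows (the chunks, the overlap-based similarity measure, and the threshold are all illustrative): a chunk similar enough to a known chunk is inferred, and an explicit associative rule can then fire on the inferred chunk.

```python
# Chunks as sets of dimension-value pairs, plus one associative rule.
CHUNKS = {
    "sparrow": {("covering", "feathers"), ("moves", "flies"), ("size", "small")},
    "robin":   {("covering", "feathers"), ("moves", "flies"), ("size", "small")},
}
RULES = {"robin": "builds_nest"}  # explicit associative rule

def similarity(a, b):
    """Feature overlap relative to the target chunk's features."""
    return len(CHUNKS[a] & CHUNKS[b]) / len(CHUNKS[b])

def infer(known, threshold=0.9):
    """Similarity-based step followed by a rule-based step."""
    conclusions = set()
    for other in CHUNKS:
        if other != known and similarity(known, other) >= threshold:
            conclusions.add(other)              # inferred by similarity
            if other in RULES:
                conclusions.add(RULES[other])   # inferred by rule
    return conclusions

out = infer("sparrow")
```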

As in the action-centered subsystem, top-down or bottom-up learning may take place in the non-action-centered subsystem, either to extract explicit knowledge at the top level from the implicit knowledge at the bottom level or to assimilate explicit knowledge of the top level into implicit knowledge at the bottom level.

The motivational subsystem is concerned with drives and their interactions (Toates, 1986); that is, with why an agent does what it does. Simply saying that an agent chooses actions to maximize gains, rewards, or payoffs leaves open the question of what determines these things. The relevance of the motivational subsystem to the action-centered subsystem lies primarily in the fact that it provides the context in which the goals and the payoffs of the action-centered subsystem are set. It thereby influences


the working of the action-centered subsystem and, by extension, the working of the non-action-centered subsystem.

A bipartite system of motivational representation is again in place in CLARION. The explicit goals (such as "finding food") of an agent (which are tied to the working of the action-centered subsystem) may be generated based on internal drive states (for example, "being hungry"). See Sun (2003) for details.

Beyond low-level drives concerning physiological needs, there are also higher-level drives. Some of them are primary, in the sense of being "hardwired." For example, Maslow (1987) developed a set of such drives in the form of a "need hierarchy." Whereas primary drives are built-in and relatively unalterable, there are also "derived" drives, which are secondary, changeable, and acquired mostly in the process of satisfying primary drives.

The metacognitive subsystem is closely tied to the motivational subsystem. It monitors, controls, and regulates cognitive processes for the sake of improving cognitive performance (Nelson, 1993; Sloman & Chrisley, 2003; Smith et al., 2003). Control and regulation may take the form of setting goals for the action-centered subsystem, setting essential parameters of the action-centered and non-action-centered subsystems, interrupting and changing ongoing processes in those subsystems, and so on. Control and regulation may also be carried out by setting reinforcement functions for the action-centered subsystem on the basis of drive states. The metacognitive subsystem is likewise made up of two levels: the top level (explicit) and the bottom level (implicit).

Note that in CLARION there is thus a variety of memories: procedural memory (in the action-centered subsystem) in both implicit and explicit forms, general "semantic" memory (in the non-action-centered subsystem) in both implicit and explicit forms, episodic memory (in the non-action-centered subsystem), working memory (in the action-centered subsystem), goal structures (in the action-centered subsystem), and so on. See Sun (2003) for further details of these memories. As touched upon before, these memories are important for accounting for various forms of conscious and unconscious processes (see also, e.g., McClelland et al., 1995; Schacter, 1990; Taylor, 1997).

CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill learning tasks have been simulated using CLARION, spanning the spectrum from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task (see Sun, 2002). Among them, SRT, AGL, and PC are typical implicit learning tasks, highly relevant to the issue of consciousness because they operationalize the notion of consciousness in the context of psychological experiments (Coward & Sun, 2004; Reber, 1989; Seger, 1994; Sun et al., 2005), whereas TOH and AA are typical high-level cognitive skill acquisition tasks. In addition, extensive work has been done on a complex minefield navigation task (see Sun & Peterson, 1998; Sun et al., 2001). Metacognitive and motivational simulations have also been undertaken, as have social simulation tasks (e.g., Sun & Naveh, 2004).

In evaluating the contribution of CLARION to our understanding of consciousness, we note that simulations using CLARION provide detailed, process-based interpretations of experimental data related to consciousness, in the context of a broadly scoped cognitive architecture and a unified theory of cognition. Such interpretations are important for a precise, process-based understanding of consciousness and other aspects of cognition, leading to a better appreciation of the role of consciousness in human cognition (Sun, 1999a). CLARION also makes quantitative and qualitative predictions regarding cognition in the areas of memory, learning, motivation, metacognition, and so on. These predictions either


have been experimentally tested already or are in the process of being tested (see, e.g., Sun, 2002; Sun et al., 2001, 2005). Because of the complex structures and their complex interactions specified within the framework of CLARION, it has a lot to say about the roles that different types of processes, conscious or unconscious, play in human cognition, as well as about their synergy (Sun et al., 2005).

Comparing CLARION with Bower (1996), the latter may be viewed as a special case of CLARION for dealing specifically with implicit memory phenomena. The type-1 and type-2 connections, hypothesized by Bower (1996) as the main explanatory constructs, can be equated roughly to bottom-level and top-level representations, respectively. In addition to making the distinction between type-1 and type-2 connections, Bower (1996) also endeavored to specify the details of multiple pathways of spreading activation in the bottom level. These pathways were phonological, orthographical, semantic, and other connections that store long-term implicit knowledge. In the top level, associated with type-2 connections, it was claimed on the other hand that rich contextual information was stored. These details nicely complement the specification of CLARION and can thus be incorporated into the model.

The proposal by McClelland et al. (1995) that there are complementary learning systems in the hippocampus and neocortex is also relevant here. According to their account, cortical systems learn slowly, and the learning of new information destroys the old unless the learning of new information is interleaved with ongoing exposure to the old. To resolve these two problems, new information is initially stored in the hippocampus, an explicit memory system in which crisp, explicit representations are used to minimize interference (so that catastrophic interference is avoided there). This allows rapid learning of new material. The new information stored in the hippocampus is then assimilated into cortical systems. The assimilation is interleaved with the assimilation of all other information in the hippocampus and with ongoing events. Weights are adjusted by a small amount after each experience, so that the overall direction of weight change is governed by the structure present in the ensemble of events and experiences, using distributed representations (with weights). Therefore, catastrophic interference is avoided in cortical systems. This model is very similar to the two-level idea of CLARION, in that it not only adopts a two-system view but also utilizes representational differences between the two systems. However, in contrast to this model, which captures only what may be termed top-down learning (that is, learning that proceeds from the conscious to the unconscious), CLARION can capture both top-down learning (from the top level to the bottom level) and bottom-up learning (from the bottom level to the top level). See Sun et al. (2001) and Sun (2002) for details of bottom-up learning.

Turning to the declarative/procedural knowledge models, ACT* (Anderson, 1983) is made up of a semantic network (for declarative knowledge) and a production system (for procedural knowledge). ACT-R is a descendant of ACT*, in which procedural learning is limited to production formation through mimicking, and production firing is based on log odds of success. CLARION succeeds in explaining two issues that ACT did not address. First, whereas ACT takes a mostly top-down approach toward learning (i.e., from given declarative knowledge to procedural knowledge), CLARION can proceed bottom-up. Thus, CLARION can account for implicit learning better than ACT (see Sun, 2002, for details). Second, in ACT both types of knowledge are represented in explicit, symbolic forms (i.e., semantic networks and productions), and thus ACT does not explain, from a representational viewpoint, the differences in conscious accessibility (Sun, 1999b). CLARION accounts for this difference through the use of two different forms of representation: top-level knowledge is represented explicitly and is thus consciously accessible, whereas bottom-level knowledge is represented implicitly and


thus inaccessible. This distinction in CLARION is therefore intrinsic, instead of assumed as in ACT (Sun, 1999b).

Comparing CLARION with Hunt and Lansman's (1986) model, there are similarities. The production system in Hunt and Lansman's model clearly resembles the top level in CLARION, in that both use explicit manipulations in much the same way. Likewise, the spreading activation in the semantic network in Hunt and Lansman's model resembles the connectionist network in the bottom level of CLARION, because the same kind of spreading activation is used in both models, although the representation in Hunt and Lansman's model was symbolic, not distributed. Because of the uniformly symbolic representations used in Hunt and Lansman's model, it does not convincingly explain the qualitative difference between conscious and unconscious processes (see Sun, 1999b).

An Application of the Access View

Let us now examine an application of the access view of consciousness in building a practically useful system. The access view is a rather popular approach in computational accounts of consciousness (Baars, 2002), and it therefore deserves some attention. It is also presented here as an example of the various one-system views.

Most computational models of cognitive processes are designed to predict experimental data. IDA (Intelligent Distribution Agent), in contrast, models consciousness in the form of an autonomous software agent (Franklin & Graesser, 1997). Specifically, IDA was developed for Navy applications (Franklin et al., 1998). At the end of each sailor's tour of duty, he or she is assigned to a new billet in a process called distribution. The Navy employs almost 300 people (called detailers) to effect these new assignments. IDA's task is to play the role of a detailer.

Designing IDA presents both communication problems and action selection problems involving constraint satisfaction. It must communicate with sailors via e-mail and in English, understanding the content and producing human-like responses. It must access a number of existing Navy databases, again understanding the content. It must see that the Navy's needs are satisfied while adhering to Navy policies. For example, a particular ship may require a certain number of sonar technicians with the requisite types of training. It must hold down moving costs. And it must cater to the needs and desires of the sailor as well as possible, which includes negotiating with the sailor via e-mail correspondence in natural language. Finally, it must authorize the finally selected new billet and start the writing of the sailor's orders.

Although the IDA model was not initially developed to reproduce experimental data, it is nonetheless based on psychological and neurobiological theories of consciousness and does generate hypotheses and qualitative predictions (Baars & Franklin, 2003; Franklin et al., 2005). IDA successfully implements much of global workspace theory (Baars, 1988), and there is a growing body of empirical evidence supporting that theory (Baars, 2002). IDA's flexible cognitive cycle has also been used to analyze the relation of consciousness to working memory at a fine level of detail, offering explanations of such classical working memory tasks as using visual imagery to gain information and rehearsing a telephone number (Baars & Franklin, 2003; Franklin et al., 2005).

In his global workspace theory (see Figure 7.4 and Chapter 8), Baars (1988) postulates that human cognition is implemented by a multitude of relatively small, special-purpose processors, which are almost always unconscious (i.e., the modularity hypothesis discussed earlier). Communication between them is rare and occurs over a narrow bandwidth. Coalitions of such processors find their way into a global workspace (and thereby into consciousness). This limited-capacity workspace serves to broadcast the message of the coalition to all the unconscious processors, in order to recruit other processors to join in handling the current novel situation or in solving the current

Page 181: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: JzG0521857430c07 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 27, 2007 20:23

computational models of consciousness: a taxonomy and some examples 163

problem. Thus consciousness, in this theory,allows us to deal with novel or problem-atic situations that cannot be dealt with eff-ciently, or at all, by habituated unconsciousprocesses. In particular, it provides accessto appropriately useful resources. Globalworkspace theory offers an explanation forthe limited capacity of consciousness. Largemessages would be overwhelming to tinyprocessors. In addition, all activities of theseprocessors take place under the auspices ofcontexts: goal contexts, perceptual contexts,conceptual contexts, and/or cultural con-texts. Though contexts are typically uncon-scious, they strongly influence consciousprocesses.

Let us look into some details of the IDA architecture and its main mechanisms. At the higher level, the IDA architecture is modular, with module names borrowed from psychology (see Figure 7.5). There are modules for Perception, Working Memory, Autobiographical Memory, Transient Episodic Memory, Consciousness, Action Selection, Constraint Satisfaction, Language Generation, and Deliberation.

In the lower level of IDA, the processors postulated by global workspace theory are implemented by “codelets.” Codelets are small pieces of code running as independent threads, each of which is specialized for some relatively simple task. They often play the role of “demons,”5 waiting for a particular situation to occur in response to which they should act. Codelets also correspond more or less to Edelman’s neuronal groups (Edelman, 1987) or Minsky’s agents (Minsky, 1985). Codelets come in a number of varieties, each with different functions to perform. Most of these codelets subserve some high-level entity, such as a behavior. However, some codelets work on their own, performing such tasks as watching for incoming e-mail and instantiating goal structures. An important type of codelet that works on its own is the attention codelet, which serves to bring information to “consciousness.”
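The “demon” idea can be made concrete with a small sketch. This is purely illustrative (the class, trigger, and state names are hypothetical, not IDA’s actual code): each codelet runs as an independent thread that idles until its trigger condition holds in shared state, then acts once.

```python
import threading
import time

class Codelet(threading.Thread):
    """A tiny 'demon': watches shared state for its trigger, then acts."""
    def __init__(self, name, trigger, action, state, lock):
        super().__init__(daemon=True)
        self.name, self.trigger, self.action = name, trigger, action
        self.state, self.lock = state, lock
        self.fired = threading.Event()

    def run(self):
        while not self.fired.is_set():
            with self.lock:
                if self.trigger(self.state):
                    self.action(self.state)
                    self.fired.set()
            time.sleep(0.01)  # poll; a real system might block on events instead

state = {"inbox": []}
lock = threading.Lock()
# A codelet that waits for incoming mail and "notices" it when it arrives.
c = Codelet("watch-email",
            trigger=lambda s: bool(s["inbox"]),
            action=lambda s: s.update(noticed=s["inbox"].pop(0)),
            state=state, lock=lock)
c.start()
with lock:
    state["inbox"].append("new billet request")
c.fired.wait(timeout=2)
print(state["noticed"])  # → new billet request
```

Many such threads running side by side, each with a narrow trigger, give the flavor of Baars’ multitude of special-purpose unconscious processors.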

IDA senses only strings of characters, which are not imbued with meaning but which correspond to primitive sensations, like, for example, the patterns of activity on the rods and cones of the retina. These strings may come from e-mail messages, an operating system message, or from a database record.

Figure 7.4. Baars’ global workspace theory. (Figure labels: input processors; global workspace (conscious); the dominant context hierarchy; other available contexts; competing contexts; goal contexts; perceptual contexts; conceptual contexts.)

Figure 7.5. IDA’s cognitive cycle. (Figure labels: stimuli from the external and internal environments; senses and internal senses; sensory memory; perception codelets; perceptual memory (slipnet); percept; working memory; cues and local associations with transient episodic memory and autobiographical memory; long-term working memory; consolidation; attention codelets; competition for consciousness; winning coalition; conscious broadcast; procedural update; behavior codelets in priming mode that instantiate, bind, and activate; action selection (behavior net); action selected and taken (behavior codelets).)

The perception module employs analysis of surface features for natural-language understanding. It partially implements perceptual symbol system theory (Barsalou, 1999); perceptual symbols serve as a uniform system of representations throughout the system. Its underlying mechanism constitutes a portion of the Copycat architecture (Hofstadter & Mitchell, 1994). IDA’s perceptual memory takes the form of a semantic net with activation passing, called the slipnet (see Figure 7.6). The slipnet embodies the perceptual contexts and some conceptual contexts from global workspace theory. Nodes of the slipnet constitute the agent’s perceptual symbols. Perceptual codelets recognize various features of the incoming stimulus, that is, various concepts. Perceptual codelets descend on an incoming message, looking for words or phrases they recognize. When such are found, appropriate nodes in the slipnet are activated. This activation passes around the net until it settles. A node (or several) is selected by its high activation, and the appropriate template(s) is filled by codelets with selected items from the message. The information thus created from the incoming message is then written to the workspace (working memory, to be described below), making it available to the rest of the system.
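The settling process can be sketched as a toy spreading-activation net. Everything here is illustrative (the node names, link weights, and decay constants are invented, not IDA’s slipnet): each step, nodes decay and pass a fraction of their activation along weighted links until the activations stop changing.

```python
# Toy slipnet: a recognized surface form ("NRFK") activates its concept
# node, and activation spreads onward to more abstract nodes.
links = {                       # src -> [(dst, weight), ...]; illustrative graph
    "NRFK": [("norfolk", 0.8)],
    "norfolk": [("location", 0.9)],
    "location": [],
}

def settle(activation, links, spread=0.5, decay=0.9, steps=20, eps=1e-6):
    """Iterate decay + weighted spreading until the net settles (or steps run out)."""
    act = dict(activation)
    for _ in range(steps):
        new = {n: a * decay for n, a in act.items()}
        for src, outs in links.items():
            for dst, w in outs:
                new[dst] += spread * w * act[src]
        if max(abs(new[n] - act[n]) for n in new) < eps:
            return new
        act = new
    return act

# A perceptual codelet has just recognized the string "NRFK" in a message:
act = settle({"NRFK": 1.0, "norfolk": 0.0, "location": 0.0}, links)
winner = max(act, key=act.get)
print(winner, round(act[winner], 3))
```

The node(s) left with high activation after settling would then be the ones whose templates get filled and written to the workspace.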

The results of this process, information created by the agent for its own use, are written to the workspace (working memory, not to be confused with Baars’ global workspace). (Almost all of IDA’s modules either write to the workspace, read from it, or both.)

IDA employs sparse distributed memory (SDM) as its major associative memory (Anwar & Franklin, 2003; Kanerva, 1988). SDM is a content-addressable memory. Being content addressable means that items in memory can be retrieved by using part of their contents as a cue, rather than having to know the item’s address in memory.
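A minimal Kanerva-style SDM can be sketched as follows. This is a simplified illustration (the sizes, radius, and class layout are chosen for the example, not taken from IDA): bit-vectors are stored into every “hard location” whose random address lies within a Hamming radius of the write address, and reads sum the counters of locations near the cue.

```python
import random

class SDM:
    """Minimal sparse distributed memory sketch (illustrative only)."""
    def __init__(self, n_bits=64, n_locations=500, radius=28, seed=1):
        rng = random.Random(seed)
        self.n_bits, self.radius = n_bits, radius
        self.addresses = [[rng.randint(0, 1) for _ in range(n_bits)]
                          for _ in range(n_locations)]
        self.counters = [[0] * n_bits for _ in range(n_locations)]

    def _near(self, addr):
        # indices of hard locations within Hamming radius of addr
        return [i for i, a in enumerate(self.addresses)
                if sum(x != y for x, y in zip(a, addr)) <= self.radius]

    def write(self, addr, data):
        for i in self._near(addr):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, cue):
        sums = [0] * self.n_bits
        for i in self._near(cue):
            for j in range(self.n_bits):
                sums[j] += self.counters[i][j]
        return [1 if s > 0 else 0 for s in sums]

rng = random.Random(2)
mem = SDM()
pattern = [rng.randint(0, 1) for _ in range(64)]
mem.write(pattern, pattern)           # auto-associative store
noisy = pattern[:]                    # cue: the pattern with a few bits flipped
for j in rng.sample(range(64), 5):
    noisy[j] ^= 1
recalled = mem.read(noisy)
print(sum(a == b for a, b in zip(recalled, pattern)), "of 64 bits recovered")
```

Because retrieval needs only a partial or noisy cue, no explicit memory address is ever required, which is the content-addressability the text describes.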

Reads and writes, to and from associative memory, are accomplished through a gateway within the workspace called the focus. When any item is written to the workspace, another copy is written to the read registers of the focus. The contents of these read registers of the focus are then used as an address to query associative memory. The results of this query – that is, whatever IDA associates with this incoming information – are written into their own registers in the focus.


This may include some emotion and some action previously taken. Thus, associations with any incoming information, either from the outside world or from some part of IDA itself, are immediately available. (Writes to associative memory are made later and are described below.)

In addition to long-term memory, IDA includes a transient episodic memory (Ramamurthy, D’Mello, & Franklin, 2004). Long-term, content-addressable, associative memories are not typically capable of retrieving the details of the latest of a long sequence of quite similar events (e.g., where I parked in the parking garage this morning or what I had for lunch yesterday). The distinguishing details of such events tend to blur due to interference from similar events. In IDA, this problem is solved by the addition of a transient episodic memory implemented with a sparse distributed memory. This SDM decays so that past sequences of similar events no longer interfere with the latest such events.
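One way to realize such decay can be sketched with a simplified stand-in for the decaying SDM (the class, half-life, and similarity measure below are invented for illustration, not IDA’s implementation): each stored trace’s retrieval strength fades exponentially with age, so among many similar events the most recent one dominates recall.

```python
import math

class TransientEpisodicMemory:
    """Decaying episodic store: a similar cue retrieves the most recent trace,
    because older traces fade instead of interfering."""
    def __init__(self, half_life=5.0):
        self.rate = math.log(2) / half_life   # exponential decay constant
        self.traces = []                      # (time, feature set, detail)

    def store(self, t, features, detail):
        self.traces.append((t, set(features), detail))

    def recall(self, t_now, cue):
        cue = set(cue)
        def score(trace):
            t, feats, _ = trace
            overlap = len(cue & feats) / max(len(cue | feats), 1)  # Jaccard similarity
            return overlap * math.exp(-self.rate * (t_now - t))    # decayed by age
        best = max(self.traces, key=score, default=None)
        return best[2] if best else None

mem = TransientEpisodicMemory(half_life=5.0)
# Three near-identical parking events on successive mornings:
mem.store(0, {"park", "garage"}, "level 2, row F")
mem.store(1, {"park", "garage"}, "level 4, row A")
mem.store(2, {"park", "garage"}, "level 1, row C")
print(mem.recall(t_now=3, cue={"park", "garage"}))  # → level 1, row C
```

Without the decay factor, all three traces would score identically and today’s parking spot could not be singled out – which is exactly the interference problem the text describes.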

The apparatus for producing “consciousness” consists of a coalition manager, a spotlight controller, a broadcast manager, and a collection of attention codelets that recognize novel or problematic situations. Attention codelets have the task of bringing information to “consciousness.” Each attention codelet keeps a watchful eye out for some particular situation to occur that might call for “conscious” intervention. Upon encountering such a situation, the appropriate attention codelet will be associated with the small number of information codelets that carry the information describing the situation. This association should lead to this small number of codelets, together with the attention codelet that collected them, becoming a coalition. Codelets also have activations. An attention codelet increases its activation in proportion to how well the current situation fits its particular interest, so that its coalition, if one is formed, might compete for “consciousness.”

Figure 7.6. A portion of the slipnet in IDA. (Figure labels: a location concept linked to city nodes – San Diego, Miami, Norfolk, Jacksonville – with surface variants such as “nor,” “norfolk,” and “NRFK” feeding the Norfolk node; information request; acceptance preference.)

In IDA, the coalition manager is responsible for forming and tracking coalitions of codelets. Such coalitions are initiated on the basis of the mutual associations between the member codelets. At any given time, one of these coalitions finds its way to “consciousness,” chosen by the spotlight controller, which picks the coalition with the highest average activation among its member codelets. Baars’ global workspace theory calls for the contents of “consciousness” to be broadcast to each of the codelets in the system and, in particular, to the behavior codelets. The broadcast manager accomplishes this task.
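The coalition manager, spotlight controller, and broadcast manager can be sketched together (class and field names are illustrative, not IDA’s API): the spotlight picks the coalition with the highest average member activation, and the broadcast sends its merged contents to every codelet.

```python
from dataclasses import dataclass, field

@dataclass
class Codelet:
    name: str
    activation: float
    content: dict = field(default_factory=dict)
    def receive(self, broadcast):
        self.content["heard"] = broadcast   # every codelet hears the broadcast

@dataclass
class Coalition:
    members: list
    def average_activation(self):
        return sum(c.activation for c in self.members) / len(self.members)
    def contents(self):
        merged = {}
        for c in self.members:
            merged.update(c.content)
        return merged

def spotlight(coalitions):
    """Spotlight controller: highest average activation wins."""
    return max(coalitions, key=lambda co: co.average_activation())

def broadcast(coalition, all_codelets):
    """Broadcast manager: send the winning contents to every codelet."""
    msg = coalition.contents()
    for c in all_codelets:
        c.receive(msg)
    return msg

attn = Codelet("attention", 0.9, {"situation": "unexpected result"})
info = Codelet("info", 0.5, {"detail": "order not confirmed"})
other = Codelet("bystander", 0.2)
winner = spotlight([Coalition([attn, info]), Coalition([other])])
msg = broadcast(winner, [attn, info, other])
print(sorted(msg))  # → ['detail', 'situation']
```

Note that even the losing “bystander” codelet receives the broadcast – the global availability that, in the theory, lets relevant but unanticipated processors volunteer their help.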

IDA depends on the idea of a behavior net (Maes, 1989; Negatu & Franklin, 2002) for high-level action selection in the service of built-in drives. It has several distinct drives operating in parallel, and these drives vary in urgency as time passes and the environment changes. A behavior net is composed of behaviors and their various links. A behavior has preconditions as well as additions and deletions. A behavior also has an activation, a number intended to measure the behavior’s relevance to both the current environment (external and internal) and its ability to help satisfy the various drives it serves.

The activation comes from activation stored in the behaviors themselves, from the external environment, from drives, and from internal states. The environment awards activation to a behavior for each of its true preconditions. The more relevant a behavior is to the current situation, the more activation it receives from the environment. (This source of activation tends to make the system opportunistic.) Each drive awards activation to every behavior that, by being active, will help satisfy that drive. This source of activation tends to make the system goal directed. Certain internal states of the agent can also send activation to the behavior net. This activation, for example, might come from a coalition of codelets responding to a “conscious” broadcast. Finally, activation spreads from behavior to behavior along links.
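The two main activation sources can be sketched in a Maes-style single pass (the behavior names, drives, and gains below are invented for illustration; a full net would also require all preconditions to hold before execution and would spread activation along successor and predecessor links):

```python
# Toy behavior-net activation pass: environment rewards true preconditions,
# drives reward behaviors that serve them; the most active behavior is chosen.
behaviors = {
    "negotiate_billet": {"preconditions": {"offer_ready", "sailor_replied"},
                         "serves": {"satisfy_sailor"}, "activation": 0.0},
    "query_database":   {"preconditions": {"request_received"},
                         "serves": {"satisfy_navy"}, "activation": 0.0},
}
environment = {"request_received", "offer_ready"}      # currently true conditions
drives = {"satisfy_navy": 0.8, "satisfy_sailor": 0.4}  # current drive urgencies

ENV_UNIT, DRIVE_GAIN = 0.3, 1.0
for name, b in behaviors.items():
    # opportunistic pull: one unit per precondition true in the environment
    b["activation"] += ENV_UNIT * len(b["preconditions"] & environment)
    # goal-directed pull: urgency of each drive this behavior helps satisfy
    b["activation"] += DRIVE_GAIN * sum(drives[d] for d in b["serves"])

chosen = max(behaviors, key=lambda n: behaviors[n]["activation"])
print(chosen)  # → query_database
```

Here the urgent Navy drive outweighs the sailor drive even though both behaviors have some environmental support, showing how the same numbers make the system both opportunistic and goal directed.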

IDA’s behavior net acts in concert with its “consciousness” mechanism to select actions (Negatu & Franklin, 2002). Suppose some piece of information is written to the workspace by perception or some other module. Attention codelets watch both it and the resulting associations. One of these attention codelets may decide that this information should be acted upon. This codelet would then attempt to take the information to “consciousness,” perhaps along with any discrepancies it may find with the help of associations. If the attempt is successful, the coalition manager makes a coalition of them, the spotlight controller eventually selects that coalition, and the contents of the coalition are broadcast to all the codelets. In response to the broadcast, appropriate behavior-priming codelets perform three tasks: an appropriate goal structure is instantiated in the behavior net, the codelets bind variables in the behaviors of that structure, and the codelets send activation to the currently appropriate behavior of the structure. Eventually that behavior is chosen to be acted upon. At this point, information about the current emotion and the currently executing behavior is written to the focus by the behavior codelets associated with the chosen behavior. The current contents of the write registers in the focus are then written to associative memory. The rest of the behavior codelets associated with the chosen behavior then perform their tasks. Thus, an action has been selected and carried out through collaboration between “consciousness” and the behavior net.

This background information on the IDA architecture and mechanisms should enable the reader to understand IDA’s cognitive cycle (Baars & Franklin, 2003; Franklin et al., 2005). The cognitive cycle specifies the functional roles of memory, emotions, consciousness, and decision making in cognition, according to global workspace theory. Below, we sketch the steps of the cognitive cycle; see Figure 7.5 for an overview.

1. Perception. Sensory stimuli, external or internal, are received and interpreted by perception. This stage is unconscious.

2. Percept to Preconscious Buffer. The percept is stored in preconscious buffers of IDA’s working memory.

3. Local Associations. Using the incoming percept and the residual contents of the preconscious buffers as cues, local associations are automatically retrieved from transient episodic memory and from long-term autobiographical memory.

4. Competition for Consciousness. Attention codelets, whose job is to bring relevant, urgent, or insistent events to consciousness, gather information, form coalitions, and actively compete against each other. (The competition may also include attention codelets from a recent previous cycle.)

5. Conscious Broadcast. A coalition of codelets, typically an attention codelet and its covey of related information codelets carrying content, gains access to the global workspace and has its contents broadcast. The contents of perceptual memory are updated in light of the current contents of consciousness. Transient episodic memory is updated with the current contents of consciousness as events. (The contents of transient episodic memory are separately consolidated into long-term memory.) Procedural memory (recent actions) is also updated.

6. Recruitment of Resources. Relevant behavior codelets respond to the conscious broadcast. These are typically codelets whose variables can be bound from information in the conscious broadcast. If the successful attention codelet was an expectation codelet calling attention to an unexpected result from a previous action, the responding codelets may be those that can help rectify the unexpected situation. (Thus consciousness solves the relevancy problem in recruiting resources.)

7. Setting Goal Context Hierarchy. The recruited processors use the contents of consciousness to instantiate new goal context hierarchies, bind their variables, and increase their activation. Emotions directly affect motivation and determine which terminal goal contexts receive activation and how much. Other (environmental) conditions determine which of the earlier goal contexts receive additional activation.

8. Action Chosen. The behavior net chooses a single behavior (goal context). This selection is heavily influenced by the activation passed to various behaviors by the various emotions. The choice is also affected by the current situation, by external and internal conditions, by the relations between behaviors, and by the residual activation values of various behaviors.

9. Action Taken. The execution of a behavior (goal context) results in the behavior codelets performing their specialized tasks, which may have external or internal consequences. The acting codelets also include an expectation codelet (see Step 6) whose task is to monitor the action and to try to bring to consciousness any failure in the expected results.

IDA’s elementary cognitive activities occur within a single cognitive cycle. More complex cognitive functions are implemented over multiple cycles. These include deliberation, metacognition, and voluntary action (Franklin, 2000).
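The nine steps above can be compressed into a schematic loop. This is a bare skeleton with a trivial stub agent (every function name and stub behavior here is a placeholder, not IDA’s API), meant only to show how one sense-select-act pass threads the steps together:

```python
class StubAgent:
    """Minimal stand-in: each method trivially implements one cycle step."""
    def __init__(self):
        self.working_memory, self.episodic, self.log = [], {}, []
    def perceive(self, s): return {"percept": s}
    def recall(self, p): return self.episodic.get(p["percept"], [])
    def form_coalitions(self, p, assoc): return [{"content": p, "activation": 1.0}]
    def broadcast(self, c): self.log.append(("broadcast", c["content"]))
    def recruit(self, c): return ["behavior_codelet"]
    def instantiate(self, codelets, c): self.goal = c["content"]
    def select(self): return self.goal
    def execute(self, b): self.log.append(("acted", b)); return b

def cognitive_cycle(agent, stimulus):
    percept = agent.perceive(stimulus)                 # 1. perception (unconscious)
    agent.working_memory.append(percept)               # 2. percept to preconscious buffer
    assoc = agent.recall(percept)                      # 3. local associations
    coalitions = agent.form_coalitions(percept, assoc) # 4. competition for consciousness
    winner = max(coalitions, key=lambda c: c["activation"])
    agent.broadcast(winner)                            # 5. conscious broadcast
    codelets = agent.recruit(winner)                   # 6. recruitment of resources
    agent.instantiate(codelets, winner)                # 7. goal context hierarchy
    behavior = agent.select()                          # 8. action chosen
    return agent.execute(behavior)                     # 9. action taken

agent = StubAgent()
result = cognitive_cycle(agent, "e-mail from sailor")
print(result)  # → {'percept': 'e-mail from sailor'}
```

Repeatedly calling `cognitive_cycle` corresponds to the frequent sampling of the world described below; deliberation and voluntary action would emerge from chaining many such cycles.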

The IDA model employs a methodology different from that currently typical of computational cognitive models. Although the model is based on experimental findings in cognitive psychology and brain science, there is only qualitative consistency with experiments. Rather, there are a number of hypotheses derived from IDA as a unified theory of cognition. The IDA model generates hypotheses about human cognition and the role of consciousness through its design, the mechanisms of its modules, their interaction, and its performance.

Every agent must sample and act on its world through a sense-select-act cycle. The frequent sampling allows for a fine-grained analysis of common cognitive phenomena, such as process dissociation, recognition vs. recall, and the availability heuristic. At a high level of abstraction, the analyses support the commonly held explanations of what occurs in these situations and why. At a finer-grained level, the analyses flesh out common explanations, adding details and functional mechanisms. Therein lies the value of these analyses.

Unfortunately, currently available techniques for studying some phenomena at a fine-grained level, such as PET, fMRI, EEG, and implanted electrodes, are lacking either in scope, in spatial resolution, or in temporal resolution. As a result, some of the hypotheses from the IDA model, although testable in principle, seem not to be testable at the present time for lack of technologies with suitable scope and resolution.

Figure 7.7. Schacter’s model of consciousness. (Figure labels: knowledge modules; conscious awareness system; declarative episodic memory; executive system; procedural system; response system.)

There is also the issue of the breadth of the IDA model, which encompasses perception, working memory, declarative memory, attention, decision making, procedural learning, and more. How can such a broad model produce anything useful? The IDA model suggests that these various aspects of human cognition are highly integrated. A more global view can be expected to add additional understanding to that produced by more specific models. This assertion seems to be borne out by the analyses of various cognitive phenomena (Baars & Franklin, 2003; Franklin et al., 2005).

Sketches of Some Other Views

As we have seen, there are many attempts to explain the difference in conscious accessibility. Various explanations have been advanced in terms of the content of knowledge (e.g., instances vs. rules), the organization of knowledge (e.g., declarative vs. procedural), processing mechanisms (e.g., spreading activation vs. rule matching and firing), the representation of knowledge (e.g., localist/symbolic vs. distributed), and so on. In addition to the two views elaborated on earlier, let us look into some more details of a few other views. Although some of the models discussed below are not, strictly speaking, computational (because they may not have been fully computationally implemented), they are nevertheless important because they point to possible ways of constructing computational explanations of consciousness.

We can examine Schacter’s (1990) model as an example. The model is based on neuropsychological findings of the dissociation of different types of knowledge (especially in brain-damaged patients). It includes a number of “knowledge modules” that perform specialized and unconscious processing and may send their outcomes to a “conscious awareness system,” which gives rise to conscious awareness (see Figure 7.7). Schacter’s explanation of some neuropsychological disorders (e.g., hemisphere neglect, blindsight, aphasia, agnosia, and prosopagnosia) is that brain damage results in the disconnection of some of the modules from the conscious awareness system, which causes their inaccessibility to consciousness. However, as has been pointed out by others, this explanation cannot account for many findings in implicit memory research (e.g., Roediger, 1990). Revonsuo (1993) advocated a similar view, albeit from a philosophical viewpoint, largely on the basis of using Schacter’s (1990) data as evidence. Johnson-Laird’s (1983) model was somewhat similar to Schacter’s model in its overall structure, in that there was a hierarchy of processors and consciousness resided in the processes at the top of the hierarchy. Shallice (1972) put forward a model in which a number of “action systems” could be activated by “selector input,” and the activated action systems correspond to consciousness. It is not clear, however, what the computational (mechanistic) difference between conscious and unconscious processes is in those models, which did not offer a mechanistic explanation.

Figure 7.8. Damasio’s model of consciousness. (Figure labels: visual, somatosensory, and auditory convergence zones feeding a multimodal convergence zone.)

We can compare Schacter’s (1990) model with Clarion. It is similar to Clarion in that it includes a number of “knowledge modules” that perform specialized and unconscious processing (analogous to bottom-level modules in Clarion) and send their outcomes to a “conscious awareness system” (analogous to the top level in Clarion), which gives rise to conscious awareness. Unlike Clarion’s explanation of the conscious/unconscious distinction through the difference between localist/symbolic versus distributed representations, however, Schacter’s model does not elucidate in computational/mechanistic terms the qualitative distinction between conscious and unconscious processes, in that the “conscious awareness system” lacks any apparent qualitative difference from the unconscious systems.

We can also examine Damasio’s neuroanatomically motivated model (Damasio et al., 1990). The model hypothesizes the existence of many “sensory convergence zones” that integrate information from individual sensory modalities through forward and backward synaptic connections and the resulting reverberations of activations, without a central location for information storage and comparisons; it also hypothesizes a global “multimodal convergence zone,” which integrates information across modalities, also through reverberation (via recurrent connections; see Figure 7.8). Correlated with consciousness is global information availability; that is, once “broadcast” or “reverberation” is achieved, all the information about an entity stored in different places in the brain becomes available. This was believed to have explained the accessibility of consciousness.6 In terms of Clarion, different sensory convergence zones may be roughly captured by bottom-level modules, each of which takes care of the sensory inputs of one modality (at a properly fine level), and the role of the global multimodal convergence zone (similar to the global workspace in a way) may be played by the top level of Clarion, which has the ultimate responsibility for integrating information (and also serves as the “conscious awareness system”). The widely recognized role of reverberation (Damasio, 1994; Taylor, 1994) may be captured in Clarion through recurrent connections within modules at the bottom level and through multiple top-down and bottom-up information flows across the two levels, which lead to the unity of consciousness that is the synthesis of all the information present (Baars, 1988; Marcel, 1983).

Similarly, Crick and Koch (1990) hypothesize that synchronous firing at 35–75 Hz in the cerebral cortex is the basis for consciousness: with such synchronous firing, pieces of information regarding different aspects of an entity are brought together, and thus consciousness emerges. Although consciousness has been experimentally observed to be somewhat correlated with synchronous firing at 35–75 Hz, there is no explanation of why this is the case, and there is no computational/mechanistic explanation of any qualitative difference between 35–75 Hz synchronous firing and other firing patterns.

Cotterill (1997) offers a “master-module” model of consciousness, which asserts that consciousness arises from movement or the planning of movement. The master module refers to the brain region that is responsible for motor planning. This model sees the conscious system as being profligate with its resources: perforce it must plan and organize movements, even though it does not always execute them. The model stresses the vital role that movement plays and is quite compatible with the IDA model. This centrality of movement was illustrated by the observation that blind people were able to read braille when allowed to move their fingers, but were unable to do so when the dots were moved against their still fingers (Cotterill, 1997).

Finally, readers interested in the possibility of computational models of consciousness actually producing “conscious” artifacts may consult Holland (2003) and other work along that line.

Concluding Remarks

This chapter has examined general frameworks of computational accounts of consciousness. Various related issues, such as the utility of computational models, explanations of psychological data, and potential applications of machine consciousness, have been touched on in the process. Based on existing psychological and philosophical evidence, existing models were compared and contrasted to some extent. At this stage, the coexistence of various computational accounts of consciousness appears inevitable. Each of them seems to capture some aspect of consciousness, but each also has severe limitations. To capture the whole picture in a unified computational framework, much more work is needed. In this regard, Clarion and IDA provide some hope.

Much more work can be conducted on various issues of consciousness along this computational line. Such work may include further specification of the details of computational models. It may also include reconciliation of existing computational models of consciousness. More importantly, it may, and should, include the validation of computational models through empirical and theoretical means. The last point in particular should be emphasized in future work (see the earlier discussions concerning Clarion and IDA). In addition, we may also attempt to account for consciousness computationally at multiple levels, from phenomenology, via various intermediate levels, all the way down to physiology, which will likely lead to a much more complete computational account and a much better picture of consciousness (Coward & Sun, 2004).

Acknowledgments

Ron Sun acknowledges support in part from Office of Naval Research grant N00014-95-1-0440 and Army Research Institute grants DASW01-00-K-0012 and W74V8H-04-K-0002. Stan Franklin acknowledges support from the Office of Naval Research and other U.S. Navy sources under grants N00014-01-1-0917, N00014-98-1-0332, N00014-00-1-0769, and DAAH04-96-C-0086.

Notes

1. There are also various numerical measures involved, which are not important for the present discussion.

2. Cleeremans and McClelland’s (1991) model of artificial grammar learning can be viewed as instantiating half of the system (the unconscious half), in which implicit learning takes place based on gradual weight changes in response to practice on a task and the resulting changes in activation of various representations when performing the task.

3. Note that accessibility is defined in terms of the surface syntactic structures of the objects being accessed (at the level of outcomes or processes), not their semantic meanings. Thus, for example, a LISP expression is directly accessible, even though one may not fully understand its meaning. The internal working of a neural network may be inaccessible even though one may know what the network essentially does (through an interpretive process). Note also that objects and processes that are directly accessible at a certain level may not be accessible at a finer level of detail.

4. This activation of features is important in subsequent uses of the information associated with the concept and in directing behaviors.

5. This is a term borrowed from computer operating systems that describes a small piece of code that waits and watches for a particular event or condition to occur before it acts.

6. However, consciousness does not necessarily mean accessibility/availability of all the information about an entity; for otherwise, conscious inference, deliberate recollection, and other related processes would be unnecessary.

References

Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Anderson, J., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Erlbaum.

Anwar, A., & Franklin, S. (2003). Sparse distributed memory for “conscious” software agents. Cognitive Systems Research, 4, 339–354.

Baars, B. (1988). A cognitive theory of consciousness. New York: Cambridge University Press.

Baars, B. (2002). The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Science, 6, 47–52.

Baars, B., & Franklin, S. (2003). How conscious experience and working memory interact. Trends in Cognitive Science, 7, 166–172.

Barsalou, L. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–609.

Berry, D., & Broadbent, D. (1988). Interactive tasks and the implicit-explicit distinction. British Journal of Psychology, 79, 251–272.

Bower, G. (1996). Reactivating a reactivation theory of implicit memory. Consciousness and Cognition, 5(1/2), 27–72.

Bowers, K., Regehr, G., Balthazard, C., & Parker, K. (1990). Intuition in the context of discovery. Cognitive Psychology, 22, 72–110.

Chaiken, S., & Trope, Y. (Eds.). (1999). Dual-process theories in social psychology. New York: Guilford Press.

Clark, A. (1992). The presence of a symbol. Connection Science, 4, 193–205.

Clark, A., & Karmiloff-Smith, A. (1993). The cognizer’s innards: A psychological and philosophical perspective on the development of thought. Mind and Language, 8(4), 487–519.

Cleeremans, A., & McClelland, J. (1991). Learning the structure of event sequences. Journal of Experimental Psychology: General, 120, 235–253.

Collins, A., & Loftus, J. (1975). Spreading activation theory of semantic processing. Psychological Review, 82, 407–428.

Cosmides, L., & Tooby, J. (1994). Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science. Cognition, 50, 41–77.

Cotterill, R. (1997). On the mechanism of consciousness. Journal of Consciousness Studies, 4, 231–247.

Coward, L. A., & Sun, R. (2004). Criteria for an effective theory of consciousness and some preliminary attempts. Consciousness and Cognition, 13, 268–301.

Crick, F., & Koch, C. (1990). Toward a neurobiological theory of consciousness. Seminars in the Neurosciences, 2, 263–275.

Damasio, A., et al. (1990). Neural regionalization of knowledge access. Cold Spring Harbor Symposium on Quantitative Biology, LV.

Damasio, A. (1994). Descartes’ error. New York: Grosset/Putnam.

Dennett, D. (1991). Consciousness explained. Boston: Little, Brown.

Edelman, G. (1987). Neural Darwinism. New York: Basic Books.

Edelman, G. (1989). The remembered present: A biological theory of consciousness. New York: Basic Books.

Edelman, G., & Tononi, G. (2000). A universe of consciousness. New York: Basic Books.


Freeman, W. (1995). Societies of brains. Hillsdale, NJ: Erlbaum.

Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press.

Franklin, S. (2000). Deliberation and voluntary action in 'conscious' software agents. Neural Network World, 10, 505–521.

Franklin, S., Baars, B. J., Ramamurthy, U., & Ventura, M. (2005). The role of consciousness in memory. Brains, Minds and Media, 1, 1–38.

Franklin, S., & Graesser, A. C. (1997). Is it an agent, or just a program?: A taxonomy for autonomous agents. Intelligent Agents III, 21–35.

Franklin, S., Kelemen, A., & McCauley, L. (1998). IDA: A cognitive agent architecture. IEEE Conference on Systems, Man and Cybernetics.

Hadley, R. (1995). The explicit-implicit distinction. Minds and Machines, 5, 219–242.

Hirschfeld, L., & Gelman, S. (1994). Mapping the mind: Domain specificity in cognition and culture. New York: Cambridge University Press.

Hofstadter, D., & Mitchell, M. (1994). The copycat project: A model of mental fluidity and analogy-making. In K. J. Holyoak & J. A. Barnden (Eds.), Advances in connectionist and neural computation theory, Vol. 2: Logical connections. Norwood, NJ: Ablex.

Holland, O. (2003). Machine consciousness. Exeter, UK: Imprint Academic.

Hunt, E., & Lansman, M. (1986). Unified model of attention and problem solving. Psychological Review, 93(4), 446–461.

Jackendoff, R. (1987). Consciousness and the computational mind. Cambridge, MA: MIT Press.

Johnson-Laird, P. (1983). A computational analysis of consciousness. Cognition and Brain Theory, 6, 499–508.

Kanerva, P. (1988). Sparse distributed memory. Cambridge, MA: MIT Press.

Karmiloff-Smith, A. (1986). From meta-processes to conscious access: Evidence from children's metalinguistic and repair data. Cognition, 23, 95–147.

LeDoux, J. (1992). Brain mechanisms of emotion and emotional learning. Current Opinion in Neurobiology, 2(2), 191–197.

Logan, G. (1988). Toward a theory of automatization. Psychological Review, 95(4), 492–527.

Maes, P. (1989). How to do the right thing. Connection Science, 1, 291–323.

Marcel, A. (1983). Conscious and unconscious perception: An approach to the relations between phenomenal experience and perceptual processes. Cognitive Psychology, 15, 238–300.

Maslow, A. (1987). Motivation and personality (3rd ed.). New York: Harper and Row.

Mathis, D., & Mozer, M. (1996). Conscious and unconscious perception: A computational theory. Proceedings of the 18th Annual Conference of the Cognitive Science Society, 324–328.

McClelland, J., McNaughton, B., & O'Reilly, R. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3), 419–457.

Michalski, R. (1983). A theory and methodology of inductive learning. Artificial Intelligence, 20, 111–161.

Minsky, M. (1985). The society of mind. New York: Simon and Schuster.

Moscovitch, M., & Umilta, C. (1991). Conscious and unconscious aspects of memory. In Perspectives on cognitive neuroscience. New York: Oxford University Press.

Neal, A., & Hesketh, B. (1997). Episodic knowledge and implicit learning. Psychonomic Bulletin and Review, 4(1), 24–37.

Negatu, A., & Franklin, S. (2002). An action selection mechanism for 'conscious' software agents. Cognitive Science Quarterly, 2, 363–386.

Nelson, T. (Ed.). (1993). Metacognition: Core readings. Boston, MA: Allyn and Bacon.

O'Brien, G., & Opie, J. (1998). A connectionist theory of phenomenal experience. Behavioral and Brain Sciences, 22, 127–148.

Penrose, R. (1994). Shadows of the mind. Oxford: Oxford University Press.

Quillian, M. R. (1968). Semantic memory. In M. Minsky (Ed.), Semantic information processing (pp. 227–270). Cambridge, MA: MIT Press.

Ramamurthy, U., D'Mello, S., & Franklin, S. (2004). Modified sparse distributed memory as transient episodic memory for cognitive software agents. In Proceedings of the International Conference on Systems, Man and Cybernetics. Piscataway, NJ: IEEE.

Reber, A. (1989). Implicit learning and tacit knowledge. Journal of Experimental Psychology: General, 118(3), 219–235.



Revonsuo, A. (1993). Cognitive models of consciousness. In M. Kamppinen (Ed.), Consciousness, cognitive schemata and relativism (pp. 27–130). Dordrecht, Netherlands: Kluwer.

Roediger, H. (1990). Implicit memory: Retention without remembering. American Psychologist, 45(9), 1043–1056.

Rosenbloom, P., Laird, J., & Newell, A. (1993). The SOAR papers: Research on integrated intelligence. Cambridge, MA: MIT Press.

Rosenthal, D. (Ed.). (1991). The nature of mind. Oxford: Oxford University Press.

Rumelhart, D., McClelland, J., & the PDP Research Group (1986). Parallel distributed processing: Explorations in the microstructures of cognition. Cambridge, MA: MIT Press.

Schacter, D. (1990). Toward a cognitive neuropsychology of awareness: Implicit knowledge and anosagnosia. Journal of Clinical and Experimental Neuropsychology, 12(1), 155–178.

Schneider, W., & Oliver, W. (1991). An instructable connectionist/control architecture. In K. VanLehn (Ed.), Architectures for intelligence. Hillsdale, NJ: Erlbaum.

Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457.

Seger, C. (1994). Implicit learning. Psychological Bulletin, 115(2), 163–196.

Servan-Schreiber, E., & Anderson, J. (1987). Learning artificial grammars with competitive chunking. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 592–608.

Shallice, T. (1972). Dual functions of consciousness. Psychological Review, 79(5), 383–393.

Sloman, A., & Chrisley, R. (2003). Virtual machines and consciousness. Journal of Consciousness Studies, 10, 133–172.

Smith, E., & Medin, D. (1981). Categories and concepts. Cambridge, MA: Harvard University Press.

Smith, J. D., Shields, W. E., & Washburn, D. A. (2003). The comparative psychology of uncertainty monitoring and metacognition. Behavioral and Brain Sciences, 26, 317–339.

Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11(1), 1–74.

Stanley, W., Mathews, R., Buss, R., & Kotler-Cope, S. (1989). Insight without awareness: On the interaction of verbalization, instruction and practice in a simulated process control task. Quarterly Journal of Experimental Psychology, 41A(3), 553–577.

Sun, R. (1994). Integrating rules and connectionism for robust commonsense reasoning. New York: John Wiley and Sons.

Sun, R. (1995). Robust reasoning: Integrating rule-based and similarity-based reasoning. Artificial Intelligence, 75(2), 241–296.

Sun, R. (1997). Learning, action, and consciousness: A hybrid approach towards modeling consciousness. Neural Networks, 10(7), 1317–1331.

Sun, R. (1999a). Accounting for the computational basis of consciousness: A connectionist approach. Consciousness and Cognition, 8, 529–565.

Sun, R. (1999b). Computational models of consciousness: An evaluation. Journal of Intelligent Systems [Special Issue on Consciousness], 9(5–6), 507–562.

Sun, R. (2002). Duality of the mind. Mahwah, NJ: Erlbaum.

Sun, R. (2003). A tutorial on CLARION. Retrieved from http://www.cogsci.rpi.edu/∼rsun/sun.tutorial.pdf

Sun, R., & Bookman, L. (Eds.). (1994). Computational architectures integrating neural and symbolic processes. Norwell, MA: Kluwer.

Sun, R., Merrill, E., & Peterson, T. (2001). From implicit skills to explicit knowledge: A bottom-up model of skill learning. Cognitive Science, 25(2), 203–244.

Sun, R., & Naveh, I. (2004, June). Simulating organizational decision making with a cognitive architecture Clarion. Journal of Artificial Society and Social Simulation, 7(3). Retrieved from http://jasss.soc.surrey.ac.uk/7/3/5.html

Sun, R., & Peterson, T. (1998). Autonomous learning of sequential tasks: Experiments and analyses. IEEE Transactions on Neural Networks, 9(6), 1217–1234.

Sun, R., & Peterson, T. (1999). Multi-agent reinforcement learning: Weighting and partitioning. Neural Networks, 12(4–5), 127–153.

Sun, R., Peterson, T., & Merrill, E. (1996). Bottom-up skill learning in reactive sequential decision tasks. Proceedings of the 18th Cognitive Science Society Conference. Hillsdale, NJ: Lawrence Erlbaum Associates.

Sun, R., Merrill, E., & Peterson, T. (1998). A bottom-up model of skill learning. Proceedings of the 20th Cognitive Science Society Conference (pp. 1037–1042). Mahwah, NJ: Lawrence Erlbaum Associates.

Sun, R., Slusarz, P., & Terry, C. (2005). The interaction of the explicit and the implicit in skill learning: A dual-process approach. Psychological Review, 112, 159–192.

Taylor, J. (1994). Goal, drives and consciousness. Neural Networks, 7(6/7), 1181–1190.

Taylor, J. (1997). The relational mind. In A. Browne (Ed.), Neural network perspectives on cognition and adaptive robotics. Bristol, UK: Institute of Physics.

Toates, F. (1986). Motivational systems. Cambridge: Cambridge University Press.

Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327–352.

Watkins, C. (1989). Learning with delayed rewards. PhD thesis, Cambridge University, Cambridge, UK.

Willingham, D., Nissen, M., & Bullemer, P. (1989). On the development of procedural knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 1047–1060.


C. Cognitive Psychology





Chapter 8

Cognitive Theories of Consciousness

Katharine McGovern and Bernard J. Baars

Abstract

Current cognitive theories of consciousness focus on a few common themes, such as the limited capacity of conscious contents under input competition; the wide access enabled by conscious events to sensation, memory, problem-solving capacities, and action control; the relation between conscious contents and working memory; and the differences between implicit and explicit cognition in learning, retrieval, and other cognitive functions. The evidentiary base is large. A unifying principle in the midst of these diverse empirical findings is to treat consciousness as an experimental variable and, then, to look for general capacities that distinguish conscious and unconscious mental functioning. In this chapter, we discuss three classes of theories: information-processing theories that build on modular elements, network theories that focus on the distributed access of conscious processing, and globalist theories that combine aspects of these two. An emerging consensus suggests that conscious cognition is a global aspect of human brain functioning. A specific conscious content, like the sight of a coffee cup, is crucially dependent on local regions of visual cortex. But, by itself, local cortical activity is not conscious. Rather, the conscious experience of a coffee cup requires both local and widespread cortical activity.

Introduction

When consciousness became a scientifically respectable topic again in the 1980s, it was tackled in a number of different scholarly disciplines – psychology, philosophy, neuroscience, linguistics, medicine, and others. By the late 1990s, considerable interdisciplinary cooperation evolved in consciousness studies, spurred by the biennial Tucson Conferences and the birth of two new scholarly journals, Consciousness and Cognition and the Journal of Consciousness Studies. The domain of consciousness studies originated in separate disciplines, but has since become cross-disciplinary. Thus, a number of early theories of consciousness can justifiably be called purely cognitive theories


of consciousness, whereas most recent theories are neurocognitive hybrids – depending on evidence from the brain as well as behavior. In this chapter, we have, for the most part, restricted discussion to cognitive or functional models of consciousness with less reference to the burgeoning neuroscientific evidence that increasingly supports the globalist position that we develop here.

Operationally Defining Consciousness

Cognitive Methods That Treat Consciousness as a Variable

There is a curious asymmetry between the assessment of conscious and unconscious processes. Obtaining verifiable experiential reports works very nicely for specifying conscious representations, but unconscious ones are much more slippery. In many cases of apparently unconscious processes, such as all the things the reader is not paying attention to at this moment, it could be that the "unconscious" representations may be momentarily conscious, but so quickly or vaguely that we cannot recall them even a fraction of a second later. Or suppose people cannot report a word shown for a few milliseconds: Does this mean that they are truly unconscious of it? Such questions continue to lead to controversy today. William James understood this problem very well and suggested, in fact, that there were no unconscious psychological processes at all (1890, p. 162ff.). This has been called the "zero point" problem (Baars, 1988). It should be emphasized, however, that problems with defining a zero point do not prevent scientists from studying phenomena as variables. Even today, the precise nature of zero temperature points, such as the freezing point of water, continues to lead to debate. But physicists have done extremely productive work on thermodynamics for centuries. Zero points are not the sole criterion for useful empirical variables.

The discovery that something we take for granted as a constant can be treated as a variable has led to scientific advances before. In the late 1600s, contemporaries of Isaac Newton were frustrated in their attempts to understand gravity. One key to Newton's great achievement was to imagine the presence and the absence of gravity, thus permitting gravity to be treated as a variable. In the same way, a breakthrough in the scientific study of consciousness occurred when psychologists began to understand that consciousness can be treated as a variable. That is, behavioral outcomes can be observed when conscious cognitions are present and when they are absent. The process of generalizing across these observations has been called contrastive analysis (explained below).

Beginning in the 1980s, a number of experimental methods gained currency as means of studying comparable conscious and non-conscious processes. In much of cognitive science and neuroscience today, the existence of unconscious cognitive processes, often comparable to conscious ones, is taken for granted. Table 8.1 highlights methods that have produced behavioral data relevant to the study of consciousness.

Working Definitions of "Conscious" and "Unconscious"

In the history of science, formal definitions for concepts like "heat" and "gene" tend to come quite late, often centuries after adequate operational definitions are developed. The same point may apply to conscious cognition. Although there is ongoing debate about what consciousness "really" is, there has long been a scientific consensus on its observable index of verbal report. This index can be generalized to any other kind of voluntary response, such as pressing a button or even voluntary eye movements in "locked-in" neurological patients. Experiential reports can be analyzed with sophisticated methods, such as process dissociation and signal detection. Thus, empirically, it is not difficult to assess conscious events in humans with intact brains, given good experimental conditions. We propose the


Table 8.1. Empirical methods used in the study of conscious and unconscious processes

Divided attention: dichotic listening. Two dense streams of speech are offered to the two ears, and only one stream at a time can receive conscious word-level processing. Evidence suggests that the unconscious stream continues to receive some processing.

Divided attention: selective ("double exposure") viewing. When two overlaid movies are viewed, only one is perceived consciously.

Divided attention: inattentional blindness. Aspects of visual scenes to which attention is not directed are not consciously perceived; attended aspects of the same scenes are perceived.

Divided attention: binocular rivalry/dichoptic viewing (including flash suppression). Presenting separate visual scenes to each eye; only one scene reaches consciousness, but the unconscious scene receives low-level processing.

Dual-task paradigms: driving while talking on a cell phone; rehearsing words while doing word verification. To the extent that tasks require conscious initiation and direction, they compete and degrade the performance of each other; once automatized, multiple tasks interfere less.

Priming: supraliminal and subliminal priming; priming of one interpretation of ambiguous words or pictures. When a "prime" stimulus is presented prior to a "target" stimulus, response to the "target" is influenced by the currently unconscious nature and meaning of the "prime." Supraliminal priming generally results in a more robust effect.

Visual backward masking. When supra-threshold visual stimuli are followed immediately by visual masking stimuli (visual noise), the original stimuli are not consciously perceived, though they are locally registered in early visual cortex.

Implicit learning: miniature grammar learning. Consciously perceived stimuli give rise to knowledge structures that are not available to consciousness.

Process dissociation and ironic effects. Participants are told to exclude certain memorized items from memory reports; if those items nevertheless appear, they are assumed to be products of non-conscious processing.

Fixedness, decontextualization, and being blind to the obvious (related to availability): problem-solving tasks, functional fixedness tasks (Duncker), chess playing, garden-path sentences, highly automatized actions under novel conditions. Set effects in problem solving can exclude otherwise obvious conclusions from consciousness. "Breaking set" can lead to recovery of those conclusions in consciousness.
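The process-dissociation method mentioned above can be made quantitative. In the standard inclusion/exclusion formulation (Jacoby's equations; the text itself does not spell these out, so this is a supplementary sketch), inclusion performance reflects conscious recollection plus automatic influence, I = C + A(1 − C), while exclusion errors reflect automatic influence alone, E = A(1 − C):

```python
def process_dissociation(inclusion: float, exclusion: float) -> tuple[float, float]:
    """Estimate conscious (controlled) and automatic contributions
    from inclusion/exclusion performance, using
        inclusion = C + A * (1 - C)
        exclusion = A * (1 - C)
    """
    conscious = inclusion - exclusion            # C = I - E
    if conscious >= 1.0:                         # degenerate case: A is undefined
        return conscious, float("nan")
    automatic = exclusion / (1.0 - conscious)    # A = E / (1 - C)
    return conscious, automatic


# A participant includes a studied item 60% of the time but fails to
# exclude it 20% of the time (hypothetical numbers):
c, a = process_dissociation(inclusion=0.60, exclusion=0.20)
# c = 0.40, a ≈ 0.33
```

Separating the two influences this way is what licenses the inference that intrusions under exclusion instructions index non-conscious processing.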


Table 8.2. Contrastive analysis in perception and imagery

Conscious events and comparable unconscious events:

1. Perceived stimuli vs. (1) processing of stimuli lacking in intensity or duration, or centrally masked stimuli; (2) preperceptual processing of stimuli; (3) habituated or automatic stimulus processing; (4) unaccessed versions of ambiguous stimuli/words; (5) contexts of interpretation for percepts and concepts; (6) unattended streams of perceptual input (all modalities); (7) implicit expectations about stimuli; (8) parafoveal guidance of eye movements in reading; (9) stimulus processing under general anesthesia.

10. Images in all sense modalities vs. unretrieved images in memory.

11. (a) Newly generated visual images and (b) automatic images that encounter some difficulty vs. automatized visual images.

12. Inner speech: words currently rehearsed in working memory vs. inner speech not currently rehearsed in working memory.

13. Fleetingly conscious phrases and belief statements vs. automatized inner speech (the "jingle channel").

14. Visual search based on conjoined features vs. visual search based on single features.

15. Retrieval by recall vs. retrieval by recognition.

16. Explicit knowledge vs. implicit knowledge.

following as de facto operational definitions of conscious and unconscious that are already in very wide experimental use in perception, psychophysics, memory, imagery, and the like.

We can say that mental processes are conscious if they

(a) are claimed by people to be conscious; and
(b) can be reported and acted upon,
(c) with verifiable accuracy,
(d) under optimal reporting conditions (e.g., with minimum delay between the event and the report, freedom from distraction, and the like).

Conversely, mental events can be defined as unconscious for practical purposes if

(a) their presence can be verified (through facilitation of other observable tasks, for example); although
(b) they are not claimed to be conscious;
(c) and they cannot be voluntarily reported, operated upon, or avoided,
(d) even under optimal reporting conditions.
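The "verifiable accuracy" in criterion (c) is usually quantified with the signal detection methods mentioned earlier. As a brief supplementary sketch (the text gives no formulas), the sensitivity index d′ separates an observer's genuine discrimination from response bias:

```python
from statistics import NormalDist


def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)


# An observer reporting 84% hits and 16% false alarms discriminates
# well above chance:
sensitivity = d_prime(0.84, 0.16)  # ≈ 1.99
```

When hits equal false alarms, d′ is zero regardless of how liberally or conservatively the observer responds, which is what makes it a useful index of conscious discrimination rather than mere willingness to say "yes."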

The Method of Contrastive Analysis

Using the logic of experimental research, consciousness can be treated as a controlled variable; then, measures of cognitive functioning and neural activity can be compared under two levels of the independent variable – consciousness-present and consciousness-absent. If there is no clearly unconscious comparison condition, a low-level conscious condition may be used, as in drowsiness or stimuli in background noise. The point, of course, is to have at least two quantitatively different levels for comparison.
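In code, the logic of contrastive analysis is simply a two-level comparison on some dependent measure. The data below are hypothetical and the function is an illustrative sketch, not an analysis prescribed by the chapter:

```python
from statistics import mean, stdev


def contrastive_analysis(present: list[float], absent: list[float]) -> dict[str, float]:
    """Compare a dependent measure across the two levels of the
    variable: consciousness-present vs. consciousness-absent."""
    difference = mean(present) - mean(absent)
    pooled_sd = ((stdev(present) ** 2 + stdev(absent) ** 2) / 2) ** 0.5
    return {"difference": difference, "effect_size_d": difference / pooled_sd}


# Hypothetical recall accuracy for attended (conscious) vs. unattended
# (unconscious) study items:
result = contrastive_analysis(
    present=[0.82, 0.79, 0.88, 0.91, 0.75],
    absent=[0.31, 0.28, 0.35, 0.40, 0.22],
)
```

A low-level conscious condition (e.g., degraded stimuli in noise) can be substituted for the `absent` level when no clearly unconscious comparison condition exists, exactly as the paragraph above suggests.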

Data from Contrastive Analysis

Examples of conscious versus non-conscious contrasts from studies of perception, imagery, memory, and attention appear in Table 8.2. In the left column, conscious mental events are listed; on the right are


Table 8.3. Capability contrasts between comparable conscious and non-conscious processes

1. Conscious processes are computationally inefficient, with many errors, relatively low speed, and mutual interference between conscious processes. Unconscious processes are very efficient in routine tasks, with few errors, high speed, and little mutual interference.

2. Conscious processes have a great range of contents: a great ability to relate different conscious contents to each other, a great ability to relate conscious events to unconscious contexts, and flexibility. Taken individually, unconscious processes have a limited range of contents: each routine process is relatively isolated and autonomous, relatively context-free (operating in a range of contexts), and fixed in pattern.

3. Conscious processes have high internal consistency at any single moment, seriality over time, and limited processing capacity. The set of routine, unconscious processes, taken together, is diverse, can operate concurrently, and has great processing capacity.

4. The clearest conscious contents are perceptual or quasi-perceptual (e.g., imagery, inner speech, and internally generated bodily feelings). Unconscious processes are involved in all mental tasks, not limited to perception and imagery, but including memory, knowledge representation and access, skill learning, problem-solving, action control, etc.

5. Conscious processes are associated with voluntary actions; unconscious processes are associated with non-voluntary actions.

corresponding non-conscious processes. Theoretically, we are interested in finding out what is common in conscious processing across all these cases.

Capability Contrasts

The difference in mental and neural functioning between consciousness-present and consciousness-absent processing – taken across many experimental contexts – reveals stable characteristics attributable to consciousness. Conscious processes are phenomenally serial, internally consistent, unitary at any moment, and limited in capacity. Non-conscious mental processes are functionally concurrent, often highly differentiated from each other, and relatively unlimited in capacity, when taken together. Table 8.3 summarizes these general conclusions.

These empirical contrasts in the capabilities of conscious and unconscious mental processes can become the criteria against which models of consciousness can be evaluated. Any adequate theory of consciousness would need to account for these observed differences in functioning. Thus, we have a way of judging the explanatory adequacy of proposals concerning the nature and functioning of consciousness. We can keep these capability contrasts in mind as we review contemporary cognitive models of consciousness.

Given the tight constraints that appear repeatedly in studies of conscious processing – that is, limited capacity, seriality, and internal consistency requirements – we might ask, Why? Would it not be adaptive to do several conscious things at the same time? Certainly human ancestors might have benefited from being able to gather food, be alert for predators, and keep an eye on their offspring simultaneously; modern humans could benefit from being able to drive their cars, talk on cell phones, and put on lipstick without mutual interference. Yet these tasks compete when they require consciousness, so that only one can be done well at any given moment. The question then is, Why are conscious functions so limited in


a neuropsychological architecture that is so large and complex?

Functions of Consciousness in the Architecture of Cognition

A Note about Architectures

The metaphor of "cognitive architectures" dates to the 1970s when cognitive psychologists created information-processing models of mental processes. In many of these models, different mental functions, such as memory, language, attention, and sensory processes, were represented as modules, or sets of modules, within a larger information-processing system. The functional layout and the interactions of the parts of the system came to be called the cognitive architecture. We have adopted this terminology here to capture the idea that consciousness operates within a larger neuropsychological system that has many constituents interacting in complex ways.

Consciousness Serves Many Functions

William James believed that "[t]he study . . . of the distribution of consciousness shows it to be exactly such as we might expect in an organ added for the sake of steering a nervous system grown too complex to regulate itself" (1890, p. 141). More recently, Baars (1988) identified eight psychological functions of consciousness, which are defined in Table 8.4.

Note that each proposed function of consciousness is served through an interplay of conscious and unconscious processes. It has been argued that consciousness fulfills all eight functions by providing access or priority entrance into various subparts of the cognitive system (Baars, 2002). For example, the error-detection function can be accomplished only when information about an impending or actual error, which cannot be handled by "canned" automatisms, can gain access to consciousness. Subsequently, editing occurs when this conscious information is "broadcast" or distributed to other parts of the system that are capable of acting to recognize and correct it. Consciousness functions as the central distributor of information, which is used by subparts of the cognitive system or architecture.
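The "central distributor" role just described can be sketched as a toy program: specialized, unconscious processors subscribe to a workspace; candidate contents compete; the single winner is broadcast to every subscriber. This is an illustrative sketch only; the processor names, activation scores, and winner-take-all rule are invented for the example rather than taken from any formal statement of the theory:

```python
class Workspace:
    """Toy global workspace: one content at a time wins the
    competition for access and is broadcast to all processors."""

    def __init__(self):
        self.processors = []

    def subscribe(self, processor):
        self.processors.append(processor)

    def compete_and_broadcast(self, candidates):
        # Limited capacity: only the most active content becomes conscious.
        content, _activation = max(candidates, key=lambda c: c[1])
        # Wide access: the single winner is distributed to every processor.
        return {p.__name__: p(content) for p in self.processors}


def error_monitor(content):
    # An unconscious rule system that can interrupt ongoing action.
    return "interrupt execution" if "error" in content else "no action"


def verbal_report(content):
    return f"reportable: {content}"


workspace = Workspace()
workspace.subscribe(error_monitor)
workspace.subscribe(verbal_report)

responses = workspace.compete_and_broadcast(
    [("typing error detected", 0.9), ("background hum", 0.2)]
)
# The error message out-competes the hum and reaches both processors.
```

The serial bottleneck and the wide distribution in this sketch correspond to the two sides of the capability contrasts in Table 8.3: one limited-capacity conscious stream feeding many concurrent, specialized unconscious processes.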

Consciousness Creates Access

A strong case can be made that we can create access to any part of the brain by way of conscious input. For example, to gain voluntary control over alpha waves in the occipital cortex we merely sound a tone or turn on a light when alpha is detected in the EEG, and shortly the subject will be able to increase the power of alpha at will. To control a single spinal motor unit we merely pick up its electrical activity and play it back over headphones; in a half-hour, subjects have been able to play drum rolls on single motor units. Biofeedback control over single neurons and whole populations of neurons anywhere in the brain is well established (Basmajian, 1979). Consciousness of the feedback signal seems to be a necessary condition to establish control, though the motor neural activities themselves remain entirely unconscious. It is as if mere consciousness of results creates access to unconscious neuronal systems that are normally quite autonomous.

Psychological evidence leads to similar conclusions. The recognition vocabulary of educated English speakers contains about 100,000 words. Although we do not use all of them in everyday speech, we can understand each word as soon as it is presented in a sentence that makes sense. Yet each individual word is already quite complex. The Oxford English Dictionary devotes 75,000 words to the many different meanings of the word "set." Yet all we do as humans to access these complex unconscious bodies of knowledge is to become conscious of a target word. It seems that understanding language demands the gateway of consciousness. This is another case of the general principle that consciousness of stimuli creates widespread access to unconscious sources of knowledge, such as the mental lexicon, meaning, and grammar.


cognitive theories of consciousness 183

Table 8.4. Explaining the psychological functions of consciousness

1. Definition and context-setting: By relating input to its contextual conditions, consciousness defines a stimulus and removes ambiguities in its perception and understanding.

2. Adaptation and learning: The more novelty and unpredictability to which the psychological system must adapt, the greater the conscious involvement required for successful problem solving and learning.

3. Prioritizing and access control: Attentional mechanisms exercise selective control over what will become conscious by relating input to unconscious goal contexts. By consciously relating an event or circumstance to higher-level goals, we can raise its access priority, making it conscious more often and therefore increasing the chances of successful adaptation to it.

4. Recruitment and control of thought and action: Conscious goals can recruit subgoals and behavior systems to organize and carry out flexible, voluntary action.

5. Decision-making and executive function: Consciousness creates access to multiple knowledge sources within the psychological system. When automatic systems cannot resolve some choice point in the flow of action, making it conscious helps recruit knowledge sources that are able to help make the decision; in case of indecision, making the goal conscious allows widespread recruitment of conscious and unconscious sources acting for and against the goal.

6. Error detection and editing: Conscious goals and plans are monitored by unconscious rule systems that will act to interrupt execution if errors are detected. Though we often become aware of making an error in a general way, the detailed description of what makes an error an error is almost always unconscious.

7. Reflection and self-monitoring: Through conscious inner speech and imagery we can reflect upon and to some extent control and plan our conscious and unconscious functioning.

8. Optimizing the tradeoff between organization and flexibility: Automatized, “canned” responses are highly adaptive in predictable circumstances. However, in unpredictable environments, the capacity of consciousness to recruit and reconfigure specialized knowledge sources is indispensable in allowing flexible responding.

Or consider autobiographical memory. The size of long-term episodic memory is unknown, but we do know that simply by paying attention to as many as 10,000 distinct pictures over several days without attempting to memorize them, we can spontaneously recognize more than 90% a week later (Standing, 1973). Remarkable results like this are common when we use recognition probes, merely asking people to choose between known and new pictures. Recognition probes apparently work so well because they re-present the original conscious experience of each picture in its entirety. Here the brain does a marvelous job of memory search, with little effort. It seems that humans create memories of the stream of input merely by paying attention, but because we are always paying attention to something, in every waking moment, this suggests that autobiographical memory may be very large indeed. Once again we have a vast unconscious domain, and we gain access to it using consciousness. Mere consciousness of some event helps store a recognizable memory of it, and when we experience it again, we can distinguish it accurately from millions of other experiences.

The ability to access unconscious processes via consciousness applies also to the vast number of unconscious automatisms that can be triggered by conscious events, including eye movements evoked by conscious visual motion, the spontaneous inner speech that often accompanies reading, the hundreds of muscle groups that control the vocal tract, and those that coordinate and control other skeletal muscles. None of these automatic neuronal mechanisms are conscious in any detail under normal circumstances. Yet they are triggered by conscious events. This triggering function is hampered when the conscious input is degraded by distraction, fatigue, somnolence, sedation, or low signal fidelity.

Consciousness seems to be needed to access at least four great bodies of unconscious knowledge: the lexicon of natural language, autobiographical memory, the automatic routines that control actions, and even the detailed firing of neurons and neuronal populations, as shown in biofeedback training. Consciousness seems to create access to vast unconscious domains of expert knowledge and skill.

Survey of Cognitive Theories of Consciousness

Overview

In the survey that follows, cognitive theories of consciousness are organized into three broad categories based on the architectural characteristics of the models. The first group consists of examples of information-processing theories that emphasize modular processes: Johnson-Laird’s Operating System Model of Consciousness, Schacter’s Model of Dissociable Interactions and Conscious Experience (DICE), Shallice’s Supervisory System, Baddeley’s Early and Later Models of Working Memory, and Schneider and Pimm-Smith’s Message-Aware Control Mechanism. The second group includes network theories that explain consciousness as patterns of system-wide activity: Pribram’s Holonomic Theory, Tononi and Edelman’s Dynamic Core Hypothesis, and Walter Freeman’s Dynamical Systems Approach. The third group includes globalist models that combine aspects of information-processing theories and network theories: Baars’ Global Workspace Theory, Franklin’s IDA as an implementation of GW theory, and Dehaene’s Global Neuronal Network Theory. Theories have been selected that represent the recent history of cognitive modeling of consciousness from the 1970s forward and that account in some way for the evidence described above concerning the capability contrasts of conscious and unconscious processes.

Information-Processing Theories That Emphasize Modular Processes: Consciousness Depends on a Kind of Processing

Theories in this group emphasize the information-processing and action control aspects of the cognitive architecture. They tend to explain consciousness in terms of “flow of control” or flow of information among specialist modules.

johnson-laird’s operating system model of consciousness

Johnson-Laird’s (1988) operating system model of consciousness emphasizes its role in controlling mental events, such as directing attention, planning and triggering action and thought, and engaging in purposeful self-reflection. Johnson-Laird proposes that the cognitive architecture performs parallel processing in a system dominated by a control hierarchy. His system involves a collection of largely independent processors (finite state automata) that cannot modify each other but that can receive messages from each other; each initiates computation when it receives appropriate input from any source. Each passes messages up through the hierarchy to the operating system that sets goals for the subsystems. The operating system does not have access to the detailed operations of the subsystems – it receives only their output. Likewise, the operating system does not need to specify the details of the actions it transmits to the processors – they take in the overall goal, abstractly specified, and elaborate it in terms of their own capabilities.

[Figure 8.1 depicts knowledge modules (lexical, conceptual, facial, spatial, self), episodic memory, the conscious awareness system, the executive system, the procedural-habit system, and the response system; consciousness operates at the conscious awareness system.]

Figure 8.1. Schacter’s Dissociable Interactions and Conscious Experience (DICE) Model. (Redrawn from Schacter, 1990.) Phenomenal awareness depends on intact connections between the conscious awareness system and the individual knowledge modules or episodic memory. The conscious awareness system is the gateway to the executive system, which initiates voluntary action.

In this model, conscious contents reside in the operating system or its working memory. Johnson-Laird believes his model can account for aspects of action control, self-reflection, intentional decision making, and other metacognitive abilities.
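Johnson-Laird describes this architecture verbally rather than as code, but its control relations can be sketched in a few lines. The sketch below is our own illustration, assuming nothing beyond what the text states: processors that hide their internals and report only outputs, and an operating system that broadcasts abstractly specified goals downward. All class and method names are invented for the example.

```python
# Illustrative sketch (not Johnson-Laird's implementation) of a control
# hierarchy: independent processors exchange messages; the operating system
# sees only their outputs and sends down abstract goals.

class Processor:
    """A specialist subsystem: reacts to messages, hides its internals."""
    def __init__(self, name, elaborate):
        self.name = name
        self.elaborate = elaborate   # private routine, invisible to the OS

    def receive(self, goal):
        # The processor turns an abstractly specified goal into detailed
        # operations using its own capabilities, and reports only a summary.
        detail = self.elaborate(goal)          # detailed work stays internal
        return f"{self.name}: done ({goal})"   # only this output goes up

class OperatingSystem:
    """Top of the control hierarchy; conscious contents reside here."""
    def __init__(self, processors):
        self.processors = processors
        self.working_memory = []               # conscious contents

    def set_goal(self, goal):
        # The OS broadcasts the abstract goal; it never sees the details.
        for p in self.processors:
            self.working_memory.append(p.receive(goal))

hand = Processor("motor", lambda g: ["reach", "grasp", "lift"])
eye = Processor("vision", lambda g: ["saccade", "fixate"])
os_model = OperatingSystem([hand, eye])
os_model.set_goal("pick up the cup")
print(os_model.working_memory)
```

Note how the operating system's record contains only the processors' outputs, never their internal steps, mirroring the model's claim that the operating system lacks access to subsystem operations.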

schacter’s model of dissociable interactions and conscious experience (dice)

Accumulating evidence regarding the neuropsychological disconnections of processing from consciousness, particularly implicit memory and anosognosia, led Schacter (1990) to propose his Dissociable Interactions and Conscious Experience (DICE) model (see Figure 8.1): “The basic idea motivating the DICE model . . . is that the processes that mediate conscious identification and recognition – that is, phenomenal awareness in different domains – should be sharply distinguished from modular systems that operate on linguistic, perceptual, and other kinds of information” (Schacter, 1990, pp. 160–161).

Like Johnson-Laird’s model, Schacter’s DICE model assumes independent memory modules and a lack of conscious access to details of skilled/procedural knowledge. It is primarily designed to account for memory dissociations in normally functioning and damaged brains. There are two main observations of interest. First, with the exception of coma and stupor patients, failures of awareness in neuropsychological cases are usually restricted to the domain of the impairment; these patients do not have difficulty generally in gaining conscious access to other knowledge sources. Amnesic patients, for example, do not necessarily have trouble reading words, whereas alexic individuals do not necessarily have memory problems.


Second, implicit (non-conscious) memory of unavailable knowledge has been demonstrated in many conditions. For example, name recognition is facilitated in prosopagnosic patients when the name of the to-be-identified face is accompanied by a matching face – even though the patient does not consciously recognize the face. Numerous examples of implicit knowledge in neuropsychological patients who do not have deliberate, conscious access to the information are known (see Milner & Rugg, 1992). These findings suggest an architecture in which various sources of knowledge function somewhat separately, because they can be selectively lost; these knowledge sources are not accessible to consciousness, even though they continue to shape voluntary action.

In offering DICE, Schacter has given additional support to the idea of a conscious capacity in a system of separable knowledge sources, specifically to explain spared implicit knowledge in patients with brain damage. DICE does not aim to explain the limited capacity of consciousness or the problem of selecting among potential inputs. In agreement with Shallice (see below), the DICE model suggests that the primary role of consciousness is to mediate voluntary action under the control of an executive. However, the details of these abilities are not spelled out, and other plausible functions are not addressed.

shallice’s supervisory system

Shallice shares an interest in the relation of volition and consciousness with James (1890), Baars (1988), and Mandler (1975). In 1972, Shallice argued that psychologists needed to have a rationale for using the data of introspection. To do so, he said, the nature of consciousness would need to be considered. He thought at that time that the selector of input to the dominant action system had properties that corresponded to those of consciousness. Shallice’s early theory (1978) focused on conscious selection for a dominant action system, the set of current goals that work together to control thought and action. Subsequently, Shallice (1988; Norman & Shallice, 1980) modified and refined the theory to accommodate a broader range of conscious functions (depicted in Figure 8.2).

Shallice describes an information-processing system as having five characteristics. First, it consists of a very large set of specialized processors, with several qualifications on their “modularity”:

• There is considerable variety in the way the subsystems can interact.

• The overall functional architecture is seen as partly innate and partly acquired, as with the ability to read.

• The “modules” in the system include not only input processors but also specialized information stores, information management specialists, and other processing modules.

Second, a large set of action and thought schemata can “run” on the modules. These schemata are conceptualized as well-learned, highly specific programs for routine activities, such as eating with a spoon, driving to work, etc. Competition and interference between currently activated schemata are resolved by another specialist system, CONTENTION SCHEDULING, which selects among the schemata based on activation and lateral inhibition. Contention scheduling acts during routine operations.

Third, a SUPERVISORY SYSTEM functions to modulate the operation of contention scheduling. It has access to representations of operations, of the individual’s goals, and of the environment. It comes into play when operation of routinely selected schemata does not meet the system’s goals; that is, when a novel or unpredicted situation is encountered or when an error has occurred.

Fourth, a LANGUAGE SYSTEM is involved that can function either to activate schemata or to represent the operations of the supervisory system or specialist systems.


[Figure 8.2 depicts the supervisory system, episodic memory, the language system, contention scheduling, a trigger database, and special-purpose processing systems; consciousness depends on the concurrent and coherent operation of several control systems.]

Figure 8.2. Shallice’s Supervisory System Model of Conscious Processing. Solid arrows represent obligatory communications; dashed arrows represent optional communications. (Drawn from Shallice, 1988.)

Fifth, more recently an EPISODIC MEMORY component containing event-specific traces has been added to the set of control processes.

Thus, the supervisory system, contention scheduling, the language system, and episodic memory all serve higher-level or control functions in the system. As a first approximation, one of these controllers or several together might be taken as the “conscious part” of the system. However, as Shallice points out, consciousness cannot reside in any of these control systems taken individually. No single system is either necessary or sufficient to account for conscious events. Consciousness remains even when one of these control systems is damaged or disabled, and the individual control systems can all operate autonomously and unconsciously. Instead, Shallice suggests that consciousness may arise on those occasions where there is concurrent and coherent operation of several control systems on representations of a single activity. In this event, the contents of consciousness would correspond to the flow of information between the control systems and the flow of information and control from the control systems to the rest of the cognitive system.
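The competition that contention scheduling resolves, selection among activated schemata by mutual lateral inhibition, can be sketched numerically. The update rule, parameter values, and schema names below are our own toy assumptions for illustration, not Norman and Shallice's specification; the point is only that the most active schema suppresses its rivals and wins without any supervisory intervention.

```python
# Toy sketch of contention scheduling: candidate schemata compete through
# activation plus lateral inhibition until one dominates and is selected.

def contention_scheduling(activations, inhibition=0.02, steps=50):
    """Each schema's activation is suppressed in proportion to the summed
    activation of its rivals; return the name of the surviving winner."""
    act = dict(activations)
    for _ in range(steps):
        total = sum(act.values())
        # Lateral inhibition: rivals' total activation drags each schema down.
        act = {name: max(0.0, a - inhibition * (total - a))
               for name, a in act.items()}
    return max(act, key=act.get)

# "Eating with a spoon" starts slightly more active than its competitors,
# so it loses the least to inhibition and wins the routine competition.
winner = contention_scheduling({"eat_with_spoon": 0.6,
                                "drive_to_work": 0.5,
                                "tie_shoelaces": 0.3})
print(winner)   # eat_with_spoon
```

Because a more active schema always faces a smaller rival total, its lead widens over iterations; this is the sense in which selection during routine action needs no supervisory system.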

Shallice’s (1988) model aims primarily to “reflect the phenomenological distinctions between willed and ideomotor action” (p. 319). Shallice identifies consciousness with the control of coherent action subsystems and emphasizes the flow of information among the subsystems.

baddeley’s early and later models of working memory: 1974 to 2000

Working memory is a functional account of the workings of temporary memory (as distinct from long-term memory). Baddeley and Hitch (1974) first proposed their multicomponent model of working memory (WM) as an advance over single-store models, such as Short-Term Memory (STM; Atkinson & Shiffrin, 1968). The original WM model was simple, composed of a central executive with two subsystems, the phonological loop and the visuospatial sketchpad. WM was designed to account for short-duration, modality-specific, capacity-limited processing of mnemonic information. It combined the storage capacity of the older STM model with an executive process that could “juggle” information between two slave systems and to and from long-term memory. The evolving model of WM has been successful in accounting for behavioral and neurological findings in normal participants and in neuropsychological patients. From the beginning, working memory, particularly transactions between the central executive and the subsystems, has been associated with conscious and effortful information processing. However, these associations were rarely stated in terms of the question of consciousness as such.

[Figure 8.3 depicts the central executive controlling the visuospatial sketchpad, the episodic buffer, and the phonological loop, which interface with visual semantics, episodic long-term memory, and language; retrieval from the episodic buffer involves conscious processing.]

Figure 8.3. Baddeley’s Model of Working Memory. This model incorporates the episodic buffer. (Adapted from Baddeley, 2000.)

Recently, Baddeley (2000, 2001) has proposed an additional WM component called the episodic buffer (see Figure 8.3 for a depiction of the most recent model). This addition to the WM architecture means that the central executive now becomes a purely attentional, controlled process, while multimodal information storage devolves onto the episodic buffer. The episodic buffer “comprises a limited capacity system that provides temporary storage of information held in a multimodal code, which is capable of binding information from the subsidiary systems, and from long-term memory, into a unitary episodic representation. Conscious awareness is assumed to be the principal mode of retrieval from the buffer” (Baddeley, 2000, p. 417). Baddeley (2001) believes that the binding function served by the episodic buffer is “the principal biological advantage of consciousness” (p. 858).

Conscious processing in WM appears to reside in the transactions of the central executive with the episodic buffer (and perhaps with the visuospatial sketchpad and phonological loop), in which the central executive controls and switches attention while the episodic buffer creates and makes available multimodal information.

Baddeley’s episodic buffer resembles other models of consciousness in its ability to briefly hold multimodal information and to combine many information sources into a unitary representation. A major difference between WM and other models is that WM was not proposed as a model of consciousness in general. It is restricted to an accounting of mnemonic processes – both conscious and unconscious. In addition, the WM model does not assume that contents of the episodic buffer are “broadcast” systemwide as a means of organizing and recruiting other non-mnemonic processes. No account is given of the further distribution of information from the episodic buffer, once it is accessed by the central executive.
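The episodic buffer's two defining properties, limited capacity and the binding of codes from several subsystems and long-term memory into one unitary episode, can be sketched as follows. Baddeley's model is a functional account, not a program, so the class, its assumed four-episode capacity, and the method names are illustrative inventions of ours.

```python
# Schematic sketch (names and capacity are our assumptions) of the episodic
# buffer: limited-capacity storage that binds phonological, visuospatial,
# and long-term-memory codes into one multimodal episode, with conscious
# awareness as the principal mode of retrieval.

from collections import deque

class EpisodicBuffer:
    CAPACITY = 4    # assumed small, chunk-like limit

    def __init__(self):
        # Oldest episodes fall out when the limited capacity is exceeded.
        self._episodes = deque(maxlen=self.CAPACITY)

    def bind(self, phonological=None, visuospatial=None, ltm=None):
        """Binding: separate codes become one unitary representation."""
        self._episodes.append({"phonological": phonological,
                               "visuospatial": visuospatial,
                               "ltm": ltm})

    def retrieve(self):
        """Retrieval from the buffer, the assumed mode of conscious access."""
        return list(self._episodes)

buffer = EpisodicBuffer()
buffer.bind(phonological="the word 'cup'",
            visuospatial="red mug on the left",
            ltm="my usual coffee mug")
episode = buffer.retrieve()[-1]
print(episode["visuospatial"])   # the bound episode is retrieved whole
```

The sketch also makes the section's contrast concrete: nothing here broadcasts the retrieved episode to other processes, just as the WM model gives no account of further distribution.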

schneider and pimm-smith’s message-aware control mechanism

Schneider and Pimm-Smith have proposed a model of cognition that incorporates a conscious processing component and allows widespread distribution of information from specialist modules (Schneider & Pimm-Smith, 1997). The model is an attempt to capture the adaptive advantage that consciousness adds to cognitive processing. According to Schneider and Pimm-Smith, “consciousness may be an evolutionary extension of the attentional system to modulate cortical information flow, provide awareness, and facilitate learning particularly across modalities. . . . According to [the] model, consciousness functions to monitor and transmit global messages that are generally received by the whole system serially to avoid the cross-talk problem” (pp. 65, 76). The conscious component of this model, the conscious controller, stands between the serially arriving high-level messages forwarded by the specialist modules and the attentional controller, which sends scalar messages back to the specialist modules according to their value in ongoing activities. The conscious controller is not privy to all information flowing within the system, examining data only after lower-level modules have produced invariant codes and selecting those messages that relate to currently active goals. Figure 8.4 illustrates the functional relations among components in Schneider and Pimm-Smith’s model.

[Figure 8.4 depicts the message-aware controller, goal systems, and attentional controller linked by an inner loop to modality-specific modules for vision, touch, and speech; gain control and activity and priority codes pass between the attentional controller and the modules, and consciousness consists of the message-aware process and its interactions.]

Figure 8.4. A simplified view of Schneider and Pimm-Smith’s Message-Aware Control Mechanism.

The message-aware control mechanism model of consciousness depends on localized specialist processors – auditory, haptic, visual, speech, motor, semantic, and spatial modules – which each have their own internal levels of processing. These modules feed their output codes serially and separately to other modules and to the consciousness controller via an inner loop. According to Schneider and Pimm-Smith, “Consciousness is a module on the inner loop that receives messages that evoke symbolic codes that are utilized to control attention to specific goals” (p. 72). Furthermore, “consciousness is the message awareness of the controller. . . . This message awareness allows control decisions based on the specific messages transmitted in the network rather than labeled line activity or priority codes from the various modules in the network” (pp. 72–73). There is also information sharing through non-conscious channels. The attentional controller receives activity and priority signals from specialist modules marking the availability of input through non-conscious outer-loop channels, and it sends gain control information back to the specialists.

Schneider and Pimm-Smith’s model has several notable characteristics: (1) information is widely distributed; (2) particular content is created by localized specialist modules, which themselves have levels of processing; (3) consciousness is identified as a separable and separate function that modulates the flow of information throughout the system; and (4) access to consciousness is through reference to goal systems. This elegant model accounts for the modulation of attention by reference both to current goals and to the recruitment of specialist modules in the service of goals through the global distribution of conscious messages.
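The conscious controller's core operation, examining serially arriving module messages and keeping only those that relate to active goals, can be sketched as follows. The function, the message format, and the keyword-matching rule are hypothetical illustrations of ours, not Schneider and Pimm-Smith's implementation.

```python
# Illustrative sketch of the message-aware controller: specialist modules
# post high-level messages on an inner loop; the controller examines them
# one at a time (serially, avoiding cross-talk) and selects only those
# relevant to currently active goals for global transmission.

def conscious_controller(inner_loop_messages, active_goals):
    """Serially select the messages that relate to currently active goals."""
    broadcast = []
    for module, message in inner_loop_messages:   # serial arrival
        # Goal-based access: a message becomes conscious only if it bears
        # on an active goal (keyword match stands in for symbolic codes).
        if any(goal in message for goal in active_goals):
            broadcast.append((module, message))   # the global message
    return broadcast

messages = [("vision", "red light ahead"),
            ("audition", "radio playing"),
            ("motor", "foot moving toward brake")]
selected = conscious_controller(messages, active_goals={"light", "brake"})
print(selected)
# Only the goal-relevant vision and motor messages reach consciousness;
# the irrelevant audition message stays in the non-conscious channels.
```

The design choice to return a list rather than push messages to modules reflects the section's point that selection by reference to goals, not the mechanics of delivery, is what the conscious controller contributes.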

Network Theories That Explain Consciousness as Patterns of System-Wide Activity

Although the information-processing theories in the previous section attempted to model consciousness in terms of the selection and interactions of specialized, semi-autonomous processing modules, the theories described in the present section build on networks and connectionist webs. It may be helpful to view the information-processing models as macroscopic and the network theories as microscopic. It is possible that the activities attributed to modules in the information-processing models could, in fact, be seen to be carried out by connectionist networks when viewed microscopically. As we examine network theories, it should be kept in mind that, although network theorists often make comparisons with and assumptions about networks composed of neurons, most network theories, including those described here, are in fact functional descriptions of activities that are presumed to occur in the brain. Brain networks themselves are not directly assessed.

pribram’s holonomic theory

Karl Pribram has developed a holographic theory (more recently, holonomic theory) of brain function and conscious processing (Pribram, 1971). He has built on the mathematical formulations of Gabor (1946), combining holography and communication theory. To state things in simple form, the holonomic model postulates that the brain is a holographic machine. That is, the brain handles, represents, stores, and reactivates information through the medium of “wetware” holograms. Holograms, though complex mathematically, are familiar to us as the three-dimensional images found on credit cards and driver’s licenses. Such a hologram is a photograph of an optical interference pattern, which contains information about the intensity and phase of light reflected by an object. When illuminated with a coherent source of light, it will yield a diffracted wave that is identical in amplitude and phase distribution with the light reflected from the original object. The resulting three-dimensional image is what we see on our credit cards.

In a holonomic model, Pribram says, information is encoded by wave interference patterns rather than by the binary units (bits) of computer science. Rather than encoding sensory experience as a set of features that are then stored or used in information processing, sensory input in the holonomic view is encoded as the interference pattern resulting from interacting waves of neuronal population activity. Stated in the language of visual perception, the retinal image is understood to be coded by a spatial frequency distribution over visual cortex, rather than by individual features of the visual scene. Because the surface layer of the cortex consists of an entangled “feltwork” of dendrites, Pribram suggests that cortex represents a pattern of spatial correlations in a continuous dendritic medium, rather than by discrete, localized features expressed by the firing of single neurons. The dendritic web of the surface layers of cortex is the medium in which representations are held. Functionally, it is composed of oscillating graded potentials. In Pribram’s view, nerve impulses have little or no role to play in the brain web. However, dendritic potentials obviously trigger axonal firing when they rise above a neuronal threshold, so that long-distance axons would also be triggered by the dendro-dendritic feltwork.

The holonomic model finds support in the neuropsychological finding that focal brain damage does not eliminate memory content. Further, it is consistent with the fact that each sensory neuron tends to prefer one type of sensory input, but often fires to different inputs as well. Neurons do not operate individually but rather participate in different cell assemblies or active populations at different times; the cell assemblies are themselves “kaleidoscopic.”

In a hologram, the information necessary to construct an image is inherently distributed. Pribram explains how the notion of distributed information can be illustrated with a slide projector (Pribram & Meade, 1999). If one inserts a slide into the projector and shows a “figure,” then removes the lens from the front of the projector, there is only a fuzzy bright area. There is nothing, “no-thing,” visible on the screen. But that does not mean there is no information in the light. The information can be re-visualized by placing a pair of reading glasses into the light beam. On the screen, one now again sees the “figure” in the slide. Putting two lenses of the eyeglasses in the beam, one sees two “figures” that can be made to appear in any part of the bright area. Thus any part of the beam of light carries all the information needed to reconstruct the picture on the slide. Only resolution is lost.
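The slide-projector demonstration has a simple numerical analogue, sketched below with a discrete Fourier transform standing in for the holographic encoding. The example is ours, not Pribram's: a one-dimensional "scene" is transformed to the spectral domain, most of the coefficients are discarded (a small "piece of the beam" is kept), and the inverse transform still recovers the whole scene, blurred but complete.

```python
# Numerical analogy for distributed information: a fragment of a spectral
# (Fourier/holographic) encoding reconstructs the entire signal, with only
# resolution lost.  Pure-Python DFT; no external libraries assumed.

import cmath

def dft(x):
    """Discrete Fourier transform: the 'holographic' encoding of the scene."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

def idft(spec):
    """Inverse transform: 're-illuminating' the encoding to form an image."""
    n_pts = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * n / n_pts)
                for k in range(n_pts)).real / n_pts for n in range(n_pts)]

# A one-dimensional "scene" with two features.
scene = [0.0] * 128
for i in range(20, 30):
    scene[i] = 1.0
for i in range(90, 110):
    scene[i] = 0.5

spectrum = dft(scene)

# Keep only the lowest harmonics: 15 of 128 coefficients (frequencies
# 0..7 and their conjugates), i.e. about one ninth of the encoding.
kept = [0j] * 128
for k in list(range(8)) + list(range(121, 128)):
    kept[k] = spectrum[k]

blurred = idft(kept)
# Both features survive at roughly their original places and heights even
# though most of the encoding was discarded: the whole scene is spread
# across every part of the spectral code, and only sharpness is lost.
```

The analogy is loose, since a hologram distributes information spatially across the recording medium rather than across frequencies, but it captures the claim at issue: discarding most of a distributed encoding degrades resolution, not content.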

According to the holonomic theory, no “receiver” is necessary to “view” the result of the transformation (from spectral holographic to “image”), thus avoiding the homunculus problem (the problem of infinite regress, of a little man looking at the visual scene, who in turn needs a little man in his brain, and so on ad infinitum). It is the activity of the dendro-dendritic web that gives rise to the experience. Correspondingly, remembering is a form of re-experiencing or re-constructing the initial sensory input, perhaps by cuing a portion of the interference pattern. Finally, Pribram believes that we “become aware of our conscious experience due to a delay between an incoming pattern of signals before it matches a previously established outgoing pattern” (Pribram & Meade, 1999, p. 207).

Pribram’s holonomic model is attractive in that it can account for the distributed properties of memory and sensory processing. The model makes use of the dendritic feltwork that is known to exist in the surface layer of cortex. However, the model fails to help us understand the difference between conscious and unconscious processing or the unique functions of consciousness qua consciousness. Pribram has not treated consciousness as a variable and cannot tell us what it is that consciousness adds to the cognitive system.

edelman and tononi’s dynamic core hypothesis

Edelman and Tononi’s theory of conscious-ness (2000; see also Tononi & Edelman,1998) combines evidence from large-scaleconnectivities in the thalamus and cortex,behavioral observation, and mathematicalproperties of large-scale brain-like networks.Based on neuropsychological and lesion evi-dence that consciousness is not abolishedby losses of large volumes of brain tissue(Penfield, 1958), Edelman and Tononi rejectthe idea that consciousness depends on par-ticipation of the whole brain in an undif-ferentiated fashion. At the same time, they,along with many others, reject the view thatconsciousness depends only on local prop-erties of neurons. Tononi and Edelman cite,for example, PET evidence suggesting thatmoment-to-moment awareness is highly cor-related with increasing functional connec-tivity between diverse cortical regions (see,for example, McIntosh, Rajah, & Lobaugh,1999). In other words, the same cortical areasseem to participate in conscious experience


P1: KAE0521857430c08 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 24 , 2007 9:12

192 the cambridge handbook of consciousness

or not at different times, depending on their current dynamic connectivity. This idea is resonant with Pribram’s description of neural assemblies being “kaleidoscopic.”

The fundamental idea in Edelman and Tononi’s theory is the dynamic core hypothesis, which states that conscious experience arises from the activity of an ever-changing functional cluster of neurons in the thalamocortical complex of the brain, characterized by high levels of differentiation as well as strong reciprocal, re-entrant interaction over periods of hundreds of milliseconds. The particular neurons participating in the dynamic core are ever changing, while internal integration in the dynamic core is maintained through re-entrant connections. The hypothesis highlights the functional connections of distributed groups of neurons, rather than their local properties; thus, the same group of neurons may at times be part of the dynamic core and underlie conscious experience, whereas at other times, this same neuronal group will not be part of the dynamic core and will thus be part of unconscious processing. Consciousness in this view is not a thing or a brain location but rather, as William James argued, a process, occurring largely within the re-entrant meshwork of the thalamocortical system.

Edelman and Tononi (2000) take issue with Baars’ (1988) concept of global broadcasting (see below) as a way to explain capacity limits and wide access in conscious processing. In Baars’ view, the information content of any conscious state is apparently contained in the single message that is being broadcast to specialist systems throughout the brain at any one moment; information content is thus limited but widely distributed. Edelman and Tononi (2000) argue for an alternative view: that the information is not in the message, but rather in the number of system states that can be brought about by global interactions within the system itself. In place of Baars’ broadcasting or theater metaphor, they offer an alternative:

[A] better metaphor would be . . . a riotous parliament trying to make a decision, signaled by its members raising their hands.

Before counting occurs, each member of parliament is interacting with as many other members as possible not by persuasive rhetoric . . . but by simply pushing and pulling. Within 300 msec., a new vote is taken. How informed the decision turns out to be will depend on the number of diverse interactions within the parliament. In a totalitarian country, every member will vote the same; the information content of constant unanimity is zero. If there are two monolithic groups, left and right, such that the vote of each half is always the same, the information content is only slightly higher. If nobody interacts with anyone, the voting will be purely random, and no information will be integrated within the system. Finally, if there are diverse interactions within the parliament, the final vote will be highly informed (Edelman & Tononi, 2000, pp. 245–246).

A constantly changing array of ever-reorganized mid-sized neuronal groups in a large system of possible groups has high levels of complexity and integration – characteristics of conscious states. Within this model, unconscious specialist systems are local, non-integrated neuronal groups. How the unconscious specialists are recruited into the dynamic core is not made entirely clear in the theory. Edelman and Tononi say that consciousness in its simplest form emerges in the re-entrant linkage between current perceptual categorization and value-category memory (short-term and long-term memory). Conscious experience is actually a succession of 100-ms snapshots of the current linkages that constitute the “remembered present.”
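The information-theoretic point of the parliament metaphor can be made concrete with a toy Shannon-entropy calculation over possible voting patterns. This is a deliberate simplification of Tononi and Edelman’s actual integration measures; the parliament size and the three scenarios are illustrative inventions, not the chapter’s formalism.

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy H = sum(-p * log2(p)) of a distribution, in bits."""
    return sum(-p * log2(p) for p in probs if p > 0)

n = 10  # members of a toy parliament

# Totalitarian: only one voting pattern is ever observed -> zero bits.
h_totalitarian = entropy_bits([1.0])

# Two monolithic blocs: each half votes as a unit, each bloc a fair
# coin -> four equiprobable joint patterns -> two bits.
h_two_blocs = entropy_bits([0.25] * 4)

# No interaction: all 2**n patterns equiprobable. Entropy is maximal,
# but since no vote depends on any other, none of it is *integrated*
# within the system -- Edelman and Tononi's point about random voting.
h_independent = entropy_bits([1 / 2**n] * 2**n)

print(h_totalitarian, h_two_blocs, h_independent)  # -> 0.0 2.0 10.0
```

The diverse-interaction case they favor sits between the extremes: many distinguishable joint patterns (high entropy), each shaped by interactions among members (high integration).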

Perhaps Baars and Tononi and Edelman are not so different on closer examination. Baars’ (1988, 1998) model supposes that there is reciprocal exchange between the global workspace (GW) and specialist systems in the architecture of consciousness; it is difficult to see why this is different from the re-entrant linkages between neuronal groups in Edelman and Tononi’s theory. Furthermore, within any one “snapshot” of the system, the pattern of dynamically linked elements in Baars’ model – GW and specialists that are able to receive the particular


cognitive theories of consciousness 193

Figure 8.5. A Hilbert analysis of analytic phase differences in EEG across cortical surface measured over 400 ms in rabbit and human conscious processing. Phase differences are calculated in the beta band (12–30 Hz) for human EEG and in the gamma band (20–50 Hz) for the rabbit EEG. (With permission of the author.) (See color plates.)

message that has been disseminated – looks very much like the pattern of momentarily linked neuronal groups in Tononi and Edelman’s model that are recruited in the moment depending on environmental input and value memories. Two strengths of the Tononi and Edelman model are its acknowledgment of long-distance connectivity among specialist brain regions as a characteristic of conscious processes, as well as the dynamic nature of these connections.

walter freeman’s dynamical systems approach: frames in the cinema

Like Pribram, Walter Freeman has worked to obtain empirical support for a cortex-wide dynamic neural system that can account for behavioral data observed in conscious activities. Freeman’s Dynamical Systems approach to consciousness is built on evidence for repetitive global phase transitions occurring simultaneously over multiple areas of cortex during normal behavior (see Figure 8.5; Freeman, 2004;

Freeman & Rogers, 2003). Freeman and his colleagues have analyzed EEGs, recorded from multiple high-density electrode arrays (64 electrodes) fixed on the cortex of rabbits and on the scalp of human volunteers. An index of synchronization was obtained for pairs of signals located at different cortical sites to detect and display epochs of mutual engagement between pairs. The measure was adapted to derive an index of global synchronization among all four cortices (frontal, parietal, temporal, and occipital) – global epochs of phase stabilization (“locking”) involving all cortices under observation during conscious perceptual activity. These epochs of phase locking can be seen in the “plateaus” of global coherence in Figure 8.5. The peaks in the figure indicate momentary, global decoherence.

To understand Freeman’s findings, we have to understand the basics of Hilbert analysis as it is shown in Figure 8.5. Hilbert analysis of the EEGs recorded from electrode arrays produces a three-dimensional graphical representation. In it, the phase


difference between pairs of cortical electrodes within a particular EEG band is plotted against time (in milliseconds) and spatial location (represented by electrode number). The resulting plot is called a Hilbert space. The Hilbert space can be read like a topographical map. In the plot in Figure 8.5, we can see many flat plateau areas lying between peaked ridges. The plateaus represent time periods (on the order of 50 ms) in which many pairs of EEG signals from different cortical locations are found to be in phase with each other. The ridges represent very short intervals when all of these pairs are simultaneously out of phase, before returning to phase locking. These out-of-phase or decoherent epochs appear to be non-conscious transitions between moments of consciousness.
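The phase measure underlying Figure 8.5 can be sketched for a single electrode pair with standard tools. This is a toy illustration with synthetic 20-Hz signals, not Freeman’s recording or filtering pipeline: `scipy.signal.hilbert` returns the analytic signal, whose angle is the instantaneous phase at each sample.

```python
import numpy as np
from scipy.signal import hilbert

fs = 500                                  # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)

# Two synthetic beta-band signals from different "electrodes"; the
# second slips out of phase for 100 ms mid-recording.
slip = np.where((t > 0.9) & (t < 1.0), np.pi / 2, 0.0)
sig_a = np.sin(2 * np.pi * 20 * t)
sig_b = np.sin(2 * np.pi * 20 * t + slip)

# Analytic signal a(t) = x(t) + i*H[x](t); its angle is the
# instantaneous phase at each sample.
phase_a = np.angle(hilbert(sig_a))
phase_b = np.angle(hilbert(sig_b))

# Plotting dphi against time would show a flat "plateau" (phase
# locking) broken by a brief ridge (decoherence) around the slip.
dphi = np.abs(phase_a - phase_b)
locked = dphi < 0.2                       # crude phase-locking criterion
print(f"fraction of samples phase-locked: {locked.mean():.2f}")
```

Freeman’s analysis does this for all electrode pairs at once, which is what stacks these traces into the plateau-and-ridge surface of the figure.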

According to Freeman (2004),

The EEG shows that neocortex processes information in frames like a cinema. The perceptual content is found in the phase plateaus from rabbit EEG; similar content is predicted to be found in the plateaus of human scalp EEG. The phase jumps show the shutter. The resemblance across a 33-fold difference in width of the zones of coordinated activity reveals the self-similarity of the global dynamics that may form Gestalts (multisensory percepts). (Caption to cover illustration, p. i)

Freeman’s data are exciting in their ability to map the microscopic temporal dynamic changes in widespread cortical activity during conscious perception – something not found in other theories. As a theory of consciousness, the dynamical systems approach focuses primarily on describing conscious perceptual processing at the cortical level. It does not attempt to explain the conscious/non-conscious difference or the function of consciousness in the neuropsychological system. With many neurocognitive theorists, we share Freeman’s question about how the long-range global state changes come about virtually simultaneously. Freeman’s hypothesis of Self-organized Criticality suggests that the neural system is held in a state of dynamic tension that can change in an all-or-none fashion with small environmental perturbations. He says “a large system can hold itself in a near-unstable state, so that by a multitude of adjustments it can adapt to environments that change continually and unpredictably” (Freeman & Rogers, 2003, p. 2882).

Globalist Models That Combine Aspects of Information-Processing Theories and Network Theories

baars’ global workspace theory

A theater metaphor is the best way to approach Baars’ Global Workspace (GW) theory (Baars, 1988, 1998, 2001). Consciousness is associated with a global “broadcasting system” that disseminates information widely throughout the brain. The metaphor of broadcasting explicitly leaves open the precise nature of such a wide influence of conscious contents in the brain. It could vary in signal fidelity or degree of distribution, or it might not involve “labeled line” transmission at all, but rather activation passing, as in a neural network. Metaphors are only a first step toward explicit theory, and some theoretical decision points are explicitly left open.

If consciousness is involved with widespread distribution or activation, then conscious capacity limits may be the price paid for the ability to make single momentary messages act upon the entire system for purposes of coordination and control. Because at any moment there is only one “whole system,” a global dissemination capacity must be limited to one momentary content. (There is evidence that the duration of each conscious “moment” may be on the order of 100 ms, one-tenth of a second – see Blumenthal, 1977).

Baars develops these ideas through seven increasingly detailed models of a global workspace architecture, in which many parallel unconscious experts interact via a serial, conscious, and internally consistent global workspace (1983, 1988). Global workspace architectures or their functional equivalents have been developed by cognitive scientists since the 1970s; the notion of a “blackboard”


where messages from specialized subsystems can be “posted” is common to the work of Baars (1988), Reddy and Newell (1974), and Hayes-Roth (1984). The global workspace framework has a family resemblance to the well-known integrative theories of Herbert A. Simon (General Problem Solver or EPAM), Allen Newell (SOAR, 1992), and John R. Anderson (ACT*, 1983). Architectures much like this have also seen some practical applications. GW theory is currently a thoroughly developed framework, aiming to explain a large set of evidence. It appears to have fruitful implications for a number of related topics, such as spontaneous problem solving, voluntary control, and even the Jamesian “self” as agent and observer (Baars, 1988; Baars, Ramsoy, & Laureys, 2003).

GW theory relies on three theoretical constructs: unconscious specialized processors, a conscious Global Workspace, and unconscious contexts.

The first construct is the unconscious specialized processor, the “expert” of the psychological system. We know of hundreds of types of “experts” in the brain. They may be single cells, such as cortical feature detectors for color, line orientation, or faces, or entire networks and systems of neurons, such as cortical columns, functional areas like Broca’s or Wernicke’s areas, and basal ganglia. Like human experts, unconscious specialized processors may sometimes be quite “narrow-minded.” They are highly efficient in limited task domains and able to act independently or in coalition with each other. Working as a coalition, they do not have the narrow capacity limitations of consciousness, but can receive global messages. By “posting” messages in the global workspace (consciousness), they can send messages to other experts and thus recruit a coalition of other experts. For routine missions they may work autonomously, without conscious involvement, or they may display their output in the global workspace, thus making their work conscious and available throughout the system. Answering a question like “What is your mother’s maiden name?” requires a mission-specific coalition of unconscious experts, which report their answer to consciousness. Figure 8.6 shows the major constructs in GW theory and the functional relations among them.

The second construct is, of course, the global workspace (GW) itself. A global workspace is an architectural capability for system-wide integration and dissemination of information. It is much like the podium at a scientific meeting. Groups of experts at such a meeting may interact locally around conference tables, but to influence the meeting as a whole any expert must compete with others, perhaps supported by a coalition of like-minded experts, to reach the podium, whence global messages can be broadcast. New links among experts are made possible by global interaction via the podium and can then spin off to become new local processors. The podium allows novel expert coalitions to form that can work on new or difficult problems, which cannot be solved by established experts and committees. Tentative solutions to problems can then be globally disseminated, scrutinized, and modified.

The evidence presented in Tables 8.2 and 8.3 falls into place by assuming that information in the global workspace corresponds to conscious contents. Because conscious experience seems to be oriented primarily toward perception, it is convenient to imagine that preperceptual processors – visual, auditory, or multimodal – can compete for access to a brain version of a global workspace. For example, when someone speaks to us, the speech stream receives preperceptual processing through the speech specialist systems before the message in the speech stream is posted in consciousness. This message is then globally broadcast to the diverse specialist systems and can become the basis for action, for composing a verbal reply, or for cuing related memories. In turn, the outcome of actions carried out by expert systems can also be monitored and returned to consciousness as action feedback.
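The posting, broadcasting, and recruitment cycle just described can be caricatured in a few lines. This is my own minimal sketch, not Baars’ formal model: the `SpeechParser` and `MemoryCuer` experts, their activation values, and the retrieved name “Smith” are all invented for illustration.

```python
class Specialist:
    """An unconscious 'expert': bids to post, listens to broadcasts."""
    def bid(self, stimulus):
        return None          # (activation, message), or None to stay silent
    def receive(self, message):
        return None          # may return new input for the next cycle

class SpeechParser(Specialist):
    def bid(self, stimulus):
        if isinstance(stimulus, str):                 # preperceptual speech
            return (0.9, ("utterance", stimulus))

class MemoryCuer(Specialist):
    def receive(self, message):
        kind, content = message
        if kind == "utterance" and "maiden name" in content:
            return ("cue", "mother's maiden name")    # unconscious cuing
    def bid(self, stimulus):
        if isinstance(stimulus, tuple) and stimulus[0] == "cue":
            return (0.8, ("memory", "Smith"))         # hypothetical answer

def gw_cycles(specialists, stimulus, steps=5):
    """Serial conscious stream: one winning message broadcast per cycle."""
    conscious_stream = []
    for _ in range(steps):
        bids = [b for s in specialists if (b := s.bid(stimulus))]
        if not bids:
            break
        _, message = max(bids, key=lambda b: b[0])    # competition for access
        conscious_stream.append(message)              # global broadcast
        replies = [r for s in specialists if (r := s.receive(message))]
        stimulus = replies[0] if replies else None
    return conscious_stream

experts = [SpeechParser(), MemoryCuer()]
print(gw_cycles(experts, "what is your mother's maiden name?"))
```

Running this yields a two-step conscious stream: the parsed utterance is broadcast first, the memory specialist responds to the broadcast with an unconscious cue, and its retrieved answer wins the workspace on the next cycle – the serial “one message at a time” character of the theory falling out of a parallel set of experts.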

Obviously the abstract GW architecture can be realized in a number of different ways in the brain, and we do not know at this point which brain structures provide


Figure 8.6. Global Workspace Architecture: Basic constructs and their relations.

the best candidates. Although its brain correlates are not entirely clear at this time, there are possible neural analogs, including the reticular and intralaminar nuclei of the thalamus, one or more layers of cortex, long-range cortico-cortico connections, and/or active loops between sensory projection areas of cortex and the corresponding thalamic relay nuclei. Like other aspects of GW theory, such neural candidates provide testable hypotheses (Newman & Baars, 1993). All of the neurobiological proposals described in this chapter provide candidates (Freeman, 2004; Dehaene & Naccache, 2001; Edelman & Tononi, 2000; Tononi & Edelman, 1998), and some have been influenced by GW theory.

Context, the third construct in GW theory, refers to the powers behind the scenes of the theater of mind. Contexts are coalitions of expert processors that provide the director, playwright, and stagehands of that theater. They can

be defined functionally as knowledge structures that constrain conscious contents without being conscious themselves, just as the playwright determines the words and actions of the actors on stage without being visible. Conceptually, contexts are defined as pre-established expert coalitions that can evoke, shape, and guide global messages without themselves entering the global workspace.

Contexts may be momentary, as in the way the meaning of the first word in a sentence shapes an interpretation of a later word like “set,” or they may be long lasting, as with life-long expectations about love, beauty, relationship, social assumptions, professional expectations, worldviews, and all the other things people care about. Although contextual influences shape conscious experience without being conscious, contexts can also be set up by conscious events. The word “tennis” before “set” shapes the interpretation of “set,” even when “tennis” is already gone from consciousness. But “tennis” was




Figure 8.7. A naive processor approach to environmental novelty.

initially conscious and needed to be conscious to create the unconscious context that made sense of the word “set.”

Thus conscious events can set up unconscious contexts. The reader’s ideas about consciousness from years ago may influence his or her current experience of this chapter, even if the memories of the earlier thoughts do not become conscious again. Earlier experiences typically influence current experiences as contexts, rather than being brought to mind. It is believed, for example, that a shocking or traumatic event earlier in life can set up largely unconscious expectations that may shape subsequent conscious experiences.
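The “tennis”/“set” example can be rendered as a toy sketch (an entirely invented mini-lexicon, not Baars’ model): a consciously processed word installs a context that later disambiguates “set” without the context word itself returning to consciousness.

```python
# Hypothetical two-sense lexicon for the ambiguous word "set".
SENSES = {
    ("set", "tennis"): "a unit of games within a match",
    ("set", "mathematics"): "a collection of distinct elements",
}

class Interpreter:
    def __init__(self):
        self.context = None                  # unconscious; shapes, never shows

    def hear(self, word):
        """Interpret one word; a conscious event may install a context."""
        meaning = SENSES.get((word, self.context), word)
        if word in ("tennis", "mathematics"):
            self.context = word              # persists after the word fades
        return meaning

mind = Interpreter()
mind.hear("tennis")                          # conscious once, then gone
print(mind.hear("set"))                      # -> a unit of games within a match
```

The point of the sketch is only that `self.context` does the shaping while never itself appearing in the output stream – the functional definition of a context given above.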

shanahan: an answer to the modularity and frame problems

Shanahan and Baars (2005) suggest that the global workspace approach may provide a principled answer to the widely discussed “modularity” and “frame” problems. Fodor (1983) developed the view that cognitive functions like syntax are performed by “informationally encapsulated” modules, an idea that has some empirical plausibility. However, as stated by Fodor and others, modules are so thoroughly isolated from each other that it becomes difficult to explain how they can be accessed, changed, and mobilized on behalf of general goals. A closely related difficulty, called the frame problem, asks how an autonomous agent can deal with novel situations without following out all conceivable implications of the novel event. For example, a mobile robot on a cart may roll from one room to another. How does it know what is new in the next room and what is not, without explicitly testing

out all features of the new environment? This task quickly becomes computationally prohibitive. Shanahan and Baars (2005) point out the following:

What the global workspace architecture has to offer . . . is a model of information flow that explains how an informationally unencapsulated process can draw on just the information that is relevant to the ongoing situation without being swamped by irrelevant rubbish. This is achieved by distributing the responsibility for deciding relevance to the parallel specialists themselves. The resulting massive parallelism confers great computational advantage without compromising the serial flow of conscious thought, which corresponds to the sequential contents of the limited capacity global workspace. . . .

Compare the naive processor’s inefficient approach (depicted in Figure 8.7) with a massively parallel and distributed global workspace approach (depicted in Figure 8.8) to dealing with environmental novelty.

The key point here is that the GW architecture permits widely distributed local responsibility for processing global signals. As was pointed out above, conscious and non-conscious processes differ in their capabilities – they are two different modes of processing that, when combined, offer powerful adaptive possibilities.

franklin’s ida as an implementation of gw theory

Stan Franklin and colleagues (Franklin, 2001; Franklin & Graesser, 1999) have developed a practical implementation of GW theory in large-scale computational agents to test its functionality in complex practical tasks.



Figure 8.8. A GW approach to environmental novelty.

IDA, or Intelligent Distribution Agent, the current implementation of the extended GW architecture directed by Franklin, is designed to handle a very complex artificial intelligence task normally handled by trained human beings (see Chapter 7). The particular domain in this case is interaction among U.S. Navy personnel experts and sailors who move from job to job. IDA interacts with sailors via e-mail and is able to combine numerous regulations, sailors’ preferences, and time, location, and travel considerations into human-level performance. Although it has components roughly corresponding to human perception, memory, and action control, the heart of the system is a GW architecture that allows input messages to be widely distributed, so that specialized programs called “codelets” can respond with solutions to centrally posed problems (see Figure 8.9).

Franklin writes, “The fleshed out global workspace theory is yielding hopefully testable hypotheses about human cognition. The architectures and mechanisms that underlie consciousness and intelligence in humans can be expected to yield information agents that learn continuously, adapt readily to dynamic environments, and behave flexibly and intelligently when faced with novel and unexpected situations”

(see http://csrg.cs.memphis.edu). Although agent simulations do not prove that GW architectures exist in the brain, they demonstrate their functionality. Few if any large-scale cognitive models can be shown to actually perform complex human tasks, but somehow the real cognitive architecture of the brain does so. In that respect, the test of human-level functionality is as important in its way as any other source of evidence.

dehaene’s global neuronal network theory

Stanislas Dehaene and his colleagues (Dehaene & Naccache, 2001; Dehaene, Kerszberg, & Changeux, 1998) have recently proposed a global neuronal workspace theory of consciousness based on psychological and neuroscientific evidence quite similar to that cited by Baars and others. Dehaene and colleagues identify three empirical observations that any theory of consciousness must be able to account for: “namely (1) a considerable amount of processing is possible without consciousness, (2) attention is a prerequisite of consciousness, and (3) consciousness is required for some specific cognitive tasks, including those that require durable information maintenance, novel combinations of operations, or



Figure 8.9. Franklin’s IDA Model.

the spontaneous generation of intentional behavior” (p. 1). The Dehaene and Naccache model depends on several well-founded assumptions about conscious functioning.

The first assumption is that non-conscious mental functioning is modular. That is, many dedicated non-conscious modules can operate in parallel. Although arguments remain as to whether psychological modules have immediate correlates in the brain, Dehaene and Naccache (2001) say that the “automaticity and information encapsulation acknowledged in cognitive theories are partially reflected in modular brain circuits.” They tentatively propose that “a given process, involving several mental operations, can proceed unconsciously only if a set of adequately interconnected modular systems is available to perform each of the required operations” (p. 12; see Figure 8.10).

The second assumption, one shared by other cognitive theories, is that controlled processing requires an architecture in addition to modularity that can establish links among the encapsulated processors. Dehaene et al. (1998) argue that a distributed neural system or “workspace” with

long-distance connectivity is needed that can “potentially interconnect multiple specialized brain areas in a coordinated, though variable manner” (p. 13).

The third assumption concerns the role of attention in gating access to consciousness. Dehaene and Naccache (2001) review evidence in support of the conclusion that considerable processing can occur without attention, but that attention is required for information to enter consciousness (Mack & Rock, 1998). They acknowledge a similarity between Michael Posner’s hypothesis of an attentional amplification (Posner, 1994) and their own proposal. Attentional amplification explains the phenomena of consciousness as due to the orienting of attention, which causes increased cerebral activation in attended areas and a transient increase in their efficiency. According to Dehaene and Naccache (2001),

[I]nformation becomes conscious . . . if the neural population that represents it is mobilized by top-down attentional amplification into a brain-scale state of coherent activity that involves many neurons distributed throughout the brain. The long-distance


connectivity of these ‘workspace neurons’ can, when they are active for a minimal duration, make the information available to a variety of processes including perceptual categorization, long-term memorization, evaluation, and intentional action. (p. 1)

An implication of the Dehaene and Naccache model is that consciousness has a granularity, a minimum duration of long-distance integration, below which broadcast information will fail to be conscious.

It is worth noting a small difference between Baars’ version of global workspace and that of Dehaene and colleagues (Dehaene & Naccache, 2001; Dehaene et al., 1998). They believe that a separate attentional system intervenes with specialized processors to allow their content to enter the global workspace and become conscious. Baars (1998), on the other hand, sees attention not as a separate system but rather as the name for the process of gaining access to the global workspace by reference to long-term or current goals. Clearly, further refinement is needed here in thinking through what we mean by attention or an attentional system as separate from the architecture of consciousness, in this case, varieties of GW architecture.

Dehaene, Sargent, and Changeux (2003) have used an implementation of the global neuronal workspace model to successfully simulate the attentional blink. The attentional blink is a manifestation of the all-or-none characteristic of conscious processing observed when participants are asked to process two successive targets, T1 and T2. When T2 is presented between 100 and 500 ms after T1, the ability to report it drops, as if the participants’ attention had “blinked.” During this blink, T2 fails to evoke a P300

potential but still elicits event-related potentials associated with visual and semantic processing (P1, N1, and N400). Dehaene et al. (2003) explain,

Our simulations aim at clarifying why some patterns of brain activity are selectively associated with subjective experience. In short, during the blink, bottom-up activity, presumably generating the P1, N1, and N400 waveforms, would propagate without necessarily creating a global reverberant state. However, a characteristic neural signature of long-lasting distributed activity and γ-band emission, presumably generating the P300 waveform, would be associated with global access. (p. 8520)

In the simulation, a network modeled the cell assemblies evoked by T1 and T2 through four hierarchical stages of processing, two separate perceptual levels and two higher association areas. The network was initially assigned parameters that created spontaneous thalamocortical oscillations, simulating a state of wakefulness. Then, the network was exposed to T1 and T2 stimulation at various interstimulus intervals (ISI). T1 excitation was propagated bottom-up through all levels of the processing hierarchy, followed by top-down amplification signals that resulted in sustained firing of T1 neurons. Dehaene et al. (2003) hypothesized that this sustained firing and global broadcasting may be the neural correlate of conscious reportability. In contrast, the activation evoked by T2 depended closely on its timing relative to T1. For simultaneous and long ISIs, T2 excitation evoked sustained firing. Importantly, when T2 was presented during T1-elicited global firing, it evoked activation only in the low-level perceptual assemblies and resulted in no global propagation. Dehaene and colleagues conclude that this detailed simulation has provided tentative links between subjective reports and “objective physiological correlates of consciousness on the basis of a neurally plausible architecture” (2003, p. 8524).
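The all-or-none occupancy at the heart of this account can be caricatured in a few lines. This is a toy discrete-time sketch, not Dehaene et al.’s spiking network; the ignition and occupancy constants simply restate the 100–500 ms figures from the text.

```python
IGNITION_MS = 100    # bottom-up cascade time before workspace ignition
OCCUPANCY_MS = 400   # reverberant global state sustained by T1 (toy value)

def t2_reported(isi_ms):
    """Toy reportability rule: T2 fails to ignite the workspace only if
    it arrives while T1's reverberation still occupies it (the 'blink')."""
    if isi_ms == 0:
        return True                          # joint ignition with T1
    in_blink = IGNITION_MS <= isi_ms < IGNITION_MS + OCCUPANCY_MS
    return not in_blink

for isi in (0, 100, 300, 500, 700):
    print(isi, t2_reported(isi))   # blinked only at 100 and 300 ms here
```

The caricature reproduces the qualitative pattern only: reportability at simultaneity and at long ISIs, failure inside the 100–500 ms window. Everything graded in the real network (sustained firing, top-down amplification) is collapsed into a single occupancy flag.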

The Globalist Argument: An Emerging Consensus

In the last two decades, a degree of consensus has developed concerning the role of consciousness in the neuropsychological architecture. The general position is that consciousness operates as a distributed and flexible system offering non-conscious expert systems global accessibility to information that has a high concurrent value to the organism. Although consciousness is not itself an executive system, a global distribution capacity has obvious utility for executive control, in much the way that governments can control nations by influencing nation-wide publicity.

Excerpted below are the views of prominent researchers on consciousness, revealing considerable agreement.

• Baars (1983): "Conscious contents provide the nervous system with coherent, global information."

• Damasio (1989): "Meaning is reached by time-locked multiregional retroactivation of widespread fragment records. Only the latter records can become contents of consciousness."

• Freeman (1991): "The activity patterns that are formed by the (sensory) dynamics are spread out over large areas of cortex, not concentrated at points. Motor outflow is likewise globally distributed. . . . In other words, the pattern categorization does not correspond to the selection of a key on a computer keyboard but to an induction of a global activity pattern." [Italics added]

• Tononi and Edelman (1998): "The dynamic core hypothesis avoids the category error of assuming that certain local, intrinsic properties of neurons have, in some mysterious way, a privileged correlation with consciousness. Instead, this hypothesis accounts for fundamental properties of conscious experience by linking them to global properties of particular neural processes" (p. 1850).

• Llinas et al. (1998): ". . . the thalamus represents a hub from which any site in the cortex can communicate with any other such site or sites. . . . temporal coincidence of specific and non-specific thalamic activity generates the functional states that characterize human cognition" (p. 1841).

• Edelman and Tononi (2000): "When we become aware of something . . . it is as if, suddenly, many different parts of our brain were privy to information that was previously confined to some specialized subsystem. . . . the wide distribution of information is guaranteed mechanistically by thalamocortical and corticocortical reentry, which facilitates the interactions among distant regions of the brain" (pp. 148–149).

• Dennett (2001): "Theorists are converging from quite different quarters on a version of the global neuronal workspace model of consciousness" (p. 42).

• Kanwisher (2001): ". . . it seems reasonable to hypothesize that awareness of a particular element of perceptual information must entail not just a strong enough neural representation of information, but also access to that information by most of the rest of the mind/brain."

[Figure 8.10. A global neuronal network account of conscious processes (Dehaene & Naccache, 2001, p. 27). The figure depicts a hierarchy of modular processors: automatically activated processors, and high-level processors with strong long-distance interconnectivity (perceptual categorization, long-term memory, evaluation/affect, intentional action) mobilized into the conscious workspace.]

• Dehaene and Naccache (2001): "We propose a theoretical framework . . . the hypothesis of a global neuronal workspace. . . . We postulate that this global availability of information through the workspace is what we subjectively experience as the conscious state."

• Rees (2001): "One possibility is that activity in such a distributed network might reflect stimulus representations gaining access to a 'global workspace' that constitutes consciousness" (p. 679).

• John et al. (2001): "Evidence has been steadily accumulating that information about a stimulus complex is distributed to many neuronal populations dispersed throughout the brain."

• Varela et al. (2001): ". . . the brain . . . transiently settling into a globally consistent state . . . [is] the basis for the unity of mind familiar from everyday experience."

• Cooney and Gazzaniga (2003): "Integrated awareness emerges from modular interactions within a neuronal workspace. . . . The presence of a large-scale network, whose long-range connectivity provides a neural workspace through which the outputs of numerous, specialized, brain regions can be interconnected and integrated, provides a promising solution . . . In the workspace model, outputs from an array of parallel processors continually compete for influence within the network" (p. 162).

• Block (2005): "Phenomenally conscious content is what differs between experiences as of red and green, whereas access conscious content is information which is 'broadcast' in the global workspace."

Although debate continues about the functional character of consciousness, the globalist position can be summarized in the following propositions:

1. The architecture of consciousness comprises numerous, semi-autonomous specialist systems, which interact in a dynamic way via a global workspace.

2. The function of the workspace is global distribution of information in order to recruit resources in the service of current goals.

3. Specialist systems compete for access to the global workspace; information that achieves access to the workspace obtains system-wide dissemination.

4. Access to the global workspace is "gated" by a set of active contexts and goals.
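These four propositions can be made concrete in a short sketch. The class and function names below are hypothetical illustrations, not an implementation from the literature (for working systems of this kind, see Franklin's software agents).

```python
# A minimal, hypothetical sketch of the four globalist propositions:
# (1) semi-autonomous specialists, (2) global broadcast, (3) competition
# for access, and (4) goal-based gating of that access.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Specialist:
    """A semi-autonomous specialist system (proposition 1)."""
    name: str
    process: Callable[[str], Tuple[str, float]]   # stimulus -> (content, salience)
    inbox: List[Tuple[str, str]] = field(default_factory=list)  # receives broadcasts

class GlobalWorkspace:
    def __init__(self, specialists, goal_gate):
        self.specialists = specialists
        self.goal_gate = goal_gate  # active contexts/goals gate access (proposition 4)

    def cycle(self, stimulus):
        # Specialists compete for workspace access (proposition 3).
        bids = [(s, *s.process(stimulus)) for s in self.specialists]
        bids = [b for b in bids if self.goal_gate(b[1])]
        if not bids:
            return None
        winner, content, _ = max(bids, key=lambda b: b[2])
        # The winning content is disseminated system-wide (proposition 2).
        for s in self.specialists:
            s.inbox.append((winner.name, content))
        return content

# Usage: a high-salience visual specialist outbids an auditory one,
# and its content is broadcast to every specialist's inbox.
vision = Specialist("vision", lambda x: (f"saw {x}", 0.9))
audition = Specialist("audition", lambda x: (f"heard {x}", 0.4))
ws = GlobalWorkspace([vision, audition], goal_gate=lambda content: True)
print(ws.cycle("red light"))  # -> "saw red light"
```

The design point the sketch makes is that the workspace itself holds no expertise: it only selects among bids and redistributes the winner, which is why the globalist position denies that consciousness is itself an executive system.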

Dissenting Views

The globalist position argues that consciousness provides a momentary unifying influence for a complex system through global distribution and global access. In this sense, consciousness may be said to have unity. Alternative views chiefly depart from the globalist position on this point. They argue, in one way or another, that consciousness is not fundamentally unified.

One alternative view is that of Marcel (1993), who argued for "slippage" in the unity of consciousness. In part, he made his case based on his observation that different reporting modalities (blink vs. finger tap) could produce conflicting reports about conscious experience. Marcel took this to indicate that consciousness itself is not unified in any real sense.

Marcel's argument bears some similarity to Dennett's "multiple drafts" argument (Dennett, 1991). Dennett pointed to the puzzle posed by the phi phenomenon. In the phi phenomenon, we observe a green light and a red light separated by a few degrees in the field of vision as they are flashed in succession. If the time between flashes is about one second or less, the first light flashed appears to move to the position of the second light. Further, the color of the light appears to change midway between the two lights. The puzzle is explaining how we could see the color change before we see the position of the second light. Dennett hypothesizes that the mind creates different analyses or narratives (multiple drafts) of the scene at different moments from different sensory inputs. All of the accounts are available to influence behavior and report. A given scene can give rise to more than one interpretation. In contrast with global broadcasting models, Dennett holds that there is no single version of the scene available anywhere in the psychological system.

Similarly, Zeki (2001, 2003) has argued on neurological grounds that there is "disunity" in the neural correlates of consciousness. With many others, Zeki notes that the visual brain consists of many separate, functionally specialized processing systems that are autonomous with respect to one another. He then supposes that activity at each node reaches a perceptual endpoint at a different time, resulting in a perceptual asynchrony in vision. From there, Zeki makes the inference that activity at each node generates a micro-consciousness. He concludes that visual consciousness is therefore distributed in space and time, with an organizing principle of abstraction applied separately within each processing system. It remains to be seen whether Zeki's microconsciousnesses can be examined empirically via contrastive analysis and whether the microconsciousnesses are necessarily conscious or simply potentially conscious. The globalist position would argue that all neural processing is potentially conscious, depending on the needs and goals of the system. Clearly, this is a point for future discussion.

Conclusion

This chapter suggests that current cognitive theories have much in common. Almost all suggest an architectural function for consciousness. Although the reader's experience of these words is no doubt shaped by feature cells in visual cortex, including word-recognition regions, such local activity is not sufficient for consciousness of the words. In addition, some widespread functional brain capacity is widely postulated. Direct functional imaging evidence for that hypothesis is now abundant. In that sense, most current models are globalist in spirit, which is not to deny, of course, that they involve multiple local specializations as well. It is the integration of local and global capacities that marks these theoretical approaches. Given the fact that scientists have only "returned to consciousness" quite recently, this kind of convergence of opinion is both surprising and gratifying.

Future work should focus on obtaining neuroscientific evidence and corresponding behavioral observations that can address global access as the distinguishing feature of consciousness. Additional work could contribute simulations of the kind offered by Dehaene, Sargent, and Changeux (2003), supporting the plausibility of all-or-none global propagation of signals as models of the neurocognitive architecture of consciousness, and of Franklin, documenting the real-world potential of global workspace architectures as intentional agents. Further work is also needed to resolve the issue of whether consciousness is all-or-none, as Baars, Freeman, and Dehaene and his colleagues argue, or whether there are multiple drafts (Dennett, 1991) or microconsciousnesses (Zeki, 2001, 2003) playing a role in the architecture of consciousness (see also Chapter 15).

References

Atkinson, R., & Shiffrin, R. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. Spence (Eds.), Advances in the psychology of learning and motivation: Research and theory (Vol. 2, pp. 89–195). New York: Academic Press.

Baars, B. J. (1983). Conscious contents provide the nervous system with coherent, global information. In R. J. Davidson, G. Schwartz, & D. Shapiro (Eds.), Consciousness and self-regulation (Vol. 3, pp. 41–79). New York: Plenum Press.

Baars, B. J. (1988). A cognitive theory of consciousness. New York: Cambridge University Press.

Baars, B. J. (1998). In the theater of consciousness: The workspace of the mind. New York: Oxford University Press.

Baars, B. J. (2002). The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Sciences, 6(1), 47–52.


Baars, B. J., Ramsoy, T., & Laureys, S. (2003). Brain, conscious experience, and the observing self. Trends in Neurosciences, 26(12), 671–675.

Baddeley, A. D. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4, 417–423.

Baddeley, A. D. (2001). Is working memory still working? American Psychologist, 56(11), 851–864.

Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. A. Bower (Ed.), Recent advances in learning and motivation (Vol. 8, pp. 47–90). New York: Academic Press.

Basmajian, J. (1979). Biofeedback: Principles and practice for the clinician. Baltimore: Williams & Wilkins.

Block, N. (2005). Two neural correlates of consciousness. Trends in Cognitive Sciences, 9, 46–52.

Blumenthal, A. L. (1977). The process of cognition. Englewood Cliffs, NJ: Prentice-Hall.

Cooney, J. W., & Gazzaniga, M. S. (2003). Neurological disorders and the structure of human consciousness. Trends in Cognitive Sciences, 7(4), 161–165.

Damasio, A. (1989). Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition, 33, 25–62.

Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79, 1–37.

Dehaene, S., Kerszberg, M., & Changeux, J. P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences USA, 95, 14529–14534.

Dehaene, S., Sargent, C., & Changeux, J. (2003). A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proceedings of the National Academy of Sciences USA, 100(14), 8520–8525.

Dennett, D. (1991). Consciousness explained. Boston: Back Bay Books.

Dennett, D. E. (2001). Are we explaining consciousness yet? Cognition, 79, 221–237.

Edelman, G. M., & Tononi, G. (2000). A universe of consciousness. New York: Basic Books.

Fodor, J. (1983). The modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.

Franklin, S. (2001). Conscious software: A computational view of mind. In V. Loia & S. Sessa (Eds.), Soft computing agents: New trends for designing autonomous systems (pp. 1–46). Berlin: Springer (Physica-Verlag).

Franklin, S., & Graesser, A. (1999). A software agent model of consciousness. Consciousness and Cognition, 8, 285–301.

Freeman, W. J. (2004). Origin, structure, and role of background EEG activity. Part 1. Analytic amplitude. Clinical Neurophysiology, 115, 2077–2088.

Freeman, W. J. (1991). The physiology of perception. Scientific American, 264, 78–85.

Freeman, W. J., & Rogers, L. (2003). A neurobiological theory of meaning in perception. Part V. Multicortical patterns of phase modulation in gamma EEG. International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, 13(10), 2867–2887.

Gabor, D. (1946, November). Theory of communication. Journal of the IEE (London), 93(26), 429–457.

Hayes-Roth, B. (1984). A blackboard model of control. Artificial Intelligence, 16, 1–84.

James, W. (1890). The principles of psychology. New York: Holt.

John, E. R., Prichep, L. S., Kox, W., Valdes-Sosa, P., Bosch-Bayard, J., Aubert, E., Tom, M., di Michele, F., & Gugino, L. D. (2001). Invariant reversible QEEG effects of anesthetics. Consciousness and Cognition, 10, 165–183.

Johnson-Laird, P. N. (1988). A computational analysis of consciousness. In A. Marcel & E. Bisiach (Eds.), Consciousness in contemporary science (pp. 357–368). Oxford: Clarendon.

Kanwisher, N. (2001). Neural events and perceptual awareness. Cognition, 79, 89–113.

Llinas, R., Ribary, U., Contreras, D., & Pedroarena, C. (1998). The neuronal basis of consciousness. Philosophical Transactions of the Royal Society, London, 353, 1841–1849.

Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.

Marcel, A. (1993). Slippage in the unity of consciousness. CIBA Foundation Symposium, 174, 168–180; discussion, pp. 180–186.


Mandler, G. A. (1975). Consciousness: Respectable, useful and probably necessary. In R. Solso (Ed.), Information processing and cognition: The Loyola Symposium. Hillsdale, NJ: Erlbaum.

McIntosh, A. R., Rajah, M. N., & Lobaugh, N. J. (1999). Interactions of prefrontal cortex in relation to awareness in sensory learning. Science, 284, 1531–1533.

Milner, B., & Rugg, M. D. (Eds.). (1992). The neuropsychology of consciousness. London: Academic Press.

Newell, A. (1992). SOAR as a unified theory of cognition: Issues and explanations. Behavioral and Brain Sciences, 15(3), 464–492.

Newman, J., & Baars, B. J. (1993). A neural attentional model for access to consciousness: A global workspace perspective. Concepts in Neuroscience, 4(2), 255–290.

Norman, D. A., & Shallice, T. (1980). Attention to action: Willed and automatic control of behaviour (CHIP Report No. 99). San Diego: University of California.

Penfield, W. (1958). The excitable cortex in conscious man. Springfield, IL: Thomas.

Posner, M. (1994). Attention: The mechanisms of consciousness. Proceedings of the National Academy of Sciences USA, 91, 7398–7403.

Pribram, K. H. (1971). Languages of the brain: Experimental paradoxes and principles in neuropsychology. Englewood Cliffs, NJ: Prentice-Hall.

Pribram, K., & Meade, S. D. (1999). Conscious awareness: Processing in the synaptodendritic web. New Ideas in Psychology, 17, 205–214.

Reddy, R., & Newell, A. (1974). Knowledge and its representations in a speech understanding system. In L. W. Gregg (Ed.), Knowledge and cognition (pp. 256–282). Potomac, MD: Erlbaum.

Rees, G. (2001). Seeing is not perceiving. Nature Neuroscience, 4, 678–680.

Schacter, D. L. (1990). Toward a cognitive neuropsychology of awareness: Implicit knowledge and anosognosia. Journal of Clinical and Experimental Neuropsychology, 12(1), 155–178.

Schneider, W., & Pimm-Smith, M. (1997). Consciousness as a message-aware control mechanism to modulate cognitive processing. In J. D. Cohen & J. W. Schooler (Eds.), Scientific approaches to consciousness (pp. 65–80). Mahwah, NJ: Erlbaum.

Shallice, T. (1972). The dual functions of consciousness. Psychological Review, 79(5), 383–393.

Shallice, T. (1978). The dominant action system: An information processing approach to consciousness. In K. S. Pope & J. L. Singer (Eds.), The stream of consciousness: Scientific investigations into the flow of experience (pp. 117–157). New York: Plenum.

Shallice, T. (1988). Information-processing models of consciousness: Possibilities and problems. In A. J. Marcel & E. Bisiach (Eds.), Consciousness in contemporary science (pp. 305–333). Oxford: Clarendon Press.

Shanahan, M., & Baars, B. J. (2005). Applying global workspace theory to the frame problem. Cognition, 98, 157–176.

Standing, L. (1973). Learning 10,000 pictures. Quarterly Journal of Experimental Psychology, 25, 207–222.

Tononi, G., & Edelman, G. (1998). Consciousness and complexity. Science, 282, 1846–1851.

Varela, F., Lachaux, J., Rodriguez, E., & Martinerie, J. (2001). The brainweb: Phase synchronization and large-scale integration. Nature Neuroscience, 2, 229–239.

Zeki, S. (2001). Localization and globalization in conscious vision. Annual Review of Neuroscience, 24, 57–86.

Zeki, S. (2003). The disunity of consciousness. Trends in Cognitive Sciences, 7(5), 214–218.


Chapter 9

Behavioral, Neuroimaging, and Neuropsychological Approaches to Implicit Perception

Daniel J. Simons, Deborah E. Hannula, David E. Warren, and Steven W. Day

Abstract

For well over a century, the idea that rich, complex perceptual processes can occur outside the realm of awareness has either intrigued or exasperated researchers. Although popular notions of implicit processing largely focus on the practical consequences of implicit perception, the empirical literature has addressed more focused, basic questions: (a) Does perception occur in the absence of awareness? (b) What types of information are perceived in the absence of awareness? and (c) What forms of processing occur outside of awareness? This chapter discusses recent advances in the study of implicit perception, considering the ways in which they do and do not improve on earlier approaches. We contrast the conclusions a skeptic and a believer might draw from this literature. Our review considers three distinct but related classes of evidence: behavioral studies, neuroimaging, and brain-damaged patient case studies. We conclude by arguing that qualitative differences between perceptual mechanisms are interesting regardless of whether or not they demonstrate the existence of perception without awareness.

Introduction

. . . [T]here is now fairly widespread agreement that perception can occur even when we are unaware that we are perceiving. (Merikle & Joordens, 1997a, p. 219)

Unconscious cognition is now solidly established in empirical research. (Greenwald, 1992, p. 766)

My contention is that most, if not all, claims for SA/CI [semantic activation without conscious identification] in dichotic listening, parafoveal vision, and visual masking are in reality based on the failure of these experimental methods to reveal whether or not the meaning of the critical stimulus was available to consciousness at the time of presentation. (Holender, 1986, p. 3; brackets added)


For well over a century, the idea that rich, complex perceptual processes can occur outside the realm of awareness has either intrigued or exasperated researchers. The notion that many of the cognitive processes that occur with awareness might also occur without awareness is both exciting and frightening; it would not only reveal untapped or unnoticed powers of mind but would also raise the specter of undesirable mechanisms of mind. If implicit cognitive processes are rich and powerful, then given the right tools, we might be able to exploit these resources – we might be capable of using far more information than reaches awareness. Alternatively, implicit processes might counteract our explicitly held attitudes, thereby changing our behavior without our knowledge (Greenwald & Banaji, 1995).

This fear has its roots in psychodynamic views of unconscious processing that attribute many psychological problems to unconscious conflicts and motivations (Freud, 1966). It manifests itself in the fear that subliminal advertising can affect our beliefs against our will (Pratkanis, 1992). These desires and fears drive a large market in subliminal self-help tapes, as well as public outcry about apparent attempts at implicit influence. Yet evidence for subliminal persuasion of this sort is scant at best (Greenwald, Spangenberg, Pratkanis, & Eskenazi, 1991; Pratkanis, Eskenazi, & Greenwald, 1994).

Although popular notions of implicit processing focus largely on the practical consequences of implicit perception, the empirical literature has addressed more focused, basic questions: (a) Does perception occur in the absence of awareness? (b) What types of information are perceived in the absence of awareness? and (c) What forms of processing occur outside of awareness? Few researchers question the idea that some perceptual processing occurs outside of awareness. For example, we are not usually aware of the luminance changes that lead to the perception of motion. Rather, we just perceive the motion itself. Some processing of the luminance boundaries occurs outside of awareness even if we are aware of the stimulus itself.

The more subtle, more interesting question is whether the meaning of a stimulus is processed without awareness. This problem is of fundamental theoretical importance because any evidence of semantic processing in the absence of awareness strongly supports late-selection models of attention and awareness (Deutsch & Deutsch, 1963). Presumably, implicit processes occur independent of explicit attentional selection, so if the meaning of a stimulus can be perceived implicitly, selective attention is not necessary for semantic processing. Each of these questions, at its core, asks how implicit perception is like explicit perception.

For more than a century, strong claims for the existence of complex perceptual processes in the absence of awareness have been dismissed on methodological grounds. In one early study, for example, observers viewed a card with a letter or digit on it, but their viewing distance was such that the character was hard to see – it was reported to be blurry, dim, or not visible at all. Although subjects could not consciously report the nature of the stimulus, they accurately guessed whether it was a letter or digit, and they could even guess its identity better than chance (Sidis, 1898). This lack of a clear conscious percept combined with better performance on an indirect, guessing task might provide evidence for implicit perception. However, alternative interpretations that require no implicit perception are equally plausible. For example, observers might simply be more conservative when asked to produce the name of a digit or letter than they would be when making a forced-choice decision (see Azzopardi & Cowey, 1998, for a similar argument about blindsight). This bias alone could account for better performance on a forced-choice task even if there were no difference in conscious perception. Moreover, the forced-choice task might just be a more sensitive measure of conscious awareness, raising the possibility that the dissociation between the two tasks is a dissociation within conscious perception rather than between conscious and non-conscious perception. Finally, the measure of awareness – the ability to recognize the character from a distance – might be inadequate as an assessment of awareness, leaving open the possibility that some conscious perception had occurred.
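The response-bias interpretation can be made quantitative with standard equal-variance signal detection theory. The parameter values below are illustrative assumptions, not Sidis's data: a single, fixed sensitivity (d′) combined with a conservative criterion for free report is enough to reproduce the observed pattern.

```python
# Signal detection sketch of the bias argument: identical sensitivity (d'),
# but free report uses a conservative criterion while forced choice does not.
# All parameter values are illustrative assumptions.
from statistics import NormalDist

phi = NormalDist().cdf
d_prime = 1.0    # assumed (single, conscious) sensitivity
criterion = 2.5  # conservative criterion for volunteering an identity

# Free report: respond only when evidence exceeds the conservative criterion.
report_rate = 1 - phi(criterion - d_prime)
# Two-alternative forced choice: percent correct = phi(d' / sqrt(2)).
forced_choice_pc = phi(d_prime / 2 ** 0.5)

print(f"free report rate:      {report_rate:.2f}")      # ~0.07: "not visible"
print(f"forced-choice correct: {forced_choice_pc:.2f}")  # ~0.76: above chance
```

Because both measures here are driven by the same d′, the dissociation reflects bias alone rather than implicit perception, which is essentially Azzopardi and Cowey's (1998) point about blindsight.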

This example illustrates some of the weaknesses inherent in many studies of implicit perception. Although the behavioral and methodological tools for studying implicit perception became far more sophisticated toward the end of the 20th century, and despite some claims to the contrary (e.g., Greenwald, 1992; Merikle, Smilek, & Eastwood, 2001), the controversy over the mere existence of implicit perception persists (Dulany, 2004). Often, the same data are taken by some as convincing support for the existence of implicit perception and by others as unpersuasive (see the critique and responses in Holender, 1986).

In fact, theoretical reviews of the existing literature often arrive at strikingly different conclusions. Whereas Holender (1986) concludes that most demonstrations of implicit semantic processing are unconvincing, others consider the converging support for implicit effects to be overwhelming (e.g., Greenwald, 1992). In part, these divergent conclusions simply reflect different default assumptions. "Skeptics" assume the absence of implicit perception unless definitive evidence supports its presence. "Believers" assume the presence of implicit perception given converging evidence, even if none of the evidence is strictly definitive. At its core, the debate often devolves into little more than arguments over parsimony or over the criteria used to infer implicit processing.

The goal of this chapter is not to resolve this controversy. Nor is it to provide a thorough review of this century-old debate. Rather, we discuss recent advances in the study of implicit perception, considering the ways in which they do and do not improve on earlier approaches. We also contrast the conclusions a skeptic and a believer might draw from this literature. Since the mid-1980s, claims about implicit perception have become more nuanced, focusing less on the mere existence of the phenomenon and more on the nature of the information that might be implicitly perceived and on the mechanisms underlying implicit perception. Our review considers three distinct but related classes of evidence: behavioral studies, neuroimaging, and brain-damaged patient case studies.

Limits on the Scope of Our Chapter

Given the availability of many excellent and comprehensive reviews/critiques of the early literature on implicit perception (e.g., Greenwald, 1992; Holender, 1986; Merikle, 1992), our chapter focuses primarily on the theoretical and methodological innovations introduced in recent years. Many disciplines include claims about implicit processing, and incorporating all of them in a single overview would be impractical. Instead, we highlight claims for implicit perceptual or semantic processing of discrete stimuli, largely overlooking implicit skill learning, artificial grammar learning, or other forms of procedural knowledge that might well be acquired without awareness. Our neglect of these areas does not imply any denigration of the evidence for implicit perception they have produced. Although we limit our review to the possibility of semantic processing without awareness and closely related questions, we also consider recent arguments about how best to study implicit perception. Finally, we discuss how qualitative differences in the nature of perceptual processing may be of theoretical significance even without a clear demonstration that processing occurs entirely outside of awareness.

Early Evidence for and against Implicit Perception

Claims for and against implicit perception received extensive empirical attention starting in the late 1950s, with sentiment in the field vacillating between acceptance and skepticism. Many early studies used a dichotic listening method in which observers attend to a stream of auditory information in one ear and verbally shadow that content while simultaneously ignoring another stream in their other ear (Cherry, 1953; Moray, 1959; Treisman, 1960, 1964). If the ignored channel is actually unattended and information from the ignored channel intrudes into awareness, then the ignored information must have been processed implicitly. With this technique, observers occasionally hear their own name in an ignored channel (Moray, 1959), and they sometimes momentarily shift their shadowing to the ignored channel when the auditory information presented to each ear is swapped (Treisman, 1960). If ignored information is truly unattended, then these findings support a strong form of late selection in which unattended information is processed to a semantic level and sometimes intrudes on awareness. Other studies using this dichotic listening technique found evidence for skin conductance changes to words in the ignored stream that were semantically related to shock-associated words (Corteen & Dunn, 1974; Corteen & Wood, 1972).

Of course, the central assumption underlying these conclusions is that an ignored auditory stream is entirely unattended. If participants periodically shift attention to the “ignored” channel, then the influence of semantic information in the ignored channel might occur only with attention. To conclude that perception of the semantic content of the ignored stream was caused by implicit processing, the experimenter must show that it did not result from explicit shifts of attention at the time of presentation. The difficulty of verifying that attention was never directed to the ignored channel gave ammunition to skeptics (Holender, 1986). In fact, this critique can be applied far more generally. The vast majority of studies of implicit perception, including those in the past 20 years, rely on what is commonly known as the dissociation paradigm (Merikle, 1992). To demonstrate the existence of implicit perception, experimenters must eliminate explicit perception and show that something remains. Applied to dichotic listening, the task for experimenters is to rule out attention to the ignored stream and then show that something remains. The failure of the premise, that ignored means unattended in the case of dichotic listening, weakens evidence for implicit perception. Given the fairly convincing critiques of evidence based on dichotic listening (Holender, 1986), few current studies use dichotic listening to study implicit perception. The dissociation paradigm, however, remains the dominant approach to studying implicit perception.

The modern use of the dissociation paradigm in the study of implicit perception was triggered by a series of experiments in the 1980s in which masked primes were shown to influence subsequent processing of a target stimulus even though observers did not notice the primes themselves (Marcel, 1983a, b). This approach is a classic application of the dissociation paradigm: Rule out explicit awareness of the prime stimulus and show that it still influences performance in some other way. Importantly, these studies provided evidence not just that something was perceived but that its meaning was processed as well; the semantic content of a masked word served as a prime for a subsequent response to a semantically related target word (Marcel, 1983b). Many of the recent behavioral studies of implicit perception use variants of this masked prime approach.

The Merits and Assumptions of the Dissociation Paradigm

The dissociation paradigm is particularly appealing because it requires no assumptions about the nature of or mechanisms underlying implicit perception. In its purest form, the dissociation paradigm has a single constraint: Implicit perception can only be demonstrated in the absence of explicit perception. Superficially, this constraint seems straightforward. Yet, it amounts to confirming the null hypothesis – demonstrating no effect of explicit perception – leading some to decry its usefulness for the study of implicit perception (Merikle, 1994). Given that most claims for implicit perception are based on the dissociation paradigm, most critiques of these claims focus on violations of this assumption, often producing evidence that some contribution from explicit perception can explain the residual effects previously attributed to implicit perception (see Mitroff, Simons, & Franconeri, 2002 for
a similar approach to critiquing evidence for implicit change detection). For example, critiques of dichotic listening studies typically focus on the possibility that subjects devoted some attention to the ignored stream (Holender, 1986). Given that the dichotic listening paradigm does not allow a direct measure of the absence of attention to the ignored channel, it cannot rule out the possibility that explicit factors contributed to perception of ignored material. More subtle critiques raise the possibility that observers were momentarily aware of ignored or unattended material, but rapidly forgot that they had been aware. If so, then explicit awareness could have contributed to any effects of the “unattended” information. This “amnesia” critique has been applied more recently to such phenomena as inattentional blindness (Wolfe, 1999).

To meet the assumptions of the dissociation paradigm, the measure of explicit perception must be optimally sensitive – it must exhaustively test for explicit influences on performance (Merikle, 1992). If a maximally sensitive measure reveals no evidence of explicit perception, we can be fairly confident that explicit factors did not contribute to performance, and any residual effects can be attributed to implicit perception. This criterion was adopted by some of the more ardent critics of the early literature on implicit perception (Holender, 1986). The measure most typically adopted as a sensitive index of explicit awareness is simple detection of the presence of a stimulus. If subjects cannot detect the presence of a stimulus, but the stimulus still has an effect on performance, then that effect presumably resulted from implicit perception. In essence, this approach served as the basis for early work on priming by masked stimuli (Marcel, 1983b). If a masked prime cannot be detected but still influences performance, it must have been implicitly perceived. Note, however, that even a simple detection task may not exhaustively measure all explicit influences on performance, and residual effects of a stimulus that cannot be detected might still reflect some explicit processing (Merikle, 1992). Later in this chapter, we review new behavioral studies that attempt to meet these assumptions, but we also note that few of them systematically demonstrate null explicit sensitivity to the presence of a stimulus.

Objective vs. Subjective Thresholds – What Is the Appropriate Measure of Awareness?

One recurring controversy in the study of implicit perception concerns whether the threshold for explicit perception should be based on an objective or subjective criterion. Although the notion of thresholds has fallen into disfavor with the advent and increased use of signal detection theory in perception (e.g., Green & Swets, 1966; Macmillan, 1986), it still has intuitive appeal in the study of implicit perception. Later in this chapter, we discuss the importance of using signal detection to measure awareness in the dissociation paradigm. In the interim, the distinction between objective and subjective thresholds may still provide a useful rubric for explaining some of the continuing controversy in the literature.
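Signal detection theory replaces talk of thresholds with a sensitivity measure, d′, computed from hit and false-alarm rates rather than from raw report rates. As a minimal illustration (our sketch, not the chapter's; the example rates are hypothetical):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical observer: says "present" on 60% of stimulus-present trials
# but also on 40% of stimulus-absent trials.
print(round(d_prime(0.60, 0.40), 2))
```

A d′ reliably equal to zero is what the dissociation paradigm's null-sensitivity requirement demands; a positive d′ means some explicit sensitivity remains even when the observer feels that he or she is guessing.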

Most studies of implicit perception rely on a subjective threshold to determine whether or not a stimulus was explicitly noticed; this approach assumes that observers will report a stimulus if they are aware of it and will not if they are unaware. For example, blindsight patients typically will report no awareness of a static stimulus presented to their blind field – the stimulus falls below their subjective threshold. Use of the subjective threshold to rule out explicit perception essentially treats the observers’ reports of their experiences as the best indicator of whether or not they were aware. More often than not, studies using subjective thresholds are interested in performance on each individual trial, and claims about implicit perception are derived from the consequences of a specific stimulus that was not reported. This approach is appealing because it treats observers’ reports of their own mental states as more legitimate than the experimenter’s ability to infer the observers’ state of awareness.

Objective thresholds are based on the idea that observers might fail to report a stimulus even if they did have some explicit awareness of its presence. They might adopt a conservative response bias, responding only when certain. Or, they might lack the means to express verbally what they saw. Typically, objective thresholds are measured across a large set of trials. The threshold is the level at which a stimulus is not perceivable rather than simply not perceived. In using this approach, experimenters often adopt the standard of null explicit sensitivity required by the dissociation paradigm, assuming that if a series of trials shows that a stimulus is not explicitly perceivable, then it could not have been perceived on any individual trial. Consequently, any influence of that stimulus must be implicit. Unlike the subjective threshold approach, then, this approach does not trust an observer’s subjective experience on a given trial to be a true indicator of his or her actual awareness of the stimulus.

In a sense, the terms “objective” and “subjective” are misnomers. Both approaches rely on explicitly reported experiences, so both are subjective. Subjective thresholds are based on experiences on each trial, whereas objective thresholds are based on cumulative experiences across a larger number of trials. Thus, when measuring an objective threshold, responses on individual trials do not necessarily indicate the observer’s awareness. Observers might respond that they saw a stimulus, but that response might simply be a guess. Similarly, they might report having no conscious experience, even if they had some vague inkling that failed to surpass their criterion for responding. Finding an objective threshold requires manipulating the stimulus presentation such that judgments of stimulus presence are no better than chance over a reasonably large number of trials. If responding to this sort of explicit task is at chance over a set of trials, then presumably any individual trial is based on a guess. The challenge is in demonstrating that explicit performance was truly random and not somewhat better than would be expected by chance alone.
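Demonstrating that detection is "no better than chance" is itself a statistical claim. A minimal sketch of one way to test it (ours, not the chapter's; the trial counts are hypothetical), using an exact one-tailed binomial test:

```python
from math import comb

def p_at_least(n_correct: int, n_trials: int, p_chance: float = 0.5) -> float:
    """Exact probability of scoring n_correct or better by guessing alone."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

# 56/100 present/absent judgments correct: still consistent with guessing.
print(round(p_at_least(56, 100), 3))
# 60/100 correct: unlikely under pure guessing, so some explicit
# sensitivity to the stimulus probably remains.
print(round(p_at_least(60, 100), 3))
```

Note that failing to reject chance over a short block is weak evidence of null sensitivity: a small above-chance sensitivity can easily hide below the statistical power of a modest number of detection trials.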

The use of an objective threshold can lead to a seeming paradox wherein subjects report no conscious awareness of a stimulus (i.e., they report guessing) but still show better than chance performance; their performance exceeds the objective threshold even though their subjective impression is of guessing. Those adopting a subjective threshold approach would conclude that such a finding reflects implicit processing. The appeal of relying on the subjective threshold is that it accepts what the observer reports at face value. If observers report no awareness, then they had no awareness. However, this approach also relies on the observer’s ability to judge probabilities over a series of trials. Does the subjective report of no awareness really mean that they were guessing, or does it mean that they thought that they were guessing? If observers lack precise access to their probability of a successful response, they might report guessing when in actuality they were performing slightly, but significantly, better than chance.

The primary difference between the objective threshold approach and the subjective threshold approach is that objective thresholds take the responsibility of estimating the extent of correct responding out of the observer’s hands. Rather than relying on observers to estimate when they felt they were guessing, the objective threshold technique objectively measures when their actual performance across a series of trials reflected guessing. In both cases, though, the subjects’ subjective experience on a given trial contributes to the assessment of whether or not they were aware of the critical stimulus.

Differential reliance on objective and subjective thresholds underlies much of the controversy in the field. Most critiques of implicit perception simply show that performance actually exceeded an objective threshold for awareness. For example, evidence for implicit priming from masked stimuli was premised on the idea
that subjects were no better than chance at determining whether or not the prime was present – explicit performance did not exceed the objective threshold (Marcel, 1983b). Yet critiques of those studies suggest that the thresholds were not adequately measured and that explicit performance might well have exceeded threshold (Holender, 1986). Even studies that do attempt to demonstrate that explicit detection was no better than chance rarely meet the statistical requirements necessary to infer null explicit sensitivity (Macmillan, 1986). Many studies, especially those of patients, make no attempt to measure an objective threshold, but instead rely entirely on the observer’s self-assessment of awareness, much as early behavioral studies did (e.g., Sidis, 1898). Such studies are open to the criticism that explicit perception might well affect performance even when subjects do not consciously report the presence of a stimulus.

As we discuss later in this chapter, this issue is only of importance when questioning whether or not an example of perception is entirely implicit. Finding a dissociation in the types of processing that occur above and below a subjective threshold would still be of theoretical (and practical) import even if explicit perception contributed to both types of processing. For example, in studies of inattentional blindness, observers view a single critical trial and quite often fail to notice the presence of salient but unexpected objects and events (Mack & Rock, 1998; Most et al., 2001; Simons, 2000; Simons & Chabris, 1999). When observers count the total number of passes made by one team of basketball players while simultaneously ignoring another team passing a ball, approximately 50% of them fail to notice a person in a gorilla suit who walks through the display (Simons & Chabris, 1999). The interesting aspect of these studies is that observers can fail to notice or consciously detect surprisingly salient unexpected events. Most people expect that they would notice such events, and the fact that they do not report objects as unusual as a gorilla is startling (see Levin, Momen, Drivdahl, & Simons, 2000 for similar examples from the change blindness literature).

Unfortunately, these studies are not ideal for demonstrating implicit perception. Imagine, for instance, that observers in this study reported not noticing the gorilla, but then showed priming for the word “monkey.” Would that provide evidence for implicit perception of the gorilla? The study uses the dissociation paradigm, and subjects subjectively report no awareness of a gorilla. This finding suggests that any priming effects might be implicit. However, observers might have had some awareness of the gorilla, or they might have had momentary awareness of some furry object, even if they failed to report noticing anything unusual. Given that the method allows only one critical trial and the “gorilla” is demonstrably perceivable (i.e., it is above the objective threshold), the possibility of some residual explicit awareness cannot be eliminated.

Arguments for implicit perception on the basis of such one-trial studies rest on the plausibility of the alternative explanations for the priming effects. As the measure of explicit awareness becomes less “objective” and more reliant on the observer’s self-assessment, it is more likely to miss some aspect of explicit processing. The sufficiency of the measure of explicit awareness, regardless of whether it is considered objective or subjective, rests on how plausible it is that some explicit awareness was not tapped by the measure. Of course, even if the gorilla exceeded an objective threshold for awareness, this hypothetical finding would still be interesting because it would reveal a discrepancy between what people see and what they can explicitly report. Moreover, observers’ surprise at having missed the gorilla suggests that their awareness of it likely was limited. Consequently, evidence for inattentional blindness may have important practical consequences even if some residual awareness of the unexpected event exists.

Rather than viewing the objective-subjective difference as a dichotomy, we
prefer to characterize it as a continuum that varies along the dimension of the experimenter’s confidence in the accuracy of the subjective judgments. With a subjective judgment on a single trial, the experimenter should lack confidence in the veracity of the observer’s claim of no explicit awareness. One-trial approaches do not systematically eliminate the possibility that the stimulus was perceived and then forgotten, that some less easily reportable aspect of the stimulus was consciously perceived, or that the stimulus was explicitly detected and partially but not completely identified.

Critiques of the Dissociation Paradigm

Although the dissociation paradigm has intuitive appeal, some critics argue that the exhaustiveness requirement is a fatal shortcoming – that no task can fully satisfy the exhaustiveness assumption (Merikle, 1992). Even if a task were optimally sensitive to explicit perception and even if it showed null sensitivity, some other unmeasured aspect of explicit perception could still influence performance. Logically, this view is unassailable. Even if a task showed null sensitivity for all known explicit influences, it might neglect some as yet unknown and unmeasured explicit influence. Practically, however, if a task eliminates all known, plausible explicit influences, then claims of implicit perception might be more parsimonious than defaulting to some unknown explicit factor.

A second critique of the dissociation paradigm rests on the idea that no task measures just explicit or just implicit perception (Reingold & Merikle, 1988). Performance on any task involves a mixture of implicit and explicit influences. Consequently, finding null sensitivity on an “explicit” task might also eliminate implicit perception because the task likely measures aspects of both. By analogy, a sledgehammer to the head would eliminate all explicit awareness, but it would also eliminate most implicit effects on performance. Any manipulation that leads to null explicit sensitivity might simply be so draconian that no measure would be sufficiently sensitive to detect any implicit processes.

This exclusivity critique is based on the premise that tasks do not provide a pure measure of either implicit or explicit perception. Whether or not this premise is valid, the exclusivity critique carries less force than the exhaustiveness critique. The failure to use an exclusive measure of explicit awareness is one reason why studies using the dissociation paradigm might fail to find evidence for implicit perception. The lack of exclusivity can only decrease the probability of finding implicit perception; it should not spuriously produce evidence for implicit perception. Thus, positive evidence for implicit perception derived from the dissociation paradigm cannot be attributed to the lack of pure measures of implicit and explicit processing. If evidence for implicit perception using the dissociation paradigm is not forthcoming, however, failed exclusivity would provide a plausible explanation for how implicit perception might occur yet remain undetectable via the dissociation paradigm.

Recent Behavioral Approaches to Studying Implicit Perception

Despite concerns about the need for exhaustive measures of awareness, most recent studies of implicit perception have relied heavily on the dissociation logic. The approaches to studying implicit perception have become somewhat more refined in their treatment of the problem. In this section, we review several relatively new behavioral approaches to studying implicit perception. In some cases, these approaches follow the dissociation logic, but with improved attempts to exhaustively measure explicit influences. Others dismiss the dissociation paradigm as flawed and propose new approaches to measuring implicit perception. For each topic, we consider possible criticisms of the evidence for implicit perception, and at the end of the section, we provide contrasting conclusions that might be drawn by a believer and by a skeptic.

Modern Applications of the Dissociation Paradigm

Since the mid-1980s, the tools and techniques used to measure implicit perception have developed substantially, largely at the goading of skeptics (Holender, 1986). However, straightforward applications of the dissociation logic still dominate studies of implicit perception, and many (if not most) of them neglect to address the standard critiques of the dissociation paradigm. This shortcoming is particularly true of neuroimaging work and of studies using patient populations, where failures to provide an adequate exhaustive measure of awareness are commonplace (see Hannula, Simons, & Cohen, 2005 for a detailed discussion of neuroimaging evidence for implicit perception). In part, the methods in these studies are constrained by the need to include imaging measures or by the nature of the patient’s deficit. However, behavioral studies of implicit perception are not limited in these ways, and a number of new techniques have emerged to provide sensitive and relatively rigorous tests of the existence of implicit perception.

Some of the simplest approaches are based closely on early studies of masked priming, focusing on the ability to perceive a target as a function of an unseen prime (e.g., Bar & Biederman, 1998; Watanabe, Nanez, & Sasaki, 2001). For example, one study examined naming accuracy for briefly presented line drawings (Bar & Biederman, 1998). The first time a stimulus was presented, subjects were able to name it correctly only approximately 15% of the time. However, when the same stimulus was presented a second time, subjects were far more successful, suggesting that having seen the stimulus before, even without being able to name it, facilitated subsequent processing. This priming benefit occurred only when the same object was presented (a different exemplar of the same category received no priming) and was maximal when the object was presented in the same location. These results suggest that implicit processing of the prime stimulus led to facilitated naming of the target stimulus even when subjects typically were unsuccessful at naming the prime. Although this study is consistent with implicit perception, critics might well object that the explicit measure (naming) was not an exhaustive test of explicit awareness. Given that the logic of this task follows from that of the dissociation paradigm, unless explicit awareness of the prime is eliminated, naming improvements could result from residual explicit awareness.

Other studies adopted the repetition approach with a more rigorous measure of awareness of the initial stimulus (Watanabe et al., 2001), although these studies focused on perceptual learning rather than priming per se. While subjects performed a primary task involving the perception of letters in the center of a display, a set of dots behind the letters was organized into somewhat coherent motion; most of the dots moved randomly, but a subset moved in a coherent direction. Critically, the coherent subset of the dots (5%) was small enough that subjects could not reliably discriminate the coherent motion displays from displays in which all dots moved randomly. The dots were entirely irrelevant to the primary task during the first phase of the experiment. Then, in a later phase, subjects attempted to judge the direction of coherent motion of another set of dot arrays, this time with somewhat more coherence (10%). Subjects were reliably better at determining the direction of these dot displays if they moved in the same direction as the previously viewed displays. Thus, even though subjects were unable to determine that the dots were moving coherently at all in the first phase of the experiment, the frequent repetition of a particular motion direction led to better performance on a somewhat easier judgment task. This indirect test provides evidence for implicit perception of the coherent motion of dots in the first phase, even though subjects had no conscious awareness of their motion. This approach is an elegant instance of the dissociation paradigm; subjects could not reliably detect the presence of coherent motion in the prime stimulus, but the motion coherence still affected subsequent judgments. Perceptual learning
approaches like this one have distinct advantages over typical priming experiments in that awareness of the prime stimulus can be psychophysically eliminated. Other more recent priming studies have attempted to adopt more rigorous measures of awareness as well.
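The stimulus manipulation in such displays is easy to make concrete. In the sketch below (our illustration, not the authors' stimulus code; all parameter values are hypothetical), a fixed fraction of dots shares one motion direction while the rest move randomly:

```python
import random

def dot_directions(n_dots=200, coherence=0.05, coherent_dir=90.0, seed=1):
    """Motion direction (degrees) for each dot in one frame of a random-dot
    display: a `coherence` fraction shares one direction; the rest are random."""
    rng = random.Random(seed)
    n_coherent = round(n_dots * coherence)
    dirs = [coherent_dir] * n_coherent
    dirs += [rng.uniform(0.0, 360.0) for _ in range(n_dots - n_coherent)]
    rng.shuffle(dirs)  # coherent dots are interspersed among the random ones
    return dirs

dirs = dot_directions()
print(sum(d == 90.0 for d in dirs))  # 10 of 200 dots move coherently
```

At 5% coherence the coherent signal is buried in the random motion, which is why observers cannot reliably distinguish such displays from fully random ones.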

Many of these recent studies exploit response compatibility as an indirect but sensitive measure of perceptual processing. For example, an experiment might measure response latency to a supraliminal target preceded by a supposedly subliminal prime. If the target requires a different response than the prime, subjects might be slowed by the presence of the prime. If subjects do not consciously detect the prime, then response compatibility effects likely resulted from implicit processing of the prime. One large advantage of this approach over traditional semantic priming studies in the dissociation paradigm is that response compatibility effects can be positive, negative, or absent, allowing additional ways to measure the effects of an unseen stimulus.
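The compatibility effect itself is simply a difference of mean response times between incompatible and compatible trials. A minimal sketch (ours, with hypothetical response times):

```python
from statistics import mean

# Hypothetical response times (ms) to visible targets, split by whether the
# masked prime was mapped to the same (compatible) or the opposite
# (incompatible) response as the target.
rt_compatible = [512, 498, 530, 505, 521]
rt_incompatible = [548, 561, 539, 555, 544]

effect = mean(rt_incompatible) - mean(rt_compatible)
# A positive effect is priming in the usual direction; a reliably negative
# value would be a "negative compatibility effect" of the kind discussed below.
print(f"compatibility effect: {effect:.1f} ms")
```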

Given that this approach adopts the dissociation logic, experiments must provide direct evidence for the invisibility of the prime. As in studies of masked semantic priming, most decrease detectability by limiting presentation times and by adding masking stimuli before and/or after the prime (e.g., Eimer & Schlaghecken, 2002; Naccache & Dehaene, 2001b). Others have used small differences in contrast to camouflage primes against a background of a similar color (e.g., Jaskowski, van der Lubbe, Schlotterbeck, & Verleger, 2002). Even within the masked presentation approach, however, studies vary in how systematically they manipulate the visibility of the prime. Some studies use a single stimulus duration, contrast level, or type of masking for all subjects (e.g., Naccache & Dehaene, 2001b), whereas others adjust the stimulus presentation to account for individual differences in perceptibility (Greenwald, Draine, & Abrams, 1996). Both approaches can work provided that neither shows any evidence of explicit detection of the prime stimulus. Unfortunately, many of the studies using a constant prime and mask across subjects do not entirely eliminate explicit perceptibility for all subjects, raising some concerns about the exhaustiveness assumption.

Although early studies of priming by masked stimuli focused on semantic priming by words, more recent studies using response compatibility have adopted a host of different stimuli and judgment tasks, including left/right discrimination of arrows (Eimer, 1999; Eimer & Schlaghecken, 2002; Klapp & Hinkley, 2002); concrete/abstract word discrimination (Damian, 2001); lexical decision (Brown & Besner, 2002); words and pictures in animacy judgments (Dell’Acqua & Grainger, 1999; Klinger, Burton, & Pitts, 2000); word and nonword stimuli in Stroop interference tasks (Cheesman & Merikle, 1984; Daza, Ortells, & Fox, 2002); words in positive/negative valence judgments (Abrams & Greenwald, 2000; Abrams, Klinger, & Greenwald, 2002); numerals and number words in relative magnitude judgments (Greenwald, Abrams, Naccache, & Dehaene, 2003; Naccache & Dehaene, 2001b; Naccache, Blandin, & Dehaene, 2002); names in male/female judgment (Greenwald et al., 1996); and diamonds and rectangles in shape categorization (Jaskowski et al., 2002). Despite the varied stimuli and judgment tasks, the results of these studies are remarkably consistent.

Moreover, all of these approaches to compatibility effects fall into roughly four types: (1) centrally presented masked primes followed by a target, (2) centrally presented masked primes followed by a target with a limited interval for an allowed response (i.e., a “response window”), (3) masked flanker tasks, and (4) Stroop tasks. Findings from the first two approaches are reviewed below. A few of these studies were accompanied by neuroimaging results, some of which are discussed in this section and some of which are considered in the section on neuroimaging evidence for implicit perception.

Masked Priming without a Response Window

The influence of masked primes on responsetime and accuracy to subsequently presented
target items varies as a function of the compatibility of the responses mapped to those items (Dehaene et al., 1998; Eimer, 1999; Eimer & Schlaghecken, 1998; Koechlin, Naccache, Block, & Dehaene, 1999; Naccache & Dehaene, 2001b; Neumann & Klotz, 1994). In many cases, target items elicit faster, more accurate responses when the target and prime require the same, compatible response than when they require different or incompatible responses (Dehaene et al., 1998; Koechlin et al., 1999; Naccache & Dehaene, 2001b; Neumann & Klotz, 1994). In one task, subjects judged whether an Arabic numeral or number word target was greater than or less than 5 (Dehaene et al., 1998; Koechlin et al., 1999; Naccache & Dehaene, 2001a; Naccache & Dehaene, 2001b). Target numbers were preceded by a compatible or incompatible number prime (e.g., if 6 were the target, a prime of 7 would be compatible and a prime of 4 would be incompatible). In this case, compatible primes benefited performance regardless of whether or not the prime was masked (Koechlin et al., 1999). Moreover, the compatibility effects persisted even when the notations of the target and prime differed (i.e., Arabic numerals primed both Arabic numerals and number words), suggesting that the priming effect must be more abstract than feature-based visual matching.

Not all studies show a positive effect of compatibility, however. In fact, some studies show a negative compatibility effect (NCE) in which responses are slower and more error prone for compatible primes (Eimer & Schlaghecken, 1998)! For example, when a post-masked priming arrow pointed in the same direction as a subsequent target arrow, subjects were slower and less accurate than when the prime arrow pointed in the opposite direction (Eimer & Schlaghecken, 1998). One explanation for these contradictory results appeals to the effects of delays between the prime and the response on compatibility effects. In one experiment that systematically manipulated the delay, positive compatibility effects were found for short delays between the prime and the response, but negative effects of compatibility resulted from delays longer than 350–400 ms (Eimer, 1999; Eimer & Schlaghecken, 1998).

The transition from positive to negative effects has been characterized more completely using recordings of ERPs. The lateralized readiness potential (LRP), detected via ERP recording, measures the activation from motor cortex of the hemisphere opposite the response hand (Coles, Gratton, & Donchin, 1988) and provides a direct way to determine whether a stimulus leads to activation of motor cortex. On incompatible trials, the prime should elicit transient activation of motor cortex ipsilateral to the responding hand followed by contralateral motor cortex activation in response to the target. With a compatible prime and target, this ipsilateral activation should be absent. In fact, the behavioral compatibility studies often incorporated ERP recording and consistently found LRPs in response to masked primes (Dehaene et al., 1998; Eimer, 1999; Eimer & Schlaghecken, 1998). Ipsilateral activation was evident shortly after the prime, both for arrow primes and numerical stimuli. Assuming that the masked primes were not consciously perceived, these LRPs provide evidence of processing in the absence of awareness.

The time course of neural activation corresponding to a masked prime might also help explain the paradoxical negative compatibility effect sometimes observed with longer lags between the prime and response (Eimer, 1999; Eimer & Schlaghecken, 1998). The burgeoning neural activity associated with a subliminal prime diminishes rapidly when observers do not make an overt response. If inhibitory mechanisms, not yet fully characterized, are responsible for preventing an overt motor response to the masked prime (Eimer, 1999), then they might also induce a refractory period during which activation consistent with the prime is suppressed. Thus, activation in response to the consistent target would overlap temporally with this refractory period, leading to the paradoxical result of slowed responses with compatible primes. Regardless of whether the prime produces a positive or negative compatibility effect, these studies confirm that masked primes activate corresponding motor cortices.

Together, the behavioral and ERP evidence for compatibility effects suggests that unseen primes influence both performance and neural activity. However, these studies still follow the logic of the dissociation paradigm, and any claims for implicit perception must satisfy the exhaustiveness assumption. Otherwise, differences between visible and “subliminal” primes might just reflect different levels of explicit activation rather than a dissociation between explicit and implicit perception. In most response compatibility studies, the perceptibility of the prime is measured not during the primary task, but in a separate set of trials or separate control experiments (e.g., Naccache & Dehaene, 2001b). Although few subjects report having seen the primes after the primary task, performance in these separate prime perceptibility trials implies some awareness of the “subliminal” primes. For example, sensitivity measured with the signal detection index d′ ranged from 0 for some subjects to as high as 1.3 for others (Naccache & Dehaene, 2001b). Given that d′ levels as low as .3 can reflect some reliable sensitivity to the presence of a prime and a d′ level of 1 represents fairly good sensitivity, these studies do not adequately eliminate explicit awareness of the prime stimuli. Consequently, claims of compatibility effects that are devoid of any explicit awareness are not entirely supported; the masked primes might well have been explicitly detected by some of the subjects on some trials.
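As a concrete illustration of the d′ values just discussed, sensitivity can be computed from hit and false-alarm rates with the standard signal detection formula d′ = z(hit rate) − z(false-alarm rate). The sketch below uses only Python's standard library, and the detection rates are invented to land near the levels mentioned above (0, roughly .3, and roughly 1.3); they are not data from Naccache & Dehaene (2001b).

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Invented prime-detection rates for three hypothetical subjects:
print(round(d_prime(0.50, 0.50), 2))  # 0.0  -- no sensitivity to the prime
print(round(d_prime(0.56, 0.44), 2))  # 0.3  -- weak but reliable sensitivity
print(round(d_prime(0.74, 0.26), 2))  # 1.29 -- "fairly good" sensitivity
```

Note how small an asymmetry in hit and false-alarm rates (.56 vs. .44) already yields the d′ ≈ .3 that the text flags as reliable sensitivity to the prime.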

Masked Priming with a Response Window

One recent refinement of the masked priming approach involves the use of a speeded response to maximize the effects of implicit processing (Draine & Greenwald, 1998; Greenwald et al., 1996; Greenwald, Schuh, & Klinger, 1995). In this approach, subjects must make their judgment within a fixed temporal window after the presentation of the target (e.g., between 383 and 517 ms instead of a more typical response latency of about 600 ms). The goal in forcing speeded responses is to maximize any implicit compatibility effects based on the premise that such implicit compatibility effects might be short-lived.

As is typical of implicit response compatibility studies, prime visibility was measured by asking subjects to detect the masked prime stimulus either in a simple detection task or in a discrimination task (e.g., distinguish between a word prime and a random string of digits). Not surprisingly given the fixed prime presentation durations, a number of subjects had d′ levels above 0. However, these studies did not simply look at performance on the compatibility task and then presume that explicit awareness was nil. Rather, a new analytical approach was adopted: Regression was used to predict the level of the compatibility effect when explicit awareness was absent (d′ = 0). If the intercept of the regression of the compatibility effect on explicit sensitivity is greater than 0, then the study provides evidence for implicit perception. That is, implicit processing is revealed when the indirect measure reveals some consequence of the perception of the prime even when explicit sensitivity is extrapolated to d′ = 0. This approach revealed significant response compatibility effects for prime durations ranging from 17–50 ms when explicit sensitivity was extrapolated to d′ = 0.
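The regression logic can be sketched numerically. Each subject contributes a pair (explicit sensitivity d′, compatibility effect in ms), and an ordinary least-squares fit estimates the compatibility effect at d′ = 0 from the intercept. The per-subject values below are invented for illustration, not Greenwald et al.'s data; only the analytic logic follows the published approach.

```python
# Hypothetical per-subject data: explicit sensitivity to the prime (d')
# and the response compatibility effect on the indirect task (ms).
subjects = [(0.0, 18), (0.1, 22), (0.2, 19), (0.4, 27), (0.7, 30), (1.1, 38)]

def ols_fit(pairs):
    """Ordinary least-squares slope and intercept for (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = ols_fit(subjects)
# The intercept estimates the compatibility effect extrapolated to d' = 0;
# a value reliably above zero is the paradigm's evidence for implicit perception.
print(f"effect at d' = 0: {intercept:.1f} ms (slope {slope:.1f} ms per unit d')")
```

With these invented data the intercept stays well above zero, which is the pattern Greenwald and colleagues took as evidence for implicit processing.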

This approach was premised on the assumption that the response window was necessary to detect implicit compatibility effects. Another experiment tested the validity of this assumption by varying the stimulus onset asynchrony (SOA) between the prime and the target (Greenwald et al., 1996). If more than 67 ms elapsed between the prime and target onsets, masking the prime eliminated the compatibility effect. In contrast, unmasked primes produced compatibility effects at a wide range of SOAs. This finding represents an important qualitative difference between visible and subliminal primes. Moreover, the regression technique and the response-window methodology are valuable contributions to the study of implicit perception.

More importantly, the findings reveal some important limitations on implicit processing. Findings from this response-window technique suggest that implicit effects are extremely short-lived and are disrupted by even slight increases to the delay between the prime and the target. If this form of perception proves to be the only reliable way to find evidence for implicit perception, it would undermine more radical claims about the pervasiveness of implicit processes, especially implicit persuasion. Most hypothesized processes of implicit persuasion would require a much longer delay between the priming stimulus and the changed belief or action. These findings also provide an explanation for why studies of implicit perception often fail to replicate – the effects are ephemeral.

Although the response compatibility effects seem to provide evidence for implicit semantic processing, many of the findings could be attributed to motor interference rather than to semantic priming. Subjects learn responses to a stimulus, and it is the responses that conflict, not the abstract or semantic representations of those stimuli. In the response-window approach, semantic priming effects are difficult to produce, and most results can be attributed to response compatibility rather than any more abstract priming (Klinger et al., 2000). In fact, the effects, at least in some cases of word primes, seem not due to the semantic content of the word, but rather to response associations formed earlier in the experiment.

In one striking example, subjects were asked to make positive/negative valence judgments about words. In the critical trials, words that previously had been used as targets were recombined in a way that changed the valence and then were used as a prime word. For example, the targets “smut” and “bile” would become the prime word “smile.” Although the semantic representation of “smile” should lead to a compatibility benefit for a positive target, it instead facilitated processing of negative words (Abrams & Greenwald, 2000; Greenwald et al., 2003)! Moreover, unless a word or part of a word had been consciously perceived as a target during an earlier phase of the experiment, it produced no priming at all. In fact, other evidence not using a response-window technique suggests that not only must a word be consciously perceived to later serve as an effective prime but it must also have been used as a target such that the word would be associated with a motor response (Damian, 2001). In the example above, the word “smile” without prior exposure to “smut” and “bile” did not prime positive or negative words (Abrams & Greenwald, 2000; Greenwald et al., 2003). This claim directly contradicts other evidence of implicit semantic processing (Dehaene et al., 1998; Koechlin et al., 1999; Naccache & Dehaene, 2001b) by suggesting that only fragments of words are processed implicitly and that the associations they prime are developed through conscious experience as part of the experiment. Yet, evidence for priming of Arabic numerals by number words implies priming of more abstract representations, and such studies also showed priming from stimuli that had not previously been the target of a judgment (Naccache & Dehaene, 2001b). Moreover, switching the required response did not eliminate priming, so the effect cannot be entirely due to some automated form of response priming (Abrams et al., 2002).

In a recent intriguing paper, the primary adversaries in the argument over the nature of priming in the response-window paradigm combined their efforts to determine whether the effects were due to more than response compatibility (Greenwald et al., 2003). These experiments adopted numerical stimuli (Naccache & Dehaene, 2001b) in a response-window task. The stimuli were all two-digit Arabic numerals, and the judgment task required subjects to determine whether the target was greater or less than 55. Unfortunately, the use of two-digit numbers precluded the assessment of cross-notation priming, which was one of the strongest arguments for semantic processing in earlier experiments (Naccache & Dehaene, 2001b). The experiment replicated the finding of response compatibility effects with stimuli that had not previously been used as targets, refuting the argument that subjects must have formed a response association to a stimulus for it to produce priming (Abrams & Greenwald, 2000; Damian, 2001). However, the study also replicated the counter-intuitive finding that prior judgments affect the directionality of priming (Abrams & Greenwald, 2000). If 73 had served as a target, then subsequently using 37 as a prime enhanced response times to numbers greater than 55! Taken together, these findings imply that previously unclassified primes can produce compatibility effects and that they do so based on long-term semantic representations. However, such representations are overridden once a prime has been consciously classified, and then its features lead to priming based on the response association formed during the experiment.

Interim Conclusions

The response-window and regression approaches lend new credibility to the traditional dissociation technique, and they show exceptional promise as a way to produce consistent evidence for priming by masked stimuli. Although some of the findings using this method are counter-intuitive and others are contradictory, the basic approach represents one of the best existing attempts to meet the challenges of critics of implicit perception (e.g., Holender, 1986). The approach is firmly couched in the dissociation logic, and most experiments make a laudable attempt to eliminate explicit sensitivity. The regression approach, in particular, is a clever way to examine performance in the absence of awareness. However, the existing literature does leave plenty of wiggle room for skeptics unwilling to accept the existence of implicit perception. First, because the approach adopts the dissociation paradigm, the measures of explicit sensitivity might fail to measure explicit sensitivity exhaustively (Merikle & Reingold, 1998).

Perhaps of greater concern to those who are otherwise willing to adopt the dissociation logic is the nature of the regression approach itself. The approach has been criticized for making assumptions about the nature of the relationship between the direct and indirect tasks and measures (Dosher, 1998). For example, the conclusions from the regression approach often depend on extrapolation, with relatively few subjects (e.g., 25%) performing at chance on the explicit detection task and a sizable minority of subjects (25%) showing substantial explicit sensitivity with d′ levels greater than 1 (Greenwald et al., 1996). If most subjects show greater than chance explicit sensitivity, the extrapolation to zero sensitivity might not be appropriate. A skeptic could easily imagine a non-linearity in the relationship between implicit and explicit measures when explicit performance is just barely above d′ = 0. Perhaps there is a qualitative difference between minimal sensitivity and fairly good sensitivity. If so, then extrapolating to no sensitivity from fairly good sensitivity would not allow a clear conclusion in favor of implicit effects. Of course, this concern could be remedied with a more systematic manipulation of prime visibility within rather than across subjects, thereby obviating the need for any extrapolation. Given the trend toward progressively more sophisticated analyses and methodologies in this literature, this new approach shows great promise as an effective use of the dissociation paradigm.

Alternatives to Dissociation

The concerns about exhaustiveness and the possible role of failed exclusivity in minimizing evidence for implicit perception have spurred a new approach to studying implicit perception: Concentrate on qualitative or quantitative differences between tasks that purportedly measure implicit perception to different degrees. Examining differences in performance on these tasks as a function of an experimental manipulation can reveal the operation of distinct implicit and explicit processes. Two types of “relative differences” methodologies have used this logic: (1) the relative sensitivity procedure, which looks for greater sensitivity to stimulus presence with indirect measures than with direct measures of awareness, and (2) the process dissociation procedure, which looks for qualitatively different performance for implicit and explicit perception. Neither methodology requires process-pure measures of implicit or explicit perception. Rather, both assume that all tasks have implicit and explicit components. Carefully designed experiments can pull apart the underlying processes, revealing differences between the implicit and explicit processing in a given task. Both approaches also assume that implicit and explicit processes underlie functionally different types of behavior; explicit processes underlie intentional actions, whereas implicit processes govern automatic (non-intentional) behaviors. A behavior may result from a conscious, deliberate decision or from an automatic predisposition, or a combination of the two. The relative differences methodologies attempt to show that such automatic and deliberate processes can lead to qualitatively different performance.

Relative Sensitivity: An Alternative to Dissociation

The goal of the relative sensitivity procedure is to reveal implicit processes by showing instances in which indirect measures are more sensitive than comparable direct measures in making a given discrimination. This approach assumes that performance of any task involves both implicit and explicit contributions, neither of which can be measured exclusively by any task. Direct tasks measure performance when subjects are instructed to use their percept of a critical stimulus to make a judgment or discrimination. Indirect tasks involve an ostensibly unrelated behavior or judgment that nevertheless can be influenced by perception of a critical stimulus. Although the direct task might not exclusively measure explicit contributions, on its face it is demonstrably more explicit than the indirect task. Any decision-making process that relies on conscious awareness of the critical stimulus should lead to better performance on a direct measure than on an indirect measure because subjects should optimally rely on their conscious percept. In contrast, indirect measures do not require conscious perception of the critical stimulus, so subjects are unlikely to rely on conscious processing of that stimulus in making their judgment. Therefore, if indirect measures reveal better performance than direct measures, implicit processes must have influenced performance.

One critical component of this paradigm is that the two tasks must be equated in most respects. Unless the visual displays are equivalent and the task requirements comparable, any performance differences could be caused by the differences between the displays or the task demands rather than by implicit processing. Proponents of this approach rightly take pains to make sure that the only difference between the direct and indirect tasks is in the instructions (a similar approach has been adopted in the study of implicit memory; see Schacter, 1987).

Note that this criterion – equivalency across direct and indirect measures – is not often met in studies of implicit perception. Many experiments use entirely distinct indirect and direct measures, making comparability more difficult. When observers report no awareness of a stimulus on a direct measure, indirect measures such as eye movements, patterns of neural activation, skin-conductance changes, or ERPs might reveal sensitivity to the presence of a stimulus. Although these sorts of indirect measurements certainly provide important insight into the nature of the processing of the stimulus, they do not provide conclusive evidence for processing in the absence of awareness. They might only reveal greater sensitivity of the measure itself; using such measures to provide corroborating evidence for qualitative differences in implicit and explicit processing may prove more fruitful (see the neuroimaging evidence section below). For the inference of implicit processing to follow from the relative sensitivity of direct and indirect measures, however, the measures must be comparable.

In one of the first experiments to adopt the relative sensitivity approach for the study of implicit perception (Kunst-Wilson & Zajonc, 1980), subjects viewed a series of briefly presented pictures of geometric shapes. Then, they either performed an old/new recognition task (the direct task) or they picked which of two shapes they preferred (the indirect task). When performing the direct task, subjects performed no better than chance at discriminating previously viewed from novel shapes. In contrast, when performing the indirect preference judgment task, they preferred the previously studied shape over a novel shape at rates significantly above chance levels. In other words, a direct measure of conscious recognition showed less sensitivity to the presence of a representation than an indirect measure of preference. This experiment meets the standards necessary for inferring implicit processing in the relative sensitivity approach: (a) the experimental environment was constant across tasks, with only the task instructions changing across conditions, and (b) performance on the indirect task exceeded that on the direct task. By the logic of the relative sensitivity approach, the direct task represents a putatively better measure of conscious awareness, so the relatively increased sensitivity of the indirect task must have resulted from implicit processes.
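The contrast between "no better than chance" and "significantly above chance" can be made concrete with an exact binomial test against a chance rate of .5. The counts below are invented to mirror the qualitative pattern (recognition near chance, preference above it); they are not Kunst-Wilson and Zajonc's actual data, and the test uses only Python's standard library.

```python
from math import comb

def binom_p_above_chance(successes, trials, p=0.5):
    """One-tailed exact binomial probability of >= `successes` under chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Invented counts out of 100 two-alternative trials:
print(binom_p_above_chance(52, 100) > 0.05)  # True -- recognition: not above chance
print(binom_p_above_chance(62, 100) < 0.05)  # True -- preference: reliably above chance
```

The same forced-choice trial count can thus yield a null result on the direct task and a significant one on the indirect task, which is exactly the dissociation pattern the relative sensitivity approach looks for.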

One possible concern about this conclusion derives from the use of separate study and test phases rather than testing performance at the time of presentation; subjects may have perceived and forgotten the consciously experienced shape even if a vague, explicitly generated preference persisted longer and affected performance on the indirect task. Furthermore, subjects might have been less motivated to make the more difficult intentional recognition judgments than the simpler preference judgment, so they were more likely to select responses randomly. If so, then responding in both cases might reflect access to an explicit representation, with the relative increase in sensitivity for the indirect task resulting not from implicit processing but from a differential effect of motivation (cf. Visser & Merikle, 1999).

One representative experiment pitted a recognition task (direct) against a perceptual contrast judgment (indirect) in which subjects judged the contrast of a word against the background (Merikle & Reingold, 1991). In a study phase, subjects viewed pairs of words and were asked to read the cued one. Then, in the test phase, they viewed individual words against a noise background and either judged whether each word was old or new (a direct recognition task) or judged whether it was presented in high or low contrast (an indirect measure). Performance on the contrast judgment task revealed greater sensitivity to the presence of a word in the study phase than did the direct recognition task (at least for the first block of trials). Presumably, the prior presentation reduced the processing demands, leading to a subjective impression that the words were easier to see against a noisy background even if the words were not recognized. Once again, the study used comparable stimuli in the direct and indirect tasks and found better performance on the indirect task, suggesting implicit processing.

Problems with Relative Sensitivity as an Approach

Although this approach is touted as an alternative to the classic dissociation paradigm, any positive evidence for implicit perception is subject to many of the same assumptions. Positive evidence for implicit perception requires some task to have a greater implicit contribution than explicit contribution. Otherwise, performance on the indirect task could not exceed that on any task with a greater explicit component. If some task has a greater implicit than explicit component, then it should also be possible to make the task sufficiently difficult that the explicit component would be eliminated, leaving only the residual implicit component. That is, the “indirect > direct” approach is a superset of the standard dissociation paradigm that does not require the elimination of an explicit component. Yet, any case in which the indirect > direct approach reveals implicit perception would also support the possibility that the dissociation paradigm could reveal implicit perception. In essence, this approach amounts to a more liberal variant of the dissociation paradigm in which explicit processing need not be eliminated. However, as critics of early work on implicit perception have noted, whenever a stimulus is consciously perceptible, explicit factors may contaminate estimates of implicit processing.

A more general concern about this paradigm is that it assumes a unitary explicit contribution and a unitary implicit contribution. In arguing that a direct measure involves a greater explicit contribution than an indirect measure, the assumption is that the explicit contributions to each task are of the same sort. If more than one sort of explicit contribution exists, then a “direct” task might exceed an “indirect” task on some forms of explicit contribution but not others. Unless the direct task exceeds the indirect task on all explicit contributions, the logic underlying the paradigm fails. Just as the dissociation paradigm suffers from the problem of exhaustively eliminating all possible explicit contributions, the indirect > direct approach requires that the two tasks measure the same explicit component and only that explicit component. Consequently, for the logic of the paradigm to hold, the experimenter must exhaustively eliminate any extraneous explicit contributions to the indirect task that might explain superior performance on the indirect task. Given that this exhaustiveness assumption applies to the relative sensitivity approach, its advantage over the standard dissociation paradigm is somewhat unclear.

Qualitative Differences

One criterion often used to infer the existence of implicit perception relies on differences in the patterns of performance derived from implicit and explicit processes. When the pattern of performance diverges from what would be expected with explicit perception, then the processes leading to this qualitative difference might well be implicit. Qualitative differences in performance for implicit and explicit tasks or measures often provide an intuitive way to infer the existence of implicit perception. The negative compatibility effects described earlier provide one illustration of the importance of such differences for inferring implicit processing (Eimer, 1999). However, the interpretation of qualitative differences is often muddied by the challenge of determining whether differences in performance are qualitative rather than quantitative. An effect that initially appears to reflect a qualitative difference might simply be a difference along a non-linear dimension.

More importantly, though, qualitative differences in performance can occur even when subjects are aware of the stimulus (Holender, 1986). That is, qualitative differences are possible within explicit perception, so the existence of a qualitative difference in performance alone does not unequivocally demonstrate implicit perception. Rather, the qualitative difference must be accompanied by an exhaustive measure of explicit awareness. Consequently, qualitative differences can provide converging evidence for the existence of implicit perception, but they are not definitive in and of themselves (Holender, 1986). Perhaps the best example of the use of qualitative differences in studies of implicit perception comes from the use of the process dissociation paradigm (otherwise known as the “exclusion” paradigm).

Process Dissociation

In the process dissociation technique, implicit and explicit performance are put in opposition (Jacoby, 1991). As in the relative sensitivity approach, direct and indirect measures are thought to rely differentially on explicit and implicit processing. In this approach, intentional actions are assumed to be under explicit control, whereas automatic responses are thought to reflect implicit processing. Presumably, people will use consciously available information to guide their intentional actions. In contrast, information available only to implicit processes will be less subject to intentional control. Consequently, when subjects produce responses that differ from those associated with intentional actions, they may have been influenced by non-conscious processes. The critical difference between the process dissociation procedure and the relative sensitivity procedure is that the task instructions and goals are constant. Subjects always perform the same task. Rather than manipulating the task across conditions, the perceptibility of the critical stimulus itself is varied so that some responses are consistent with awareness of the stimulus and others are not.

Subjects are instructed to respond one way if they are aware of a stimulus, but implicit or indirect influences lead them to respond the opposite way by default. For example, in the original instantiation of this approach in the memory literature, subjects studied a list of words and then were asked to complete word fragments with words that had not been on the studied list (Jacoby, 1991). Presumably, if they remembered the studied word, they would successfully avoid it in the fragment task. If they did not explicitly remember the word, they might automatically or implicitly be more likely to complete a fragment with a studied word than a non-studied word. Implicit influences should increase the likelihood of completing fragments with studied words, whereas explicit influences should decrease the likelihood of completing fragments with studied words. The same logic can be applied to implicit and explicit perception: If subjects explicitly detect the presence of a word, they should avoid using it to complete a word fragment. However, if they do not detect it and it still influences performance implicitly, they should be more likely to complete a fragment with a studied word than a non-studied word.
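The opposition logic can be written out using the estimation equations from Jacoby's (1991) process dissociation procedure, in which completion rates on "inclusion" trials (try to use the studied word) and "exclusion" trials (try to avoid it) jointly yield estimates of the conscious (C) and automatic (A) contributions. The chapter describes only the exclusion condition, so the inclusion rate below is an added assumption needed to solve the equations, and both rates are hypothetical.

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Jacoby's (1991) estimates of conscious (C) and automatic (A) influence.

    Inclusion:  P(complete with studied word) = C + A(1 - C)
    Exclusion:  P(complete with studied word) = A(1 - C)
    """
    c = p_inclusion - p_exclusion
    a = p_exclusion / (1 - c) if c < 1 else float("nan")
    return c, a

# Hypothetical completion rates: subjects use the studied word on 70% of
# inclusion trials but still fail to exclude it on 25% of exclusion trials.
c, a = process_dissociation(0.70, 0.25)
print(f"conscious C = {c:.2f}, automatic A = {a:.2f}")  # C = 0.45, A = 0.45
```

Any above-baseline completion rate in the exclusion condition thus translates directly into a nonzero automatic estimate, which is how the procedure quantifies implicit influence without assuming process-pure tasks.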

Studies using this procedure have been taken to support the existence of implicit perception. For example, one study varied the presentation time for words. Immediately after viewing each word, subjects were given a word stem completion task in which they were asked to complete the stem with a word other than the one that had been presented (Debner & Jacoby, 1994). With long presentation durations, subjects were aware of the words and successfully avoided completing stems with the "studied" words relative to the baseline performance of subjects who had never been shown the word. In contrast, with shorter presentations, subjects completed the stems with the "studied" word more often than the baseline condition. Even when they were unable to use their memory for the word to guide their intentional actions (i.e., choose another word), the briefly presented word still received enough processing to increase its availability in the stem completion task.
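The inclusion/exclusion logic behind these studies (Jacoby, 1991) yields simple algebraic estimates of the controlled (explicit) and automatic (implicit) contributions. The sketch below is illustrative only; the completion probabilities are hypothetical, not data from the studies discussed.

```python
def process_dissociation(inclusion: float, exclusion: float) -> tuple[float, float]:
    """Jacoby's (1991) process-dissociation estimates.

    inclusion: P(complete stem with studied word) when trying to use it
    exclusion: P(complete stem with studied word) when trying to avoid it
    Model: inclusion = C + A*(1 - C); exclusion = A*(1 - C)
    """
    controlled = inclusion - exclusion           # C: explicit contribution
    automatic = exclusion / (1 - controlled)     # A: implicit contribution
    return controlled, automatic

# Hypothetical completion rates for a briefly presented, hard-to-see word:
C, A = process_dissociation(inclusion=0.60, exclusion=0.40)
print(round(C, 2), round(A, 2))  # → 0.2 0.5
```

On this toy model, failures to exclude (exclusion > 0 despite instructions) are attributed to the automatic component once the controlled contribution is subtracted out.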

A similar pattern emerges when attentional focus rather than presentation duration is manipulated (see Merikle et al., 2001 for an overview). In one study, subjects viewed a briefly presented cross and judged which of its two lines was longer. During a subset of trials, a word was presented briefly along with the cross (see Mack & Rock, 1998 for the origins of this method). Depending on the condition, subjects were asked to focus attention either on the cross judgment or on the words. In both conditions, subjects subsequently attempted to complete a word stem with a word that had not been presented (this study was described by Merikle et al., 2001). Those subjects who focused on the words performed well, rarely using the presented words to complete the stem. In contrast, those who focused attention on the cross judgment completed the stem with presented words more often than would be expected based on a previously determined baseline (see also Mack & Rock, 1998). When the words were the focus of attention, they presumably were available to awareness, and subjects could use that information to exclude them in the stem completion task. However, when subjects focused attention on the cross judgment, they were less aware of the words, but automatic processing of the words biased them to use the presented words in the stem completion task.

P1: JzG0521857430c09 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 16, 2007 16:21

behavioral, neuroimaging, and neuropsychological approaches to implicit perception 225

A variety of exclusion tasks have been used to study implicit perception. For example, subjects show differential effects of interference in a variant of the Stroop task when aware and unaware of a stimulus. Typically, when color patches are incongruent with a preceding word (e.g., a green patch preceded by the word "red"), subjects are slower to identify the color of the patch than if the two matched. However, if mismatches occur on a large proportion of trials (80%), subjects use this information to perform faster when the word and patch mismatch than when they match (see Merikle & Joordens, 1997b for a discussion of these studies). When words were presented long enough to be consciously detected, subjects used their explicit knowledge to override Stroop interference. In contrast, briefly presented words were not consciously detected, and subjects were significantly slowed when a word-color patch mismatch occurred (Merikle & Joordens, 1997a,b). Stroop interference can be counteracted if subjects are aware of the word and of the predictiveness of the word, but if the word is not consciously perceived, subjects cannot override these automatic interference effects.

Other exclusion studies have used similar manipulations of target predictability in response compatibility paradigms (e.g., McCormick, 1997). Subjects were asked to decide whether an X or O was in the display, and this target item was presented either on the right or left side of the fixation cross. Before the presentation, a cue appeared on the left or right side of the display. On approximately 80% of trials, the cue was on the side opposite where the target would appear. Thus, the cue predicted that the target would be on the opposite side of the display. When the cue was presented for long enough to be consciously detected, subjects responded more rapidly to targets on the side opposite the cue. In contrast, when the cue was presented too briefly to be consciously detected, subjects were faster to respond when the cue and target were on the same side of the display (McCormick, 1997). Presumably, the cue automatically attracts attention, and only with awareness can subjects override this automatic shift of attention. Without awareness, the cue automatically draws attention, leading to better performance when the target appears at the cued location. Although this finding does not involve semantic processing without awareness, it does suggest that attention shifts can be induced without awareness of the inducing stimulus.

Problems with Process Dissociation as a Measure of Implicit Perception

This approach has promise as a means of studying implicit perception. One concern about this approach, however, is that it might be subject to biases and motivational factors that affect the criterion that subjects adopt. If so, estimates of implicit processing might be inflated (Visser & Merikle, 1999). Any case in which the subject's criterion is differentially affected by exclusion and inclusion instructions can produce a change in the criterion that could then influence estimates of unconscious processing. For example, increasing incentives to exclude studied items led to improved performance, thereby decreasing estimates of unconscious processing (Visser & Merikle, 1999). More broadly, variations in the degree of confidence or certainty in a representation or a percept can lead to different degrees of success on the exclusion task. Given that the exclusion task provides the basis for inferring implicit representations, such variations are problematic. A word that is explicitly detected, but with low confidence, might lead to a failure to exclude that item on a stem completion task even though there was an explicit contribution to perception. In terms of signal detection, if subjects were conservatively biased when reporting explicit detection, estimates of implicit perception would be inflated. Thus, as in the dissociation paradigm, the explicit task must demonstrably eliminate all explicit detection and must not be subject to conservative response biases for this paradigm to provide a clear estimate of implicit perception.
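In signal detection terms, the worry is that a conservative report criterion can masquerade as null awareness. A minimal stdlib-only sketch (the hit and false-alarm rates are hypothetical, chosen to illustrate the point):

```python
from statistics import NormalDist

def sdt(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d', criterion c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # c > 0 indicates a conservative bias
    return d_prime, c

# A subject who detects the word but is reluctant to report seeing it:
d, c = sdt(hit_rate=0.30, fa_rate=0.05)
print(round(d, 2), round(c, 2))  # → 1.12 1.08
```

Despite reporting "seen" on only 30% of stimulus-present trials, this hypothetical subject has substantial sensitivity (d′ > 1) and a strongly conservative criterion, which is exactly the pattern that would inflate estimates of implicit perception.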

A Believer’s Interpretation

The past 15 years have seen tremendous improvements in the behavioral methods used to study implicit perception. More importantly, many of the early critiques of the implicit perception literature have been addressed. Most studies using the dissociation paradigm now use signal detection theory to determine the explicit perceptibility of the prime stimulus, thereby providing a more convincing demonstration that priming results from implicit processing rather than from explicit contamination. The recently introduced technique of regressing performance on an indirect measure (e.g., a response compatibility effect) on performance on an explicit detection task provides a more nuanced approach to the dissociation technique. Even when performance on the explicit task is extrapolated to null sensitivity, performance on some indirect measures is still better than chance. The use of response compatibility allows an indirect measure that can, under the right circumstances, reveal implicit semantic processing. For example, priming persists even when the format of a number (text vs. Arabic numeral) changes from prime to test. The combination of the regression technique and response compatibility paradigms provides a powerful new tool to study implicit perception, one that has produced consistent and replicable evidence for implicit perception. Finally, work using process dissociation and relative sensitivity approaches reveals evidence for qualitative differences between implicit and explicit processing. These qualitative differences suggest that different mechanisms underlie implicit and explicit perception, thereby providing further evidence for the existence of implicit perception. In sum, evidence from a wide variety of tasks and measures provides support for implicit perception and even for semantic processing in the absence of awareness. Given the wide variety of tools used in the study of implicit perception, the converging evidence for the existence of implicit perception is overwhelming.
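The regression logic can be sketched as an ordinary least-squares fit of the indirect (priming) effect against each subject's d′, reading off the intercept at d′ = 0. The data below are simulated for illustration; the slope and intercept values are not taken from any cited study.

```python
import random
from statistics import mean

def intercept_at_null_sensitivity(d_primes, priming_ms):
    """OLS fit of priming effect on d'; returns the effect extrapolated to d' = 0."""
    mx, my = mean(d_primes), mean(priming_ms)
    slope = sum((x - mx) * (y - my) for x, y in zip(d_primes, priming_ms)) / \
            sum((x - mx) ** 2 for x in d_primes)
    return my - slope * mx

random.seed(1)
# Simulated subjects: a true 20-ms priming effect at d' = 0 that grows
# with explicit sensitivity, plus per-subject noise.
d_primes = [random.uniform(0.2, 1.5) for _ in range(40)]
priming = [20 + 15 * d + random.gauss(0, 5) for d in d_primes]
print(round(intercept_at_null_sensitivity(d_primes, priming), 1))  # close to 20 ms
```

A nonzero intercept is read as priming at null sensitivity; the skeptic's point below is that this inference leans on extrapolating beyond the observed d′ range and on the relationship being linear.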

A Skeptic’s Interpretation

The tools and techniques used to study implicit perception have improved immensely over the past 20 years. Many studies have adopted signal detection theory as a way to verify the absence of explicit perception, thereby making evidence from the dissociation paradigm less subject to the standard criticisms. Moreover, seeking evidence of qualitative differences using the process dissociation paradigm or other relative sensitivity approaches is a promising avenue for the exploration of implicit perception. However, none of these approaches or studies provides airtight evidence for implicit perception, and all are subject to fairly plausible alternative explanations that rely solely on explicit mechanisms. For example, in studies using the regression approach, the direct measure often reveals sensitivity to the presence of the prime stimulus at levels far above d′ = 0 (Naccache & Dehaene, 2001b); the prime is readily visible to some subjects. Consequently, the inference for implicit perception relies on extrapolation of performance to d′ = 0 from a number of subjects who show positive sensitivity to the stimulus. This extrapolation is potentially hazardous, particularly if the distribution of subjects is not centered on d′ = 0. If the relationship between explicit perception and the indirect measure is non-linear, the extrapolation may be invalid (Dosher, 1998). Moreover, the presence of a positive indirect effect might require only a minimal amount of explicit sensitivity. No published studies have examined the effect of varying explicit sensitivity systematically (within subjects) on the magnitude of the indirect response compatibility effect. Any application of the dissociation paradigm, including the regression approach, depends critically on demonstrating null sensitivity to the presence of the critical stimulus. None of the studies to date have done so adequately.

Evidence from the process dissociation paradigm suggests a qualitative difference between implicit and explicit perception, something that would be more difficult to explain via explicit contamination. Most studies of implicit perception simply reveal "implicit" effects that are weaker versions of what would be expected with explicit processing. The process dissociation procedure, in contrast, suggests that implicit and explicit mechanisms differ. However, as accurately noted in critiques of the implicit perception literature, qualitative differences alone are insufficient to claim evidence for implicit perception. The qualitative difference could simply be a dissociation between two forms of explicit perception rather than between implicit and explicit perception. Moreover, inferences of implicit perception depend on the extent to which the explicit, intentional response fully measures all of the explicit processing. The only studies to address this question suggest that performance on the explicit task can be enhanced via motivation manipulations, thereby decreasing the evidence for implicit perception (Visser & Merikle, 1999).

In sum, the new tools introduced to study implicit perception may be promising, but the evidence for implicit perception is not yet convincing. Moreover, the implicit effects that have been reported are small and tend to vary with the extent of demonstrated explicit awareness, hinting that the "implicit" effects might well be driven by residual explicit processing. For a study using the dissociation paradigm to make a strong claim for implicit perception, no subject should show explicit sensitivity to the visibility of the critical stimulus; no study to date has met this strict criterion. Converging solid evidence from a variety of techniques can provide powerful support for a claim of implicit perception, but the convergence of weak and controvertible evidence for implicit perception does not merit strong support for the claim. If all of the evidence can be explained by plausible explicit confounds, then there is no need to infer the existence of a separate mechanism or set of mechanisms.

Evidence for Implicit Perception – Neuroimaging Data

Neuroimaging approaches provide several distinct advantages over behavioral approaches in the study of implicit perception. First, the effects of a subliminal stimulus can be assessed without an overt response; neuroimaging techniques provide an additional dependent measure of the consequences of perception, one that may allow dissociations that would be impossible with strictly behavioral measures. Moreover, differences in the pattern of activation for explicit and implicit perception might reveal additional qualitative differences between these forms of processing even if behavioral responses show no difference; neuroimaging might simply provide a more sensitive measure. Finally, the known functions of various brain regions can be mapped onto the pattern of activation produced in response to seen and unseen stimuli, allowing yet another way to determine the richness of implicit percepts.

Although such approaches have great promise as a new tool for the study of implicit perception, in many respects the existing research on the neural bases of implicit perception falls prey to the same critiques leveled at the behavioral research. Perhaps more importantly, as our review suggests, the neural activity elicited by implicit perception often is similar to that corresponding to overt perception, just diminished in amplitude. In the absence of qualitative differences in the pattern of activation, such diminished effects might well result from low-level overt perception. In such cases, the same standards and criteria applied to the use of the dissociation paradigm in behavioral research must be applied to the neuroimaging methods (see Hannula et al., 2005 for a detailed treatment of these issues).

A wide array of neuroimaging tools, most notably functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs), have been adapted to the study of implicit perception. Most often, these investigations draw on existing knowledge of the functional brain regions likely to be involved in overt perception of a class of stimuli (e.g., emotional faces, words, etc.), and then try to determine whether those same regions are active even when observers report no awareness of the stimuli. Neuroimaging studies of implicit perception typically rely on several different types of processing and stimulus classes, and for the sake of organizing this rapidly expanding field, we consider three types of evidence for implicit perception: implicit perception of faces, implicit perception of words and numbers, and ERPs in priming studies. Within the neuroimaging literature, most inferences about implicit perception depend critically on the pattern of neural localization or the magnitude of activation resulting from explicitly detected and implicitly perceived stimuli.

Implicit Perception of Faces

Face processing represents one of the more promising avenues for the study of implicit perception because the neural regions activated in response to faces are fairly well described in the neuroimaging literature. A slew of recent neuroimaging studies of face perception reveal an area in the fusiform gyrus that responds relatively more to faces than to other stimuli (the Fusiform Face Area, or FFA; Kanwisher, McDermott, & Chun, 1997; McCarthy, Puce, Gore, & Allison, 1997). Is this area active even when observers are unaware of the presence of a face stimulus? Also, fearful faces are associated with activation of the amygdala (Breiter et al., 1996; Morris et al., 1996). Do fearful faces lead to amygdala activation even when they are not consciously perceived? Finally, recent neuroimaging studies using the phenomenon of binocular rivalry have explored the areas that are activated by stimuli when they are consciously perceived and when rivalry removes them from awareness.

Recent neuroimaging studies of visual extinction patients have explored whether an extinguished face leads to activation in the FFA (Rees et al., 2000; Vuilleumier et al., 2001). Unilateral brain lesions, particularly those located in the right posterior inferior parietal lobe, are associated with spatial neglect of the contralesional visual field. Many neglect patients exhibit visual extinction, accurately detecting isolated stimuli presented in either visual field, but failing to identify a contralesional stimulus when items are presented simultaneously in both visual fields. Behavioral research (discussed in later sections of this chapter) provides evidence for residual processing of extinguished stimuli, perhaps due to intact striate and extrastriate cortex along with ventral inferotemporal areas that process object identity.

One study required a patient to respond differently to a stimulus presented solely on the left, solely on the right, or simultaneously on the left and right (Rees et al., 2000). Given that the patient had right inferior parietal lobe damage, extinction would be revealed by incorrect "right-side" responses when a stimulus was presented on both the left and the right simultaneously. By comparing fMRI data corresponding to correct "right-side" responses and incorrect "right-side" responses on extinction trials, residual neural activity associated with extinction could be revealed. Extinguished stimuli activated striate and early extrastriate cortex in the damaged right hemisphere – a pattern of activation no different from that elicited by left-side stimuli that were consciously perceived. This activation of early visual cortex occurred regardless of whether the patient was aware of the stimulus, suggesting that these areas are not sufficient for conscious awareness. More importantly, a region of interest analysis revealed low-threshold, category-specific activation in the right FFA in association with extinguished face stimuli, suggesting that the extinguished face was processed by the same regions as consciously perceived faces (for reviews of evidence for preserved activation in response to unreported stimuli, see Driver & Vuilleumier, 2001; Driver, Vuilleumier, Eimer, & Rees, 2001).

This basic pattern was replicated in a similar experiment using both fMRI and ERPs (Vuilleumier et al., 2001). An extinction patient with right-lateralized posterior inferior parietal damage indicated on each trial whether or not a face was presented. Stimuli (i.e., schematic faces and shapes) were presented unilaterally in the right or left hemifield or bilaterally. Again, extinguished faces activated right striate cortex as well as an area of inferior temporal cortex just lateral to the FFA, although the level of activation was much reduced relative to that for visible face stimuli. Furthermore, ERPs revealed a right-lateralized negativity over posterior temporal regions approximately 170–180 ms after a face was presented in the left hemifield. This N170, a component known to be face-selective, was evident regardless of whether the face was perceptible or not. Interestingly, this experiment varied the duration of the bilateral presentations in order to vary whether or not extinction occurred. Awareness of the left visual field stimulus evoked activation of striate cortex and fusiform gyrus coupled with increased activation of a network of frontal and parietal brain regions, reflecting the sorts of long-range associations or widespread activation thought to accompany consciousness (Baars, 1988). Thus, differences in activation strength and functional connectivity distinguish conscious from unconscious perception.

Evidence from patients with bilateral amygdala damage (e.g., Adolphs, Tranel, Damasio, & Damasio, 1995) and neuroimaging of intact individuals (Breiter et al., 1996; Morris et al., 1996) supports a role for the amygdala in processing fear-related stimuli, such as fearful faces. At least one theory suggests that a direct short-latency pathway between the thalamus and the amygdala might underlie the processing of emotional stimuli even in the absence of awareness (LeDoux, 1996). In one fMRI study (Whalen et al., 1998), fearful and happy faces were presented for 33 ms followed immediately by a neutral-face mask. Based on previous behavioral studies, the 33-ms masked presentation was assumed to be below the threshold for awareness. Post-study questioning found that eight of ten subjects denied having seen emotional faces and did not select these faces as having been in the stimulus set. Under these conditions, the unnoticed fearful faces did elicit a relatively circumscribed increase in amygdala activation relative to masked happy faces and a fixation baseline. This amygdala activation was attenuated with repeated exposure to masked fearful faces, a finding consistently observed with visible faces as well. Further, increased activation in response to both masked fearful and happy faces extended into the adjacent sublenticular substantia innominata of the basal forebrain, a region thought to be involved in more general processing of emotional stimuli and arousal (although activation was more pronounced for fearful faces). This pattern of results is consistent with the notion that the amygdala is selectively recruited when subliminal fear stimuli are presented (for additional evidence of early affective word processing in the absence of awareness, see Bernat, Bunce, & Shevrin, 2001a), but as for similar behavioral results, such dissociations must be interpreted with caution because of methodological shortcomings in the assessment of awareness. For example, awareness was not measured directly on each trial – doing so might change the subject's strategy from one of passive viewing to active search (Whalen et al., 1998).

In another study, relative to neutral faces, fearful faces were associated with significant activation of the left amygdala, left fusiform, lateral orbitofrontal, and right intraparietal cortex (Vuilleumier et al., 2002). Activation of the fusiform gyrus in response to extinguished faces was much reduced relative to activation for visible faces, though, and the activation evident in association with extinguished stimuli may be a consequence of feedback from the amygdala. Together, these findings suggest that emotional stimuli can receive substantial processing even if they fail to reach awareness. Emotional stimuli are among the most promising approaches to the study of implicit processing, precisely because of the hypothesized existence of a direct, perhaps more primitive neural pathway that bypasses higher cognitive areas.

These studies provide interesting, suggestive support for the hypothesized short-latency pathway originating in the thalamus (LeDoux, 1996). Such a pathway might reasonably allow for processing even in the absence of more complex cognitive processes, and by inference without awareness. More importantly, amygdala activation was not significantly modulated by awareness (Vuilleumier et al., 2002), suggesting that processing of extinguished stimuli extends beyond early visual processing areas and that activation need not be less robust in the absence of conscious detection. Similar approaches have been taken in the study of implicit processing of unnoticed, emotionally arousing stimuli (see Lane & Nadel, 2000 for an overview of work on the cognitive neuroscience of emotion).

Another neuroimaging-based approach to studying processing in the absence of awareness relies on the phenomenon of binocular rivalry. When two patterns are presented simultaneously, one to each eye, the contents of conscious awareness spontaneously alternate between one monocular percept and the other over time. The visual percepts compete for awareness such that only one image is consciously perceived, and the other is suppressed (Levelt, 1965; Wheatstone, 1838). The oscillation of perceptual awareness between two simultaneously presented stimuli provides a useful tool to identify the neural correlates of conscious awareness (for reviews, see Rees, Kreiman, & Koch, 2002; Tong, 2001, 2003).

A growing number of investigations have been conducted using fMRI to address, in particular, the contributions of specific brain regions to perceptual awareness of rivalrous stimuli. One recent study using fMRI (Tong, Nakayama, Vaughan, & Kanwisher, 1998) presented face and house images separately to each eye and measured the neural activity in two predefined regions of interest: the FFA, which responds preferentially to faces (Kanwisher et al., 1997; McCarthy et al., 1997), and a parahippocampal region that responds most strongly to places and less so to faces (the Parahippocampal Place Area, or PPA; Epstein & Kanwisher, 1998). During imaging, participants continuously reported whether they saw a face or a house, and the pattern of neural activity extracted from a region of interest analysis was time-locked to these conscious perceptual experiences. Interestingly, neural activation corresponded to the conscious perceptual experience, even though the stimulus pair was invariant within a trial; FFA activation increased when participants reported perception of a face stimulus, and PPA activation increased when they reported a house. Critically, the pattern of activation when subjects consciously perceived a face or a house when both were present (in the rivalrous stimulus) was no different than when the face or house was presented alone, suggesting that the competitive neural interactions responsible for rivalry are largely resolved before conscious perception occurs.

This finding might suggest that activation in the FFA or the PPA produces visual awareness of the presence of a face or a house. However, the FFA also is active when faces are not consciously reported (Rees et al., 2000), suggesting that reliable FFA activation is not sufficient for conscious perception of a face. This discrepancy might result from different degrees of activation, though. If neural activity is graded with respect to the level of perceptual awareness (i.e., low-level activity reflects low-level awareness) or if activity must surpass some threshold before conscious awareness occurs, then it is entirely possible that sufficient activation of the FFA or PPA does correspond to conscious awareness of a face or house, respectively. Stricter criteria for measuring conscious awareness are needed to determine whether activation in these specialized processing regions is sufficient for conscious perception.

Implicit Processing of Words and Numbers

Just as consciously perceived emotional stimuli activate the amygdala, read words tend to activate a prescribed set of brain regions more than do other stimuli (e.g., left-lateralized extrastriate cortex, fusiform gyrus, and precentral sulcus). Therefore, studies of implicit word perception can use neuroimaging evidence to determine whether words activate a similar set of regions without awareness. Such studies first assess the visibility of the critical words using behavioral measures. In one study (Dehaene et al., 2001), masked words were presented such that they were detected only 0.7% of the time (a rate slightly higher than the false alarm rate of 0.2% for trials in which no word was presented) and almost never named successfully (see also Rees, 2001). Moreover, recognition tests after the imaging portion of the study revealed no memory for the masked words. Of course, subjects might adopt a conservative criterion for indicating whether or not a word was present if they knew they would then be asked to name it. If so, then the task might not exhaustively measure conscious awareness, raising the possibility that the masked words were at least temporarily available to consciousness.
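Those reported detection rates still imply a small positive sensitivity when converted to d′. A quick check (the arithmetic here is ours, using only the two rates quoted above):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf
hit, fa = 0.007, 0.002  # reported detection rate and false-alarm rate
d_prime = z(hit) - z(fa)
print(round(d_prime, 2))  # → 0.42
```

A d′ of roughly 0.4 is low, but it is not strictly zero, which is why the criterion worry above matters for interpreting such "undetected" stimuli.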

Assuming that low detection rates and failed recognition performance imply the absence of conscious awareness of the presence of the masked words, and if neural activity is consistent with reading, then perception presumably occurred implicitly. Interestingly, when compared to control conditions that mimicked the masking conditions of the critical trials but without any masked words, the unseen stimuli activated the previously mentioned set of brain regions known to be associated with reading (Dehaene et al., 2001). This pattern is consistent with the idea that the unseen stimuli were processed similarly to visible words. However, the pattern of neural activity evoked by the masked words in the ventral visual pathway was less widely distributed and of smaller magnitude than that obtained with consciously perceived words. The discrepancy was increasingly evident from posterior to anterior brain regions, suggesting that visual masking begins to suppress neural activity early in the visual processing stream, rendering later stages of visual processing less likely. Furthermore, visible words elicited neural activity in parietal, prefrontal, and cingulate cortices, but corresponding activation was not evident when the words were not available to conscious awareness. Finally, increased correlated activity among the ventral visual stream, parietal, and prefrontal areas was evident only when the words were visible. Some of these differences might well result from the naming task used in the study rather than from the perceptibility of the stimuli. Visible words could be named, but the masked words were not. However, it cannot be determined on the basis of these results whether some of the activity associated with visible words is a consequence of the naming task. In sum, masking resulted in less robust neural activation, but also in reduced correlated neural activity that might contribute to conscious awareness.

Similar patterns have emerged in neuroimaging studies of the perception of numerical stimuli (Naccache & Dehaene, 2001a). Neuroimaging and lesion data suggest a role for the parietal lobe (and particularly the intraparietal sulcus) in the mental representation and understanding of the quantity meaning of numbers (for a review, see Dehaene, Piazza, Pinel, & Cohen, 2003). Can implicit stimuli lead to similar patterns of activation? A recent paper (Naccache & Dehaene, 2001a) reanalyzed earlier neuroimaging data (Dehaene et al., 1998) and addressed this issue by using the phenomenon of repetition suppression. A number of imaging studies have shown that when a stimulus is repeated, localized neural activity associated with processing of that stimulus or its attributes typically decreases (Schacter, Alpert, Savage, Rauch, & Albert, 1996; Schacter & Buckner, 1998; Squire et al., 1992). Whole brain analysis of fMRI data revealed two isolated brain regions with reduced activity when the target repeated the prime relative to an otherwise categorically congruent prime: the left and right intraparietal sulci (Naccache & Dehaene, 2001a). The priming effect was not influenced by the use of different notations for the prime and target (1 vs. one), suggesting that the intraparietal sulcus encodes numbers in a more abstract format. Assuming that the prime stimuli were not consciously perceived, these effects indicate that repetition suppression can occur even when observers are unaware of the repetition. Presumably, this effect reflects the fairly extensive processing of an implicitly perceived stimulus.

Additionally, ERP studies of the response compatibility effect reveal covert activation from an incongruent prime – a lateralized readiness potential (LRP) on the incorrect side of response – presumably because the incongruent prime activates the incorrect motor response (Dehaene et al., 1998). fMRI data revealed greater overall activation in right motor cortex when both the prime and target were consistent with a left hand response (and vice versa), providing additional evidence for processing of the prime stimulus without awareness (Dehaene et al., 1998). In all of these studies, perception of the prime in the absence of awareness was not limited to sensory mechanisms alone, but also influenced higher-level processing.

ERPs in Priming Experiments

The influence of an unseen prime stimulus has been explored by examining general changes in ERPs to a target as a result of the presence of a prime. These studies measure the influence of an implicit prime indirectly, looking for changed neural processing of the target rather than activation directly in response to the prime stimulus. Studies of priming by masked stimuli represent the paradigmatic application of the dissociation paradigm, and the use of ERPs in conjunction with this approach may well contribute to a more complete assessment of the processing of an unseen stimulus. To the extent that semantic processing of a prime takes place, it should lead to modulation of the N400 (i.e., a negative-going ERP component sensitive to manipulations of semantic relatedness). Experiments with supraliminal words and sentences consistently find larger deflections in N400 amplitude for incongruent than for congruent targets (Kutas & Hillyard, 1980). For example, the N400 generated in response to the word “lemon” would likely be more negative when preceded by the unrelated prime “chair” than when preceded by the related prime “citrus.”

Unfortunately, studies of N400 modulation by semantically related, unseen primes have produced mixed results (for a review, see Deacon & Shelley-Tremblay, 2000). For instance, in one experiment, masked primes led to faster responses to semantically related targets, but modulation of N400 was evident only when primes were completely visible (Brown & Hagoort, 1993). This finding implied that the N400 might constitute an electrophysiological marker of conscious semantic processes. Yet, other experiments that account for potential methodological shortcomings of this experiment induce modulation of the N400 even when the primes were consciously inaccessible (e.g., Deacon, Hewitt, Yang, & Nagata, 2000; Kiefer, 2002). Moreover, the effects of a prime on the N400 are qualitatively different for visible and masked primes. Masked primes modulate the N400 with a short SOA between the prime and target, but not with a longer SOA. In contrast, for visible primes, the modulation of the N400 increases as the SOA increases (Kiefer & Spitzer, 2000). This qualitative difference suggests that implicit and explicit perception of prime stimuli might rely on different processing mechanisms.

Taken together, these studies provide support for N400 activation in response to an unseen prime. However, they are subject to many of the critiques leveled at the dissociation paradigm (Holender, 1986). For example, visibility of the prime on some trials might well contribute to the observed effects – the measure of awareness might not have been exclusive. One recent ERP study made a valiant effort to address many of the requirements of the dissociation paradigm (Stenberg, Lindgren, Johansson, Olsson, & Rosen, 2000). Most dissociation paradigm studies attempt to render the prime invisible using a masking procedure, assuming that the prime is invisible to all subjects on all trials. An alternative approach is to vary the visibility of the target itself and to measure the ERP response to an unseen target stimulus (Stenberg et al., 2000). This approach has the advantage of allowing the trial-by-trial assessment of target visibility.

In these experiments (Stenberg et al., 2000), a visible category name (the prime) was followed by a word that either was from the primed category or from a different category. Target perceptibility was varied across blocks so that individual subjects could successfully name the target on 50% of trials. Because this subjective naming task leaves criterion setting in the hands of the subject and does not sample conscious awareness exhaustively, several measures of conscious awareness were also administered at the end of each trial: (a) subjects indicated whether or not the word had been a member of the prime category, (b) they named the word (guessing if necessary), and (c) they attempted to select the target word from either a 2- or 6-alternative forced-choice test. The 6-alternative test was considered the most sensitive, hence the most exhaustive measure of awareness.

Interestingly, the semantic priming effect (i.e., N400) distinguished between categorically consistent and categorically inconsistent words, irrespective of visibility. Although modulation of the N400 was less pronounced when the words could not be explicitly identified, the topographical pattern of activation did not differ across conditions. Qualitative differences in hemispheric lateralization were evident in an extended positive-going complex that typically accompanies cognitive tasks like the one employed in these experiments. This ERP component remained consistent irrespective of categorical classification, but had a different topography depending upon whether or not targets were explicitly identified. Consciously reported targets were associated with left-lateralized activity, whereas implicitly perceived targets elicited more distributed or right-lateralized activity, suggesting that different neural populations were recruited under these circumstances (Stenberg et al., 2000).

Together, the consistency of the N400 irrespective of visibility and differences in lateralization of raw amplitudes for visible and implicit targets strengthen claims for semantic processing of words that are not readily identified. When the criterion for conscious awareness was based on the more conservative 6-alternative forced-choice test, 30% of the words that could not be named were correctly identified and dropped from subsequent analyses. The binary categorization responses collected on the remaining trials were used to calculate d′, which was not different from 0 – providing even stronger evidence that the remaining target words were not available to conscious awareness. Despite using a more stringent objective criterion, modulation of the N400 remained intact (Stenberg et al., 2000). In fact, when a regression analysis was conducted to determine whether the N400 was more sensitive to categorical deviations than the binary-choice discrimination task, the intercept was reliably greater than 0. This experiment adopts most of the controls needed to make clear inferences from behavioral studies using the dissociation paradigm, but also adds a more sensitive neuroimaging measure to provide additional evidence for both quantitative and qualitative differences in the processing of consciously perceived and implicitly perceived stimuli.
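The d′ computation at the heart of this exclusion procedure is simple enough to sketch. In the following Python fragment the trial counts are invented for illustration (they are not data from Stenberg et al., 2000); the point is only the mechanics: d′ is the separation between the z-transformed hit and false-alarm rates, so equal rates yield a d′ of 0.

```python
# d' for a binary category judgment ("in the category" vs. "not in the
# category"): z(hit rate) - z(false-alarm rate). Trial counts below are
# hypothetical, chosen only to illustrate the computation.
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Compute d' with a small log-linear correction so that observed
    rates of exactly 0 or 1 stay finite after the z-transform."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(h) - z(f)

# A subject at chance: responses are unrelated to the true category,
# so hit and false-alarm rates match and d' is exactly 0.
chance = d_prime(hits=25, misses=25, false_alarms=25, correct_rejections=25)
print(round(chance, 3))  # 0.0

# A subject with residual explicit sensitivity yields d' well above 0.
sensitive = d_prime(hits=40, misses=10, false_alarms=15, correct_rejections=35)
print(sensitive > 0.5)  # True
```

Testing whether a group's d′ is "not different from 0," as in the study above, then amounts to a conventional significance test on these per-subject values.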

Additional evidence for a change in the ERP pattern in response to an unseen stimulus comes from studies of the P300, a component typically occurring 260–500 ms after exposure to a relatively rare stimulus. In this case, the “rarity” of the target stimulus depends on its relation to other stimuli presented in the study. Would the target stimulus reveal this rarity response if the other stimuli were not consciously perceived? A number of studies have explored this question (e.g., Brazdil, Rektor, Dufek, Jurak, & Daniel, 1998; Devrim, Demiralp, & Kurt, 1997), but most are subject to the critique that subjects were aware of the regular or frequent stimuli and that they had a strong response bias when awareness was assessed in a separate block of trials (Bernat, Shevrin, & Snodgrass, 2001b).

One more recent experiment (Bernat et al., 2001b) showed modulation of the P300 to a rare target word even when more rigorous criteria for measuring conscious awareness were applied to make sure that the frequent words were not consciously detected (for a review, see Shevrin, 2001). The words LEFT and RIGHT were presented tachistoscopically in an oddball design with an 80:20 frequent-to-rare ratio. Frequent stimuli were made subliminal by presenting them for only 1 ms, and subjects were given a forced-choice detection block after the experiment. Collapsed across subjects, d′ did not differ from 0, but not all subjects showed a d′ of 0. Consequently, the effect could be driven by a few subjects who showed awareness on some trials. However, the correlation between the d′ score for a given subject and their P300 was negative, suggesting that more awareness of the frequent stimuli actually diminished the P300 amplitude. Moreover, a regression of P300 magnitude against d′ revealed a significant P300 effect even when d′ was extrapolated to 0.

Summary

One recurrent theme in this overview of the neuroimaging of implicit perception is that, when stimuli are not consciously perceptible, activation is often reduced relative to when they are consciously perceived. Importantly, activation in response to an unseen stimulus is not limited to early sensory processing and often activates brain regions associated with processing that particular type of stimulus. These findings suggest that implicit perception might be a weaker version of the same processes occurring for explicit perception. As for most studies of implicit perception, neuroimaging studies rely almost exclusively on the dissociation paradigm, attempting to eliminate explicit awareness and then attributing the residual effects to implicit perception. To the extent that these studies fail to meet the exhaustiveness assumption of the dissociation paradigm, they are subject to the same critiques often leveled at behavioral studies (Hannula et al., 2005). The strength of the evidence for implicit perception based on neuroimaging approaches depends on the extent to which the studies successfully demonstrate that processing has really occurred in the absence of awareness.

A Believer’s Interpretation

Although some of the experiments fail to address the exhaustiveness assumption sufficiently, others provide more convincing tests of explicit awareness. Few individual studies provide unequivocal evidence for the effects of an unseen stimulus on brain activity; however, when considered holistically, the literature provides strong converging evidence. Some experiments provide evidence that processing was implicit and simultaneously demonstrate neural consequences of implicit perception. The strongest evidence comes from studies of differences in N400 amplitude in response to an implicitly perceived stimulus (Stenberg et al., 2000). Reliable differences in N400 amplitude were evident even when a fairly conservative 6-alternative forced-choice task was used to rule out explicit awareness on a trial-by-trial basis. By probing for awareness of the critical stimulus immediately after presentation, this study reduced concerns about fleeting conscious perception of the stimuli (i.e., memory failure following conscious perception). Further, the study adopted the regression technique (Greenwald et al., 1995) to show that N400 patterns persisted even when the d′ measure was extrapolated to null sensitivity. Finally, and perhaps most importantly, the patterns of neural activity elicited by implicitly and explicitly visible stimuli were qualitatively different, suggesting different neural mechanisms for the processing of implicit and explicit stimuli. Each piece of evidence can be criticized if considered in isolation, but taken together they provide one of the most complete and convincing demonstrations of implicit perception.

Further valuable evidence for implicit perception comes from fMRI studies of emotionally valenced faces (Whalen et al., 1998). Implicitly perceived fearful faces produce amygdala activation, and a subtraction analysis revealed no additional activation of visual cortex relative to happy faces, suggesting the possibility that fearful faces are processed automatically via a non-cortical route. Of course, this subtraction does not eliminate the possibility of cortical activation; both happy and fearful faces could produce visual cortex activation, and the subtraction just reveals the lack of additional cortical processing of fearful faces. Even so, the fact that amygdala activation was greater for fearful faces in the absence of greater activation of visual cortex is suggestive of an alternative, non-cortical source of the activation. Together, these results provide converging support for implicit perception.


A Skeptic’s Interpretation

Although these investigations provide some of the strongest evidence for implicit perception, and despite the advantages of using sensitive neuroimaging measures, all of them adopt the dissociation paradigm without fully meeting the exhaustiveness assumption for each subject (Hannula et al., 2005). Most of these studies find diminished responses to less visible stimuli, raising the possibility that the effect results from residual explicit processing rather than from a different mechanism altogether. That is, these findings are consistent with a failure to meet the exhaustiveness assumption. Moreover, neural activation might not be as sensitive a measure as we assume. Perhaps a sizable amount of conscious processing is necessary to produce robust neural activation and to produce the distributed processing that is typically attributed to consciousness. If so, then “implicit” stimuli may have been fleetingly or weakly perceived, and the amount of conscious information available might not be enough to drive robust neural activation. This distinction might account for qualitative differences in the pattern of activation for identified and unidentified words (Stenberg et al., 2000). The unidentified words might have received an insufficient amount of conscious processing to produce the pattern typically associated with full awareness; however, that qualitative difference does not imply the absence of explicit processing. Implicitly and explicitly perceived stimuli may produce qualitatively different patterns of activation only because the implicit stimuli received less conscious processing (not no conscious processing).

The strongest evidence reviewed here is the N400 effect for unseen stimuli. This series of studies represents the most careful and systematic exploration of implicit perception that we are aware of in any of the studies discussed in this chapter. The studies carefully segregated aware and unaware trials on the basis of both subjective (word identification) and objective (6-alternative forced-choice [6AFC] decisions) measures and then examined the N400 for both correct and incorrect/absent responses. Although explicit sensitivity (measured using d′ for a binary category decision task) was effectively nil for mistaken responses in the 6AFC task, it was reliably above chance for the word identification task (in Experiments 2 and 3). Thus, the identification task clearly was not exhaustive. The 6AFC task comes closer, but a skeptic could quibble with several of the procedures in this study. First, the mean d′ was often greater than 0, and some subjects had d′ values greater than 0.5 on the binary choice. Although the regression method revealed an intercept significantly greater than 0, suggesting implicit processing even when d′ was extrapolated to 0, the fact that many observers had greater than nil sensitivity raises concerns that a few of the subjects might partially drive the results. A better approach would be to set the stimulus characteristics separately for each subject such that d′ is as close as possible to 0 on the explicit task. Another concern is that the task used to measure d′ was a binary category judgment (in the category vs. not in the category). This task might not be as sensitive as a presence/absence judgment, raising the possibility that a more sensitive measure might reveal some explicit processing even when observers show no sensitivity in the category judgment. These critiques aside, this study represents one of the greatest challenges to a skeptic because it uses multiple explicit measures and a sensitive imaging measure to examine implicit processing.

Evidence for Implicit Perception – Patient Data

Studies of brain-damaged patients provide some of the most compelling evidence for implicit perception. In fact, some have noted the surprising acceptance of evidence for implicit perception in brain-damaged subjects even by researchers who reject similar methods in the study of unimpaired subjects (Merikle & Reingold, 1992). In part, this acceptance of evidence from patient populations derives from the belief that brain damage can entirely disrupt some aspects of conscious perception or memory. If so, then the brain damage may provide the most effective elimination of explicit perception, much more so than simply reducing the visibility of a stimulus via masking. In unimpaired populations, the mechanisms for conscious perception are potentially available, leaving the persistent concern that any evidence for implicit perception might derive from explicit contamination. However, if the mechanisms themselves are eliminated by brain damage, then any residual processing must be attributed to implicit processes.

The challenge for researchers wishing to provide evidence for implicit perception is different for patient studies. Rather than trying to show that a particular task rules out the use of explicit perception, researchers must demonstrate that the patient entirely lacks the capacity for explicit processing in any task. Given that most such “natural experiments” are inherently messy, with some spared abilities intermixed with impairments, conclusions from patient studies depend on a systematic exploration of the nature and extent of the deficit in processing. In many cases, such studies require the same level of empirical precision necessary in behavioral studies, but they are further hampered by the limited subject population.

In this section, we consider three different sorts of evidence for a distinction between implicit and explicit processing. In two of these cases, conclusions rely heavily on the data of a relatively small number of patients. First, we consider the implications of studies of DF, who is a visual form agnosic. We then consider two different classes of brain-damage phenomena, each of which has led to striking findings of preserved processing in the absence of awareness: blindsight and visual neglect.

DF and the Two Pathways Argument

The visual form agnosic patient DF (Goodale & Milner, 1992; Milner & Goodale, 1995) acquired her deficit from bilateral damage to portions of extrastriate visual cortex in the ventral visual processing stream. Although she can perceive and discriminate surface features such as color and texture, she shows a strikingly impaired ability to visually discriminate figural properties of objects, such as form, size, and orientation. Her preserved haptic and auditory discrimination of objects reveals preserved general knowledge and object recognition abilities; her deficit is one of visual object perception. Despite her inability to recognize objects visually, she can use the visual structure of objects to guide her motor responses. For example, she shows normal performance when trying to insert a slate into a slot, using the proper orientation and directed movement even though she cannot report the orientation of the slate in the absence of a motor interaction (Goodale, Milner, Jakobson, & Carey, 1991). Furthermore, she cannot report the orientations of blocks placed on tables, but can still reach out and pick up the blocks with appropriate grip aperture and limb movements (Jakobson & Goodale, 1991).

These results countermand the intuition that perception produces a unitary representation of the world – namely, that interactions with the visual world should rely on the same representations and mechanisms as visual interpretation of the world. The dissociation in DF's ability to interpret and act on the world provides evidence for two distinct mechanisms to process visual information. One system involves the phenomenal recognition of parts of the visual world, and the other, operating without our awareness of the identities of objects, allows us to act on the world. In other words, this system seems to allow guided motor responses to objects even if we are unaware of what those objects might be.

The case of DF does not provide evidence for implicit perception in the same sense discussed throughout the rest of this chapter; at some level, DF is aware of the existence of the object even if she cannot name it. However, the case has some interesting parallels, and it reveals the importance of looking for qualitative differences in performance. One obvious parallel is that a visual stimulus can elicit an appropriate action or response even if some aspects of it are unavailable to consciousness. Visual analysis of an object does not guarantee conscious perception of its properties. More importantly, some aspects of visual processing occur outside of what can be consciously reported. The case of DF differs from other studies of implicit perception in that her spared abilities do not involve the processing of symbolic representations outside of awareness. Rather, she can engage in actions toward objects without needing to use a symbolic representation or any recognition-based processes. Most studies of implicit perception focus on whether or not implicit symbol manipulation or representation is possible (Dulany, 2004).

Blindsight

Neurologists had long speculated that some visual functioning might persist even in patients blinded by cortical damage (see Teuber, Battersby, & Bender, 1960, for a review), but the phenomenology of “blindsight” was not convincingly demonstrated until the 1970s (Poppel, Held, & Frost, 1973; Weiskrantz, Warrington, Sanders, & Marshall, 1974). Patients suffering damage to primary visual cortex (V1) experience a visual scotoma; they fail to consciously perceive objects that fall into the affected portion of their visual field. They do not perceive a black hole or an empty space. Rather, the missing region of the visual field simply does not reach awareness, much as neurologically intact individuals do not normally notice their blind spot when one eye is closed. Blindsight refers to the finding that some cortically blind patients show evidence of perception in their damaged field in the absence of awareness. In essence, such patient evidence constitutes an application of the dissociation logic; the patient reports no awareness of the stimulus but still shows some effect of it. In a classic study of blindsight (Weiskrantz et al., 1974), lights were flashed in the damaged visual field of patient DB. Although DB reported no awareness of the lights, he could point out the location of the light more accurately than would be expected by chance. This finding suggests that V1 contributes to visual awareness, because in its absence, patients do not consciously experience visual stimuli. Perhaps the most established explanation for blindsight posits two routes to visual perception: (a) a pathway via V1 that leads to conscious awareness and (b) a more primitive pathway bypassing V1, perhaps via the superior colliculus. The latter route presumably allows perception in the absence of awareness. Indeed, in animals, cells in MT specialized for the detection of motion continue to respond normally to moving stimuli in the scotoma (Rosa, Tweedale, & Elston, 2000).

The two-routes hypothesis provides a strong claim about the nature of implicit perception, with one route operating outside awareness and the other generating awareness. Over the past 20 years, this hypothesis has faced a number of challenges designed to undermine the claim that conscious perception is entirely absent in blindsight. In other words, these alternative explanations question the exhaustiveness of the measure of conscious awareness, which in this case is the subjective report of the subject. For example, damage to V1 might be incomplete, with islands of spared cortex that function normally, thereby allowing degraded visual experience in small portions of the scotoma region (Fendrich, Wessinger, & Gazzaniga, 1992, 1993; Gazzaniga, Fendrich, & Wessinger, 1994; Wessinger, Fendrich, & Gazzaniga, 1997). Brain imaging of blindsight patients has returned mixed results: at least one patient (CLT) showed a small region of metabolically active visual cortex (Fendrich et al., 1993), whereas other researchers found no evidence for intact visual cortex in structural scans of other blindsight patients (e.g., Trevethan & Sahraie, 2003; Weiskrantz, 2002). Moreover, lesions of V1 in animals produce blindsight-like behavior even though these controlled lesions likely are complete (e.g., Cowey & Stoerig, 1995). Another alternative is that neurologically spared regions surrounding the scotoma receive differential sensory input as a result of the presence of an item in the blind region, thereby allowing better-than-chance guessing (Campion, Latto, & Smith, 1983). For example, a light source in the blind field might also generate some visual input for regions outside the blind field via scattering of light, thereby indicating the presence of something unseen (see the commentary in Campion et al., 1983, for a discussion). A final challenge comes from the argument that blindsight itself might indicate a change in response criterion rather than a change in awareness or sensitivity per se (Azzopardi & Cowey, 1998). This challenge is based on the idea that subjective reports on single trials do not fully measure awareness and that a signal detection approach is needed to verify that the response criterion cannot entirely account for the results. We address this final alternative in more detail here because it is the most theoretically relevant to the topic of this chapter.

Most evidence for blindsight comes from a comparison of performance on two tasks: a presence/absence judgment (direct measure of awareness) and a forced-choice task (indirect measure of perception), and most data are reported in terms of percentage correct (Azzopardi & Cowey, 1998). Yet, the use of percent correct to compare performance in these two tasks could well lead to spurious dissociations between implicit and explicit perception because percent correct measures are affected by response biases (Campion et al., 1983). For example, subjects tend to adopt a fairly conservative response criterion (responding only when certain) when asked to make a presence/absence judgment about a near-threshold stimulus. Furthermore, subjects may well vary their criterion from trial to trial. In contrast, when subjects are forced to choose between two alternative stimuli or to pick which temporal interval contained a stimulus, response bias is less of a concern; subjects have to choose one of the two stimuli. Direct comparisons of presence/absence and forced-choice performance therefore pit a criterion-contaminated measure against a largely bias-free one.

To examine the possibility of bias, a frequently tested blindsight patient's (GY) sensitivity to the presence of stimuli was measured with d′ (or da where appropriate) along with his response criterion for a variety of tasks often used to study blindsight (Azzopardi & Cowey, 1998). As expected, responding was unbiased in a forced-choice task. In contrast, response criterion in a presence/absence judgment was fairly conservative (c = 1.867), and interestingly, it was substantially reduced by instructing GY to guess when unsure (c = .228). These findings reveal the danger of relying on percent correct as a primary measure of blindsight; with sensitivity set to d′ = 1.5, these levels of bias elicit 75% correct responding for a forced-choice task, but 55% performance for a presence/absence judgment. In fact, any d′ > 1 would lead to an apparent dissociation in percentage correct, but the result could entirely be attributed to response criterion rather than differential sensitivity. Using this signal detection approach, GY showed greater sensitivity for static displays in forced-choice responses than in presence/absence responses, but the same did not hold for moving displays. Thus, evidence for “blindsight” to motion stimuli in which patients report no awareness (presence/absence) but still show accurate forced-choice performance might result entirely from shifts in response criterion. These results underscore the danger of relying on percent correct scores in investigations employing blindsight patients and highlight the benefits of using bias-free tasks. To date, relatively few investigations of blindsight have adopted these important methodological changes, despite an active literature that possesses surprising scope given the apparent rarity of blindsight patients.
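The arithmetic behind this warning can be reproduced under standard equal-variance Gaussian signal detection assumptions. The exact figures quoted above depend on Azzopardi and Cowey's particular parameterization, so the Python sketch below is meant to show the direction and rough size of the effect of a conservative criterion at fixed d′, not to reproduce their numbers.

```python
# Why percent correct can manufacture a dissociation: at a fixed d',
# percent correct in a presence/absence task collapses as the response
# criterion grows more conservative, while an unbiased forced-choice
# task is unaffected. One common equal-variance parameterization is
# used here; the criterion values come from the chapter's example.
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF

def pc_yes_no(d: float, c: float) -> float:
    """Percent correct for presence/absence with criterion c (c > 0 is
    conservative), assuming half of all trials contain a stimulus."""
    hit_rate = phi(d / 2 - c)
    false_alarm_rate = phi(-d / 2 - c)
    return (hit_rate + (1 - false_alarm_rate)) / 2

def pc_forced_choice(d: float) -> float:
    """Percent correct for an unbiased two-alternative forced choice."""
    return phi(d / 2 ** 0.5)

d = 1.5
print(round(pc_forced_choice(d), 2))    # high, no criterion involved
print(round(pc_yes_no(d, c=1.867), 2))  # same d', conservative criterion: near chance
print(round(pc_yes_no(d, c=0.228), 2))  # same d', after the "guess" instruction
```

Under these assumptions, the conservative criterion alone drags presence/absence accuracy toward chance while forced-choice accuracy stays high, which is exactly the spurious "blindsight" pattern the signal detection analysis warns against.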

Inferences from one recent study are less subject to the criterion problem (Marcel, 1998). Two patients (TP and GY) completed a series of tasks that required forced-choice judgments, and only some of the tasks showed evidence of implicit perception. For example, neither patient showed much priming from single letters presented to their blind field when the task was to pick the matching letter. Also, neither was more likely to select a synonym of a word presented to the blind field. However, when defining a polysemous word, both showed priming in their choice of a definition when a word presented to the blind field disambiguated the meaning. Given that these tasks all involve forced-choice decisions, differences between them are unlikely to result from response biases. Interestingly, the finding that the least direct measure shows an effect implies that semantic concepts are activated without activating the representation of the word itself.

One intriguing finding is that some blindsight patients apparently consciously experience afterimages of stimuli presented to their blind field (Marcel, 1998; Weiskrantz, 2002). Such afterimages, if frequently experienced by blindsight patients, might explain some residual perception in the damaged field. Interestingly, the afterimages can arise after information from the blind and sighted fields has been combined. When different colored filters were used for the blind and sighted fields, patient DB experienced an afterimage that was specific to the combination of those two colors, suggesting that information from the blind field was processed beyond the point required to resolve binocular differences (Weiskrantz, 2002).

The phenomenon of blindsight represents one of the most striking demonstrations of non-conscious perception. It provides potentially important insights into the need for V1 in order to consciously perceive our environment. However, the approaches typically used to study blindsight are subject to methodological critiques because they often do not account for response biases in the measurement of awareness. Perhaps more importantly, most such studies are couched in the dissociation framework, inferring implicit perception based on the absence of direct evidence for conscious perception. Consequently, blindsight findings are subject to many of the same objections raised for behavioral work on implicit perception.

Parietal Neglect

Visual neglect involves deficient awareness of objects in the contralesional visual field, typically resulting from damage to the posterior inferior parietal lobe in the right hemisphere, secondary to middle cerebral artery infarction. Although both blindsight and neglect are associated with spared processing in the absence of awareness, neglect is characterized as an attentional (rather than sensory) deficit, and commonly occurs in the absence of a visual scotoma (or blind spot). The damage in neglect occurs later in the perceptual processing stream than it does in cases of blindsight, raising the possibility that neglected stimuli might be processed semantically to a greater extent as well (see Driver & Mattingley, 1998, for a review). In many patients, the failure to notice or attend to contralesional stimuli is exacerbated when stimuli are presented simultaneously to both left and right visual fields, presumably because these stimuli compete for attention (visual extinction, as described earlier). For example, neglect patients might fail to eat food on the left side of their plate. Some patients fail to dress the left side of their body or to brush the left side of their hair. Although neglect can affect other processing modalities (e.g., haptic and auditory processing), we limit our discussion to visual neglect.

Evidence for preserved processing of neglected visual stimuli takes several forms: (a) successful same/different discrimination of bilaterally presented stimuli despite a failure to report the contralesional stimulus, (b) intact lexical and semantic processing of extinguished stimuli, and (c) activation of responses consistent with an extinguished prime. Here we review evidence from each of these areas, and we also describe experiments designed to test the claim that extinguished stimuli are not consciously perceived.

Many studies demonstrate the preserved ability to discriminate identical pairs of items from those that are physically or categorically dissimilar (Berti et al., 1992; Verfaellie, Milberg, McGlinchey-Berroth, & Grande, 1995; Volpe, Ledoux, & Gazzaniga, 1979). Typically, two pictures or words are briefly presented to the right and left of fixation, and patients judge whether they are the same (have the same name) or are different. Both patients and intact control subjects perform this task better than chance even when the object orientations differ or when they are two different exemplars of the same category (e.g., two different cameras). Further, patients can reliably indicate that physically similar (and semantically related) items are in fact different from one another. In all cases, patients report little or no awareness of the stimulus in the contralesional visual field.

These findings suggest that neglect patients can process extinguished stimuli semantically and that their representation of these unseen stimuli is fairly complex and complete. The dissociation between naming and matching indicates that visual processing of extinguished stimuli proceeds relatively normally despite the absence of awareness (but see Farah, Monheit, & Wallace, 1991). However, more concrete evidence for the absence of explicit awareness of extinguished stimuli is required for a clear conclusion in favor of implicit perception. This approach is logically equivalent to the dissociation paradigm: demonstrate that subjects cannot perceive a stimulus, and then look for residual effects on performance. Subjects claim no conscious experience of extinguished stimuli but still are able to perform a fairly complex discrimination on the basis of the stimulus presentation. In fact, when patients were required to name both stimuli, they frequently named only the ipsilesional item. However, some of them felt that something had appeared on the contralesional side. None of these studies demonstrates that sensitivity to the presence of the extinguished stimulus is objectively no better than chance.

An alternative to examining preserved judgments about extinguished stimuli is to explore whether such stimuli produce lexical or semantic priming. To the extent that neglect is only a partial disruption of the ability to form representations, the inability to name extinguished stimuli might result from a failure to access existing representations rather than a failure to form a representation (McGlinchey-Berroth, Milberg, Verfaellie, Alexander, & Kilduff, 1993). Repetition priming tasks have been used to examine lexical, orthographic, or phonological priming by neglected stimuli (Schweinberger & Stief, 2001), and semantic priming studies have explored whether neglected primes receive more extensive cognitive processing (e.g., Ladavas, Paladini, & Cubelli, 1993; McGlinchey-Berroth et al., 1993). Primes are presented briefly in either the contralesional or ipsilesional visual field followed by a visible target stimulus, and priming is reflected in faster processing of the target stimulus. Lexical priming apparently survives visual neglect: Priming was evident for both patients and normal controls only when word stimuli were repeated and not when non-word stimuli were repeated, suggesting that the neglected word activated an existing representation (Schweinberger & Stief, 2001). Furthermore, the magnitude of priming was comparable in the contralesional and ipsilesional visual fields. In fact, left visual field priming was actually greater than that of normal controls for patients who neglected their left visual field. This counterintuitive finding may result from a center-surround mechanism that increases activation for a weakly accessible or subconscious visual stimulus while simultaneously inhibiting activation of other related items (see Carr & Dagenbach, 1990).

Similar claims have been made with respect to higher-level semantic processing of neglected visual stimuli. In one of these experiments (McGlinchey-Berroth et al., 1993), pictures, used as prime stimuli, were presented peripherally in the left or the right visual field, and filler items (a meaningless visual stimulus made up of components of the target items) were presented on the opposite side. After 200 ms, the pictures were replaced by a central target letter string, and subjects indicated whether or not the string was a word. Semantic priming should lead to faster lexical decisions if the prime pictures were related to the word. Although patients responded more slowly than controls, they showed semantic priming even though they could not identify the prime pictures in a 2-alternative forced-choice task (McGlinchey-Berroth et al., 1993). Other semantic priming tasks have found faster processing of a right-lateralized word following a left-lateralized prime word (Ladavas et al., 1993). Given that the patient in this study was unable to read a single word presented in the left visual field, and performed no better than chance with lexical decision, semantic discrimination, and stimulus detection tasks even without a bilateral presentation, the prime presumably was not consciously perceived.

However, none of the studies discussed thus far provided an exhaustive test for conscious awareness of left-lateralized stimuli, leaving open the possibility that residual awareness of the "neglected" stimulus might account for preserved priming effects. More generally, the use of different paradigms or stimuli in tests of awareness and measures of priming does not allow a full assessment of awareness during the priming task; measures of awareness may not generalize to the experiment itself. Many of the priming studies also introduce a delay interval between the prime and target, leaving open the possibility that patients shift their attention to the extinguished stimulus in advance of target presentation (see Cohen, Ivry, Rafal, & Kohn, 1995).

Other studies of implicit perception in visual neglect have adopted a response compatibility approach in which a central target item is flanked by an irrelevant item on either the left or right side of fixation (Cohen et al., 1995). These flanker items were either compatible, incompatible, or neutral with respect to the response required for the target. Interestingly, responses to the target were slower when the flanker was incompatible, even when it was presented to the contralesional visual field. In a control experiment using the same materials, the patients were asked to respond to the flankers and were given unlimited time to respond. Responses were reliably slower and more error prone for stimuli presented to the contralesional visual field. This finding confirms the impairment of processing of stimuli in the contralesional visual field, but it also undermines the response compatibility results as a demonstration of implicit perception. That subjects could, when instructed, direct their attention to the "neglected" contralesional stimulus implies that they might have had some residual awareness of flankers presented to the contralesional visual field. The patients in this experiment also had more diffuse damage than is typical in neglect experiments, and one had left-lateralized damage. The diffuse damage might affect performance on the flanker task for reasons other than hemispatial neglect.

Most studies of implicit perception by neglect patients have focused on determining the richness of processing of neglected stimuli, but relatively few studies have focused on producing convincing demonstrations that neglected stimuli truly escape conscious awareness. One recent study adopted the process dissociation procedure in an attempt to provide a more thorough demonstration that processing of neglected stimuli is truly implicit (Esterman et al., 2002). A critical picture appeared in one visual field and a meaningless filler picture appeared in the other. After a 400-ms delay, a two-letter word stem appeared in the center of the screen, and subjects were either instructed to complete the stem with the name of the critical picture (inclusion) or to complete it with any word other than the picture name (exclusion). Relative to normal control subjects, patients were less likely to complete word stems with picture names in the inclusion task, particularly when the picture was presented in the neglected visual field. In contrast, patients were more likely than controls to complete the stems with picture names in the exclusion task when the picture was presented in the neglected field. Moreover, they completed such stems with the picture name more frequently than in a baseline condition with no pictures. If patients had explicitly perceived the stimulus, they would not have used it to complete the word stem. However, they still processed it enough that it influenced their stem completion. Although this study provides clearer evidence for implicit perception of neglected stimuli, the methods are subject to the same critiques discussed in our review of behavioral evidence using the exclusion paradigm. Clearly, more systematic assessments of explicit perception of neglected words are needed before unequivocal claims about implicit perception in neglect are possible.
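The inclusion/exclusion logic just described can be turned into numerical estimates using Jacoby's standard process dissociation equations, in which inclusion performance reflects both conscious and automatic influences (inclusion = C + A(1 − C)) while exclusion completions with the picture name reflect only automatic influence that escapes conscious control (exclusion = A(1 − C)). The Python sketch below is illustrative; the completion rates are hypothetical, not data from Esterman et al. (2002):

```python
def process_dissociation(inclusion, exclusion):
    """Estimate conscious (C) and automatic (A) influences from
    inclusion/exclusion completion rates via Jacoby's equations:
        inclusion = C + A * (1 - C)
        exclusion = A * (1 - C)
    which give C = inclusion - exclusion and A = exclusion / (1 - C)."""
    conscious = inclusion - exclusion
    if conscious >= 1.0:
        raise ValueError("cannot estimate A when C >= 1")
    automatic = exclusion / (1.0 - conscious)
    return conscious, automatic

# Hypothetical rates for pictures shown in the neglected field:
# the stem is completed with the picture name on 50% of inclusion
# trials but still on 20% of exclusion trials (above baseline).
C, A = process_dissociation(inclusion=0.50, exclusion=0.20)
print(f"conscious influence C = {C:.2f}")   # 0.30
print(f"automatic influence A = {A:.2f}")   # 0.29
```

An above-baseline exclusion rate combined with a modest inclusion rate yields a nonzero automatic estimate alongside a small conscious estimate, which is the pattern reported for neglected-field pictures.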

A Believer’s Interpretation

Perhaps more interesting than the evidence itself is the face validity of evidence for implicit perception in patient populations. Neglect and blindsight illustrate the serious behavioral ramifications of the absence of awareness, and their normal behaviors are in essence a constant, real-world version of the process dissociation paradigm. Such patients' daily actions reflect their lack of awareness of some aspects of their visual world, and if they had awareness of those aspects, they would perform differently. Evidence for perception despite the absence of awareness in these patients is particularly convincing because the absence of explicit awareness is their primary deficit. In combination with behavioral and neuroimaging evidence, these data confirm that implicit perception is possible in the absence of awareness.

A Skeptic’s Interpretation

Studies of patient populations rely extensively on the logic of the dissociation paradigm; patients lack awareness of parts of their visual world, so any residual processing of information in those areas must reflect implicit perception. Unfortunately, few studies exhaustively eliminate the possibility of partial explicit processing of visual information in the face of these deficits. Nobody doubts that explicit awareness is affected in both blindsight and neglect. These deficits of awareness have clear behavioral consequences. However, impaired awareness does not mean absent awareness. One of the few studies of blindsight to measure awareness using signal detection theory found that many of the most robust findings supporting implicit perception could be attributed to bias rather than residual sensitivity (Azzopardi & Cowey, 1998). This study reveals the danger in relying solely on subjective reports of awareness rather than on systematic measurement of performance. Most patient studies use subjects' ability to report their visual experience as the primary measure of explicit awareness, with implicit perception inferred from any spared processing in the "blind" field. Such subjective reports in the context of the dissociation paradigm do not provide an adequately exhaustive measure of explicit perception. Consequently, performance on indirect measures might reflect residual explicit perception rather than implicit perception.

What Do Dissociations in Perception Mean?

Despite protestations to the contrary, the century-old debate over the mere existence of implicit perception continues to this day. The techniques and tools have improved, but the theoretical arguments are surprisingly consistent. In essence, believers argue that the converging evidence provides overwhelming support for the existence of implicit perception, whereas skeptics argue that almost all findings of implicit perception fail to provide adequate controls for explicit contamination. As with most such debates, different conclusions can be drawn from the same data. Believers can point to improved methodologies that provide more sensitive measures of implicit processing or that more effectively control for explicit processing. Skeptics can point to the fact that none of these controls are airtight and that the effects, when present, tend to be small. Believers point to converging evidence from patients and from imaging studies with neurologically intact individuals, and skeptics point to the even greater inadequacies of the controls for explicit processing in those domains. The conclusions drawn from these data are colored by assumptions about the parsimony of each conclusion. Believers find the conclusion in favor of implicit processing more parsimonious because a variety of critiques, some fairly convoluted, are needed to account for all of the converging support for implicit processing. Skeptics find conclusions in favor of implicit processing unappetizing because they often posit the existence of additional mechanisms when all of the data can potentially be explained using solely explicit processing.

More recent behavioral techniques have made progress toward eliminating the more obvious objections of skeptics. Qualitative differences, signal detection measures of sensitivity, and regression techniques are appropriate first steps toward overcoming the critiques of the dissociation paradigm, although a staunch critic might never be satisfied. From this possibly irresolvable debate, perhaps some additional insights can be gleaned. Regardless of whether or not implicit perception exists, what can we learn about perceptual processing from attempts to reveal implicit perception? What do dissociations in perception, whether between implicit and explicit or entirely within explicit processing, tell us about the nature of awareness and about the nature of perception? What do these dissociations mean for our understanding of perception?

For the moment, let's assume that implicit perception exists. If it does exist, what does it do? Does it play a functional role in the survival of the perceiver? Our perceptual systems exist to extract information from the world to allow effective behavior. The world, then, presents a challenge to a perceptual system: The information available far exceeds our ability to consciously encode and retain it. Our perceptual systems evolved to extract order and systematicity from the available data, to encode those aspects of the world that are relevant for the behavioral demands of our ecological niche (Gibson, 1966). Our perceptual systems adapted to extract signal from the noise, allowing us to survive regardless of whether we are aware of the variables that influence our behavior.

We are only aware of a subset of our world at any time, and we are, of course, unaware of those aspects that fail to reach awareness. Just as a refrigerator light is always on whenever we look inside, any time we examine the outputs of perceptual processing, we are aware of those outputs. Someone unfamiliar with the workings of a refrigerator might assume that the light is on when the door is closed. Similarly, given that the only information available to consciousness is that information that has reached awareness, an intuitive inference would be to assume that all processing involves awareness – we never "see" evidence of processing without awareness. Claims that mental processes happen outside of consciously mediated operations run counter to this intuitive belief. That same belief might also underlie the willingness to accept subjective reports as adequate measures of conscious processing (see the subjective vs. objective threshold discussion above). The goal of the implicit perception literature is to determine whether or not perceptual processing occurs with the metaphorical refrigerator light out.

A fundamental issue in the implicit perception literature concerns the similarities of the types of processing attributed to implicit and explicit mechanisms. Is there a commonality to those operations that apply with and without awareness? If implicit perception exists, do the implicit mechanisms apply to everything outside of awareness equally, or is there some selectivity? Does the spotlight of consciousness perpetually move about, randomly illuminating implicitly processed information, thereby bringing it to awareness? Or are there fundamental differences between implicit and explicit processes? This question is, in many respects, more interesting and important than the question of whether or not implicit perception exists at all. Implicit perception would lack its popular appeal and broad implications if it produced nothing more than a weak version of the same processing that would occur with awareness. The broad popular appeal (or fear) of the notion of implicit processing is that it could, under the right circumstances, lead to behaviors different from those we would choose with awareness.

One way to conceptualize the implicit/explicit distinction is to map it onto the intentional/automatic dichotomy. Consciousness presumably underlies intentional actions (Searle, 1992), those in which perceivers can perform new operations, computations, or symbol manipulations on the information in the world. Automatic behaviors, in contrast, reproduce old operations, computations, or symbol manipulations, repeating processes that were effective in the past in the absence of intentional control. Automatic computations occur in a data-driven, possibly encapsulated fashion (Fodor, 1986). Much of the evidence for perception without awareness is based on this sort of data-driven processing that could potentially affect explicit processing, but that occurs entirely without conscious control. Previous exposure to a stimulus might lead to more automatic processing of it the next time. Or, if the implicit and explicit processing mechanisms overlap substantially, a prior exposure might provide metaphorical grease for the gears, increasing the likelihood that bottom-up processing will lead to explicit awareness.

The vast majority of evidence for implicit perception takes this form, producing behavioral responses or neural activation patterns that mirror those we might expect from explicit processing. Behavioral response compatibility findings are a nice example of this form of implicit perception: The interference shown in response to an implicitly perceived prime is comparable to that we might expect from a consciously perceived prime (except for NCE effects). Similarly, much of the fMRI evidence for implicit perception shows that activation from unseen stimuli mirrors the pattern of activation that would occur with awareness.

These findings are not as theoretically interesting as cases in which the outcome of implicit perception differs from what we would expect with conscious perception – cases in which implicit and explicit processes lead to qualitatively different outcomes. When the results are the same for implicit and explicit processing, the standard skeptical critiques weigh heavily. The failure to meet the assumptions of the dissociation paradigm adequately leaves open the possibility that "implicit" behaviors result from explicit processing. In contrast, qualitative differences are theoretically significant regardless of whether or not they reflect a difference between implicit and explicit processing. Take, for example, the case of DF (Goodale & Milner, 1992). She can accurately put an oriented card through a slot, but lacks conscious access to the orientation of the card. Although she cannot subjectively perceive the orientation, other mechanisms allow her to access that information and to use it in behavior. In other words, the dissociation implies the operation of two different processes. Moreover, the finding is significant even if her accurate behavior involves some degree of explicit awareness. The dissociation reveals the operation of two different processes and different uses of the same visual information. If the behavior happened to result entirely from implicit perception, that would be interesting as well, but it is not as important as the finding that two different processes are involved. Similarly, evidence for amygdala activation from unreported fearful faces is interesting not because the faces cannot be reported but because it suggests a possible alternative route from visual information to neural activation (LeDoux, 1996). Subsequent control experiments might show that the unreported faces were explicitly perceivable. However, the more interesting question is whether a subcortical route exists, not whether that subcortical route operates entirely without awareness. Of course, to the extent that inferences about alternative processing mechanisms depend on the complete absence of awareness, these findings will always be open to critique. Although evidence from the process dissociation paradigm is subject to shifts in bias and motivation (Visser & Merikle, 1999), the underlying goal of that paradigm has at its base the demonstration of a qualitative difference in performance. Whether this difference reflects the distinction between implicit and explicit processing or between two forms of explicit processing is of secondary importance.

In sum, the evidence for implicit perception continues to be mixed and likely will remain that way in spite of improved tools and methods. A diehard skeptic likely will be able to generate some alternative, however implausible, in which explicit processing alone can explain a dissociation between implicit and explicit perception. Similarly, believers are unlikely to accept the skeptic's discomfort with individual results, relying instead on the convergence of a large body of evidence. Methodological improvements might well force the skeptic to adopt more convoluted explanations for the effects, but are unlikely to eliminate those explanations altogether.

Here we propose a somewhat different focus for efforts to explore perception with and without awareness. Rather than trying to eliminate all aspects of explicit perception, research should focus instead on demonstrating differences in the perceptual mechanisms that vary as a function of manipulating awareness. Qualitative differences in perception are interesting regardless of whether they reflect purely implicit perception. Differences between performance when most explicit processes are eliminated and performance when all explicit processes can be brought to bear are interesting in their own right and worthy of further study.

References

Abrams, R. L., & Greenwald, A. G. (2000). Partsoutweigh the whole (word) in unconsciousanalysis of meaning. Psychological Science, 11(2),118–124 .

Abrams, R. L., Klinger, M. R., & Greenwald, A.G. (2002). Subliminal words activate semanticcategories (not automated motor responses).Psychonomic Bulletin & Review, 9(1), 100–106.

Adolphs, R., Tranel, D., Damasio, H., & Damasio,A. R. (1995). Fear and the human amygdala.Journal of Neuroscience, 15 , 5879–5891.

Azzopardi, P., & Cowey, A. (1998). Blindsightand visual awareness. Consciousness & Cogni-tion, 7(3), 292–311.

Baars, B. J. (1988). A cognitive theory of conscious-ness. New York: Cambridge University Press.

Bar, M., & Biederman, I. (1998). Subliminal visualpriming. Psychological Science, 9(6), 464–469.

Bernat, E., Bunce, S., & Shevrin, H. (2001). Event-related brain potentials differentiate positiveand negative mood adjectives during bothsupraliminal and subliminal visual processing.International Journal of Psychophysiology, 42 ,11–34 .

Bernat, E., Shevrin, H., & Snodgrass, M. (2001).Subliminal visual oddball stimuli evoke a P300

component. Clinical Neurophysiology, 112 (1),159–171.

Berti, A., Allport, A., Driver, J., Dienes, Z.,Oxbury, J., & Oxbury, S. (1992). Levels of pro-cessing for visual stimuli in an “extinguished”field. Neuropsychologia, 30, 403–415 .

Brazdil, M., Rektor, I., Dufek, M., Jurak, P., &Daniel, P. (1998). Effect of subthreshold targetstimuli on event-related potentials. Electroen-cephalography & Clinical Neurophysiology, 107 ,64–68.

Breiter, H. C., Etcoff, N. L., Whalen, P. J.,Kennedy, W. A., Rauch, S. L., Buckner, R. L.,et al. (1996). Response and habituation of thehuman amygdala during visual processing offacial expression. Neuron, 17 , 875–887.

Brown, C., & Hagoort, P. (1993). The process-ing nature of the N400: Evidence from maskedpriming. Journal of Cognitive Neuroscience, 5 ,34–44 .

Brown, M., & Besner, D. (2002). Seman-tic priming: On the role of awareness invisual word recognition in the absence of anexpectancy. Consciousness and Cognition, 11(3),402–422 .

Campion, J., Latto, R., & Smith, Y. M. (1983).Is blindsight an effect of scattered light, sparedcortex, and near-threshold vision? Behavioral& Brain Sciences, 6(3), 423–486.

Carr, T. H., & Dagenbach, D. (1990). Semanticpriming and repetition priming from maskedwords: Evidence for a center-surround atten-tional mechanism in perceptual recognition.Journal of Experimental Psychology: Learning,Memory and Cognition, 16, 341–350.

Cheesman, J., & Merikle, P. M. (1984). Primingwith and without awareness. Perception & Psy-chophysics, 36(4), 387–395 .

Cherry, E. C. (1953). Some experiments upon therecognition of speech, with one and with twoears. Journal of the Acoustical Society of America,2 5 , 975–979.

Page 264: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: JzG0521857430c09 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 16, 2007 16:21

2 46 the cambridge handbook of consciousness

Cohen, A., Ivry, R. B., Rafal, R. D., & Kohn, C. (1995). Activating response codes by stimuli in the neglected visual field. Neuropsychology, 9, 165–173.

Coles, M. G., Gratton, G., & Donchin, E. (1988). Detecting early communication: Using measures of movement-related potentials to illuminate human information processing. Biological Psychology, 26, 69–89.

Corteen, R. S., & Dunn, D. (1974). Shock-associated words in a nonattended message: A test for momentary awareness. Journal of Experimental Psychology, 102(6), 1143–1144.

Corteen, R. S., & Wood, B. (1972). Autonomic responses to shock-associated words in an unattended channel. Journal of Experimental Psychology, 94(3), 308–313.

Cowey, A., & Stoerig, P. (1995). Blindsight in monkeys. Nature, 373, 247–249.

Damian, M. (2001). Congruity effects evoked by subliminally presented primes: Automaticity rather than semantic processing. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 154–165.

Daza, M. T., Ortells, J. J., & Fox, E. (2002). Perception without awareness: Further evidence from a Stroop priming task. Perception & Psychophysics, 64(8), 1316–1324.

Deacon, D., Hewitt, S., Yang, C., & Nagata, M. (2000). Event-related potential indices of semantic priming using masked and unmasked words: Evidence that N400 does not reflect a post-lexical process. Cognitive Brain Research, 9, 137–146.

Deacon, D., & Shelley-Tremblay, J. (2000). How automatically is meaning accessed: A review of the effects of attention on semantic processing. Frontiers in Bioscience, 5, E82–94.

Debner, J., & Jacoby, L. (1994). Unconscious perception: Attention, awareness and control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(2), 304–317.

Dehaene, S., Naccache, L., Cohen, L., Le Bihan, D., Mangin, J., Poline, J., et al. (2001). Cerebral mechanisms of word masking and unconscious repetition priming. Nature Neuroscience, 4(7), 752–758.

Dehaene, S., Naccache, L., Le Clech, G., Koechlin, E., Mueller, M., Dehaene-Lambertz, G., et al. (1998). Imaging unconscious semantic priming. Nature, 395(6702), 597–600.

Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20(3–6), 487–506.

Dell’Acqua, R., & Grainger, J. (1999). Unconscious semantic priming from pictures. Cognition, 73(1), B1–B15.

Deutsch, J. A., & Deutsch, D. (1963). Attention: Some theoretical considerations. Psychological Review, 70(1), 80–90.

Devrim, M., Demiralp, T., & Kurt, A. (1997). The effects of subthreshold visual stimulation on P300 response. Neuroreport, 8, 3113–3117.

Dosher, B. A. (1998). The response-window regression method – some problematic assumptions – comment on Draine and Greenwald (1998). Journal of Experimental Psychology: General, 127(3), 311–317.

Draine, S. C., & Greenwald, A. G. (1998). Replicable unconscious semantic priming. Journal of Experimental Psychology: General, 127(3), 286–303.

Driver, J., & Mattingley, J. B. (1998). Parietal neglect and visual awareness. Nature Neuroscience, 1, 17–22.

Driver, J., & Vuilleumier, P. (2001). Perceptual awareness and its loss in unilateral neglect and extinction. Cognition, 79(1–2), 39–88.

Driver, J., Vuilleumier, P., Eimer, M., & Rees, G. (2001). Functional magnetic resonance imaging and evoked potential correlates of conscious and unconscious vision in parietal extinction patients. Neuroimage, 14, S68–75.

Dulany, D. E. (2004). Higher order representation in a mentalistic metatheory. In R. J. Gennaro (Ed.), Higher-order theories of consciousness. Amsterdam: John Benjamins.

Eimer, M. (1999). Facilitatory and inhibitory effects of masked prime stimuli on motor activation and behavioural performance. Acta Psychologica, 101(2–3), 293–313.

Eimer, M., & Schlaghecken, F. (1998). Effects of masked stimuli on motor activation: Behavioral and electrophysiological evidence. Journal of Experimental Psychology: Human Perception and Performance, 24, 1737–1747.

Eimer, M., & Schlaghecken, F. (2002). Links between conscious awareness and response inhibition: Evidence from masked priming. Psychonomic Bulletin & Review, 9(3), 514–520.

Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601.

Esterman, M., McGlinchey-Berroth, R., Verfaellie, M., Grande, L., Kilduff, P., & Milberg, W. (2002). Aware and unaware perception in hemispatial neglect: Evidence from a stem completion priming task. Cortex, 38(2), 233–246.

Farah, M. J., Monheit, M. A., & Wallace, M. A. (1991). Unconscious perception of “extinguished” visual stimuli: Reassessing the evidence. Neuropsychologia, 29, 949–958.

Fendrich, R., Wessinger, C. M., & Gazzaniga, M. S. (1992). Residual vision in a scotoma: Implications for blindsight. Science, 258, 1489–1491.

Fendrich, R., Wessinger, C. M., & Gazzaniga, M. S. (1993). Sources of blindsight – Reply to Stoerig and Weiskrantz. Science, 261, 493–495.

Fodor, J. (1986). Modularity of mind. In Z. W. Pylyshyn & W. Demopoulos (Eds.), Meaning and cognitive structure: Issues in the computational theory of mind (pp. 129–167). Norwood, NJ: Ablex.

Freud, S. (1966). Introductory lectures on psychoanalysis (J. Strachey, Trans.). New York: W. W. Norton.

Gazzaniga, M. S., Fendrich, R., & Wessinger, C. M. (1994). Blindsight reconsidered. Current Directions in Psychological Science, 3(3), 93–96.

Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin.

Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25.

Goodale, M. A., Milner, A. D., Jakobson, L. S., & Carey, D. P. (1991). A neurological dissociation between perceiving objects and grasping them. Nature, 349, 154–156.

Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.

Greenwald, A. G. (1992). New Look 3: Unconscious cognition reclaimed. American Psychologist, 47, 766–779.

Greenwald, A., Abrams, R., Naccache, L., & Dehaene, S. (2003). Long-term semantic memory versus contextual memory in unconscious number processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(2), 235–247.

Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102, 4–27.

Greenwald, A. G., Draine, S. C., & Abrams, R. L. (1996). Three cognitive markers of unconscious semantic activation. Science, 273(5282), 1699–1702.

Greenwald, A. G., Schuh, E. S., & Klinger, M. R. (1995). Activation by marginally perceptible (subliminal) stimuli: Dissociation of unconscious from conscious cognition. Journal of Experimental Psychology: General, 124(1), 22–42.

Greenwald, A. G., Spangenberg, E. R., Pratkanis, A. R., & Eskenazi, J. (1991). Double-blind tests of subliminal self-help audiotapes. Psychological Science, 2, 119–122.

Hannula, D. E., Simons, D. J., & Cohen, N. J. (2005). Imaging implicit perception: Promise and pitfalls. Nature Reviews Neuroscience, 6, 247–255.

Holender, D. (1986). Semantic activation without conscious identification in dichotic listening, parafoveal vision, and visual masking: A survey and appraisal. Behavioral and Brain Sciences, 9, 1–66.

Jacoby, L. L. (1991). A process dissociation framework: Separating automatic and intentional uses of memory. Journal of Memory & Language, 30, 513–541.

Jakobson, L. S., & Goodale, M. A. (1991). Factors affecting higher-order movement planning: A kinematic analysis of human prehension. Experimental Brain Research, 8, 199–208.

Jaskowski, P., van der Lubbe, R. H. J., Schlotterbeck, E., & Verleger, R. (2002). Traces left on visual selective attention by stimuli that are not consciously identified. Psychological Science, 13(1), 48–54.

Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.

Kiefer, M. (2002). The N400 is modulated by unconsciously perceived masked words: Further evidence for an automatic spreading activation account of N400 priming effects. Cognitive Brain Research, 13(1), 27–39.

Kiefer, M., & Spitzer, M. (2000). Time course of conscious and unconscious semantic brain activation. Neuroreport, 11(11), 2401–2407.

Klapp, S., & Hinkley, L. (2002). The negative compatibility effect: Unconscious inhibition influences reaction time and response selection. Journal of Experimental Psychology: General, 131(2), 255–269.

Klinger, M., Burton, P., & Pitts, G. (2000). Mechanisms of unconscious priming: I. Response competition, not spreading activation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(2), 441–455.

Koechlin, E., Naccache, L., Block, E., & Dehaene, S. (1999). Primed numbers: Exploring the modularity of numerical representations with masked and unmasked semantic priming. Journal of Experimental Psychology: Human Perception & Performance, 25(6), 1882–1905.

Kunst-Wilson, W. R., & Zajonc, R. B. (1980). Affective discrimination of stimuli that cannot be recognized. Science, 207, 557–558.

Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207, 203–205.

Ladavas, E., Paladini, R., & Cubelli, R. (1993). Implicit associative priming in a patient with left visual neglect. Neuropsychologia, 31, 1307–1320.

Lane, R. D., & Nadel, L. (2000). Cognitive neuroscience of emotion. New York: Oxford University Press.

Le Doux, J. E. (1996). The emotional brain. New York: Simon & Schuster.

Levelt, W. J. M. (1965). On binocular rivalry. Assen, The Netherlands: Royal VanGorcum.

Levin, D. T., Momen, N., Drivdahl, S. B., & Simons, D. J. (2000). Change blindness blindness: The metacognitive error of overestimating change-detection ability. Visual Cognition, 7, 397–412.

Mack, A., & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.

Macmillan, N. A. (1986). The psychophysics of subliminal perception. Behavioral and Brain Sciences, 9, 38–39.

Marcel, A. J. (1983a). Conscious and unconscious perception: An approach to the relations between phenomenal experience and perceptual processes. Cognitive Psychology, 15, 238–300.

Marcel, A. J. (1983b). Conscious and unconscious perception: Experiments on visual masking and word recognition. Cognitive Psychology, 15, 197–237.

Marcel, A. J. (1998). Blindsight and shape perception: Deficit of visual consciousness or of visual function? Brain, 121(8), 1565–1588.

McCarthy, J. C., Puce, A., Gore, J. C., & Allison, T. (1997). Face-specific processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 9, 604–609.

McCormick, P. (1997). Orienting attention without awareness. Journal of Experimental Psychology: Human Perception & Performance, 23(1), 168–180.

McGlinchey-Berroth, R., Milberg, W. P., Verfaellie, M., Alexander, M., & Kilduff, P. T. (1993). Semantic processing in the neglected visual field: Evidence from a lexical decision task. Cognitive Neuropsychology, 10, 79–108.

Merikle, P. (1992). Perception without awareness: Critical issues. American Psychologist, 47(6), 766–779.

Merikle, P. M. (1994). On the futility of attempting to demonstrate null awareness. Behavioral & Brain Sciences, 17(3), 412.

Merikle, P., & Joordens, S. (1997a). Parallels between perception without attention and perception without awareness. Consciousness & Cognition, 6(2–3), 219–236.

Merikle, P. M., & Joordens, S. (1997b). Measuring unconscious influences. In J. W. Schooler (Ed.), Scientific approaches to consciousness (pp. 109–123). Mahwah, NJ: Erlbaum.

Merikle, P. M., & Reingold, E. M. (1991). Comparing direct (explicit) and indirect (implicit) measures to study unconscious memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(2), 224–233.

Merikle, P. M., & Reingold, E. M. (1992). Measuring unconscious perceptual processes. In T. S. Pittman (Ed.), Perception without awareness: Cognitive, clinical, and social perspectives (pp. 55–80). New York: Guilford Press.

Merikle, P., & Reingold, E. (1998). On demonstrating unconscious perception – Comment on Draine and Greenwald (1998). Journal of Experimental Psychology: General, 127(3), 304–310.

Merikle, P., Smilek, D., & Eastwood, J. (2001). Perception without awareness: Perspectives from cognitive psychology. Cognition, 79(1–2), 115–134.

Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. New York: Oxford University Press.

Mitroff, S. R., Simons, D. J., & Franconeri, S. L. (2002). The siren song of implicit change detection. Journal of Experimental Psychology: Human Perception & Performance, 28(4), 798–815.

Moray, N. (1959). Attention in dichotic listening: Affective cues and the influence of instructions. Quarterly Journal of Experimental Psychology, 11, 56–60.


behavioral, neuroimaging, and neuropsychological approaches to implicit perception 249

Morris, J. S., Frith, C. D., Perrett, D. I., Rowland, D., Young, A. W., Calder, A. J., et al. (1996). A differential neural response in the human amygdala to fearful and happy facial expressions. Nature, 383, 812–815.

Most, S. B., Simons, D. J., Scholl, B. J., Jimenez, R., Clifford, E., & Chabris, C. F. (2001). How not to be seen: The contribution of similarity and selective ignoring to sustained inattentional blindness. Psychological Science, 12(1), 9–17.

Naccache, L., Blandin, E., & Dehaene, S. (2002). Unconscious masked priming depends on temporal attention. Psychological Science, 13(5), 416–424.

Naccache, L., & Dehaene, S. (2001a). The priming method: Imaging unconscious repetition priming reveals an abstract representation of number in the parietal lobes. Cerebral Cortex, 11(10), 966–974.

Naccache, L., & Dehaene, S. (2001b). Unconscious semantic priming extends to novel unseen stimuli. Cognition, 80, 223–237.

Neumann, O., & Klotz, W. (1994). Motor responses to nonreportable, masked stimuli: Where is the limit of direct parameter specification? In M. Moscovitch (Ed.), Attention and performance (Vol. XV, pp. 123–150). Cambridge, MA: MIT Press.

Poppel, E., Held, R., & Frost, D. (1973). Residual visual function after brain wounds involving the central visual pathways in man. Nature, 243, 295–296.

Pratkanis, A. R. (1992). The cargo-cult science of subliminal persuasion. Skeptical Inquirer, 16, 260–272.

Pratkanis, A., Eskenazi, J., & Greenwald, A. (1994). What you expect is what you believe (but not necessarily what you get) – A test of the effectiveness of subliminal self-help audiotapes. Basic & Applied Social Psychology, 15(3), 251–276.

Rees, G. (2001). Seeing is not perceiving. Nature Neuroscience, 4(7), 678–680.

Rees, G., Kreiman, G., & Koch, C. (2002). Neural correlates of consciousness in humans. Nature Reviews Neuroscience, 3, 261–270.

Rees, G., Wojciulik, E., Clarke, K., Husain, M., Frith, C., & Driver, J. (2000). Unconscious activation of visual cortex in the damaged right hemisphere of a parietal patient with extinction. Brain, 123(8), 1624–1633.

Reingold, E., & Merikle, P. (1988). Using direct and indirect measures to study perception without awareness. Perception & Psychophysics, 44(6), 563–575.

Rosa, M. G. P., Tweedale, R., & Elston, G. N. (2000). Visual responses of neurons in the middle temporal area of new world monkeys after lesions of striate cortex. Journal of Neuroscience, 20(14), 5552–5563.

Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 501–518.

Schacter, D. L., Alpert, N. M., Savage, C. R., Rauch, S. L., & Albert, M. S. (1996). Conscious recollection and the human hippocampal formation: Evidence from positron emission tomography. Proceedings of the National Academy of Sciences, 93(1), 321–325.

Schacter, D. L., & Buckner, R. L. (1998). On the relations among priming, conscious recollection, and intentional retrieval: Evidence from neuroimaging research. Neurobiology of Learning & Memory, 70(1–2), 284–303.

Schweinberger, S., & Stief, V. (2001). Implicit perception in patients with visual neglect: Lexical specificity in repetition priming. Neuropsychologia, 39(4), 420–429.

Searle, J. R. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.

Shevrin, H. (2001). Event-related markers of unconscious processes. International Journal of Psychophysiology, 42, 209–218.

Sidis, B. (1898). The psychology of suggestion. New York: D. Appleton.

Simons, D. J. (2000). Attentional capture and inattentional blindness. Trends in Cognitive Sciences, 4(4), 147–155.

Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28, 1059–1074.

Squire, L. R., Ojemann, J. G., Miezin, F. M., Petersen, S. E., Videen, T. O., & Raichle, M. E. (1992). Activation of the hippocampus in normal humans: A functional anatomical study of memory. Proceedings of the National Academy of Sciences, 89(5), 1837–1841.

Stenberg, G., Lindgren, M., Johansson, M., Olsson, A., & Rosen, I. (2000). Semantic processing without conscious identification: Evidence from event-related potentials. Journal of Experimental Psychology: Learning, Memory, & Cognition, 26(4), 973–1004.

Teuber, H. L., Battersby, W. S., & Bender, M. B. (1960). Visual field defects after penetrating missile wounds of the brain. Cambridge, MA: Harvard University Press.

Tong, F. (2001). Competing theories of binocular rivalry: A possible resolution. Brain and Mind, 2, 55–83.

Tong, F. (2003). Primary visual cortex and visual awareness. Nature Reviews Neuroscience, 4, 219–229.

Tong, F., Nakayama, K., Vaughan, J. T., & Kanwisher, N. (1998). Binocular rivalry and visual awareness in human extrastriate cortex. Neuron, 21, 753–759.

Treisman, A. M. (1960). Contextual cues in selective listening. Quarterly Journal of Experimental Psychology, 12, 242–248.

Treisman, A. (1964). Monitoring and storage of irrelevant messages in selective attention. Journal of Verbal Learning and Verbal Behavior, 3, 449–459.

Trevethan, C. T., & Sahraie, A. (2003). Spatial and temporal processing in a subject with cortical blindness following occipital surgery. Neuropsychologia, 41(10), 1296–1306.

Verfaellie, M., Milberg, W. P., McGlinchey-Berroth, R., & Grande, L. (1995). Comparison of cross-field matching and forced-choice identification in hemispatial neglect. Neuropsychology, 9, 427–434.

Visser, T., & Merikle, P. (1999). Conscious and unconscious processes: The effects of motivation. Consciousness & Cognition, 8(1), 94–113.

Volpe, B. T., Ledoux, J. E., & Gazzaniga, M. S. (1979). Information processing of visual stimuli in an “extinguished” field. Nature, 282, 722–724.

Vuilleumier, P., Armony, J., Clarke, K., Husain, M., Driver, J., & Dolan, R. (2002). Neural response to emotional faces with and without awareness: Event-related fMRI in a parietal patient with visual extinction and spatial neglect. Neuropsychologia, 40(12), 2156–2166.

Vuilleumier, P., Sagiv, N., Hazeltine, E., Poldrack, R. A., Swick, D., Rafal, R. D., et al. (2001). Neural fate of seen and unseen faces in visuospatial neglect: A combined event-related functional MRI and event-related potential study. Proceedings of the National Academy of Sciences, 98(6), 3495–3500.

Watanabe, T., Nanez, J., & Sasaki, Y. (2001). Perceptual learning without perception. Nature, 413(6858), 844–848.

Weiskrantz, L. (2002). Prime-sight and blindsight. Consciousness & Cognition, 11(4), 568–581.

Weiskrantz, L., Warrington, E. K., Sanders, M. D., & Marshall, J. (1974). Visual capacity in the hemianopic field following a restricted occipital ablation. Brain, 97, 709–728.

Wessinger, C. M., Fendrich, R., & Gazzaniga, M. S. (1997). Islands of residual vision in hemianopic patients. Journal of Cognitive Neuroscience, 9(2), 203–221.

Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., & Jenike, M. A. (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit awareness. Journal of Neuroscience, 18, 411–418.

Wheatstone, C. (1838). Contributions to the physiology of vision – Part the first. On some remarkable and hitherto unobserved phenomena of binocular vision. Philosophical Transactions of the Royal Society of London, 128, 371–394.

Wolfe, J. M. (1999). Inattentional amnesia. In V. Coltheart (Ed.), Fleeting memories: Cognition of brief visual stimuli (pp. 71–94). Cambridge, MA: MIT Press.


Chapter 10

Three Forms of Consciousness in Retrieving Memories

Henry L. Roediger III, Suparna Rajaram, and Lisa Geraci

Abstract

The study of conscious processes during memory retrieval is a relatively recent endeavor. We consider the issue and review the literature using Tulving’s distinctions among autonoetic (self-knowing), noetic (knowing), and anoetic (non-knowing) types of conscious experience during retrieval. One index of autonoetic consciousness is the experience of remembering (mental time travel to recover past events and the sense of re-experiencing them). We review the literature on judgments of remembering (expressing autonoetic consciousness) and those of knowing (being confident something happened without remembering it, an expression of noetic consciousness). These introspective judgments during retrieval have produced a sizable body of coherent literature, even though the field remains filled with interesting puzzles (such as how to account for the experience of remembering events that never actually occurred). Priming on implicit memory tests can be considered an index of anoetic consciousness, when past events influence current behavior without intention or awareness. In addition to reviewing the Remember/Know judgment literature and the topic of priming, we consider such related topics as objective measures of conscious control in Jacoby’s process dissociation procedure and the thorny issue of involuntary conscious memory.

Accessing Memories: Three Forms of Consciousness

During most of the first hundred years that researchers worked on issues in human memory, considerations of conscious experience were rare. Scholars interested in conscious experience did not consider states of consciousness during retrieval from memory, and memory researchers rarely considered conscious states of awareness of their subjects performing memory tasks. Consciousness and memory were considered separate areas of inquiry, with few or no points of contact. Researchers working on the vexing problem of consciousness were interested in such topics as sleeping and waking, hypnosis, states of alertness and awareness, and how skilled tasks become automatic and appear to drop out of conscious control, among other issues. Researchers working in traditions of human memory considered the products of memory – what people recalled or recognized when put through their paces in experimental paradigms – but they generally did not concern themselves with the state of conscious awareness accompanying the memory reports.

In fact, in the first empirical studies of memory, Ebbinghaus (1885/1964) championed his savings method of measuring retention because it avoided reliance on “introspective” methods of assessing memory (recall and recognition). Nonetheless, Ebbinghaus did clearly state his opinion of the relation of consciousness and memory, and the relevant passage is still worth quoting today:

Mental states of every kind – sensations, feelings, ideas – which were at one time present in consciousness and then have disappeared from it, have not with their disappearance ceased to exist . . . they continue to exist, stored up, so to speak, in the memory. We cannot, of course, directly observe their present existence, but it is revealed by the effects which come to our knowledge with a certainty like that with which we infer the existence of stars below the horizon. These effects are of different kinds.

In a first group of cases we can call back into consciousness by an exertion of the will directed to this purpose the seemingly lost states . . . that is, we can produce them voluntarily . . .

In a second group of cases this survival is even more striking. Often, even after years, mental states once present in consciousness return to it with apparent spontaneity and without any act of the will; that is, they are produced involuntarily . . . in the majority of the cases we recognize the returned mental state as one that has already been experienced; that is, we remember it.

Finally, there is a third and large group to be reckoned with here. The vanished mental states give indubitable proof of their continuing existence even if they themselves do not return to consciousness at all . . . The boundless domain of the effect of accumulated experiences belongs here . . . Most of these experiences remain concealed from consciousness and yet produce an effect which is significant and which authenticates their previous existence (Ebbinghaus, 1885/1964, pp. 1–2).

In today’s terminology, we might say that Ebbinghaus was outlining different means of retrieval or accessing information from memory. The first case, voluntary recollection, resembles retrieval from episodic memory (Tulving, 1983) or conscious, controlled recollection (Jacoby, 1991). The second case, involuntary recollection, has several modern counterparts, but perhaps the most direct is the concept of involuntary conscious memory discussed by Richardson-Klavehn, Gardiner, and Java (1996), among others. Finally, the idea that aftereffects of experience may be expressed in behavior and the person may never be conscious of the fact that current behavior is so guided is similar to the contemporary idea of priming on implicit or indirect tests of memory (Schacter, 1987).

Although Ebbinghaus raised the issue of the relation of consciousness to memory on the first two pages of his great book that began the empirical investigation of memory, later generations of researchers generally did not take up the puzzles he posed, at least until recently. Today, research on consciousness and memory is proceeding apace, although the field is still fumbling toward a lucid and encompassing theory. One origin of the current interest in consciousness among memory researchers can be traced, quite after the fact, to the 1980s, with the rise of the study of priming in what are now called implicit or indirect memory experiments.

Warrington and Weiskrantz (1968, 1970) first showed that amnesic patients could perform as well as normal control subjects on indirect tests of memory, such as completing fragmented pictures or words. When the corresponding pictures or words had been studied recently, patients could complete the fragments as well as control subjects (see too Graf, Squire, & Mandler, 1984).


However, these same patients did much more poorly than controls on free recall and recognition tests. The critical difference between these types of test was the instructions given to subjects. In standard memory tests like recall and recognition, subjects are explicitly told to think back to the recent experiences to be retrieved. These are called explicit or direct tests of memory. In what are now called implicit (or indirect) tests, subjects are presented with material in one guise or another in an experiment and are later told to perform the criterial task (naming fragmented words or pictures, answering general knowledge questions, among many others) as well and quickly as possible. Usually no mention is made about the test having anything to do with the prior study episode, and often researchers go to some effort to disguise the relation between the two. The finding, as in the work of Warrington and Weiskrantz (1968, 1970), is that prior experience with a picture or word facilitates, or primes, naming of the fragmented items. The phenomenon is called priming and has been much studied in the past 25 years. Because retention is measured indirectly and the study of memory is implicit in the procedure, these tasks are called implicit or indirect memory tasks. Graf and Schacter (1985) first used the terms “explicit” and “implicit memory” to refer to these different types of measures. Schacter (1987) provided a fine historical review of the concept of implicit memory, and a huge amount of research has been conducted on this topic. Here we make only a few points about this research to set the stage for the chapter.

First, hundreds of experiments have shown dissociations between measures of explicit memory and implicit memory with both neuropsychological variables (different types of patients relative to control subjects; see Moscovitch, Vriezen, & Goshen-Gottstein, 1993) and variables under experimental control (e.g., type of study condition, type of material, and many others; see Roediger & McDermott, 1993). Sometimes a variable can have a powerful effect on an explicit task and no effect on priming (Jacoby & Dallas, 1981), whereas in other cases the situation is reversed (Church & Schacter, 1994). And a variable can even have opposite effects on an explicit and implicit task (e.g., Blaxton, 1989; Jacoby, 1983b; Weldon & Roediger, 1987), even under conditions when all variables except instructions to the subjects are held constant (Java, 1994). There is no doubt that explicit and implicit measures are tapping different qualities of memory.

Second, one straightforward and appealing way to think of the contrast between explicit and implicit forms of memory is to align them with states of consciousness. Explicit memory tests are thought to be reflections of memory with awareness, or conscious forms of memory, whereas implicit memory tests are thought to reflect an unaware, unconscious, or even automatic form of memory. This appealing argument, which was put forward in one form or another by many authors in the 1980s (e.g., Graf & Schacter, 1985; Jacoby & Witherspoon, 1982, among others), seems valid up to a point, but that point is quickly reached, and in 2006, no researcher would agree with this assessment. States of consciousness (e.g., aware and unaware) cannot be directly equated with performance on explicit and implicit tests. Jacoby (1991) refers to this as a process purity assumption (that a task and state of consciousness in performing the task can be equated). He argued that it is difficult to provide convincing evidence that an implicit test does not involve some component of conscious awareness and even harder to show that an explicit test does not involve unconscious components. His process dissociation procedure was developed in the same paper as a promising way to cut this Gordian knot and measure conscious and unconscious components underlying task performance. Another method of measuring these components that makes different assumptions is the Remember/Know paradigm originally created by Tulving (1985) and developed by others (Gardiner, 1988; Rajaram, 1993). Still, no method is generally agreed upon in the field to perfectly measure consciousness or its role (or lack thereof) in various memory tasks.
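The logic of the process dissociation procedure can be sketched with the standard inclusion/exclusion equations from Jacoby (1991); the sketch below assumes his independence of recollection and automatic influences, and the input proportions are hypothetical numbers, not data from any study discussed here:

```python
def process_dissociation(p_inclusion: float, p_exclusion: float) -> tuple[float, float]:
    """Estimate recollection (R) and automatic influence (A) from
    performance on inclusion and exclusion tests, assuming independence:
        P(inclusion) = R + A * (1 - R)   # recollect, or automatic influence slips through
        P(exclusion) = A * (1 - R)      # automatic influence without recollection
    Solving gives R = inclusion - exclusion and A = exclusion / (1 - R).
    """
    recollection = p_inclusion - p_exclusion
    automatic = p_exclusion / (1 - recollection) if recollection < 1 else float("nan")
    return recollection, automatic

# Hypothetical subject: completes studied items on 70% of inclusion
# trials but still produces them on 30% of exclusion trials.
r, a = process_dissociation(0.70, 0.30)
print(round(r, 2), round(a, 2))  # 0.4 0.5
```

The point of the arithmetic is that neither test is "process pure": the exclusion test isolates automatic influences only after recollection has been estimated and factored out.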

The Cambridge Handbook of Consciousness: Three Forms of Consciousness in Retrieving Memories (pp. 254–261)

In this chapter we adopt Tulving’s (1985) tripartite distinction among three states of consciousness to provide coherence to our review of the literature. Tulving distinguished among autonoetic, noetic, and anoetic forms of consciousness, which refer, respectively, to self-knowing, knowing, and non-knowing states of consciousness. (We define each concept more fully below.) Even though Tulving’s distinctions are not necessarily used by all psychologists, we see his theory as a fruitful and useful way of organizing our chapter, and we consider it the leading theory on the relations of conscious states of awareness to memory performance. The Remember/Know paradigm just mentioned was intended to measure autonoetic consciousness (remembering) and noetic consciousness (knowing). We turn to the first of these concepts in the next section.

Autonoetic Consciousness

Tulving (1985) defined autonoetic consciousness of memory as one’s awareness that she has personally experienced an event in her past. This ability to retrieve and, in a sense, relive events from the past has been characterized as a kind of mental time travel that allows one to “become aware of [their] protracted existence across subjective time” (Wheeler, Stuss, & Tulving, 1997, p. 334). By this definition, autonoetic consciousness constitutes what most people think of as memory: thinking back to a particular episode in life and mentally reliving that event. In addition to imagining oneself in the past, autonoetic consciousness allows one to imagine the future and make long-term plans. In this section, we focus on the autonoetic consciousness that is associated with mental time travel into the past, or remembering.

Autonoetic consciousness includes not only the ability to travel mentally through time but also the complementary ability to recognize that a particular mental experience is from one’s past (as opposed to being perceived for the first time). This recognition of one’s past gives rise to a feeling that is uniquely associated with remembering, and it is this recognition of oneself in the past that characterizes the kind of memory that people with amnesia lack. When Tulving described K. C., “a man without autonoetic consciousness” (1985), he illustrated the distinction between the rich reliving of one’s past, an ability of which most people are capable, and the cold fact-like knowledge of one’s life that most memory-impaired patients retain. K. C. does not seem to have the concept of personal time. Tulving noted that K. C. can understand the concept of yesterday, but cannot remember himself yesterday. Similarly, he can understand the concept of tomorrow, but cannot imagine what he might do tomorrow. When asked what he thinks when he hears a question about tomorrow, K. C. describes his mind as “blank” (p. 4).

Measurement Issues and Theoretical Accounts

Tulving (1985) introduced the concepts of autonoetic and noetic consciousness and the Remember/Know paradigm for measuring these states of consciousness. The basic paradigm for studying these two different forms of subjective experience involves giving subjects explicit memory instructions (to think back to some specific point in time) and asking them either to recall or recognize events from this time. For a recognition test, subjects are told that if they recognize the item from the study episode (if they judge it “old” or “studied”), then they should try to characterize their experience of recognition as involving the experience of either remembering or knowing. They are told that they should assign a Remember response to items when they can vividly remember having encountered the item; that is, they can remember some specific contextual detail (e.g., what they were thinking or the item’s position in the study list) that would provide supporting evidence that they are indeed remembering the item’s occurrence (see Rajaram, 1993, for published instructions given to subjects). Subjects are told that they should assign a Know response to a recognized item when they are sure that the item occurred in the study list, but they cannot recollect its actual occurrence; they cannot remember any specific details associated with the item’s presentation at study. In short, subjects know it was presented in the past, but they cannot remember its occurrence.

In the example just given, subjects are asked first to make a yes/no recognition decision and then to assign either a Remember or Know response to the recognized item. This variety of the task is the most widely used version of the Remember/Know procedure. A small number of studies have used a different procedure where the recognition and Remember/Know judgments are made simultaneously rather than sequentially; that is, subjects are asked to say for each test item whether they remember or know that the item was on the study list or whether the item is new.

Research shows that the results using the Remember/Know procedure can change critically depending on whether the sequential or simultaneous method is used (Hicks & R. Marsh, 1999). The standard sequential method of measurement implicitly assumes that the Remember/Know judgments are post-recognition judgments, whereas the other method assumes that remembering and knowing drive recognition. As we discuss later, this difference in assumption parallels the debate as to whether these judgments should be considered as subjective states that a person assesses after recognition or as subjective states that uniquely map onto two processes that drive recognition judgments. Regardless, Hicks and R. Marsh argue that the simultaneous decision is more difficult than the sequential method, because the judgment for each test item has to be weighed against two other possibilities (e.g., “Do I remember or know that the item was on the study list, or is it a new item?”). In contrast, in the sequential method, subjects only have to weigh the judgment against one other possibility (e.g., “Was the item presented earlier or not?” and “Do I remember it or do I know it?”). In support of this hypothesis, Hicks and R. Marsh found that, as compared to the sequential method, the simultaneous method leads subjects to respond more liberally and increases the recognition hit rate as well as the false alarm rate.

In addition to these differences in procedure, several important measurement issues have arisen that reflect fundamental points of disagreement regarding what processes Remember and Know responses measure. Most of the measurement controversies surround the theoretical question of what states of awareness or processes are reflected by remembering and by knowing. As we describe in the next section, there are several accounts of remembering and knowing (see Yonelinas, 2002, for a recent review).

Remembering, Knowing, and Confidence

One proposal is that remembering and knowing reflect different levels of confidence that can be explained by appealing to a signal detection model (Donaldson, 1996; Hirshman & Masters, 1997; Inoue & Bellezza, 1998; Wixted & Stretch, 2004). The idea is that subjects place two thresholds on a continuum of strength; the more stringent threshold is used for making Remember judgments, and the more lenient threshold is used for making Know judgments. In other words, when people are very confident that they recognize an item, they assign it a Remember response, and when they are less confident, they assign it a Know response. By this view, certain independent variables influence these judgments by affecting the amount of memory information available at retrieval. The availability of this information, in turn, determines where people place their criteria (if they do not have much information, they may be very conservative and set a high threshold for responding). Thus, these models conceptualize Remember/Know judgments as quantitatively different judgments that vary along a single continuum of degree of confidence. It follows from this view that Know judgments are isomorphic with low-confidence judgments and do not capture any other experiential state. This criterion shift model can fit several different patterns of Remember and Know data (see Dunn, 2004, for a recent review from this perspective).
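The criterion-shift account lends itself to a simple simulation. The sketch below is purely illustrative: the strength separation and the two criterion placements are assumed values, not parameters from any cited study. It draws memory strengths for old and new items from two overlapping normal distributions and maps each strength onto a Remember, Know, or New response using the two criteria.

```python
import random
from statistics import mean

random.seed(42)

D_PRIME = 1.5                    # assumed strength separation of old vs. new items
C_KNOW, C_REMEMBER = 0.5, 1.5    # lenient and stringent criteria (assumed values)

def judge(strength):
    """Map one strength value onto a response under the criterion-shift view."""
    if strength >= C_REMEMBER:
        return "remember"        # highest-confidence region of the continuum
    if strength >= C_KNOW:
        return "know"            # lower confidence, same underlying dimension
    return "new"

old_items = [random.gauss(D_PRIME, 1.0) for _ in range(10_000)]
new_items = [random.gauss(0.0, 1.0) for _ in range(10_000)]

hit_rate = mean(judge(s) != "new" for s in old_items)
fa_rate = mean(judge(s) != "new" for s in new_items)
print(f"hits: {hit_rate:.2f}  false alarms: {fa_rate:.2f}")
```

Note that shifting only C_KNOW and C_REMEMBER changes the mix of Remember and Know responses without positing a second memory process, which is precisely the model's appeal and, as the following paragraphs argue, its limitation.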

Although single-process models do a good job of accounting for various associations and dissociations, they seem to lack explanatory power. It is difficult to know what determines the placement of criteria at particular points on the continuum in these models and how this placement might vary with different experimental conditions. More problematic for this view are reports that both meta-analyses of a large set of studies (Gardiner & Conway, 1999; Gardiner, Ramponi, & Richardson-Klavehn, 2002) and analyses of sets of individual data (Gardiner & Gregg, 1997) have not supported a key prediction of single-process models. These models predict that the bias-free estimate for Remember judgments should be comparable to that obtained for overall recognition (which includes both Remember and Know judgments). In other words, Know judgments are assumed to contribute little to the bias-free estimate, but contrary to this prediction, overall recognition shows a larger bias-free estimate than do Remember judgments by themselves (but see Wixted & Stretch, 2004, for an alternative view).
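The bias-free estimate at issue here is typically d′ from signal detection theory, computed from hit and false-alarm rates. A minimal sketch follows; the rates are hypothetical numbers chosen only to mirror the reported pattern, not data from the cited meta-analyses.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Bias-free sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf      # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical data: Remember responses alone vs. overall old judgments
# (Remember + Know), each with its corresponding false-alarm rate.
d_remember_only = d_prime(hit_rate=0.50, fa_rate=0.05)
d_overall = d_prime(hit_rate=0.75, fa_rate=0.15)
print(f"Remember only: {d_remember_only:.2f}  overall: {d_overall:.2f}")
```

With these illustrative numbers, d′ for overall recognition exceeds d′ for Remember responses alone, the pattern the meta-analyses report and that single-process models fail to predict.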

Other empirical evidence is also inconsistent with the view that Remember and Know judgments simply reflect high- and low-confidence judgments. Gardiner and Java (1990) had shown in an earlier study that memory-intact subjects give significantly more Remember judgments to words than non-words and significantly more Know judgments to non-words than words. In contrast, words and non-words simply produce a main effect on high- and low-confidence judgments. Single-process models can account for the double dissociation observed for Remember/Know judgments and the main effect observed for confidence judgments by assuming certain shifts of criteria, but it is not clear why the criteria would shift in different ways for the two sets of judgments (Remember/Know and high/low confidence) if the types of judgment are isomorphic.

To test for the presumed equivalence between Remember/Know and high/low-confidence judgments, Rajaram, Hamilton, and Bolton (2002) adapted Gardiner and Java’s design and conducted the study with amnesic subjects. Impaired conscious experience is a hallmark of amnesia, and both Remember judgments (to a greater extent) and Know judgments (to a lesser extent) are impaired in amnesia. Consistent with these findings, Rajaram et al. (2002) showed that amnesic subjects were severely impaired at making Remember/Know judgments, and they did not produce even a hint of the double dissociation observed with matched-control subjects. In contrast, the performance of matched-control and amnesic subjects did not differ for high- and low-confidence judgments. Such findings are quite difficult to reconcile with the notion that Remember and Know judgments are redundant with confidence judgments.

The findings described here show that confidence judgments and experiential judgments can be differentiated. Clearly, though, one is highly confident when reporting that one remembers an event, and so it is likely that recollective experience is closely tied to confidence. Roediger (1999) suggested that “ . . . theorists may have the relation backwards. Rather than differing levels of confidence explaining Remember/Know responses, it may well be that the study of retrieval experience through the Remember/Know technique may help explain why subjects feel more or less confident” (p. 231). According to this view, confidence and recollective experience can be correlated, but confidence does not explain remembering. Remembering explains confidence.

Remember/Know Responses and Dual-Process Models of Recognition

Dual-process models in general suggest that remembering and knowing reflect two independent processes in memory, termed “recollection” and “familiarity,” respectively (Jacoby, 1991; Jacoby, Yonelinas, & Jennings, 1997). Applied to the Remember/Know paradigm, Remember judgments reflect primarily recollection-based memory, whereas Know judgments reflect primarily familiarity-based memory performance. According to this model, recollection and familiarity represent independent processes: They can work together or separately to affect memory performance. This means that Remember judgments can arise from a recollective process alone or from occasions when recollection and familiarity co-occur. Know responses, on the other hand, arise from a familiarity process that occurs in the absence of recollection. By this view, Know responses alone can underestimate the true contribution of familiarity-driven processes to recognition performance. If one assumes that Remember and Know responses reflect the contribution of recollection and familiarity processes, then it is important to measure Remember and Know responses in a slightly different manner than is usually reported in the literature. The Independence Remember/Know (IRK) procedure (Jacoby et al., 1997; Lindsay & Kelley, 1996; Yonelinas & Jacoby, 1995) addresses this issue by estimating familiarity as the proportion of Know judgments divided by the opportunity to make a Know response, that is, Know/(1 − Remember).
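The IRK correction described above is a one-line computation. The sketch below implements it directly from the formula Know/(1 − Remember); the response proportions in the example are hypothetical.

```python
def irk_familiarity(p_remember, p_know):
    """Independence Remember/Know (IRK) estimate of familiarity.

    Raw Know proportions underestimate familiarity because a Know
    response can only be given when the item was not already assigned
    a Remember response; dividing by that opportunity (1 - R) corrects
    for this under the independence assumption.
    """
    if p_remember >= 1.0:
        raise ValueError("familiarity is undefined when Remember = 1")
    return p_know / (1.0 - p_remember)

# Hypothetical: 40% Remember and 30% Know responses to studied items.
print(irk_familiarity(0.40, 0.30))  # → 0.5, larger than the raw 0.30
```

The corrected estimate (0.5) exceeds the raw Know proportion (0.30), illustrating the underestimation the IRK procedure is designed to repair.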

Implicit in this view of remembering and knowing is the assumption that the processes driving Remember responses overlap with those that drive Know responses. That is, events that are remembered can also be known. However, there are other ways to conceive of the relation between remembering and knowing (see Jacoby et al., 1997). For example, one could assume that everything that is remembered is also known, but events that are known are not always remembered (e.g., Joordens & Merikle, 1993). Or, one could assume that the two responses are exclusive: things are either remembered or known (Gardiner & Parkin, 1990). How one conceives of the relation between these two states of awareness and the processes underlying these states will have implications for how they should be measured (Rotello, Macmillan, & Reeder, 2004).

Another dual-process model proposes that remembering and knowing are graded differently (e.g., Yonelinas, 1994). According to this view, only responding based on familiarity or fluency (as measured by Know responses) can be modeled by a signal detection theory that assumes a shifting criterion. In contrast, responding that is driven by recollection, as measured by Remember responses, does not fit this model. Instead, these responses reflect a retrieval process that can be characterized as an all-or-none threshold process. The idea is that participants can recall various types of information about an event (e.g., its appearance, sound, or associated thoughts) that either exceed or do not exceed some set retrieval threshold. If any one of these qualities exceeds the threshold, then participants determine that they remember the item. If these qualities do not exceed the threshold, then participants might rely on various levels of familiarity when endorsing the item as recognized. Therefore, unlike a signal detection model of Remember and Know responses, this model can be considered a dual-process model (see Jacoby, 1991). Recognition can be characterized by two distinct processes that behave differently and give rise to distinct subjective states of awareness: Remember responses are associated with a threshold retrieval process that is driven by the qualitative features of an event, whereas Know responses are associated with a familiarity retrieval process that is driven by sheer memory strength. In this model, remembering is an all-or-none process, whereas knowing is based purely on familiarity.
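This threshold-plus-familiarity account can be written down directly. The sketch below follows the general form of such a dual-process model: recollection is an all-or-none probability, while familiarity is a continuous strength compared against a criterion. All parameter values are assumed for illustration only.

```python
from statistics import NormalDist

def p_respond_old(is_old, r, d_prime, criterion):
    """Dual-process prediction for calling an item 'old'.

    Studied items are recollected with probability r (all-or-none);
    failing that, the response depends on familiarity, modeled as a
    normal strength distribution evaluated against a criterion.
    New items are never recollected, so only familiarity can
    produce false alarms.
    """
    mu = d_prime if is_old else 0.0
    p_familiar = 1.0 - NormalDist(mu=mu).cdf(criterion)
    p_recollect = r if is_old else 0.0
    return p_recollect + (1.0 - p_recollect) * p_familiar

# Assumed parameters: R = .3, familiarity d' = 1.0, criterion = 0.5.
hit = p_respond_old(True, r=0.3, d_prime=1.0, criterion=0.5)
fa = p_respond_old(False, r=0.3, d_prime=1.0, criterion=0.5)
print(f"hits: {hit:.2f}  false alarms: {fa:.2f}")
```

In this formulation, Remember responses map onto the recollection term and Know responses onto the familiarity term taken alone, which is why, under this model, only the latter obeys ordinary signal detection assumptions.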

The varying assumptions of these models are tied to different conceptualizations about the ways in which Remember and Know judgments denote states of consciousness. The assumption that a single process underlies both states of retrieval puts the emphasis on overall memory performance, but at the cost of shifting focus from capturing (through experimentation) the conscious states that accompany performance. The dual-process assumption recognizes the role of conscious states more overtly, although some versions of dual-process models place greater emphasis on the relation of performance to distinct underlying processes, whereas other dual-process models focus directly on the functional states of consciousness that accompany retrieval.

Measurement Issues and the Role of Instructions

Another measurement issue has to do with the instructions that one gives to subjects. This topic has not been the focus of many discussions of the Remember/Know procedure (but see Geraci & McCabe, 2006). Like the other topics discussed so far, how subjects interpret the Remember/Know distinction based on the instructions given to them may determine what these responses reflect. As such, the issue of interpretation of instructions also has theoretical implications. We note that for Remember judgments, the issue of instructions may be less of a problem. People tend to be in agreement over what remembering means. For knowing, the psychological experience that elicits this response may depend on the instructions. Some published instructions (Rajaram, 1993) tell subjects to give a Know judgment when they are certain that the event has occurred, but their recognition lacks the recollective detail that was described to them. By this definition, Know responses should, and probably do, reflect high confidence, similar to Remember responses. With these instructions, the distinction between the two judgments is likely to be based on distinct conscious experiences. However, because Know instructions differ across experimenters and across labs, the definition of knowing may also differ from lab to lab. If the instructions say something along the lines of “Give a Remember judgment if you vividly remember the item from the study list. Otherwise, give it a Know judgment,” then knowing could reflect high- or low-confidence memory, some sort of feeling of familiarity, or simple guessing (see Java, Gregg, & Gardiner, 1997, for examples of the variety of ways in which subjects interpret this instruction). Kelley and Jacoby (1998) capture these various interpretations when they suggest, “A Know response is defined as the inability to recollect any details of the study presentation in combination with a feeling of familiarity or certainty that the word was studied” (p. 134). As we see in the next section on noetic consciousness, these possibilities map onto the theoretical debate in the literature regarding the definition of knowing.

The nature of the instructions given to subjects has also been used to argue for exclusivity of the responses. Gardiner and Java (1993) have suggested that the instructions implicitly suggest that remembering and knowing are two states of conscious experience that cannot coexist. They argue, “A person cannot at one and the same time experience conscious recollection and feelings of familiarity in the absence of conscious recollection” (p. 179). They note that one state can lead to the other across time and repeated retrievals, but that retrieval on any one occasion will be associated with either one state of awareness or the other. Although this idea is consistent with the instructions researchers give to subjects, it is at odds with Jacoby’s conception that events that are remembered are also familiar (Jacoby et al., 1997). Recall that this independence view assumes that the two underlying processes (recollection and familiarity) are separate and can therefore act together or in opposition. That is, recognition can be driven by both recollection and familiarity, by just recollection, or by just familiarity. Researchers disagree on which conception is the right one, but it may be that both are correct in different situations. This matter awaits future research.

Factors That Increase Autonoetic Consciousness

Another way to understand the distinction between remembering and knowing has been to examine the various factors that give rise to each state of recollective experience. Several factors increase reports of remembering, including conceptual or elaborative processing, generation, imagery, and distinctive processing. Alongside this collection of empirical findings, several complementary theories have arisen. In this section, we discuss these findings and theories together. Lastly, we discuss other factors that selectively influence remembering by decreasing it. These factors include various forms of brain damage associated with amnesia and the cognitive decline associated with aging.

Several studies have provided empirical evidence for the experiential distinction between remembering and knowing by showing that these two states are selectively affected by different independent variables. Because the body of evidence is now large, we classify the effects of various independent variables into the general categories of conceptual processing, imagery and generation, distinctiveness, and emotion.

conceptual influences on remembering

Several accounts of remembering suggest that this recollective state is driven largely by prior conceptual processing. This idea follows from dual-process theories of recognition (Atkinson & Juola, 1973, 1974; Jacoby, 1983a, b; Jacoby & Dallas, 1981; Mandler, 1980) that propose that the recollective component of recognition memory is affected by conceptual or elaborative processing, whereas the familiarity component is driven by perceptual processes. Based on this processing distinction, Gardiner (1988) proposed that Remember responses are affected by conceptual processing that arises from the episodic memory system, and Know responses are affected by perceptual processing that arises from the semantic or procedural memory system. In support of this idea, Gardiner showed that reports of remembering increased when subjects performed some meaningful processing at study. Using a levels of processing manipulation (Craik & Lockhart, 1972), this study found that people’s reports of remembering increased after studying words for their meaning (as opposed to their lexical or physical properties). Also, using a generation manipulation (Jacoby, 1978; Slamecka & Graf, 1978), this same study showed that people’s reports of remembering increased after generating study targets to semantic cues, rather than simply reading them.

A similar hypothesis emphasized the role of processing rather than different memory systems and proposed that remembering is affected by conceptual processing (Rajaram, 1993). This work showed that ostensibly conceptually driven memory effects, including the levels of processing effect and the picture superiority effect, were obtained and selectively associated with remembering. That is, subjects’ reports of remembering (but not their reports of knowing) increased when they studied words for meaning and when they saw pictures at study. This work not only showed that remembering is influenced by conceptual processing but it also demonstrated that knowing was differently influenced by perceptual processing (we discuss factors that affect knowing in the next section on noetic consciousness).

Subsequently, a number of studies have been conducted that support the proposal that remembering is associated with prior conceptual processing and knowing is associated with prior perceptual processing (see Gardiner & Richardson-Klavehn, 2000; Rajaram, 1999; Rajaram & Roediger, 1997; Richardson-Klavehn, Gardiner, & Java, 1996; Roediger, Wheeler, & Rajaram, 1993, for reviews). For example, reports of remembering increase after elaborative rehearsal (as compared to rote rehearsal) at study (Gardiner, Gawlick, & Richardson-Klavehn, 1994). Reports of remembering are also affected by attention at study: Remember responses increase after study under full attention as compared to divided attention (Gardiner & Parkin, 1990; Mangels, Picton, & Craik, 2001; Parkin, Gardiner, & Rosser, 1995; Yonelinas, 2001). Because dividing attention at study has been interpreted as a manipulation that decreases elaborative, conceptual processing but not perceptual processing, the findings showing selective effects of dividing attention on Remember responses can be taken as support for the idea that remembering is associated with prior conceptual processing.

Although the evidence just reviewed supports the idea that conceptual processes underlie remembering and perceptual processes underpin knowing, more recent data have undercut these claims. These data are inconsistent with both sides of the argument: They show that remembering can be influenced by perceptual processes and that knowing can be influenced by conceptual processes (Conway, Gardiner, Perfect, Anderson, & Cohen, 1997; Mantyla, 1997; Rajaram, 1996, 1998; Rajaram & Geraci, 2000). For now, we focus on research that is inconsistent with the remembering side of the account. Two perceptual manipulations have been found to affect remembering. Changes in both size and orientation of objects across study and test influence Remember responses, but have little effect on Know responses (Rajaram, 1996; see also Yonelinas & Jacoby, 1995). This work shows that reports of remembering increase when objects are presented in the same size and orientation at study and at test and decrease when the size and orientation are different at test. In retrospect, the possibility that both meaning and perceptual features influence remembering is not altogether surprising because much of autonoetic consciousness is associated with retrieval of vivid perceptual details. The challenge then is to ascertain a priori the nature of variables – conceptual or perceptual – that would influence remembering (or autonoetic consciousness) and thereby develop a framework that can generate useful predictions.

To this end, Rajaram (1996, 1998; Rajaram & Roediger, 1997) developed an alternative theory that involved distinctiveness of processing. This alternate hypothesis proposes that autonoetic consciousness, or remembering, reflects distinctiveness of the processing at study (Rajaram, 1996, 1998), whereas knowing, or noetic consciousness, is influenced by the fluency of processing at study (Rajaram, 1993; Rajaram & Geraci, 2000). This interpretation, that autonoetic consciousness especially reflects distinctive processing during encoding, can accommodate many of the studies mentioned so far, as well as more recent findings, which are discussed next.

distinctiveness effects on remembering

Recent work shows that the distinctiveness of the study episode influences reports of remembering; importantly, this work shows that both perceptual and conceptual sources of distinctiveness cause increases in remembering. First, take for example a manipulation of distinctiveness that arises from perceptual oddities of the word form, such as orthographic distinctiveness. Words with unusual letter combinations, such as “subpoena,” are remembered better than words with more common letter combinations, such as “sailboat” (Hunt & Elliott, 1980; Hunt & Mitchell, 1978, 1982; Hunt & Toth, 1990; Zechmeister, 1972). The effects of such orthographic distinctiveness on Remember and Know judgments were examined in one study by asking subjects to study a list of orthographically common and distinct words. Replicating the standard finding in the literature, results showed that people had superior recognition for the orthographically distinct words. Critically, this manipulation selectively affected Remember responses, and not Know responses (Rajaram, 1998). In other words, remembering increased with the perceptual distinctiveness of the items at study.

Similarly, noting the distinctive features of a face affects remembering and not knowing (Mantyla, 1997). In this study, subjects studied faces and either examined the differences among them by noting the facial distinctiveness of various features or categorized faces together based on general stereotypes, such as “intellectual” or “party-goer.” Distinctive processing of individual features increased Remember responses, whereas categorizing faces increased Know responses. These results all converge on the conclusion that distinctive processing leads to conscious recollection as reflected in Remember responses.

effects of emotion on remembering

Defining what constitutes a distinctive event in memory is difficult (see Schmidt, 1991), although the issue has received a resurgence of investigation and theorizing (Geraci & Rajaram, 2006; Hunt, 1995, 2006; McDaniel & Geraci, 2006). Does distinctiveness refer to information that is simply unusual against a background context (e.g., von Restorff, 1933)? Is something considered distinctive if it is surprising or unexpected within a certain context? Is particularly salient information distinctive? Or, does distinctiveness refer to a type of processing, as Hunt and McDaniel (1993) proposed in distinguishing between distinctive and relational processing?

Emotionally laden events are often considered distinctive and are well remembered (but see Schmidt, 2006, for conditions under which emotionally arousing events affect memory differently from other distinctive events). Evidence that emotional information is remembered well and in vivid detail comes both from studies investigating emotional experimental stimuli, such as arousing words or pictures, and from studies investigating powerful emotional occurrences outside the lab, in what are called flashbulb memories (Brown & Kulik, 1977). Flashbulb memories are so named because dramatic life events seem to be remembered in striking detail, just as is a picture caught in a photographic flash (although later research shows that the term may be something of a misnomer). Many studies of flashbulb memory have examined subjects’ memory for large-scale naturally occurring events, such as JFK’s assassination, the Challenger explosion of 1986, or, more recently, the terrorist attacks of September 11, 2001. Although there are doubtless important differences between laboratory and naturally occurring emotional memories, in general findings from these studies demonstrate that emotional events produce more vivid memories for the events in question. We review evidence from the two types of study in turn.

Several laboratory experiments demonstrate that retention is superior for emotionally laden items presented as pictures, words, or sentences relative to neutral items (see Buchanan & Adolphs, 2002, and Hamann, 2001, for reviews). This work attempts to provide a laboratory analog to emotional events that people experience outside the lab. Because the hallmark of a flashbulb memory is that people report being able to remember many contextual details from having first encoded the emotional event (e.g., people often report that they can remember where they were and what they were wearing when they first heard the news that JFK had been shot; Neisser & Harsch, 1992; Rubin & Kozin, 1984), recent investigation is aimed at examining the quality of these emotional memories.

To examine whether emotional memories are qualitatively different from non-emotional or neutral memories, some studies have begun examining metamemory judgments for these events, including Remember/Know responses and source judgments (Kensinger & Corkin, 2003; Ochsner, 2000). In the Ochsner study, subjects studied positive pictures (e.g., a flower), negative pictures (e.g., a burned body), and neutral pictures (e.g., a house) that systematically varied in the amount of arousal they produced. Results showed that participants had the best retention of the negative pictures, followed by the positive pictures, with the worst memory for the neutral pictures. Importantly for our purposes, people were much more likely to indicate that they had a rich and vivid memory for emotional pictures that were at least mildly arousing relative to the neutral pictures. Subjects assigned a higher proportion of Remember responses to emotional than to neutral pictures, whereas Know responses were associated mostly with neutral and positive items. Kensinger and Corkin further demonstrated that not only were people more likely to remember the emotional events but they were also more likely to remember accurate source details from the emotional events.

Similar to the studies just described, people often report that their flashbulb memories of real-life events are also extremely vivid and full (e.g., Christianson & Loftus, 1990). Of course, the term flashbulb memory was developed to suggest this very fact. Although flashbulb memories may feel vivid, much debate has ensued regarding the accuracy of these events and the relation between accuracy and vivid memory reports (see Conway, 1995, for a review).

A recent study examined the relation between accuracy and subjective experience using the Remember/Know paradigm to examine flashbulb memories for the events of September 11th (Talarico & Rubin, 2003). Participants were asked questions about their flashbulb memories of September 11th (e.g., when they first heard what happened, who told them, etc.), and they were asked questions about everyday sorts of events from before the attack. People were more likely to assign Remember responses to their flashbulb memories than to control (common) events. This pattern held when participants were tested on September 12, 2001, and became more pronounced at longer delays. Talarico and Rubin found that people claimed to remember the emotional events more vividly than the everyday events despite the fact that they were no more accurate at recalling the flashbulb memories than the everyday memories. Thus, flashbulb memories may be quite susceptible to error despite the great confidence with which they are held, especially after long delays (see also Neisser & Harsch, 1992; Schmolck, Buffalo, & Squire, 2000).

Recollective Experience and Memory for Source

As the preponderance of the evidence reviewed so far indicates, autonoetic consciousness is characterized by vivid, detailed feelings associated with one's personal past, at least for distinctive events. In the Remember/Know paradigm, "vividness" is indicated by a Remember response, whereas the lack of this vivid detail constitutes a Know response. However, this memory for details has also been examined by requiring subjects to assess the quality of their memories by assigning the correct source of these memories (see Johnson, Hashtroudi, & Lindsay, 1993). In this line of research, source is defined broadly and can include, for example, one list of items versus another, items presented in one voice versus another, in one location or another, on a computer monitor, and so on. A particularly interesting case (called reality monitoring) asks subjects to determine whether an event actually occurred or was imagined. In some sense, all recognition judgments require source-specifying information of some sort.

According to the source-monitoring framework (e.g., Johnson, 1988), people often rely on the qualities of their memories to determine their source by comparing the characteristics of the retrieved memory to memories generally associated with that source. For example, with a reality monitoring decision, people may rely on the knowledge that memories of perceived or experienced events tend to be more vivid and have more associated details than memories of imagined events. Conversely, they may rely on the knowledge that memories from imagined sources tend to be characterized by more information about cognitive processes associated with cognitive effort and elaboration relative to memories of perceived events (e.g., Johnson, Foley, Suengas, & Raye, 1988; Johnson, Hashtroudi, & Lindsay, 1993; Johnson, Raye, Foley, & Foley, 1981). As with the Remember/Know studies, much research on source (or reality) monitoring focuses on defining the information that characterizes memories from various sources and shows that sources can be discriminated flexibly along many dimensions. These dimensions include perceptual features of the target (e.g., Ferguson, Hashtroudi, & Johnson, 1992; Henkel, Franklin, & Johnson, 2000; Johnson, DeLeonardis, Hashtroudi, & Ferguson, 1995; Johnson, Foley, & Leach, 1988), cognitive processes (e.g., Finke, Johnson, & Shyi, 1988), the plausibility of remembered details (e.g., Sherman & Bessenoff, 1999), as well as related experiences (Geraci & Franklin, 2004; Henkel & Franklin, 1998). Thus, source decisions rely on multiple aspects of experience that give rise to autonoetic consciousness.

Some studies have compared Remember/Know responses and source judgments. Conway and Dewhurst (1995) had subjects watch, perform, or imagine doing a series of tasks. Later, they were presented with each task and asked about its source: Was it watched, performed, or imagined? Accurate source memory for tasks that the subjects performed was primarily associated with Remember responses, whereas accurate source memory for tasks that they only imagined performing was associated mostly with Know responses. Accurate source memory for the observed tasks was associated more equally with both Remember and Know responses. These results corroborate the idea that Remember responses can reflect detailed perceptual memories associated with personally experienced events.

In conjunction with asking source questions, subjects are often asked to rate the qualities of their memories using the Memory Characteristics Questionnaire (MCQ; Johnson, Foley, Suengas, & Raye, 1988). The MCQ asks people to assess the qualities of their remembrances; in this context, we may think of the MCQ as a further attempt to gain introspective knowledge of autonoetic states of consciousness. Remember/Know responses have also been compared to MCQ ratings (Mather, Henkel, & Johnson, 1997). The Mather et al. study was designed to determine why people falsely remember words that they never saw using the Deese-Roediger-McDermott (DRM) false memory paradigm (Deese, 1959; Roediger & McDermott, 1995), in which subjects study lists of related words (bed, rest, awake, tired, dream . . . ) and often remember a word that was not presented on the list (sleep, in this example). To examine the basis of these false Remember responses, this study examined their qualities using MCQ ratings. (We include more discussion of illusory remembering toward the end of this chapter.) Results from the MCQ ratings showed that people did report less auditory perceptual detail for falsely Remembered items than for correctly Remembered items (see also Norman & Schacter, 1997), whereas both types of items were associated with details of semantic associations. These findings show that autonoetic consciousness is influenced by a number of variables, including perceptual and emotional qualities of the information. Interestingly, autonoetic consciousness as reflected in memory for details is not always associated with accurate memory, as we also observed in discussing emotional memories. However, the qualitatively distinct nature of memory, accurate or inaccurate, that is accompanied by autonoetic consciousness provides greater consistency and confidence in subjects' judgments. Finally, these studies also show that remembering is the central process in conscious recollections of our past, and that Tulving's (1985) Remember/Know procedure and Johnson's Memory Characteristics Questionnaire are useful methodological tools for investigating the nature of autonoetic consciousness.

Noetic Consciousness

In Tulving’s (1983 , 1985) theory, noetic con-sciousness is associated with the experienceof knowing and with the semantic memorysystem (Tulving, 1985). In this sense, Knowjudgments should be associated with seman-tic knowledge. However, in experimentalpractice, Know judgments seem to capturevarious types of awareness. In particular, sub-jects generally give Know judgments for twotypes of cognitive experiences – knowledgeand familiarity. In fact, in some theoreti-cal treatments (e.g., Jacoby, Jones & Dolan,1998), Know judgments are aligned with theprocess of familiarity.

At the experiential level, noetic consciousness lacks the intensity and immediacy that are associated with autonoetic consciousness. This is true by definition, because noetic consciousness represents the less personal and more generic sense in which we retrieve factual events and information. This lack of personal involvement in the retrieved information applies not only to general knowledge about the world (the usual definition of semantic memory) but also to knowledge about ourselves and our own experiences. We may know we had a fifth birthday party without it being remembered. In this case the memory takes on an impersonal quality. Another example of the operation of noetic consciousness can be seen in one's memory of an airplane trip taken ten years ago. Although a person may know that he or she traveled from New York to Calcutta, any remembrance of the events of the trip may have vanished. The argument is that noetic consciousness differs from autonoetic consciousness not only in terms of the content of retrieval but also in the very nature of the retrieval process. We know about the airplane ride in the same way that we know that Thomas Jefferson was president of the United States.

The Influence of Instructions on the Interpretations of Know Judgments

The interpretation of Know judgments can be traced back to the specific instructions provided to subjects. An abbreviated version of these instructions (taken from Rajaram, 1996) is provided here to illustrate this point. We include the instructions for Remember judgments as well, because Know judgments are typically operationalized in experimental studies in the context of Remember judgments:

Remember judgments: If your recognition of the item is accompanied by a conscious recollection of its prior occurrence in the study list, then write R. "Remember" is the ability to become consciously aware again of some aspect or aspects of what happened or what was experienced at the time the word was presented (e.g., aspects of the physical appearance of the word, or of something that happened in the room, or of what you were thinking and doing at the time). In other words, the "remembered" word should bring back to mind a particular association, image, or something more personal from the time of study, or something about its appearance or position (i.e., what came before or after that word).

Know judgments: "Know" responses should be made when you recognize that the word was in the study list, but you cannot consciously recollect anything about its actual occurrence, or what happened, or what was experienced at the time of its occurrence. In other words, write K (for know) when you are certain of recognizing the words but these words fail to evoke any specific conscious recollection from the study list.

To further clarify the difference between these two judgments (i.e., R versus K), here are a few examples. If someone asks for your name, you would typically respond in the "Know" sense without becoming consciously aware of anything about a particular event or experience; however, when asked about the last movie you saw, you would typically respond in the "Remember" sense, that is, becoming consciously aware again of some aspects of the experience.

It is clear from these instructions that Know judgments may be used in the sense of knowledge (as in the semantic sense of knowing one's own name or knowing that Thomas Jefferson was president) or in the sense of familiarity, where no specific details can be evoked from the study phase (e.g., recognizing a face as familiar but not being able to recover who the person is or where you met her). In other words, Know judgments are defined in terms of what they are not (they are confident memories that lack detail), rather than what they are. However, the examples and description provided in the instructions do lead to the two interpretations, knowledge and familiarity, that are most commonly associated with Know judgments.

Knowing and Retrieval from Semantic Memory

It is common for people to know that Mt. Everest is in the Himalayas, that the mango is a tropical fruit, and that Chicago in November is colder than Houston. However, people almost certainly do not know when and where they learned these bits of knowledge. The conception of semantic memory is not without its critics, and the very definition of semantic memory is sometimes a source of debate. Nevertheless, according to Tulving's theory, within which the Remember/Know distinction is embedded, semantic memory is defined as a repository of organized information about concepts, words, people, events, and their interrelations in the world. Information from semantic memory is retrieved as facts, without memory for the details of the learning experience or the time and place where learning took place.

An understanding of the process by which the sense of knowing may be associated with semantic memory requires a systematic investigation of the learning and testing conditions. This approach has the potential to lead us to an understanding of the ways in which specific experimental conditions give rise to noetic consciousness associated with memory.

The original sense of knowing as awareness associated with semantic knowledge is perhaps best illustrated in a study by Martin Conway and his colleagues (Conway, Gardiner, Perfect, Anderson, & Cohen, 1997). In this study, subjects gave Remember and Know judgments to different types of course material learned over an extended period of time. Conway and colleagues asked subjects to make an important distinction between two different interpretations of noetic consciousness by asking them to judge between Just Knowing and Familiarity. This distinction was made for the precise reason of separating knowledge from a sense of familiarity. The authors also studied two types of courses. One type consisted of lecture materials (Introduction to Psychology, Physiological Psychology, Cognitive Psychology, and Social and Developmental Psychology), and the other consisted of learning scientific methodology (research methods courses). The nature of learning differs in these two types of courses: Lecture courses entail learning massive amounts of content material, whereas methodology courses usually require active learning and application in smaller class settings. The nature of awareness varied systematically with this distinction; subjects gave more Remember judgments to information learned in lecture courses and more Just Know judgments to material from the methodology courses. Furthermore, there was a shift from Remember to Just Know judgments over time, further supporting the idea that once episodic details were lost, information became schematized and conceptually organized.

This sense of knowing (or just knowing) has not been used frequently in the literature, even though the distinction led to interesting results in the Conway et al. (1997) study. Several reasons may exist for the scant use of Know judgments as a measure of semantic knowledge. For example, the Conway et al. study clearly found numerous repetitions of material to be necessary for schematization to occur and to lead to a Just Know response. Spacing between repetitions may also be important for conceptual organization to occur. The variety of content and learning experience accumulated over time may also interact with repetition and spacing and create memories that shift from remembering to just knowing, instead of being simply forgotten over time. These are but three notable variables, and there are probably more. Classroom education in the Conway et al. (1997) study brought together these conditions of repetition, spacing, and varied encoding quite nicely, but it is usually difficult to create such conditions in the laboratory.

In a recent study, Rajaram and Hamilton (2005) created the following laboratory conditions as a first step toward testing the effects of varied and deep encoding on Remember and Know states of awareness. Subjects studied unrelated word pairs either once or twice, where the repeated presentation varied the context and was spaced apart (e.g., a single presentation might be penny-cousin, whereas a repeated presentation would have been fence-bread, then guard-bread). At test, subjects gave recognition and Remember/Know judgments to the target words (cousin, bread). Among the several conditions in this study, the most relevant for our present purposes are those that involved a deep level of processing at encoding. The results showed that even after a 48-hour delay (relative to 30 minutes), subjects gave significantly more Remember responses to words repeated in different contexts than to once-presented target words. Importantly for the present discussion, subjects also gave significantly more Know responses after 48 hours than after 30 minutes. Thus, Know judgments were responsive to both conceptual encoding and varied repetition even after considerable delay.

A recent study by E. Marsh, Meade, and Roediger (2003) reported a paradigm that could be very useful for investigating knowing as a measure of semantic memory. In this study, E. Marsh et al. investigated whether reading a story before being tested on general knowledge questions can influence subjects' ability to correctly identify the story or prior knowledge as the source of their answers. Both immediate source judgments and retrospective source judgments on the general knowledge answers showed that subjects attributed many details from the story to their prior knowledge. This finding nicely illustrates that episodic information was converted to facts, or semantic memory. Other experiments using this paradigm and ones like it might answer the question of how information that once held great recollective detail may be transformed into impersonal knowledge over time, or how remembering becomes knowing.

Knowing as Fluency and Familiarity

In contrast to the limited experimental work on knowing as a measure of semantic memory, there has been a flurry of research aimed at characterizing Know judgments as a measure of fluency or familiarity. This effort may be attributed, in large part, to the ways in which Know judgments are defined through instructions. As described earlier, subjects are asked to give Know judgments when they have a feeling that something has been encountered recently but no details come to mind about that encounter. In an interesting project on subjects' actual reports, Gardiner, Ramponi, and Richardson-Klavehn (1998) examined transcripts of the reasons subjects gave for their Know judgments and found that these judgments were typically based on a feeling of familiarity, as expressed in some of the statements subjects made ("It was one of those words that rang a bell," "There was no association, I just had a feeling that I saw it, I was sure" [p. 7]).

The fluency-familiarity interpretation of Know judgments has featured prominently in our own conceptualization (see Rajaram, 1993, 1996, 1999; Rajaram & Geraci, 2000; Rajaram & Roediger, 1997). This approach proposes that Remember judgments are influenced by the processing of distinctive attributes of the stimuli, whereas Know judgments are sensitive to the fluency or ease with which stimuli are processed. Considerable evidence supports this interpretation of Know judgments. For example, subjects give more Know responses to words that are preceded by a masked repetition of the same word (hence increasing fluency) compared to words that are preceded by a masked presentation of an unrelated word (Rajaram, 1993). Similarly, words that are preceded by a very brief (250 ms) presentation of semantically related words elicit more Know judgments than words that are preceded by unrelated words (Rajaram & Geraci, 2000; see also Mantyla, 1997; Mantyla & Raudsepp, 1996). Along these lines, a match in modality across study and test (e.g., items presented visually in both cases) increases perceptual fluency and selectively increases Know judgments relative to when the modalities mismatch between study and test (Gregg & Gardiner, 1994).

The idea that fluency or familiarity increases Know judgments, and the supporting evidence, are also consistent with Gardiner and colleagues' original proposal about the nature of Know judgments (Gardiner, 1988; Gardiner & Java, 1990; Gardiner & Parkin, 1990). According to this view, Know judgments are mediated by processes of the procedural memory system. This view ties Know judgments not only to the familiarity process but also to perceptual priming. In more recent work, Gardiner and his associates have reconsidered Tulving's original classification system of associating Know judgments with semantic memory that we described in an earlier section (see, for example, Gardiner & Gregg, 1997; Gardiner, Java, & Richardson-Klavehn, 1996; Gardiner, Kaminska, Dixon, & Java, 1996).

Other recent theories that distinguish between a recollective basis and a familiarity basis of recognition memory have also contributed to the interpretation of Know judgments as reflecting the fluency or familiarity process (Jacoby et al., 1997; Yonelinas, 2001). These dual-process models are based on the process dissociation procedure (Jacoby, 1991), which was developed to measure the independent and opposing influences of recollective (consciously controlled) and familiarity (more automatic) processes. According to this view, Know judgments in Tulving's (1985) Remember/Know procedure underestimate the extent to which the familiarity process contributes to memory performance. As noted in an earlier section, Jacoby and colleagues have assumed independence between these processes and have proposed a mathematical correction, F = K/(1 - R), to compute the influence of familiarity. On this point these dual-process models diverge from the frameworks and models described earlier, but they are nevertheless in agreement with the main point under consideration here: Know judgments measure the effects of fluency or familiarity.
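The independence correction just described can be stated concretely. The following sketch is a hypothetical illustration (the function name and the example proportions are ours, not taken from any of the studies cited): it computes the familiarity estimate F = K/(1 - R) from the observed proportions of Remember and Know responses, under the assumption that recollection and familiarity are independent.

```python
def familiarity_estimate(p_remember: float, p_know: float) -> float:
    """Independence-Remember/Know correction attributed to Jacoby and
    colleagues: estimated familiarity F = K / (1 - R).

    p_remember -- observed proportion of Remember responses (R)
    p_know     -- observed proportion of Know responses (K)
    """
    if not (0.0 <= p_remember < 1.0 and 0.0 <= p_know <= 1.0):
        raise ValueError("R must lie in [0, 1) and K in [0, 1]")
    return p_know / (1.0 - p_remember)

# Hypothetical numbers: 60% Remember hits and 20% Know hits.
# Raw Know responses (0.20) understate familiarity because familiarity
# can only be reported when recollection fails; the corrected estimate
# is 0.20 / (1 - 0.60) = 0.50.
print(familiarity_estimate(0.60, 0.20))  # prints 0.5
```

The point of the correction is visible in the example: the raw Know proportion and the estimated familiarity diverge whenever Remember responses are frequent.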

Know Judgments and Perceptual Priming

Some of the theoretical interpretations reviewed in the previous section suggest the strong possibility that the same processes should mediate Know judgments and perceptual priming. The issues surrounding this proposal are tricky, however, and the evidence is mixed. We review the issues and evidence here, albeit briefly.

Perceptual priming is measured with implicit memory tasks, such as word stem completion, word fragment completion, and perceptual identification (see Roediger, 1990, and Roediger & McDermott, 1993, for reviews). For example, on a task such as word fragment completion, subjects first study a list of words (e.g., elephant) and are later asked to complete, with the first word that comes to mind, fragmented word cues that could be solved with studied words (e.g., _ l _ p h _ n _ for elephant) or non-studied words (e.g., _ a _ l b _ a _ for sailboat). The advantage in completing fragments of studied words compared to non-studied words gives a measure of priming, and priming can be dissociated from explicit measures of memory by many variables. Extensive experimental efforts are made in these studies to discourage subjects from using explicit retrieval strategies and to exclude subjects who nevertheless use explicit or deliberate retrieval to complete these tasks. Thus, such tasks are assumed to measure a relatively automatic process that contributes to memory performance. Furthermore, a subset of this class of tasks is also particularly, though not exclusively, sensitive to a match or mismatch in the perceptual attributes of study and test stimuli. For example, a change in presentation modality (from auditory to visual) reduces the magnitude of priming compared to matched modality across study and test (visual to visual; Rajaram & Roediger, 1993; Roediger & Blaxton, 1987; see also Weldon & Roediger, 1987, for similar conclusions based on changes in surface format across pictures and words). Thus, perceptual priming is a measure of performance on implicit memory tasks in which perceptual features exert a strong influence on the magnitude of priming.
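The computation behind this measure is simply a difference score. The sketch below uses hypothetical completion rates (the function name and the numbers are ours for illustration, not data from the studies cited):

```python
def priming_score(p_studied: float, p_nonstudied: float) -> float:
    """Priming on an implicit task such as word-fragment completion:
    the proportion of fragments completed with studied words minus the
    baseline completion rate for fragments of non-studied words."""
    return p_studied - p_nonstudied

# Hypothetical example: fragments of studied words are completed 45% of
# the time, against a 30% baseline for non-studied words, giving a
# priming score of 0.15.
score = priming_score(0.45, 0.30)
print(round(score, 2))  # prints 0.15
```

Subtracting the non-studied baseline is what distinguishes priming from raw completion performance: it isolates the benefit attributable to the earlier study episode.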

In contrast to perceptual priming, Know judgments reflect retention while subjects are engaged in the explicit retrieval of studied information. Therefore, the possibility that these two measures, one derived from implicit memory tasks and the other from explicit memory tasks, have the same underlying basis is intriguing. The earlier proposal of Gardiner and colleagues that Know judgments are influenced by processes of the procedural memory system suggests such equivalence, because perceptual priming is assumed to be mediated by the procedural memory system (Tulving & Schacter, 1990).

Furthermore, the dual-process models proposed by Jacoby, Yonelinas, and colleagues also suggest such equivalence. In these approaches, automatic processes and familiarity processes appear to be interchangeable concepts; the former is typically associated with priming on implicit tests, and the latter is associated with the Know component of explicit memory performance. Whether or not these measures are isomorphic (and the evidence reviewed below is mixed on this issue), it seems intuitive to assume that Know judgments share some of the properties of both perceptual priming and explicit memory. If priming is placed at one end of the continuum of conscious awareness and Remember judgments at the other end, Know judgments by definition fall in between, albeit on the conscious side of this continuum. By virtue of being at the brink of conscious awareness, some processing component of Know judgments might share its basis with priming.

As just mentioned, the evidence seems mixed on this issue, although only a handful of studies have addressed it. For instance, dividing attention during study with an auditory tone or digit monitoring task (when the subjects' main task is to read the words) adversely affects explicit memory tasks but leaves perceptual priming intact (Jacoby, Woloshyn, & Kelley, 1989; Parkin & Russo, 1990). Parallel effects of tone monitoring during study are observed on Remember and Know judgments, respectively (Gardiner & Parkin, 1990). Similarly, a study of pictures and words dissociates performance on explicit and implicit memory tasks such that the picture superiority effect (better memory for pictures than words; Madigan, 1983) is reliably observed on the explicit memory task, but this effect reverses on the perceptual priming task of word fragment completion (Weldon & Roediger, 1987). This dissociative pattern has been reported for Remember and Know judgments as well: The picture superiority effect was observed for Remember judgments, and its reversal was observed for Know judgments (Rajaram, 1993, Experiment 2).

In contrast to these findings, where independent variables affected Know judgments in ways similar to their effects on priming tasks, other studies have not shown such parallels. For example, the generation effect (when words generated from semantic cues are better recalled and recognized than words that are simply read), which is reliably observed in explicit memory tasks (e.g., Jacoby, 1978; Slamecka & Graf, 1978), is usually reversed in perceptual priming (Blaxton, 1989; Srinivas & Roediger, 1990), but such a reversal is not observed for Know judgments (Gardiner & Java, 1990), even under optimally designed conditions (Java, 1994). This result clearly undermines any straightforward notion that Know judgments solely reflect perceptual priming.

Word frequency effects across a repetition priming task and a recognition memory task also challenge the notion that Know judgments and priming performance are similarly responsive to independent variables. For example, Kinoshita (1995) reported that on a priming task of making lexical decisions (where subjects decide whether a letter string is a word or a non-word), low-frequency words yielded greater priming than high-frequency words, even following unattended study conditions. This finding suggests that low-frequency words should lead to more Know judgments than high-frequency words. However, Gardiner and Java (1990) had previously reported an unambiguous advantage for low-frequency words over high-frequency words in Remember judgments, and this variable had little effect on Know judgments. Other recent evidence also suggests a distinction between the types of familiarity processes that mediate recognition relative to priming. For example, Wagner, Gabrieli, and Verfaellie (1997) reported that the familiarity process associated with recognition is more conceptually based than the processes that underlie perceptual priming. Wagner and Gabrieli (1998) have further argued that the processes supporting perceptual priming and the familiarity component of recognition are both functionally and anatomically distinct.

P1: JzG0521857430c10 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 24, 2007 9:14

three forms of consciousness in retrieving memories 269

The evidence on this issue is also mixed in studies of individuals with anterograde amnesia. Whereas explicit memory performance is severely impaired in amnesia, perceptual priming for single words or pictures is found to be intact (see Moscovitch, Vriezen, & Goshen-Gottstein, 1993; Schacter, Chiu, & Ochsner, 1993; Verfaellie & Keane, 2002, for reviews). Preserved perceptual priming in amnesia suggests that Know judgments (or the familiarity component of recognition) should also be preserved if perceptual priming and knowing have the same bases. Recent evidence has started to delineate conditions in the Remember/Know paradigm showing that amnesic subjects can indeed utilize the familiarity component to boost their Know judgments (Verfaellie, Giovanello, & Keane, 2001). However, converging evidence from different paradigms (see Knowlton & Squire, 1995; Schacter, Verfaellie, & Pradere, 1996; Verfaellie, 1994) has also shown a deficit in Know judgments in amnesic patients, although this deficit is far lower in magnitude than the deficit in Remember judgments (see Yonelinas, Kroll, Dobbins, Lazzara, & Knight, 1998). Evidence from studies with amnesic patients has further identified neuroanatomical regions that are differentially associated with remembering and knowing. Specifically, evidence from studies of amnesic patients (Moscovitch & McAndrews, 2002; Yonelinas et al., 2002) suggests that Remember judgments are mediated by the hippocampus, whereas familiarity is mediated by parahippocampal structures in the medial temporal lobe region. Neuroimaging evidence also supports this distinction (but see Squire, Stark, & Clark, 2004, for a different view on the proposed structural dichotomies).

Together, a comparison between perceptual priming and Know judgments has revealed at best mixed evidence. As some have noted, the key to the differences between processes that affect perceptual priming and Know judgments (or familiarity) may lie in the greater involvement of conceptual processes in explicit recognition (Verfaellie & Keane, 2002; Wagner et al., 1997). It is clear that, both for the theoretical reasons outlined earlier and for the mixed empirical evidence reviewed here, this area is ripe for extensive investigation.

Knowing and Confidence

Earlier we considered the contention that judgments of knowing may simply reflect low-confidence judgments. That is, perhaps people give Remember judgments when they are highly confident that an event occurred previously and Know judgments when they believe it occurred but are less confident. Proponents of this view (e.g., Donaldson, 1996) argue that the Remember/Know distinction is merely a quantitative one (how much "memory strength" does the tested event have?) rather than a qualitative difference (reflecting, say, recollection and fluency). To review a few points made previously, signal detection models with several criteria can often account for data after the fact, but they lack true explanatory power in predicting dissociations and associations between Remember and Know judgments. It is difficult to know what determines the placement of criteria at particular points on the continuum in these models and how this placement might vary with different experimental conditions. Further, experimental evidence cited in the earlier part of the chapter (e.g., Rajaram et al., 2002) revealing different effects of independent and subject variables on Remember/Know judgments and confidence judgments is inconsistent with the idea that Know judgments merely reflect confidence. Although confidence and Remember/Know judgments are related, remembering may explain confidence judgments rather than the other way around.
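The single-process account under discussion can be made concrete with a short sketch. The function below is a toy model of our own construction, not taken from Donaldson (1996); the criterion names and values are illustrative assumptions. It places two response criteria on a single memory-strength continuum, so that "Remember" and "Know" differ only in how much strength an item has.

```python
def classify(strength, c_old=0.5, c_remember=1.5):
    """Toy single-process (signal detection) account of Remember/Know.

    A single memory-strength value is compared against two criteria:
    items below c_old are called "new"; items at or above c_old but
    below c_remember are called "know"; items at or above c_remember
    are called "remember".  On this view the Remember/Know distinction
    is purely quantitative (criterion placement) -- the position the
    text argues against.
    """
    if strength < c_old:
        return "new"
    return "remember" if strength >= c_remember else "know"

# Hypothetical strength values for three test items:
labels = [classify(s) for s in (0.2, 1.0, 2.1)]
```

On such a model, any experimental dissociation between Remember and Know judgments must be explained by shifts in criterion placement, which, as noted above, the model does not itself predict.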

Knowing and Guessing

270 the cambridge handbook of consciousness

A potential problem in interpreting the nature of Know judgments is the extent to which subjects include guesses when making Know judgments. In this situation, Know judgments would reflect not only memory processes but also pure guesses, thereby complicating the inferences we might draw about the nature of knowing. An even more serious concern might be that Know judgments simply reflect guesses and nothing more. However, neither of these concerns seems to compromise the data typically obtained for Know judgments. The first concern is circumvented in most studies by the careful use of instructions that strongly discourage guessing. In fact, the generally low false alarm rates that are typical in these studies suggest that this approach is largely successful. The second concern, that Know judgments are simply equivalent to guesses, is addressed by Gardiner and colleagues in studies in which they required subjects to make Remember, Know, and Guess judgments to items presented in recognition memory tests (see Gardiner & Conway, 1999; Gardiner, Java, & Richardson-Klavehn, 1996; Gardiner, Kaminska, Dixon, & Java, 1996; Gardiner, Ramponi, & Richardson-Klavehn, 1998; Gardiner, Richardson-Klavehn, & Ramponi, 1997). By and large, Know and Guess responses turn out to be functionally distinct, such that Know judgments reflect memory for prior information and Guess judgments do not. As Gardiner and Conway (1999) note, the relevance of taking guesses into account in the Remember/Know paradigm seems to be to minimize noise in Know judgments. We note that this could be achieved either by including a separate response category or by instructing subjects to refrain from guessing. As discussed in a previous section, potential variations across laboratories in communicating Remember/Know instructions may account for some differences in the literature. For these reasons, we have emphasized elsewhere the care and effort that are needed to administer the Remember/Know task properly (Rajaram & Roediger, 1997). Despite these concerns, the experimental effort of nearly 20 years has yielded fairly systematic, informative, and interesting findings about the nature of Know judgments.
This effort has also answered as many questions as it has raised, leaving critical issues for future investigation. Finally, this empirical effort in the literature has begun to identify the nature of memory that is associated with noetic consciousness.

Knowing and Other Metamemory Judgments

Remember/Know judgments are by definition metamemory judgments – subjects indicate their assessment of retrieval experience for items that they have recalled or recognized. People are able to make other judgments that may seem similar to Know judgments but are in fact quite distinct. For example, people can reliably report feelings-of-knowing, indicating that they can recognize an item on a multiple-choice test even though they are unable to recall it (see Koriat, 1995), and people can also reliably differentiate whether or not they are in a tip-of-the-tongue state (where they feel that the answer or the word they are looking for is on the tip of their tongue and could be retrieved; see Brown, 1991, for a review). Both of these types of experiences differ from Know judgments in that the latter experience is associated with information that is already retrieved (as in recall) or has been presented (as in recognition). That is, Know judgments characterize a particular experiential state that accompanies retrieved information. There has been little experimental effort as yet directed toward the possible relation between Know judgments and these other states of awareness.

Concluding Remarks about Interpreting Know Judgments

The preceding sections bring into focus both the difficulties in characterizing Know judgments and the successes that have been accomplished so far in doing so. Know judgments have been defined in terms of what they are (semantic memory, fluency, familiarity), what they are not (low-confidence responses, guesses, and other metamemorial judgments), and what they might or might not partly reflect (perceptual priming). These efforts show the challenges associated with experimentally distinguishing different states of consciousness – autonoetic and noetic – and the role research can play in successfully delineating these mental states. As such, these findings have refined the questions and sharpened the direction for future studies aimed at understanding the relationship between noetic consciousness and memory.

Anoetic Consciousness

So far our discussion has focused on two types of conscious experience: autonoetic consciousness, where one feels as though one is mentally reliving a past experience in the present, and noetic consciousness, where one simply knows that one has experienced the event before but cannot vividly relive it. These two states of conscious awareness have in common that they both indicate knowledge of past events. However, there is a third class of memory phenomena that is characterized by the lack of awareness that an event occurred in the past even though the event changes behavior. Ebbinghaus (1885/1964) described this class of event, and Tulving (1985) referred to this occurrence as exemplifying anoetic, or non-knowing, consciousness. One could quibble that the characteristic of "non-knowing" means that subjects are not conscious, and so this state should not be included in the list. However, in other realms of inquiry, being asleep or in a coma is referred to as a state of consciousness even though both indicate the absence of awake consciousness.

In anoetic consciousness, a person is fully awake and alert but is unaware that some past event is influencing current behavior. Unlike the first two states, which accompany performance on explicit memory tests like free recall and recognition, anoetic consciousness is associated with memory performance on a separate class of memory tests, called implicit memory tests. Of course, many phenomena in other areas of psychology (particularly social psychology) might also be said to refer to anoetic consciousness, because all sorts of factors affect behavior without the person becoming aware of the critical variables controlling behavior (e.g., Wegner, 2002; Wilson, 2002). Here we confine our remarks to responding on implicit memory tests.

Measurement Issues: Responding on Implicit Tests

Implicit memory tests differ from explicit ones because they are designed to measure retention when people are not aware of the influence of prior events on their behavior. As discussed previously, implicit tests measure memory indirectly (and are therefore also called indirect tests) by having subjects perform tasks that, unbeknownst to them, can be accomplished using previously studied items. These tasks may include filling in fragmented words or naming fragmented pictures, generating items that belong to a category, or simply answering general knowledge questions. Subjects can perform the task at some level whether or not they have studied relevant material. However, they are more likely to fill in a fragmented word, for example, if they had been exposed to it previously than if the fragment is filled by a word that was not studied. The study experience is said to prime the correct completion, so the phenomenon is called priming. Paul Rozin (1976) was among the first to call attention to this notion. Importantly, priming, by definition, occurs without autonoetic or noetic awareness of the study episode. In this way, implicit tests measure memory that is associated with anoetic consciousness. However, once a person has produced an item on the test, he or she may become aware, after the fact, that it was from a recently experienced episode. In this case, retrieval of the item seems to occur automatically, but the experience of recognition occurs later; this type of experience has been referred to as involuntary conscious memory by Richardson-Klavehn and Gardiner (2000) and is discussed in a later section.

Perceptual and Conceptual Tests

All implicit tests are designed to measure anoetic consciousness, but they differ in the nature of the processes they require. Roediger and Blaxton (1987) first proposed that there were (at least) two types of implicit and explicit tests: perceptual and conceptual tests. It is probably best to think of the perceptual and conceptual dimensions as separate continua, so that tests could rely mostly on perceptual processes, mostly on conceptual processes, or on some combination of the two (see Roediger, Weldon, & Challis, 1989, for a discussion of converging operations that can be applied to define tests). Most explicit tests are primarily conceptual in nature. Similarly, implicit tests have been broadly classified into those that require primarily perceptual analysis of the target items and those that require primarily meaningful analysis. According to the transfer-appropriate processing view (Blaxton, 1989; Kolers & Roediger, 1984; Roediger, 1990; Roediger, Weldon, & Challis, 1989), retention on both explicit and implicit tests benefits to the extent that the tests require similar stimulus analysis (i.e., similar conceptual or perceptual processing between study and test), regardless of whether one is consciously aware of the study episode. A large body of work is consistent with this prediction (although there are clear exceptions, too). Here, we describe just a few examples of these classes of tests that have emerged from this line of thinking (see Toth, 2000, for a comprehensive list of implicit memory tests).

Perceptual implicit memory tests generally require participants to identify a physically degraded or rapidly presented stimulus. Priming on these tests is influenced by the perceptual format used at encoding, such as the modality (auditory or visual) or form (e.g., picture or word) of presentation. Conversely, these tests are relatively unaffected by meaningful analysis, such as the semantic analysis required by levels-of-processing manipulations (e.g., Jacoby & Dallas, 1981). There are several popular perceptual implicit memory tests, and here we describe only a few.

The word stem completion test is one of the most popular perceptual implicit tests. In verbal versions of this task, subjects are exposed to a list of words in one phase of the experiment (e.g., elephant) and then, in a later phase, are presented with the first three letters of words (e.g., "ele___") and are asked to complete them with the first words that come to mind (Warrington & Weiskrantz, 1968). There are usually ten or more possible solutions in this kind of test (element, elegant, etc.), so priming is measured by the bias to produce elephant after study of that word relative to the case in which it has not been studied. Similarly, the word fragment completion test requires subjects to complete words with missing letters, such as e _ e _ h _ n _, with the first word that comes to mind (e.g., Tulving, Schacter, & Stark, 1982). Priming in both tests is measured by the proportion of fragments completed with studied solutions minus the proportion completed with non-studied solutions. The picture fragment completion test provides fragmented pictures (following study of intact pictures) with instructions for subjects to name the pictures (Weldon & Roediger, 1987).
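The priming measure just described is a simple difference of proportions. The sketch below illustrates the arithmetic; the counts are invented for illustration and are not data from any study cited here.

```python
def priming_score(hits_studied, n_studied, hits_baseline, n_baseline):
    """Priming on a completion test: the proportion of fragments or stems
    completed with the target word when that word was studied, minus the
    same proportion when it was not studied (the baseline completion rate).
    """
    return hits_studied / n_studied - hits_baseline / n_baseline

# Hypothetical example: 18 of 40 studied items' fragments are completed
# with the studied word, versus 10 of 40 non-studied (baseline) items.
score = priming_score(18, 40, 10, 40)  # 0.45 - 0.25 = 0.20
```

The baseline subtraction is what distinguishes priming from mere task ability: subjects can complete some fragments whether or not the solutions were studied, so only the increment over baseline counts as memory.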

Other perceptual tests require naming of degraded words or pictures. These are called word identification and picture identification because people are required to identify words or pictures presented very briefly (and sometimes followed by a backward mask). A variant of this task requires participants to identify increasingly complete fragments of stimuli. Here, the item is revealed slowly, and what is measured is the level of clarity needed to identify the item, either a picture (e.g., Snodgrass & Corwin, 1988; Snodgrass, Smith, Feenan, & Corwin, 1987) or a word (e.g., Hashtroudi, Ferguson, Rappold, & Cronsniak, 1988; Johnston, Hawley, & Elliot, 1991). Again, priming on these tasks is measured by the percentage of clarity required for identification when the item was studied relative to when it was not studied.

Conceptual tests represent the other class of implicit memory tests (Roediger & Blaxton, 1987). These tests are largely unaffected by perceptual manipulations at encoding (e.g., the modality of presentation does not affect priming). Instead, priming on these tests is affected by meaningful factors manipulated during study, such that more priming occurs with more meaningful analysis. One commonly used conceptual implicit test is the word association test (see Shimamura & Squire, 1984). In this test, participants see words during study (e.g., elephant) and, at test, are asked to quickly produce all the associated words that come to mind in response to cue words (e.g., lion), some of which are associated with the studied words. In a similar test, the category exemplar production test, participants see category names at test (animals) and are asked to quickly produce as many examples from the category as come to mind (e.g., Srinivas & Roediger, 1990). As always, priming in both tests is obtained by comparing performance when a relevant item was studied to when it was not studied. The category verification test (e.g., Tenpenny & Shoben, 1992) is similar to the category production test, except that participants do not have to produce the category exemplar. Instead, they are given the category name and a possible exemplar and must indicate whether or not the item is a member of the category (animals: elephant). Priming on this task is measured by examining the decrease in reaction time to studied exemplars as compared to non-studied exemplars. Finally, general knowledge tests can function as conceptual implicit memory tests (e.g., Blaxton, 1989). In these tests, participants attempt to answer general knowledge questions (e.g., "What animal did the Carthaginian general Hannibal use in his attack on Rome?"). Priming is obtained when participants are more likely to answer the questions correctly when they have studied the answer than when they have not.

The Problem of Contamination

Implicit memory tests are designed to measure anoetic consciousness, but these tests can become contaminated by consciously controlled uses of memory. That is, despite test instructions to complete the fragment or answer the question with the first word that comes to mind, subjects may recognize the items they produce as being from the earlier phase of the experiment, and they may change their retrieval strategy to attempt explicit recollection. Whether contamination should be considered a great problem for implicit memory research is up for debate (see Roediger & McDermott, 1993), but it is certainly possible that neurologically intact participants may treat implicit tests like explicit ones (see Geraci & Rajaram, 2002). Given this possibility, several researchers have provided recommendations to help limit participants' awareness of the study-test relation and have devised procedures for determining when an implicit test is compromised by conscious recollection. Many of these strategies have been described at length elsewhere (Roediger & Geraci, 2004; Roediger & McDermott, 1993), so we discuss them here only briefly.

Experimental Methods to Minimize Contamination

One suggestion has been to give incidental learning instructions at encoding to try to disguise the fact that participants have entered a memory experiment. It may also help to use several filler tasks between the study and test phases of the experiment so that the criterial test seems, to the subjects, to be just one more task in a long series. If intentional learning instructions are required at encoding, then the implicit test itself can be disguised as a filler test before an expected explicit memory test (e.g., Weldon & Roediger, 1987). In fact, one can even give an example of the expected explicit test (e.g., a recognition test or a cued recall test) before encoding, so that participants will be less likely to recognize the implicit test as a memory test and will think of it as only another filler task before the explicit test they expect. In addition to including a good cover story, the test list can be constructed such that studied items make up a smaller proportion of test items than non-studied or filler items and appear later in the list. With this test construction, participants may be less likely to notice the studied items (but see Challis & Roediger, 1993, for evidence on whether this factor matters). Finally, there is some evidence that rapid presentation of the test fragments or stems (for example) helps promote performance associated with anoetic consciousness (Weldon, 1993).


Methods for Detecting Autonoetic (or Noetic) Consciousness in Implicit Tests

Despite using the recommendations outlined above, it is still possible that subjects will become aware that they encountered the test items recently. There are several procedures for measuring whether implicit tests have been compromised by this level of awareness. Perhaps the simplest measure is to use a post-test questionnaire to assess autonoetic and noetic consciousness (e.g., Bowers & Schacter, 1990). Many studies of implicit memory have used this technique, and data from these questionnaires have permitted the partitioning of subjects into those who are aware and unaware of the relations between the study and test phases; critical dissociations between aware and unaware participants are sometimes obtained as a function of independent variables (e.g., Geraci & Rajaram, 2002). As an aside, we note that the data from these questionnaires may overestimate the level of participants' awareness because (1) participants may only become aware at the time of the questioning, especially if the questions are leading ones, and (2) participants may not have had the time or motivation to engage in conscious recollection when performing the task, even if they did become aware during the test. This latter possibility can be thought of as illustrating autonoetic consciousness occurring after automatic retrieval; as noted above, this phenomenon is called involuntary conscious recollection (or memory) and is discussed in depth below (Richardson-Klavehn, Gardiner, & Java, 1994).

A second procedure that has been developed to assess whether implicit memory tests are compromised by autonoetic consciousness is the retrieval intentionality criterion (Schacter, Bowers, & Booker, 1989). This procedure is based on the fact that explicit tests of memory reliably show certain encoding effects, such as the levels-of-processing effect (superior memory for words processed for meaning as opposed to surface detail). If the criterial test is a perceptual implicit memory test, one can manipulate the nature of the encoding task (physical or semantic processing) to determine whether the perceptual implicit test is compromised by explicit processes. If the implicit test shows a levels-of-processing effect, then one can conclude that the test is contaminated by explicit recollection; if there is little or no effect of this powerful variable on the perceptual implicit test, then it is probably a relatively pure measure of priming in an anoetic state (see Roediger, Weldon, Stadler, & Riegler, 1992). Note, however, that this specific procedure only works for perceptual implicit tests, because conceptual implicit tests, by definition, are sensitive to meaningful processing. Other techniques must be used for conceptual tests (e.g., Hashtroudi, Ferguson, Rappold, & Cronsniak, 1988).

A third procedure for separating consciously controlled from automatic processes is the process dissociation procedure (Jacoby, 1991). Jacoby argued that attempts to isolate pure types of processing (incidental or automatic processing, on the one hand, and intentional or consciously controlled processing, on the other) are unlikely to be completely successful even when using questionnaires or the retrieval intentionality criterion. So, although implicit tests are designed, and often assumed, to rely on unconscious automatic processes, they are not immune to more consciously controlled processes. Similarly, and just as seriously, explicit memory tests may be affected by incidental or automatic retrieval. To address these issues, Jacoby and his colleagues developed the process dissociation procedure (PDP), which incorporates a technique called the opposition method (see Jacoby, 1991, 1998; Jacoby, Toth, & Yonelinas, 1993). Here we sketch the logic of the procedure, but the method can be a bit tricky to use; perhaps the best general "user's guide" to the PDP is Jacoby (1998).

In the PDP technique as applied to implicit memory tests, participants study a set of materials, such as words in a list (often under several encoding conditions), and then take one of two types of tests using different retrieval instructions, called inclusion and exclusion instructions. The test cues are held constant (e.g., the same word stems might be presented on both the inclusion and exclusion tests). Consider again a word-stem completion test (see Jacoby et al., 1993, for an experiment that used this procedure). After studying a long list of words such as mercy under various encoding conditions, participants are given word stems, such as "mer___", that could be completed either with a studied word from the previous list (e.g., mercy) or with a non-studied word (e.g., merit). On a typical cued recall test, people would be given a cue and asked to use it to remember the studied word. Here, as in all explicit memory tests, correct recall of the item could be achieved either through intentional recollection of the study episode or by a more automatic process in which the item pops to mind and is then recognized. The inclusion test instructions are similar to those in a typical explicit memory task in that participants are asked to respond to the cue with an item from the study list; however, if they cannot remember the item, they are instructed to guess, so the test includes both the product of intentional recollection and, failing that, incidental or automatic priming due to familiarity.

On an exclusion test, participants are told to respond to the word stem without using a word from the studied list. So, if mercy comes to mind and they recognize it from the list, they should not respond with mercy; they must instead respond with merit or merchant or some other word beginning with mer. Now participants' use of conscious recollection opposes their responding with a list word; if they respond with the list word (above the non-studied base rate of producing the list word when it has not been recently studied), then this effect is due to incidental retrieval that is unopposed by recollection.

The logic of the PDP is that inclusion performance is driven by both intentional and incidental (or automatic) retrieval, whereas exclusion performance is produced only by incidental (automatic) retrieval. If we assume that these processes are independent, then an estimate of intentional recollection in a particular condition or for a particular participant can be derived by subtracting performance under the exclusion instructions from performance under the inclusion instructions. That is, if Inclusion performance = Probability of retrieval using intentional recollection + Probability of recollection using automatic retrieval, whereas Exclusion performance = Probability of recollection using automatic retrieval, then the difference between the two reflects the influence of intentional recollection.
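This subtraction logic can be written out directly. The sketch below follows the equations in the text, where recollection R = Inclusion − Exclusion; the second step adds the estimate of the automatic component under Jacoby's (1991) independence assumption (Exclusion = A × (1 − R)), which goes slightly beyond the simplified additive statement above. The input proportions are hypothetical, not data from the studies cited here.

```python
def pdp_estimates(p_inclusion, p_exclusion):
    """Process dissociation estimates from inclusion/exclusion performance.

    Per the logic in the text, R (intentional recollection) equals
    inclusion minus exclusion.  Under the independence assumption,
    studied words appear on the exclusion test only when retrieval is
    automatic and recollection fails, so exclusion = A * (1 - R),
    giving A (automatic retrieval) = exclusion / (1 - R).
    """
    r = p_inclusion - p_exclusion
    a = p_exclusion / (1.0 - r) if r < 1.0 else float("nan")
    return r, a

# Hypothetical proportions of stems completed with studied words:
r, a = pdp_estimates(p_inclusion=0.61, p_exclusion=0.36)  # r = 0.25, a = 0.48
```

Note that both estimates hinge on the independence assumption; as discussed below, critics argue that involuntary conscious recollection can distort them.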

Probability of recall in the exclusion condition represents a measure of performance that is driven by incidental or automatic processes. This automatic use of memory is analogous to implicit memory in that it is the information that leaks into memory and affects behavior without intention or awareness. However, several researchers (Richardson-Klavehn & Gardiner, 1996; Richardson-Klavehn, Gardiner, & Java, 1994) have suggested that the automatic form of memory measured by the PDP may not be completely analogous to priming on implicit memory tests because of involuntary recollection. We turn to that issue next.

Involuntary Conscious Recollection

Some researchers have suggested that automatic forms of memory measured by the process dissociation procedure may not be completely analogous to priming on implicit memory tests because of involuntary conscious recollection (Richardson-Klavehn, Gardiner, & Java, 1994; Richardson-Klavehn & Gardiner, 1996). The criticism arises from the fact that controlled processes and automatic processes are often, but not always, accompanied by autonoetic consciousness and anoetic consciousness, respectively. The process dissociation procedure assumes that forms of memory that are automatic are also unconscious. However, it is logically possible that people may vividly remember events after they come to mind spontaneously (see the Ebbinghaus quote at the beginning of the chapter). A procedure used to capture this kind of memory experience instructs subjects to try not to produce studied items to fit a cue (e.g., a word stem), but instead to complete the stems with only non-studied words; this is Jacoby's exclusion test and embodies the logic of opposition. If subjects produce any studied words under these instructions, the assumption can be made that the words came to mind automatically. However, Richardson-Klavehn and his colleagues altered the test to determine whether this spontaneous retrieval is associated with later awareness. To do this, after the exclusion test they gave subjects an opportunity to write the word again next to the fragment if they recognized it as having been in the list earlier. Words that were studied and "accidentally" used to complete the fragments, but were then later recognized as having been studied, provide a measure of involuntary conscious (aware) memory. To the extent that such recognition occurs during exclusion tests, the automatic component from the PDP may be underestimated.
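The bookkeeping in this modified exclusion procedure can be sketched as follows. The function and item sets are hypothetical illustrations, not the authors' materials: studied words that slip through the exclusion instructions are partitioned by whether the subject subsequently recognized them.

```python
def partition_exclusion_intrusions(produced, studied, recognized):
    """Partition completions from a modified exclusion test.

    Studied words produced despite exclusion instructions are assumed
    to have come to mind automatically.  Those the subject later
    recognizes as old index involuntary conscious memory; the remainder
    are candidates for priming without awareness (anoetic retrieval).
    """
    intrusions = [w for w in produced if w in studied]
    involuntary_conscious = [w for w in intrusions if w in recognized]
    unaware = [w for w in intrusions if w not in recognized]
    return involuntary_conscious, unaware

# Hypothetical single-subject data:
ivc, unaware = partition_exclusion_intrusions(
    produced={"mercy", "merit", "merge"},      # stem completions given
    studied={"mercy", "merge", "merchant"},    # earlier study list
    recognized={"mercy"},                      # later marked as "old"
)
```

Here "mercy" would count toward involuntary conscious memory and "merge" toward unaware priming; "merit" was never studied and so is ignored.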

Interestingly, this form of memory appears to be useful in reconciling some contradictory results in the literature regarding whether cross-modality priming (e.g., the effect of auditory presentation on a visual test; Rajaram & Roediger, 1993) results from explicit memory contamination. Using the retrieval intentionality criterion, Craik, Moscovitch, and McDowd (1994) argued that valid cross-modal priming occurs on perceptual implicit memory tests and that it is not the result of explicit memory processes being used on the implicit test. On the other hand, another set of results using the process dissociation procedure indicated that the effect is not associated with automatic processes (Jacoby et al., 1997). The two methods therefore lead to different conclusions. These paradoxical results can be reconciled if it is assumed that the process dissociation procedure conflates awareness with volition. Using the procedure for studying involuntary conscious recollection outlined above, Richardson-Klavehn and Gardiner (1996) showed that cross-modality priming was associated both with awareness and with automatic retrieval. Cross-modality priming does occur due to an automatic (priming) component, and the apparent lack of an automatic influence using the PDP occurs because the PDP overestimates the amount of conscious recollection (by mixing in involuntary conscious recollection).

Recently, Kinoshita (2001) has attempted to provide a theoretical account of involuntary aware memory. Following Moscovitch's component process model of memory (1992, 1994), Kinoshita distinguishes between the memory systems involved in intentional retrieval and those involved in awareness of the past. According to Moscovitch, the frontal lobes are responsible for our ability to intentionally retrieve the past, whereas the medial-temporal lobes are responsible for binding the features of an event together, including the time and place information that helps define the episode in memory. The idea is that cues at retrieval (either ones provided experimentally or ones generated internally) automatically reactivate memories, bringing events to mind. (Tulving [1983, 1985] referred to this kind of process as ecphory.) If the subject is in an explicit memory experiment at the time of this ecphoric process and is by definition required to use the cue to retrieve the past, then volition and awareness work together: The subject both intends to retrieve the past and is aware of the past. If, on the other hand, the subject is in an implicit memory experiment and is not required to intentionally retrieve the past, then volition and awareness can occur either together or separately. Because the medial-temporal lobes bind together the episodic information associated with the study context, this information can become automatically available at retrieval, despite the lack of any intention to recall these events.

Our interpretation of this argument is that if all aspects of the event, including its episodic features (the time and place), are activated, then one may become aware of the past even in the absence of an intention to retrieve it (hence involuntary conscious recollection). That is, this process associated with the medial-temporal lobes allows for involuntary aware memory. As Kinoshita suggests, ". . . this retrieval of a trace imbued with consciousness accounts for the felt experience of remembering, the feeling of reexperiencing the event" (2001, p. 61).

Although it is possible for awareness to accompany automatic retrieval, the converse is possible as well. Even on a free recall test a person may simply know that the retrieved item was presented earlier, indicating retrieval accompanied by noetic consciousness (e.g., Hamilton & Rajaram, 2003; Tulving, 1985). Similarly, the phenomenon of recognition failure of recallable words (Tulving & Thomson, 1973) indicates that a person can fail to recognize retrieved items as coming from the past (an example of anoetic consciousness). The point is that volition, or the intention to retrieve, is a construct separate from and orthogonal to conscious awareness. The thorny issues surrounding intention, awareness, and memory performance are just beginning to be investigated and understood.

Illusions of Remembering and Knowing

So far we have discussed conscious experiences associated with accurate memories – instances when people vividly remember a past event, when they know that the event has occurred, or when the event (unbeknownst to them) influences their behavior. In all three cases, we are concerned with the various levels of conscious experience associated with memory for events that actually occurred. However, one of the most compelling findings from recent studies is that subjects sometimes report vivid conscious experiences (Remember responses) for events that never occurred (e.g., Roediger & McDermott, 1995). This phenomenon has been termed false remembering (Roediger & McDermott, 1995), illusory recollection (Gallo & Roediger, 2003), or phantom recollection (Brainerd, Payne, Wright, & Reyna, 2003).

The paradigm that has been used most frequently to study this phenomenon involves having subjects study lists of 15 associatively related words (bed, rest, awake, tired, dream, slumber . . . ) and is called the Deese-Roediger-McDermott paradigm, or DRM (after its originators; Deese, 1959; Roediger & McDermott, 1995). The lists are all associates of one word that is not presented – sleep in this case – as determined by word association norms. The finding is that, even on immediate recall tests with warnings against guessing, subjects recall the critical non-presented words at levels comparable to those of words that were presented (Roediger & McDermott, 1995). The effect also occurs on cued recall tests (e.g., E. Marsh, Roediger, & McDermott, 2004). When given a recognition test, subjects falsely recognize the critical item at the same level as the list words. Even more important for present purposes, when asked to provide Remember/Know judgments on recognized items, subjects judge critical words like sleep to be remembered just as often as they do the list words that were actually studied. The fact that subjects vividly remember these falsely recognized items produces an interesting paradox, because in most recognition studies (often with unrelated words) false alarms are assigned Know rather than Remember responses. (After all, if the item was never presented, shouldn't it just be known because it was familiar? How could subjects remember features associated with the moment of occurrence of a word that was never presented?)
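To make the scoring of a DRM trial concrete, the sketch below tallies one hypothetical recall protocol against a study list built from the sleep associates quoted above (the list items beyond those quoted, and the recall output itself, are invented for illustration and are not the published norms):

```python
# Score a hypothetical recall protocol from a DRM list whose
# critical non-presented associate is "sleep".
studied = ["bed", "rest", "awake", "tired", "dream", "slumber",
           "snooze", "blanket", "doze", "nap", "yawn", "drowsy",
           "wake", "snore", "peace"]
critical_lure = "sleep"  # related to every list item, never presented

recalled = ["bed", "tired", "sleep", "dream", "nap"]  # one subject's output

# Veridical recall: studied words the subject produced.
veridical = sum(1 for w in recalled if w in studied)
# False recall: did the never-presented critical lure intrude?
false_recall = critical_lure in recalled

print(f"veridical recall: {veridical}/{len(studied)}")
print(f"critical lure recalled: {false_recall}")
```

The empirical surprise in the DRM literature is that, across subjects, the probability of the `false_recall` intrusion rivals the per-item rate of veridical recall.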

The finding of false recall, false recognition, and false remembering using the DRM paradigm has been confirmed and studied by many other researchers (e.g., Gallo & Roediger, 2003; Gallo, McDermott, Percer, & Roediger, 2001; Neuschatz, Payne, Lampinen, & Toglia, 2001). In addition, illusory recollection is obtained using other paradigms that produce high levels of false alarms in recognition (Lampinen, Copeland, & Neuschatz, 2001; Miller & Gazzaniga, 1998) and also in cases of false recall in the Loftus misinformation paradigm (Roediger, Jacoby, & McDermott, 1996).

Remember judgments are sometimes viewed as a purified form of episodic recollection, so the finding of false Remember responses raises the issue of the veracity of Remember judgments. The outcome is perplexing for theories of remembering and knowing (see Rajaram, 1999; Rajaram & Roediger, 1997, for further discussion). One idea is that people misattribute (Gallo & Roediger, 2003) or incorrectly bind (Hicks & Hancock, 2002) features from studied items to the related non-studied item (the critical lure, in this case). When subjects study lists of words that are associated to the critical item, many of the studied words may spark associative arousal (at a conscious or unconscious level, or both). Therefore, features from studied events become bound to the critical item even though it is never explicitly presented. Norman and Schacter (1997) required subjects to justify their Remember responses in a DRM experiment by providing details they remembered. Subjects had no trouble doing so for items such as sleep, and in fact the levels of false remembering were just as high in an instructional condition in which subjects had to justify their responses as in conditions in which no justification was required (as is customary in Remember/Know experiments).

A related idea is that reports of illusory recollection are driven in part by accurate episodic memory for the surrounding list context (Geraci & McCabe, 2006). In support of this hypothesis, Geraci and McCabe showed that false Remember responses decreased (although they did not vanish) when subjects were given modified Remember instructions that omitted the instruction to base remembering on recollection of the surrounding items. These results suggest that Remember responses to falsely recognized items are driven partly by retrieval of studied items. The findings further highlight the critical role of instructions in shaping reports of conscious experience, which has been a theme of this chapter.

A critic might complain that the fact that subjects can have full-blown recollective experiences of events that never occurred casts doubt on the utility of studying Remember/Know judgments. Don't these results from the DRM paradigm show that such judgments are invalid? We believe that this skepticism is misplaced. The fact that autonoetic consciousness is subject to illusions is quite interesting, but in our opinion it does not cast doubt on this type of conscious experience. Consider the case of visual perception and our conscious experience of seeing the world. Complex cognitive processes can give rise to powerful visual illusions in which our percepts differ dramatically from the objects in the world that give rise to them. Still, no one doubts the conscious experience of seeing just because what we see sometimes differs from what we ought to see. Just as errors of perception do not invalidate the notion of seeing, so errors of recollection do not invalidate the concept of remembering.

Conclusion

As noted at the outset of this chapter, the issue of states of consciousness in the study of memory has only recently become an active topic of study. The discussion of mental states associated with various forms of retrieval was avoided through much of the history of cognitive psychology, probably because investigators worried about the legacy of introspection. Introspective studies of attention and perception conducted early in the 20th century are today largely considered blind alleys into which the field was led. Yet even Ebbinghaus (1885/1964), the great pioneer who eschewed introspective methods in his own work, began his famous book with a lucid discussion of mental states during retrieval.

In this chapter, we have described two empirical movements, with their attendant theoretical frameworks, that have shaped the recent study of consciousness in relation to memory. The first breakthrough can be traced to the reports of implicit memory in severely amnesic individuals. The dissociative phenomena of conscious or explicit memory and indirect or implicit memory provided, in retrospect, one important way to characterize two distinct states of consciousness associated with retrieval. The second impetus – this one a deliberate effort to map the relation between consciousness and memory – came from the distinction Endel Tulving introduced in 1985 between remembering and knowing. Unlike the explicit/implicit distinction, where the former signified conscious memory and the latter non-conscious memory, the experiences of remembering and knowing both denote conscious memory, but of two different forms. The two states can be distinguished by subjects given careful instructions and seem to map onto experiences people have every day. Remembering represents the ability to mentally travel across the temporal continuum of the personal past with attendant feelings of immediacy and warmth, whereas knowing represents memory for the past in terms of facts, knowledge, or familiarity, but without any re-experiencing of the events.

We have used Tulving's tripartite distinction among autonoetic, noetic, and anoetic states of consciousness to organize some of the key research findings in memory. This approach helps us understand the properties of these three states of consciousness in relation to different forms of memory – remembering, knowing, and priming, respectively. An interesting observation to emerge from this approach is that the experience of remembering can be documented with considerable clarity. Even though remembering – the mental time travel associated with this form of memory – seems introspective and highly personal, it represents a state of consciousness and a form of memory that neurologically intact subjects can report reliably. The experimental work using the Remember/Know procedure (and related techniques) has produced a sizable body of research with consistent and replicable effects across laboratories. The unreliability of introspective reports, which undermined the research programs promulgated by Wundt and Titchener, does not seem to afflict this modern work. Research on remembering, knowing, and priming reveals the systematic responsiveness of these measures to the influence of specific independent and subject variables.

The study of remembering is in some ways more advanced than the study of knowing, which presents unique challenges. Knowing is relatively difficult to communicate in experiments, and the usual tactic in instructions is to define knowing in relation to remembering, rather than as an experience in its own right. Noetic consciousness can, in the abstract, be defined on its own (the conscious state of knowing, just as we know the meaning of platypus without remembering when and where we first learned about this creature), but in experimental practice it has been defined in relation to remembering. This methodological challenge of definition through instructions is manifested in our attempts at theoretical interpretation as well. We have identified these challenges in the section on noetic consciousness, and we consider them important topics for future investigation (see also Rajaram, 1999). Better characterization of the nature of knowing remains an important piece of the puzzle in our pursuit of relating consciousness to memory.

The third state of consciousness under consideration here – anoetic consciousness – is best described in memory research in terms of priming on implicit memory tests under conditions in which conscious awareness of the study/test relation can be eliminated or minimized. Priming has been documented most dramatically in individuals with severe anterograde amnesia, who show intact priming with little or no capacity for conscious recollection. This non-knowing or non-aware state of consciousness and its expression in priming on implicit memory tests have also been extensively studied in individuals with intact memory. In these latter cases, much effort has been expended to control, minimize, or eliminate the influence of consciously controlled processes on performance that is supposed to be, in Tulving's terms, anoetic. The great challenge in the work reported in this chapter is to separate and study the three states of awareness, and attempts to do so have been only partially successful. Of course, in many situations the subject's mind may slip among the various states of consciousness in performing tasks, and the challenge is to chart the ebb and flow of different states of consciousness during memory tasks.

In 1885 Ebbinghaus provided examples of states of conscious awareness during memory retrieval. One hundred years later, Tulving named and delineated a theory of three states of conscious awareness and, most importantly, provided a method by which they might be studied. We may hope that by 2085 great progress will have been made in studying consciousness and memory. Our chapter is a progress report (written in 2006) that marks the steps taken along the first fifth of the path. Although we as a field have made good progress in the past 20 years, it is clear to us that great breakthroughs must lie ahead, because we are far from our goal of understanding the complex relations of memory and consciousness.

References

Atkinson, R. C., & Juola, J. F. (1973). Factors influencing speed and accuracy of word recognition. In S. Kornblum (Ed.), Attention and performance (Vol. 4, pp. 583–612). San Diego: Academic Press.

Atkinson, R. C., & Juola, J. F. (1974). Search and decision processes in recognition memory. In D. H. Krantz & R. C. Atkinson (Eds.), Contemporary developments in mathematical psychology: I. Learning, memory, and thinking (p. 299). Oxford: W. H. Freeman.

Blaxton, T. A. (1989). Investigating dissociations among memory measures: Support for a transfer-appropriate processing framework. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 657–668.

Bowers, J. S., & Schacter, D. L. (1990). Implicit memory and test awareness. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 404–416.

Brainerd, C. J., Payne, D. G., Wright, R., & Reyna, V. F. (2003). Phantom recall. Journal of Memory & Language, 48, 445–467.

Brown, R. (1991). A review of the tip-of-the-tongue experience. Psychological Bulletin, 109, 204–223.

Brown, R., & Kulik, J. (1977). Flashbulb memories. Cognition, 5, 73–99.

Buchanan, T. W., & Adolphs, R. A. (2002). The role of the human amygdala in emotional modulation of long-term declarative memory. In S. Moore & M. Oaksford (Eds.), Emotional cognition: From brain to behavior. London: John Benjamins.

Challis, B. H., & Roediger, H. L. (1993). The effect of proportion overlap and repeated testing on primed word fragment completion. Canadian Journal of Experimental Psychology, 47, 113–123.

Christianson, S., & Loftus, E. F. (1990). Some characteristics of people's traumatic memories. Bulletin of the Psychonomic Society, 28, 195–198.

Church, B. A., & Schacter, D. L. (1994). Perceptual specificity of auditory priming: Implicit memory for voice intonation and fundamental frequency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 521–533.

Conway, M. A. (1995). Flashbulb memories. Hillsdale, NJ: Erlbaum.

Conway, M. A., & Dewhurst, S. A. (1995). Remembering, familiarity, and source monitoring. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 48A, 125–140.

Conway, M. A., Gardiner, J. M., Perfect, T. J., Anderson, S. J., & Cohen, G. M. (1997). Changes in memory awareness during learning: The acquisition of knowledge by psychology undergraduates. Journal of Experimental Psychology: General, 126, 393–413.

Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684.

Craik, F. I. M., Moscovitch, M., & McDowd, J. M. (1994). Contributions of surface and conceptual information to performance on implicit and explicit memory tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 864–875.

Deese, J. (1959). On the prediction of occurrence of particular verbal intrusions in immediate recall. Journal of Experimental Psychology, 58, 17–22.

Donaldson, W. (1996). The role of decision processes in remembering and knowing. Memory & Cognition, 24, 523–533.

Dunn, J. C. (2004). Remember-know: A matter of confidence. Psychological Review, 111, 524–542.

Ebbinghaus, H. (1964). Memory: A contribution to experimental psychology (H. A. Ruger & C. E. Bussenius, Trans.). New York: Dover Publications. (Original work published 1885).

Ferguson, S. A., Hashtroudi, S., & Johnson, M. K. (1992). Age differences in using source-relevant cues. Psychology and Aging, 7, 443–452.

Finke, R. A., Johnson, M. K., & Shyi, G. C. (1988). Memory confusions for real and imagined completions of symmetrical visual patterns. Memory & Cognition, 16, 133–137.

Gallo, D. A., McDermott, K. B., Percer, J. M., & Roediger, H. L., III. (2001). Modality effects in false recall and false recognition. Journal of Experimental Psychology: Learning, Memory, & Cognition, 27, 339–353.

Gallo, D. A., & Roediger, H. L., III. (2003). The effects of associations and aging on illusory recollection. Memory & Cognition, 31, 1036–1044.

Gardiner, J. M. (1988). Functional aspects of recollective experience. Memory & Cognition, 16, 309–313.

Gardiner, J. M., & Conway, M. A. (1999). Levels of awareness and varieties of experience. In B. H. Challis & B. M. Velichkovsky (Eds.), Stratification in cognition and consciousness (pp. 237–254). Amsterdam: John Benjamins.

Gardiner, J. M., Gawlick, B., & Richardson-Klavehn, A. (1994). Maintenance rehearsal affects knowing, not remembering; elaborative rehearsal affects remembering, not knowing. Psychonomic Bulletin and Review, 1, 107–110.

Gardiner, J. M., & Gregg, V. H. (1997). Recognition memory with little or no remembering: Implications for a detection model. Psychonomic Bulletin and Review, 4, 474–479.

Gardiner, J. M., & Java, R. I. (1990). Recollective experience in word and nonword recognition. Memory & Cognition, 18, 23–30.

Gardiner, J. M., & Java, R. I. (1993). Recognizing and remembering. In A. F. Collins, S. E. Gathercole, M. A. Conway, & P. E. Morris (Eds.), Theories of memory (pp. 163–188). Hillsdale, NJ: Erlbaum.

Gardiner, J. M., Java, R. I., & Richardson-Klavehn, A. (1996). How level of processing really influences awareness in recognition memory. Canadian Journal of Experimental Psychology, 50, 114–122.

Gardiner, J. M., Kaminska, Z., Dixon, M., & Java, R. I. (1996). Repetition of previously novel melodies sometimes increases both remember and know responses in recognition memory. Psychonomic Bulletin and Review, 3, 366–371.

Gardiner, J. M., & Parkin, A. J. (1990). Attention and recollective experience in recognition memory. Memory & Cognition, 18, 579–583.

Gardiner, J. M., Ramponi, C., & Richardson-Klavehn, A. (1998). Experiences of remembering, knowing, and guessing. Consciousness and Cognition: An International Journal, 7, 1–26.

Gardiner, J. M., Ramponi, C., & Richardson-Klavehn, A. (2002). Recognition memory and decision processes: A meta-analysis of remember, know, and guess responses. Memory, 10, 83–98.

Gardiner, J. M., & Richardson-Klavehn, A. (2000). Remembering and knowing. In E. Tulving & F. I. M. Craik (Eds.), Handbook of memory. Oxford: Oxford University Press.

Gardiner, J. M., Richardson-Klavehn, A., & Ramponi, C. (1997). On reporting recollective experiences and "direct access to memory systems." Psychological Science, 8, 391–394.

Geraci, L., & Franklin, N. (2004). The influence of linguistic labels on source monitoring decisions. Memory, 12, 571–585.

Geraci, L., & McCabe, D. P. (2006). Examining the basis for illusory recollection: The role of Remember/Know instructions. Psychonomic Bulletin & Review, 13, 466–473.

Geraci, L., & Rajaram, S. (2002). The orthographic distinctiveness effect on direct and indirect tests of memory: Delineating the awareness and processing requirements. Journal of Memory and Language, 47, 273–291.

Geraci, L., & Rajaram, S. (2006). The distinctiveness effect in explicit and implicit memory. In R. R. Hunt & J. Worthen (Eds.), Distinctiveness and memory (pp. 211–234). New York: Oxford University Press.

Graf, P., & Schacter, D. L. (1985). Implicit and explicit memory for new associations in normal and amnesic subjects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 501–518.

Graf, P., Squire, L. R., & Mandler, G. (1984). The information that amnesic patients do not forget. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 164–178.

Gregg, V. H., & Gardiner, J. M. (1994). Recognition memory and awareness: A large effect of the study-test modalities on "know" responses following a highly perceptual orienting task. European Journal of Cognitive Psychology, 6, 131–147.

Hamann, S. (2001). Cognitive and neural mechanisms of emotional memory. Trends in Cognitive Sciences, 5, 394–400.

Hamilton, M., & Rajaram, S. (2003). States of awareness across multiple memory tasks: Obtaining a "pure" measure of conscious recollection. Acta Psychologica, 112, 43–69.

Hashtroudi, S., Ferguson, S. A., Rappold, V. A., & Chrosniak, L. D. (1988). Data-driven and conceptually driven processes in partial-word identification and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 749–757.

Henkel, L. A., & Franklin, N. (1998). Reality monitoring of physically similar and conceptually related objects. Memory & Cognition, 26, 659–673.

Henkel, L. A., Franklin, N., & Johnson, M. K. (2000). Cross-modal source monitoring confusions between perceived and imagined events. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 321–335.

Hicks, J. L., & Hancock, T. (2002). The association between associative strength and source attributions in false memory. Psychonomic Bulletin & Review, 9, 807–815.

Hicks, J. L., & Marsh, R. L. (1999). Remember-know judgments can depend on how memory is tested. Psychonomic Bulletin & Review, 6, 117–122.

Hirshman, E., & Masters, S. (1997). Modeling the conscious correlates of recognition memory: Reflections on the remember-know paradigm. Memory & Cognition, 25, 345–351.

Hunt, R. R. (1995). The subtlety of distinctiveness: What von Restorff really did. Psychonomic Bulletin & Review, 2, 105–112.

Hunt, R. R. (2006). What is the meaning of distinctiveness for memory research? In R. R. Hunt & J. Worthen (Eds.), Distinctiveness and memory. Oxford: Oxford University Press.

Hunt, R. R., & Elliott, J. M. (1980). The role of nonsemantic information in memory: Orthographic distinctiveness effects on retention. Journal of Experimental Psychology: General, 109, 49–74.

Hunt, R. R., & McDaniel, M. A. (1993). The enigma of organization and distinctiveness. Journal of Memory and Language, 32, 421–445.

Hunt, R. R., & Mitchell, D. B. (1978). Specificity in nonsemantic orienting tasks and distinctive memory traces. Journal of Experimental Psychology: Human Learning and Memory, 4, 121–135.

Hunt, R. R., & Mitchell, D. B. (1982). Independent effects of semantic and nonsemantic distinctiveness. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 81–87.

Hunt, R. R., & Toth, J. P. (1990). Perceptual identification, fragment completion, and free recall: Concepts and data. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 282–290.

Inoue, C., & Bellezza, F. S. (1998). The detection model of recognition using know and remember judgments. Memory & Cognition, 26, 299–308.

Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 649–667.

Jacoby, L. L. (1983a). Perceptual enhancement: Persistent effects of an experience. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 21–38.

Jacoby, L. L. (1983b). Remembering the data: Analyzing interactive processes in reading. Journal of Verbal Learning and Verbal Behavior, 22, 485–508.

Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30, 513–541.

Jacoby, L. L. (1998). Invariance in automatic influences of memory: Toward a user's guide for the process-dissociation procedure. Journal of Experimental Psychology: Learning, Memory, & Cognition, 24, 3–26.

Jacoby, L. L., & Dallas, M. (1981). On the relationship between autobiographical memory and perceptual learning. Journal of Experimental Psychology: General, 110, 306–340.

Jacoby, L. L., Jones, T. C., & Dolan, P. O. (1998). Two effects of repetition: Support for a dual-process model of knowledge judgments and exclusion errors. Psychonomic Bulletin & Review, 5, 705–709.

Jacoby, L. L., Toth, J. P., & Yonelinas, A. P. (1993). Separating conscious and unconscious influences of memory: Measuring recollection. Journal of Experimental Psychology: General, 122, 139–154.

Jacoby, L. L., & Witherspoon, D. (1982). Remembering without awareness. Canadian Journal of Psychology, 36, 300–324.

Jacoby, L. L., Woloshyn, V., & Kelley, C. (1989). Becoming famous without being recognized: Unconscious influences of memory produced by dividing attention. Journal of Experimental Psychology: General, 118, 115–125.

Jacoby, L. L., Yonelinas, A. P., & Jennings, J. M. (1997). The relation between conscious and unconscious (automatic) influences: A declaration of independence. In J. D. Cohen & J. W. Schooler (Eds.), Scientific approaches to consciousness (pp. 13–47). Hillsdale, NJ: Erlbaum.

Java, R. I. (1994). States of awareness following word stem completion. European Journal of Cognitive Psychology, 6, 77–92.

Java, R. I., Gregg, V. H., & Gardiner, J. M. (1997). What do people actually remember (and know) in "remember/know" experiments? European Journal of Cognitive Psychology, 9, 187–197.

Johnson, M. K. (1988). Discriminating the origin of information. In T. F. Oltmanns & B. A. Maher (Eds.), Delusional beliefs (pp. 34–65). Oxford: John Wiley and Sons.

Johnson, M. K., DeLeonardis, D. M., Hashtroudi, S., & Ferguson, S. A. (1995). Aging and single versus multiple cues in source monitoring. Psychology and Aging, 10, 507–517.

Johnson, M. K., Foley, M. A., & Leach, K. (1988). The consequences for memory of imagining in another person's voice. Memory & Cognition, 16, 337–342.

Johnson, M. K., Foley, M. A., Suengas, A. G., & Raye, C. L. (1988). Phenomenal characteristics of memories for perceived and imagined autobiographical events. Journal of Experimental Psychology: General, 117, 371–376.

Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114, 3–28.

Johnson, M. K., Raye, C. L., Foley, H. J., & Foley, M. A. (1981). Cognitive operations and decision bias in reality monitoring. American Journal of Psychology, 94, 37–64.

Johnston, W. A., Hawley, K. J., & Elliott, J. M. (1991). Contribution of perceptual fluency to recognition judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 210–223.

Joordens, S., & Merikle, P. M. (1993). Independence or redundancy? Two models of conscious and unconscious influences. Journal of Experimental Psychology: General, 122, 462–467.

Kelley, C. M., & Jacoby, L. L. (1998). Subjective reports and process dissociation: Fluency, knowing, and feeling. Acta Psychologica, 98, 127–140.

Kensinger, E. A., & Corkin, S. (2003). Memory enhancement for emotional words: Are emotional words more vividly remembered than neutral words? Memory & Cognition, 31, 1169–1180.

Kinoshita, S. (1995). The word frequency effect in repetition memory versus repetition priming. Memory & Cognition, 23, 569–580.

Kinoshita, S. (2001). The role of involuntary aware memory in the implicit stem and fragment completion tasks: A selective review. Psychonomic Bulletin & Review, 8, 58–69.

Knowlton, B. J., & Squire, L. R. (1995). Remembering and knowing: Two different expressions of declarative memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 699–710.

Kolers, P. A., & Roediger, H. L., III. (1984). Procedures of mind. Journal of Verbal Learning and Verbal Behavior, 23, 425–449.

Koriat, A. (1995). Dissociating knowing and the feeling of knowing: Further evidence for the accessibility model. Journal of Experimental Psychology: General, 124, 311–333.

Lampinen, J. M., Copeland, S. M., & Neuschatz, J. S. (2001). Recollections of things schematic: Room schemas revisited. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 1211–1222.

Lindsay, D. S., & Kelley, C. M. (1996). Creating illusions of familiarity in a cued recall remember/know paradigm. Journal of Memory and Language, 35, 197–211.

Madigan, S. (1983). Picture memory. In J. C. Yuille (Ed.), Imagery, memory, and cognition: Essays in honor of Allan Paivio (pp. 65–89). Hillsdale, NJ: Erlbaum.

Mandler, G. (1980). Recognizing: The judgment of previous occurrence. Psychological Review, 87, 252–271.

Mangels, J. A., Picton, T. W., & Craik, F. I. M. (2001). Attention and successful episodic encoding: An event-related potential study. Cognitive Brain Research, 11, 77–95.

Mantyla, T. (1997). Recollections of faces: Remembering differences and knowing similarities. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 1–14.

Mantyla, T., & Raudsepp, J. (1996). Recollective experience following suppression of focal attention. European Journal of Cognitive Psychology, 8, 195–203.

Marsh, E. J., Meade, M. L., & Roediger, H. L. (2003). Learning facts from fiction. Journal of Memory and Language, 49, 519–536.

Marsh, E. J., Roediger, H. L., & McDermott, K. B. (2004). Does test-induced priming play a role in the creation of false memories? Memory, 12, 44–55.

Mather, M., Henkel, L. A., & Johnson, M. K. (1997). Evaluating characteristics of false memories: Remember/know judgments and memory characteristics questionnaire compared. Memory & Cognition, 25, 826–837.

McDaniel, M. A., & Geraci, L. (2006). Encoding and retrieval processes in distinctiveness effects: Toward an integrative framework. In R. R. Hunt & J. Worthen (Eds.), Distinctiveness and memory (pp. 65–88). Oxford: Oxford University Press.

Miller, M. B., & Gazzaniga, M. S. (1998). Creating false memories for visual scenes. Neuropsychologia, 36, 513–520.

Moscovitch, M. (1992). Memory and working-with-memory: A component process model based on modules and central systems. Journal of Cognitive Neuroscience, 4, 257–267.

Moscovitch, M. (1994). Memory and working with memory: Evaluation of a component process model and comparisons with other models. In D. L. Schacter & E. Tulving (Eds.), Memory systems 1994 (pp. 269–310). Cambridge, MA: MIT Press.

Moscovitch, D. A., & McAndrews, M. P. (2002). Material-specific deficits in "remembering" in patients with unilateral temporal lobe epilepsy and excisions. Neuropsychologia, 40, 1335–1342.

Moscovitch, M., Vriezen, E., & Goshen-Gottstein, Y. (1993). Implicit tests of memory in patients with focal lesions or degenerative brain disorders. In H. Spinnler & F. Boller (Eds.), Handbook of neuropsychology (Vol. 8, pp. 133–173). Amsterdam: Elsevier.

Neisser, U., & Harsch, N. (1992). Phantom flashbulbs: False recollections of hearing the news about the Challenger. In E. Winograd & U. Neisser (Eds.), Affect and accuracy in recall: Studies of "flashbulb" memories (pp. 9–31). New York: Cambridge University Press.

Neuschatz, J. S., Payne, D. G., Lampinen, J. M., & Toglia, M. P. (2001). Assessing the effectiveness of warnings and the phenomenological characteristics of false memories. Memory, 9, 53–71.

Norman, K. A., & Schacter, D. L. (1997). False recognition in older and younger adults: Exploring the characteristics of illusory memories. Memory & Cognition, 25, 838–848.

Ochsner, K. (2000). Are affective events richly recollected or simply familiar? The experience and process of recognizing feelings past. Journal of Experimental Psychology: General, 129, 242–261.

Parkin, A. J., Gardiner, J. M., & Rosser, R. (1995). Functional aspects of recollective experience in face recognition. Consciousness and Cognition: An International Journal, 4, 387–398.

Parkin, A. J., & Russo, R. (1990). Implicit and explicit memory and the automatic/effortful distinction. European Journal of Cognitive Psychology, 2, 71–80.

Rajaram, S. (1993). Remembering and knowing: Two means of access to the personal past. Memory & Cognition, 21, 89–102.

Rajaram, S. (1996). Perceptual effects on remembering: Recollective processes in picture recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 365–377.

Rajaram, S. (1998). The effects of conceptual salience and conceptual distinctiveness on conscious recollection. Psychonomic Bulletin & Review, 5, 71–78.

Rajaram, S. (1999). Assessing the nature of retrieval experience: Advances and challenges. In B. H. Challis & B. M. Velichkovsky (Eds.), Stratification in cognition and consciousness (pp. 255–275). Amsterdam: John Benjamins.

Rajaram, S., & Geraci, L. (2000). Conceptual fluency selectively influences knowing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1070–1074.

Rajaram, S., & Hamilton, M. (2005). Conceptual processes can enhance both Remembering and Knowing even after delay: Effects of contextually-varied repetition, meaningful encoding, and test delay. Unpublished manuscript, Stony Brook University, Stony Brook, NY.

Rajaram, S., Hamilton, M., & Bolton, A. (2002). Distinguishing states of awareness from confidence during retrieval: Evidence from amnesia. Cognitive, Affective, and Behavioral Neuroscience, 2, 227–235.

Rajaram, S., & Roediger, H. L. (1993). Direct comparison of four implicit memory tests. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 765–776.

Rajaram, S., & Roediger, H. L. (1997). Remembering and knowing as states of consciousness during retrieval. In J. D. Cohen & J. W. Schooler (Eds.), Scientific approaches to consciousness (pp. 213–240). Hillsdale, NJ: Erlbaum.

Richardson-Klavehn, A., & Gardiner, J. M. (2000). Remembering and knowing. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 229–244). New York: Oxford University Press.

Richardson-Klavehn, A., Gardiner, J. M., & Java, R. I. (1994). Involuntary conscious memory and the method of opposition. Memory, 2, 1–29.

Richardson-Klavehn, A., Gardiner, J. M., & Java, R. I. (1996). Memory: Task dissociations, process dissociations and dissociations of consciousness. In G. D. M. Underwood (Ed.), Implicit cognition (pp. 85–158). New York: Oxford University Press.

Roediger, H. L. (1990). Implicit memory: Retention without remembering. American Psychologist, 45, 1043–1056.

Roediger, H. L. (1999). Retrieval experience: A new arena of psychological study. In B. H. Challis & B. M. Velichkovsky (Eds.), Stratification in cognition and consciousness (pp. 229–235). Amsterdam: John Benjamins.

Roediger, H. L., & Blaxton, T. A. (1987). Effects of varying modality, surface features, and retention interval on priming in word-fragment completion. Memory & Cognition, 15, 379–388.

Roediger, H. L., & Geraci, L. (2004). Conducting implicit memory research: A practical guide. In A. Wenzel & D. Rubin (Eds.), A guide to implementing cognitive methods with clinical populations. Washington, DC: APA Books.

Roediger, H. L., Jacoby, J. D., & McDermott, K. B. (1996). Misinformation effects in recall: Creating false memories through repeated retrieval. Journal of Memory and Language, 35, 300–318.

Roediger, H. L., & McDermott, K. B. (1993). Implicit memory in normal human subjects. In F. Boller & J. Grafman (Eds.), Handbook of neuropsychology (Vol. 8, pp. 63–131). Amsterdam: Elsevier.

Roediger, H. L., & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 803–814.

Roediger, H. L., Weldon, M. S., & Challis, B. H. (1989). Explaining dissociations between implicit and explicit measures of retention: A processing account. In H. L. Roediger & F. I. M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 3–41). Hillsdale, NJ: Erlbaum.

Roediger, H. L., Weldon, M. S., Stadler, M. L., & Riegler, G. L. (1992). Direct comparison of two implicit memory tests: Word fragment and word stem completion. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 1251–1269.

Roediger, H. L., III, Wheeler, M. A., & Rajaram, S. (1993). Remembering, knowing, and reconstructing the past. Psychology of Learning and Motivation: Advances in Research and Theory, 30, 97–134.

Rotello, C. M., Macmillan, N. A., & Reeder, J. A. (2004). Sum-difference theory of remembering and knowing: A two-dimensional signal-detection model. Psychological Review, 111, 588–616.

Rozin, P. (1976). The psychobiological approach to human memory. In M. R. Rosenzweig & E. L. Bennett (Eds.), Neural mechanisms of learning and memory (pp. 3–48). Cambridge, MA: MIT Press.

Rubin, D. C., & Kozin, M. (1984). Vivid memories. Cognition, 16, 81–95.

Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 501–518.

Schacter, D. L., Bowers, J., & Booker, J. (1989). Intention, awareness, and implicit memory: The retrieval intentionality criterion. In S. Lewandowsky, J. C. Dunn, et al. (Eds.), Implicit memory: Theoretical issues (pp. 47–65). Hillsdale, NJ: Erlbaum.

Schacter, D. L., Chiu, C. Y. P., & Ochsner, K. N. (1993). Implicit memory: A selective review. Annual Review of Neuroscience, 16, 159–182.

Schacter, D. L., Verfaellie, M., & Pradere, D. (1996). The neuropsychology of memory illusions: False recall and recognition in amnesic patients. Journal of Memory and Language, 35, 319–334.

Schmidt, S. R. (1991). Can we have a distinctive theory of memory? Memory & Cognition, 19, 523–542.

Schmidt, S. (2006). Emotion, significance, distinctiveness, and memory. In R. R. Hunt & J. Worthen (Eds.), Distinctiveness and memory. Oxford: Oxford University Press.

Schmolck, H., Buffalo, E. A., & Squire, L. R. (2000). Memory distortions develop over time: Recollections of the O. J. Simpson trial verdict after 15 and 32 months. Psychological Science, 11, 39–45.

Sherman, J. W., & Bessenoff, G. R. (1999). Stereotypes as source-monitoring cues: On the interaction between episodic and semantic memory. Psychological Science, 10, 106–110.

Shimamura, A. P., & Squire, L. R. (1984). Paired-associate learning and priming effects in amnesia: A neuropsychological study. Journal of Experimental Psychology: General, 113, 556–570.

Slamecka, N. J., & Graf, P. (1978). The generation effect: Delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory, 4, 592–604.

Snodgrass, J. G., & Corwin, J. (1988). Perceptual identification thresholds for 150 fragmented pictures from the Snodgrass and Vanderwart picture set. Perceptual and Motor Skills, 67, 3–36.

Snodgrass, J. G., Smith, B., Feenan, K., & Corwin, J. (1987). Fragmenting pictures on the Apple Macintosh computer for experimental and clinical applications. Behavior Research Methods, Instruments and Computers, 19, 270–274.

Squire, L. R., Stark, C. E. L., & Clark, R. E. (2004). The medial temporal lobe. Annual Review of Neuroscience, 27, 279–306.

Srinivas, K., & Roediger, H. L. (1990). Classifying implicit memory tests: Category association and anagram solution. Journal of Memory and Language, 29, 389–412.

Talarico, J. M., & Rubin, D. C. (2003). Confidence, not consistency, characterizes flashbulb memories. Psychological Science, 14, 455–461.

Tenpenny, P. L., & Shoben, E. J. (1992). Component processes and the utility of the conceptually-driven/data-driven distinction. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 25–42.

Toth, J. P. (2000). Nonconscious forms of human memory. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 245–261). New York: Oxford University Press.

Tulving, E. (1983). Elements of episodic memory. Oxford: Oxford University Press.

Tulving, E. (1985). Memory and consciousness. Canadian Psychology, 26, 1–12.

Tulving, E., & Schacter, D. L. (1990). Priming and human memory systems. Science, 247, 301–306.

Tulving, E., Schacter, D. L., & Stark, H. A. (1982). Priming effects in word-fragment completion are independent of recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 336–342.

Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352–373.

Verfaellie, M. (1994). A re-examination of recognition memory in amnesia: Reply to Roediger and McDermott. Neuropsychology, 8, 289–292.

Verfaellie, M., Giovanello, K. S., & Keane, M. M. (2001). Recognition memory in amnesia: Effects of relaxing response criteria. Cognitive, Affective, and Behavioral Neuroscience, 1, 3–9.

Verfaellie, M., & Keane, M. M. (2002). Impaired and preserved memory processes in amnesia. In L. R. Squire & D. L. Schacter (Eds.), Neuropsychology of memory (3rd ed., pp. 36–45). New York: Guilford Press.

von Restorff, H. (1933). Über die Wirkung von Bereichsbildungen im Spurenfeld. Psychologische Forschung, 18, 299–342.

Wagner, A. D., & Gabrieli, J. D. E. (1998). On the relationship between recognition familiarity and perceptual fluency: Evidence for distinct mnemonic processes. Acta Psychologica, 98, 211–230.

Wagner, A. D., Gabrieli, J. D. E., & Verfaellie, M. (1997). Dissociations between familiarity processes in explicit recognition and implicit perceptual memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 305–323.

Warrington, E. K., & Weiskrantz, L. (1968). A study of learning and retention in amnesic patients. Neuropsychologia, 6, 283–291.

Warrington, E. K., & Weiskrantz, L. (1970). Amnesic syndrome: Consolidation or retrieval? Nature, 228, 628–630.

Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.

Weldon, M. S. (1993). The time course of perceptual and conceptual contributions to word fragment completion priming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 1010–1023.

Weldon, M. S., & Roediger, H. L. (1987). Altering retrieval demands reverses the picture superiority effect. Memory & Cognition, 15, 269–280.

Wheeler, M. A., Stuss, D. T., & Tulving, E. (1997). Toward a theory of episodic memory: The frontal lobes and autonoetic consciousness. Psychological Bulletin, 121, 331–354.

Wilson, T. D. (2002). Strangers to ourselves: Discovering the adaptive unconscious. Cambridge, MA: Harvard University Press.

Wixted, J., & Stretch, V. (2004). In defense of the signal detection interpretation of remember/know judgments. Psychonomic Bulletin & Review, 11, 616–641.

Yonelinas, A. P. (1994). Receiver-operating characteristics in recognition memory: Evidence for a dual-process model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1341–1354.

Yonelinas, A. P. (2001). Consciousness, control, and confidence: The 3 Cs of recognition memory. Journal of Experimental Psychology: General, 130, 361–379.

Yonelinas, A. P. (2002). The nature of recollection and familiarity: A review of 30 years of research. Journal of Memory and Language, 46, 441–517.

Yonelinas, A. P., & Jacoby, L. L. (1995). The relation between remembering and knowing as bases for recognition: Effects of size congruency. Journal of Memory and Language, 34, 622–643.

Yonelinas, A. P., Kroll, N. E. A., Dobbins, I., Lazzara, M., & Knight, R. T. (1998). Recollection and familiarity deficits in amnesia: Convergence of remember-know, process dissociation, and receiver operating characteristic data. Neuropsychology, 12, 323–339.

Yonelinas, A. P., Kroll, N. E. A., Quamme, J. R., Lazzara, M. M., Sauve, M., Widaman, K. F., & Knight, R. T. (2002). Effects of extensive temporal lobe damage or mild hypoxia on recollection and familiarity. Nature Neuroscience, 5, 1236–1241.

Zechmeister, E. B. (1972). Orthographic distinctiveness as a variable in word recognition. American Journal of Psychology, 85, 425–430.


Chapter 11

Metacognition and Consciousness

Asher Koriat

Abstract

The study of metacognition can shed light on some fundamental issues about consciousness and its role in behavior. Metacognition research concerns the processes by which people self-reflect on their own cognitive and memory processes (monitoring) and how they put their metaknowledge to use in regulating their information processing and behavior (control). Experimental research on metacognition has addressed the following questions. First, what are the bases of metacognitive judgments that people make in monitoring their learning, remembering, and performance? Second, how valid are such judgments and what are the factors that affect the correspondence between subjective and objective indexes of knowing? Third, what are the processes that underlie the accuracy and inaccuracy of metacognitive judgments? Fourth, how does the output of metacognitive monitoring contribute to the strategic regulation of learning and remembering? Finally, how do the metacognitive processes of monitoring and control affect actual performance? This chapter reviews research addressing these questions, emphasizing its implications for issues concerning consciousness; in particular, the genesis of subjective experience, the function of self-reflective consciousness, and the cause-and-effect relation between subjective experience and behavior.

Introduction

There has been a surge of interest in metacognitive processes in recent years, with the topic of metacognition pulling under one roof researchers from traditionally disparate areas of investigation. These areas include memory research (Kelley & Jacoby, 1998; Metcalfe & Shimamura, 1994; Nelson & Narens, 1990; Reder, 1996), developmental psychology (Schneider & Pressley, 1997), social psychology (Bless & Forgas, 2000; Jost, Kruglanski, & Nelson, 1998; Schwarz, 2004), judgment and decision making (Gilovich, Griffin, & Kahneman, 2002; Winman & Juslin, 2005), neuropsychology (Shimamura, 2000), forensic psychology (e.g., Pansky, Koriat, & Goldsmith, 2005; Perfect, 2002), educational psychology (Hacker, Dunlosky, & Graesser, 1998), and problem solving and creativity (Davidson & Sternberg, 1998; Metcalfe, 1998a). The establishment of metacognition as a topic of interest in its own right is already producing synergy among different areas of investigation concerned with monitoring and self-regulation (e.g., Fernandez-Duque, Baird, & Posner, 2000). Furthermore, because some of the questions discussed touch upon traditionally ostracized issues in psychology, such as the issues of consciousness and free will (see Nelson, 1996), a lively debate has been going on between metacognitive researchers and philosophers (see Nelson & Rey, 2000). In fact, it appears that the increased interest in metacognition research derives in part from the feeling that perhaps this research can bring us closer to dealing with (certainly not resolving) some of the metatheoretical issues that have been the province of philosophers of the mind.

Definition

Metacognition concerns the study of what people know about cognition in general, and about their own cognitive and memory processes in particular, and how they put that knowledge to use in regulating their information processing and behavior. Flavell (1971) introduced the term "metamemory," which concerns specifically the monitoring and control of one's learning and remembering. Metamemory is the most researched area in metacognition and is the focus of this chapter.

Nelson and Narens (1990) proposed a conceptual framework that has been adopted by most researchers. According to them, cognitive processes may be divided into those that occur at the object level and those that occur at the meta level: The object level includes the basic operations traditionally subsumed under the rubric of information processing – encoding, rehearsing, retrieving, and so on. The meta level is assumed to oversee object-level operations (monitoring) and return signals to regulate them actively in a top-down fashion (control). The object level, in contrast, has no control over the meta level and no access to it. For example, the study of new material involves a variety of basic, object-level operations, such as text processing, comprehending, rehearsing, and so on. At the same time, metacognitive processes are engaged in planning how to study, in devising and implementing learning strategies, in monitoring the course and success of object-level processes, in modifying them when necessary, and in orchestrating their operation. In the course of studying new material, learners are assumed to monitor their degree of comprehension online and then decide whether to go over the studied material once again, how to allocate time and effort to different segments, and when to end studying.
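
Nelson and Narens' two-level architecture can be sketched as a simple feedback loop. The toy simulation below is purely illustrative: the class names, the diminishing-returns learning rule, and the 0.9 stopping threshold are invented assumptions; only the direction of the monitoring and control flow comes from the framework.

```python
class ObjectLevel:
    """Object-level processing: studying raises comprehension of the material."""

    def __init__(self):
        self.comprehension = 0.0

    def study(self):
        # Each additional study pass yields diminishing returns.
        self.comprehension += 0.5 * (1.0 - self.comprehension)


class MetaLevel:
    """Oversees the object level (monitoring) and regulates it (control)."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold  # assumed stopping criterion

    def monitor(self, obj):
        # Downward read-only access: the meta level inspects the object level.
        return obj.comprehension

    def control(self, obj):
        # Top-down regulation: keep restudying until monitored
        # comprehension reaches the threshold, then end studying.
        passes = 0
        while self.monitor(obj) < self.threshold:
            obj.study()
            passes += 1
        return passes


learner = ObjectLevel()
passes = MetaLevel().control(learner)
print(passes, learner.comprehension)  # → 4 0.9375
```

Note how the framework's asymmetry is captured: `MetaLevel` reads and drives `ObjectLevel`, while `ObjectLevel` holds no reference to the meta level at all.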

We should note, however, that the distinction between cognitive and metacognitive processes is not sharp because the same type of cognitive operation may occur at the object level or at the meta level, and in some cases it is unclear to which level a particular operation belongs (Brown, 1987).

Research Traditions

Historically, there have been two main lines of research on metacognition that proceeded almost independently of each other, one within developmental psychology and the other within experimental memory research. The work within developmental psychology was spurred by Flavell (see Flavell, 1979; Flavell & Wellman, 1977), who argued for the critical role that metacognitive processes play in the development of memory functioning (see Flavell, 1999). Within memory research, the study of metacognition was pioneered by Hart's (1965) studies on the feeling-of-knowing (FOK), and Brown and McNeill's (1966) work on the tip-of-the-tongue (TOT).

There is a difference in goals and methodological styles between these two research traditions. The basic assumption among developmental students of metacognition is that learning and memory performance depend heavily on monitoring and regulatory proficiency. This assumption has resulted in attempts to specify the components of metacognitive abilities, to trace their development with age, and to examine their contribution to memory functioning. Hence a great deal of the work is descriptive and correlational (Schneider, 1985). The focus on age differences and individual differences in metacognitive skills has also engendered interest in specifying "deficiencies" that are characteristic of children at different ages and in devising ways to remedy them. This work has expanded into the educational domain: Because of the increasing awareness of the critical contribution of metacognition to successful learning (Paris & Winograd, 1990), educational programs have been developed (see Scheid, 1993) designed to make the learning process more "metacognitive." Several authors have stressed specifically the importance of metacognition to transfer of learning (see De Corte, 2003).

The conception of metacognition by developmental psychologists is more comprehensive than that underlying much of the experimental work on metacognition. It includes a focus on what children know about the functioning of memory and particularly about one's own memory capacities and limitations. Developmental work has also placed heavy emphasis on strategies of learning and remembering (Bjorklund & Douglas, 1997; Brown, 1987; Pressley, Borkowski, & Schneider, 1987). In addition, many of the issues addressed in the area of theory of mind (Perner & Lang, 1999) concern metacognitive processes. These issues are, perhaps, particularly important for the understanding of children's cognition.

In contrast, the experimental-cognitive study of metacognition has been driven more by an attempt to clarify basic questions about the mechanisms underlying monitoring and control processes in adult memory (for reviews, see Koriat & Levy-Sadot, 1999; Nelson & Narens, 1990; Schwartz, 1994). This attempt has led to the emergence of several theoretical ideas as well as specific experimental paradigms for examining the monitoring and control processes that occur during learning, during the attempt to retrieve information from memory, and following the retrieval of candidate answers (e.g., Metcalfe, 2000; Schwartz, 2002).

In addition to the developmental and the experimental-memory lines of research, there has been considerable work on metacognition in the areas of social psychology and judgment and decision making. Social psychologists have long been concerned with questions about metacognition, although their work has not been explicitly defined as metacognitive (see Jost et al., 1998). In particular, social psychologists share the basic tenets of metacognitive research (see below) regarding the importance of subjective feelings and beliefs, as well as the role of top-down regulation of behavior. In recent years social psychologists have been addressing questions that are at the heart of current research in metacognition (e.g., Winkielman, Schwarz, Fazendeiro, & Reber, 2003; Yzerbyt, Lories, & Dardenne, 1998; see Metcalfe, 1998b). Within the area of judgment and decision making, a great deal of the work concerning the calibration of probability judgments (Fischhoff, 1975; Lichtenstein, Fischhoff, & Phillips, 1982; Winman & Juslin, 2005) is directly relevant to the issues raised in metacognition.
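
A calibration analysis of this kind compares stated confidence with the actual proportion of correct answers at each confidence level. The following sketch shows the basic computation; the (confidence, correctness) pairs are invented for illustration and are not data from any study cited here.

```python
from collections import defaultdict

# Hypothetical (confidence, was-the-answer-correct) pairs from a
# general-knowledge test; the numbers are made up for illustration.
judgments = [(0.6, True), (0.6, False),
             (0.8, True), (0.8, True), (0.8, False),
             (1.0, True), (1.0, True), (1.0, False)]

by_confidence = defaultdict(list)
for confidence, correct in judgments:
    by_confidence[confidence].append(correct)

# A judge is well calibrated when items endorsed with, say, 80%
# confidence are in fact correct about 80% of the time; a positive
# gap indicates overconfidence, a negative gap underconfidence.
for confidence in sorted(by_confidence):
    outcomes = by_confidence[confidence]
    accuracy = sum(outcomes) / len(outcomes)
    print(f"confidence {confidence:.1f}: "
          f"accuracy {accuracy:.2f}, gap {confidence - accuracy:+.2f}")
```

With the invented data above, every confidence level shows a positive gap, i.e., the pattern of overconfidence that calibration studies commonly report.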

Research Questions

This chapter emphasizes the work on metacognition within the area of adult memory research. It is organized primarily around the five main questions that have been addressed in experimental research on metamemory. First, what are the bases of metacognitive judgments; that is, how do we know that we know (e.g., Koriat & Levy-Sadot, 1999)? Second, how valid are subjective intuitions about one's own knowledge; that is, how accurate are metacognitive judgments, and what are the factors that affect their accuracy (e.g., Schwartz & Metcalfe, 1994)? Third, what are the processes underlying the accuracy and inaccuracy of metacognitive judgments? In particular, what are the processes that lead to illusions of knowing and to dissociations between knowing and the feeling of knowing (e.g., Benjamin & Bjork, 1996; Koriat, 1995)? Fourth, what are the processes underlying the strategic regulation of learning and remembering? In particular, how does the output of monitoring affect control processes (e.g., Barnes, Nelson, Dunlosky, Mazzoni, & Narens, 1999; Son & Metcalfe, 2000)? Finally, how do the metacognitive processes of monitoring and control affect actual memory performance (e.g., Koriat & Goldsmith, 1996a; Metcalfe & Kornell, 2003)?

Although these questions focus on relatively circumscribed processes of memory and metamemory, they touch upon some of the issues that are at the heart of the notions of consciousness and self-consciousness. Thus, the study of the subjective monitoring of knowledge addresses a defining property of consciousness, because consciousness implies not only that we know something but also that we know that we know it. Thus, consciousness binds together knowledge and metaknowledge (Koriat, 2000b). This idea is implied, for example, in Rosenthal's (2000) "higher-order thought" (HOT) philosophical theory of consciousness: A "lower-order" mental state is conscious by virtue of there being another, higher-order mental state that makes one conscious that one is in the lower-order state (see Chapter 3). Clearly, the subjective feelings that accompany cognitive processes constitute an essential ingredient of conscious awareness. Rather than taking these feelings (and their validity) at their face value, the study of metacognition attempts to uncover the processes that shape subjective feelings and contribute to their validity or to their illusory character. Furthermore, the study of monitoring-based control has implications for the question of the function of conscious awareness, and for the benefits and perils in using one's own intuitive feelings and subjective experience as a guide to judgments and behavior.

Basic Assumptions about Agency and Consciousness

The increased interest in metacognition seems to reflect a general shift from the stimulus-driven, behavioristic view of the person to a view that acknowledges the importance of subjective processes and top-down executive functions (see Koriat, 2000b). The study of metacognition is generally predicated on a view of the person as an active organism that has at its disposal an arsenal of cognitive operations that can be applied at will toward the achievement of various goals. The strategic choice and regulation of these operations are assumed to be guided in part by the person’s subjective beliefs and subjective feelings.

Embodied in this view are two metatheoretical assumptions (see Koriat, 2002). The first concerns agency – the assumption that self-controlled processes have measurable effects on behavior. Although most researchers would acknowledge that many cognitive processes, including some that are subsumed under the rubric of executive function, occur outside of consciousness, there is also a recognition that the person is not a mere medium through which information flows. Rather, people have some freedom and flexibility in actively regulating their cognitive processes during learning and remembering. Furthermore, it is assumed that such self-regulation processes deserve to be studied not only because they can have considerable effects on performance but also because they are of interest in their own right.

This assumption presents a dilemma for experimental researchers because self-controlled processes have traditionally been assumed to conflict with the desire of experimenters to exercise strict experimental control. Of course, there are many studies in which learning and remembering strategies have been manipulated (through instructions) and their effects investigated (e.g., Craik & Lockhart, 1972). Unlike such experimenter-induced strategies, however, self-initiated strategies generally have been seen as a nuisance factor that should be avoided or neutralized. For example, laboratory studies typically use a fixed-rate presentation of items rather than a self-paced presentation (see Nelson & Leonesio, 1988). Also, in measuring memory performance, forced-choice tests are sometimes preferred over free-report tests to avoid having to deal with differences in “guessing,” or else some correction-for-guessing procedure is used to achieve a pure measure of “true” memory (see Koriat & Goldsmith, 1996a; Nelson & Narens, 1994). Needless to say, people in everyday life have great freedom in regulating their memory processes, and the challenge is to find ways to bring these self-controlled metacognitive processes into the laboratory (Koriat, 2000a; Koriat & Goldsmith, 1996a).

The second assumption concerns the role of self-reflective, subjective experience in guiding controlled processes. This is, of course, a debatable issue. It is one thing to equate controlled processes with conscious processes (e.g., Posner & Snyder, 1975); it is another to assume that subjective experience plays a causal role in behavior. Students of metacognition not only place a heavy emphasis on subjective experience but also assume that subjective feelings, such as the feeling of knowing, are not mere epiphenomena but actually exert a causal influence on information processing and behavior (Koriat, 2000b; Nelson, 1996).

A similar growing emphasis on the role of subjective feelings in guiding judgments and behavior can be seen in social-psychological research (Schwarz & Clore, 2003) and in decision making (Slovic, Finucane, Peters, & MacGregor, 2002). Also, the work on memory distortions and false memories brings to the fore the contribution of phenomenological aspects of remembering to source monitoring and reality monitoring (see Kelley & Jacoby, 1998; Koriat, Goldsmith, & Pansky, 2000; Mitchell & Johnson, 2000).

It should be stressed, however, that not all students of metacognition subscribe to the assumptions discussed above. In particular, Reder (1987) has argued that a great deal of strategy selection occurs without conscious deliberation or awareness of the factors that influence one’s choice. Of course, there is little doubt that many monitoring and control processes occur without consciousness (Kentridge & Heywood, 2000), so the question becomes one of terminology, like the question of whether feelings must be conscious or can also be unconscious (Clore, 1994; Winkielman & Berridge, 2004). However, by and large, much of the experimental research in metacognition is predicated on the tacit assumption that the metacognitive processes studied entail conscious control. Nonetheless, although the term “metacognition” is generally understood as involving conscious awareness, it should be acknowledged that monitoring and control processes can also occur unconsciously (Spehn & Reder, 2000).

I now review some of the experimental work on metamemory, focusing on research that may have some bearing on general questions about phenomenal experience and conscious control.

Experimental Paradigms in the Study of Online Metamemory

A variety of metacognitive judgments that ought to be included under the umbrella of metacognition have been studied in recent years (Metcalfe, 2000). Among these are ease-of-learning judgments (Leonesio & Nelson, 1990), judgments of comprehension (Maki & McGuire, 2002), remember/know judgments (Gardiner & Richardson-Klavehn, 2000), output monitoring (Koriat, Ben-Zur, & Sheffer, 1988), olfactory metacognition (Jonsson & Olsson, 2003), and source monitoring (Johnson, 1997). However, the bulk of the experimental work has concerned three types of judgments.

First are judgments of learning (JOLs), elicited following the study of each item. For example, after studying each paired associate in a list, participants are asked to assess the likelihood that they will be able to recall the target word in response to the cue in a future test. These item-by-item judgments are then compared to actual recall performance.
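The correspondence between item-by-item judgments and recall outcomes is typically summarized by a nonparametric correlation computed over item pairs; the Goodman–Kruskal gamma is the measure most often reported in this literature. The following is a minimal sketch; the JOL values and recall outcomes are hypothetical, for illustration only:

```python
from itertools import combinations

def gamma(judgments, outcomes):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant),
    counted over all item pairs; pairs tied on either variable are ignored."""
    concordant = discordant = 0
    for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
        sign = (j1 - j2) * (o1 - o2)
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")  # all pairs tied: relative accuracy undefined
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical data: JOLs (0-1) given to four studied items, and whether
# each item was later recalled (1) or not (0).
jols     = [0.9, 0.3, 0.7, 0.6]
recalled = [1,   0,   0,   1]
print(gamma(jols, recalled))  # 0.5
```

Gamma runs from −1 to +1; the positive value here indicates that items given higher JOLs were in fact more likely to be recalled, i.e., that the judgments have some resolution.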

Second are FOK judgments, elicited following blocked recall. In the Recall-Judgment-Recognition (RJR) paradigm introduced by Hart (1965), participants are required to recall items from memory (typically, the answers to general-knowledge questions). When they fail to retrieve the answer, they are asked to make FOK judgments regarding the likelihood that they would be able to select the correct answer from among several distractors in a forced-choice test to be administered later. The validity of FOK judgments is then evaluated by the correspondence between these judgments and performance on the recognition test. Finally, after retrieving an answer from memory or after selecting an answer, the subjective confidence in the correctness of that answer is elicited, typically in the form of a probability judgment reflecting the assessed likelihood that the answer is correct. Whereas JOLs and FOK judgments are prospective, involving predictions of future memory performance, confidence judgments are retrospective, involving assessments about a memory that has been produced.
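Because confidence judgments are elicited on a probability scale, their retrospective correspondence with accuracy can also be summarized in absolute terms by a calibration statistic, for example the simple over/underconfidence index sketched below (mean confidence minus proportion correct). The data are hypothetical, for illustration only:

```python
def over_under_confidence(confidence, correct):
    """Overconfidence index: mean confidence minus proportion of correct answers.
    Positive values indicate overconfidence, negative values underconfidence."""
    mean_conf = sum(confidence) / len(confidence)
    prop_correct = sum(correct) / len(correct)
    return mean_conf - prop_correct

# Hypothetical data: probability judgments for four produced answers, and
# whether each answer was in fact correct (1) or wrong (0).
conf    = [0.9, 0.8, 0.6, 0.7]
correct = [1,   1,   0,   0]
print(over_under_confidence(conf, correct))  # positive, about 0.25: overconfident
```

A fuller calibration analysis would bin answers by confidence level and compare each bin’s mean confidence with its proportion of correct answers.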

Many different variations of these general paradigms have been explored, including variations in the type of memory studied (semantic, episodic, autobiographical, eyewitness-type events, etc.), the format of the memory test (free recall, cued recall, forced-choice recognition, etc.), and the particular judgments elicited (item-by-item judgments or global judgments, using a probability or a rating scale, etc.).

How Do We Know That We Know? The Bases of Metacognitive Judgments

As we see later, metacognitive judgments are accurate by and large. JOLs made for different items during study are generally predictive of the accuracy of recalling these items at test. FOK judgments elicited following blocked recall predict the likelihood of recalling or recognizing the elusive target at some later time, and subjective confidence in the correctness of an answer is typically diagnostic of the accuracy of that answer. Thus, the first question that emerges is, How do we know that we know?

This question emerges most sharply with regard to the tip-of-the-tongue (TOT) state, in which we fail to recall a word or a name, and yet we are convinced that we know it and can even sense its imminent emergence into consciousness. What is peculiar about this experience is the discrepancy between subjective and objective knowing. So how can people monitor the presence of information in memory despite their failure to retrieve it? In reviewing the verbal learning literature more than 30 years ago, Tulving and Madigan (1970), in fact, argued that one of the truly unique characteristics of human memory is its knowledge of its own knowledge. They proposed that genuine progress in memory research depends on understanding how the memory system not only can produce a learned response or retrieve an image but also can estimate rather accurately the likelihood of its success in doing so. A great deal of research conducted since 1970 has addressed this question.

The Direct-Access View

A simple answer to the question about the basis of feelings of knowing is provided by the direct-access view, according to which people have direct access to memory traces both during learning and during remembering and can base their metacognitive judgments on detecting the presence and/or the strength of these traces. For example, in the case of JOLs elicited during study, it may be proposed that learners can detect directly the memory trace that is formed following learning and can also monitor online the increase in trace strength that occurs in the course of study as more time is spent studying an item (e.g., Cohen, Sandler, & Keglevich, 1991). Of course, to the extent that learners can do so, they can also decide to stop studying (under self-paced conditions) when trace strength has reached a desirable value (Dunlosky & Hertzog, 1998).

A direct-access account has also been advanced by Hart (1965) with regard to FOK. Hart proposed that FOK judgments represent the output of an internal monitor that can survey the contents of memory and can determine whether the trace of a solicited memory target exists in store. Thus, the feeling associated with the TOT state may be assumed to stem from direct, privileged access to the memory trace of the elusive target (see also Burke, MacKay, Worthley, & Wade, 1991; Yaniv & Meyer, 1987). Hart stressed the functional value of having such a monitor, given the general fallibility of the memory system: If the monitor “signals that an item is not in storage, then the system will not continue to expend useless effort and time at retrieval; instead, input can be sought that will put the item into storage” (Hart, 1965, p. 214).

Direct-access (or trace-access) accounts, which assume that monitoring involves a direct readout of information that appears in a ready-made format, have two merits. The first is that they can explain not only the basis of JOLs and FOK judgments but also their accuracy. Clearly, if JOLs are based on accessing the strength of the memory trace that is formed following learning, then they ought to be predictive of future recall, which is also assumed to depend on memory strength. Similarly, if FOK judgments monitor the presence of the memory trace of the unrecalled item, they should be expected to predict the future recognition or recall of that item.

The second merit is that they would seem to capture the phenomenal quality of metacognitive feelings: the subjective feeling, such as that which accompanies the tip-of-the-tongue state, that one monitors directly the presence of the elusive target in memory and its emergence into consciousness (James, 1890). In fact, metacognitive feelings are associated with a sense of self-evidence, which gives the impression that people are in direct contact with the contents of their memories and that their introspections are inherently accurate.

The Cue-Utilization View of Metacognitive Judgments

Although the direct-access view has not been entirely abandoned (see Burke et al., 1991; Metcalfe, 2000), an alternative view has been gaining impetus in recent years. According to this view, metacognitive judgments are inferential in origin, based on a variety of cues and heuristics that have some degree of validity in predicting objective memory performance (Benjamin & Bjork, 1996). To the extent that this is indeed the case, the accuracy of metacognitive judgments is not guaranteed but should depend on the validity of the cues on which it rests.

Inferential, cue-utilization accounts generally distinguish between information-based (or theory-based) and experience-based metacognitive judgments (see Kelley & Jacoby, 1996a; Koriat & Levy-Sadot, 1999; Matvey, Dunlosky, & Guttentag, 2001; Strack, 1992). This distinction parallels a distinction between two modes of thought that has been proposed in other domains (see Kahneman, 2003, and see further below). Thus, it is assumed that metacognitive judgments may be based either on a deliberate use of beliefs and memories to reach an educated guess about one’s competence and cognitions, or on the application of heuristics that result in a sheer subjective feeling.

Theory-Based Monitoring

Consider first theory-based metacognitive judgments. Developmental students of cognition placed a great deal of emphasis on what Flavell called “metacognitive knowledge”; that is, on children’s beliefs and intuitions about their own memory capacities and limitations and about the factors that contribute to memory performance (Brown, 1987). Such beliefs have been found to affect the choice of learning strategies, as well as people’s predictions of their own memory performance (see Flavell, 1999; Schneider & Pressley, 1997).

In contrast, the experimental research on adult metacognition contains only scattered references to the possible contribution of theories and beliefs to metacognitive judgments. For example, in discussing the bases of JOLs, Koriat (1997) proposed to distinguish between two classes of cues for theory-based online JOLs: intrinsic and extrinsic. The former includes cues pertaining to the perceived a priori difficulty of the studied items (e.g., Rabinowitz, Ackerman, Craik, & Hinchley, 1982). Such cues seem to affect JOLs, particularly during the first study trial, as suggested by the observation that normative ratings of ease of learning are predictive both of JOLs and of recall of different items (e.g., Koriat, 1997; Leonesio & Nelson, 1990; Underwood, 1966). The second class includes extrinsic factors that pertain either to the conditions of learning (e.g., number of times an item has been presented, presentation time, etc.; Mazzoni, Cornoldi, & Marchitelli, 1990; Zechmeister & Shaughnessy, 1980) or to the encoding operations applied by the learner (e.g., level of processing, interactive imagery, etc.; Begg, Vinski, Frankovich, & Holgate, 1991; Matvey et al., 2001; Rabinowitz et al., 1982; Shaw & Craik, 1989). For example, participants’ JOLs seem to draw on the belief that generating a word is better for memory than reading it (Begg et al., 1991; Matvey et al., 2001). Koriat (1997) proposed that JOLs are comparative in nature. Hence, they should be more sensitive to intrinsic cues pertaining to the relative recallability of different items within a list than to factors that affect overall performance (see Begg, Duft, Lalonde, Melnick, & Sanvito, 1989; Carroll, Nelson, & Kirwan, 1997; Shaw & Craik, 1989). Indeed, he obtained evidence indicating that, in making JOLs, the effects of extrinsic factors are discounted relative to those of intrinsic factors that differentiate between different items within a list.

Another major determinant of people’s metacognitive judgments is their perceived self-efficacy (Bandura, 1977). In fact, people’s preconceived notions about their skills in specific domains predict their assessment of how well they did on a particular task. For example, when students are asked to tell how well they have done on an exam, they tend to greatly overestimate their performance on the test, and this bias derives in part from the tendency of people to base their retrospective assessments on their preconceived, inflated beliefs about their skills in the domain tested, rather than on their specific experience with taking the test (Dunning, Johnson, Ehrlinger, & Kruger, 2003). In a study by Ehrlinger and Dunning (2003), two groups of participants took the same test; those who believed that the test measured abstract reasoning ability (on which they had rated themselves highly) estimated that they had achieved higher scores than did those who thought that they had taken a computer programming test. This was so despite the fact that the two groups did not differ in their actual performance.

Another finding that points to the effects of one’s a priori beliefs comes from studies of the relationship between confidence and accuracy. People’s confidence in their responses is generally predictive of the accuracy of these responses in the case of general-knowledge questions but not in the case of eyewitness memory (Perfect, 2002). Perfect (2004) provided evidence that this occurs because people’s confidence is based in part on their preconceptions about their abilities. Such preconceptions are generally valid in the case of general-knowledge questions, for which people have had considerable feedback and hence know their relative standing. Such is not the case with eyewitness memory, for which they lack knowledge about how good they are and, by implication, how confident they ought to be. Thus, people’s confidence in their performance seems to be based in part on their preconceived beliefs about their own competence in the domain of knowledge tested.

Evidence for the effects of beliefs and theories also comes from studies of correction processes in judgment. People often base their judgments directly on their subjective feelings (see Schwarz & Clore, 1996; Slovic et al., 2002). However, when they realize that their subjective experience has been contaminated by irrelevant factors, they may try to correct their judgments according to their beliefs about how these judgments had been affected by the irrelevant factors (Strack, 1992). For example, in the study of Schwarz, Bless, Strack, Klumpp, Rittenauer-Schatka, and Simons (1991), participants who were asked to recall many past episodes demonstrating self-assertiveness reported lower self-ratings of assertiveness than those who were asked to recall a few such episodes, presumably because of the greater difficulty experienced in recalling many episodes. However, when led to believe that the experienced difficulty had been caused by background music, participants relied more heavily on the retrieved content, reporting higher ratings under the many-episodes condition than under the few-episodes condition. These and other findings suggest that the correction process is guided by the person’s beliefs about the factors that make subjective experience an unrepresentative basis for judgment. Although most researchers assume that the correction process requires some degree of awareness (see Gilbert, 2002), others suggest that it may also occur unconsciously (Oppenheimer, 2004).

More recent work in social cognition (see Schwarz, 2004) suggests that the conclusions that people draw from their metacognitive experience, such as the experience of fluent processing, depend on the naïve theory that they bring to bear. Furthermore, people can be induced to adopt opposite theories about the implications of processing fluency, and these theories modulate experience-based judgments. These suggestions deserve exploration with regard to judgments of one’s own knowledge.

Another line of evidence comes from studies that examined how people determine that a certain event did not happen. Strack and Bless (1994) proposed that decisions of nonoccurrence may be based on a metacognitive strategy that is used when rememberers fail to retrieve any feature of a target event that they have judged to be highly memorable. In contrast, in the absence of a clear recollection of a nonmemorable event, people may infer that the event had actually occurred (but had been forgotten). Indeed, nonoccurrence decisions are made with strong confidence for events that would be expected to be remembered (e.g., one’s name, a salient item, etc.; Brown, Lewis, & Monk, 1977; Ghetti, 2003). On the other hand, studying material under conditions unfavorable for learning (or expecting fast forgetting; Ghetti, 2003) results in a relatively high rate of false alarms for nonmemorable distractors. Brainerd, Reyna, Wright, and Mojardin (2003) also discussed a process termed “recollection rejection,” in which a distractor that is consistent with the gist of a presented item may be rejected when the verbatim trace of that item is accessed. However, they argued that this process can occur automatically, outside conscious awareness.

The evidence reviewed thus far supports the idea that metacognitive judgments may be based on one’s beliefs and theories. For example, the subjective confidence in the correctness of one’s memory product (e.g., a selected answer in a quiz) can be based on a logical, analytic process in which one evaluates and weighs the pros and cons (Gigerenzer, Hoffrage, & Kleinbölting, 1991; Koriat, Lichtenstein, & Fischhoff, 1980). FOK judgments, too, may draw on theories or beliefs, resulting in an educated guess about the likelihood of retrieving or recognizing an elusive word in the future (Costermans, Lories, & Ansay, 1992). Such judgments may not be qualitatively different from many predictions that people make in everyday life.

Experience-Based Monitoring

Experience-based metacognitive judgments, in contrast, are assumed to entail a qualitatively different process from that underlying theory-based judgments. Consider, for example, the TOT experience. The strong conviction that one knows the elusive target is based on a sheer subjective feeling. That feeling, however, appears to be the product of an inferential process that involves the application of nonanalytic heuristics (see Jacoby & Brooks, 1984; Kelley & Jacoby, 1996a; Koriat & Levy-Sadot, 1999) that operate below full consciousness and give rise to a sheer subjective experience. Indeed, the idea that subjective experience can be influenced and shaped by unconscious inferential processes has received support in the work of Jacoby, Kelley, Whittlesea, and their associates (see Kelley & Jacoby, 1998; Whittlesea, 2004). Koriat (1993) argued that the nonanalytic, unconscious basis of metacognitive judgments is responsible for the phenomenal quality of the feeling of knowing as representing an immediate, unexplained intuition, similar to that which is associated with the experience of perceiving (see Kahneman, 2003). According to this view, sheer subjective experience, which lies at the core of conscious awareness, is in fact the end product of processes that lie below awareness.

Several cues have been proposed as determinants of JOLs, FOK, and subjective confidence. These cues have been referred to collectively as “mnemonic” cues (Koriat, 1997). With regard to JOLs and FOK, these cues include the ease or fluency of processing of a presented item (Begg et al., 1989), the familiarity of the cue that serves to probe memory (Metcalfe, Schwartz, & Joaquim, 1993; Reder & Ritter, 1992; Reder & Schunn, 1996), the accessibility of pertinent partial information about a solicited memory target (Dunlosky & Nelson, 1992; Koriat, 1993; Morris, 1990), and the ease with which information comes to mind (Kelley & Lindsay, 1993; Koriat, 1993; Mazzoni & Nelson, 1995). Subjective confidence in the correctness of retrieved information has also been claimed to rest on the ease with which information is accessed and on the effort experienced in reaching a decision (Kelley & Lindsay, 1993; Nelson & Narens, 1990; Robinson & Johnson, 1998; Zakay & Tuvia, 1998).

These cues differ in quality from those underlying theory-based judgments. Whereas the latter judgments draw upon the content of domain-specific beliefs and knowledge that are retrieved from memory, the former rely on contentless mnemonic cues that pertain to the quality of processing, in particular, the fluency with which information is encoded and retrieved. As Koriat and Levy-Sadot (1999) argued, “The cues for feelings of knowing, judgments of learning or subjective confidence lie in structural aspects of the information processing system. This system, so to speak, engages in a self-reflective inspection of its own operation and uses the ensuing information as a basis for metacognitive judgments” (p. 496).

Consider experience-based JOLs. These have been claimed to rely on the ease with which the items are encoded during learning or on the ease with which they are retrieved. Both of these types of cues become available in the course of learning and disclose the memorability of the studied material. Such cues have been assumed to give rise to a sheer feeling of knowing. Indeed, there is evidence suggesting that JOLs monitor the ease with which studied items are processed during encoding (Begg et al., 1989; Koriat, 1997; Matvey et al., 2001). For example, Begg et al. (1989) reported results suggesting that JOLs are sensitive to several attributes of words (e.g., concreteness-abstractness) that affect ease of processing. Other findings suggest that JOLs are affected by the ease and probability with which the to-be-remembered items are retrieved during learning (Benjamin & Bjork, 1996; Benjamin, Bjork, & Schwartz, 1998; Koriat & Ma’ayan, 2005). For example, Hertzog, Dunlosky, Robinson, and Kidder (2003) reported that JOLs increased with the speed with which an interactive image was formed between the cue and the target in a paired-associates task. Similarly, Matvey et al. (2001) found that JOLs increased with increasing speed of generating the targets to the cues at study. These results are consistent with the view that JOLs are based on mnemonic cues pertaining to the fluency of encoding or retrieving to-be-remembered items during study.

With regard to FOK judgments, several heuristic-based accounts have been proposed. According to the cue-familiarity account, first advanced by Reder (1987; see also Metcalfe et al., 1993), FOK is based on the familiarity of the pointer (e.g., the question, the cue term in a paired associate, etc.; see Koriat & Lieblich, 1977) that serves to probe memory (Reder, 1987). Reder argued that a fast, preretrieval FOK is routinely and automatically made in response to the familiarity of the terms of a memory question to determine whether the solicited answer exists in memory. This preliminary FOK can guide the question-answering strategy. Indeed, the latency of speeded FOK judgments was found to be shorter than that of providing an answer. Furthermore, in several studies, the advance priming of the terms of a question was found to enhance speeded, preliminary FOK judgments without correspondingly increasing the probability of recall or recognition of the answer (Reder, 1987, 1988). Schwartz and Metcalfe (1992) extended Reder’s paradigm to show that cue priming also enhances (unspeeded) FOK judgments elicited following recall failure. Additional evidence for the cue-familiarity account comes from studies using a proactive-interference paradigm (Metcalfe et al., 1993). Remarkable support was also obtained using arithmetic problems: When participants made fast judgments as to whether they knew the answer to an arithmetic problem and could retrieve it, or whether they had to compute it, Know judgments were found to increase with increasing frequency of previous exposures to the same parts of the problem, not with the availability of the answer in memory (Reder & Ritter, 1992). This was true even when participants did not have enough time to retrieve an answer (Schunn, Reder, Nhouyvanisvong, Richards, & Stroffolino, 1997; see Nhouyvanisvong & Reder, 1998, for a review).

Also consistent with the cue-familiarity account are the results of studies of the feeling-of-not-knowing. Glucksberg and McCloskey (1981) and Klin, Guzman, and Levine (1997) reported results suggesting that lack of familiarity can serve as a basis for determining that something is not known. Increasing the familiarity of questions for which participants did not know the answer increased the latency of Don’t Know responses as well as the tendency to erroneously make a Know response.

According to the accessibility account of FOK, in contrast, FOK is based on the overall accessibility of pertinent information regarding the solicited target (Koriat, 1993). This account assumes that monitoring does not precede retrieval but follows it: It is by trying to retrieve a target from memory that a person can appreciate whether the target is “there” and worth continuing to search for. This occurs because, even when retrieval fails, people may still access a variety of partial clues and activations, such as fragments of the target, semantic and episodic attributes, and so on (see Koriat, Levy-Sadot, Edry, & de Marcas, 2003; Miozzo & Caramazza, 1997). These partial clues may give rise to a sheer feeling that one knows the answer. An important assumption of the accessibility account is that participants have no direct access to the accuracy of the partial clues that come to mind, and therefore both correct and wrong partial clues contribute to the FOK.

Support for the accessibility account comes from a study on the TOT state (Koriat & Lieblich, 1977). An analysis of the questions that tend to induce an overly high FOK suggested that the critical factor is the amount of information they tend to elicit. For example, questions that contain redundancies and repetitions tend to produce inflated feelings of knowing, as do questions that activate many “neighboring” answers. Thus, accessibility would seem to be a global, unrefined heuristic that responds to the mere amount of information irrespective of its correctness. Because people can rarely specify the source of partial information, they can hardly escape the contaminating effects of irrelevant clues by attributing them to their source. Such irrelevant clues sometimes precipitate a strong illusion of knowing (Koriat, 1995, 1998a) or even an illusory TOT state – reporting a TOT state even in response to questions that have no real answers (Schwartz, 1998), possibly because of the activations that they evoke.

Indeed, Schwartz and Smith (1997) observed that the probability of reporting a TOT state about the name of a fictitious animal increased with the amount of information provided about that animal, even when that information did not contribute to the probability of recalling the animal’s name. In addition, FOK judgments following a commission error (producing a wrong answer) are higher than those following an omission error (Koriat, 1995; Krinsky & Nelson, 1985; Nelson & Narens, 1990), suggesting that FOK judgments are sensitive to the mere accessibility of information.

In Koriat’s (1993) study, after participants studied a nonsense string, they attempted to recall as many of the letters as they could and then provided FOK judgments regarding the probability of recognizing the correct string among lures. The more letters participants could access, the stronger was their FOK, regardless of the accuracy of their recall. When the number of letters accessed was held constant, FOK judgments also increased with the ease with which information came to mind, as indexed by recall latency.

If both correct and incorrect partial information contribute equally to the feeling that one knows the elusive memory target, how is it that people can nevertheless monitor their knowledge accurately? According to Koriat (1993), this happens because much of the information that comes spontaneously to mind (around 90%; see Koriat & Goldsmith, 1996a) is correct. Therefore, the total amount of partial information accessible is a good cue for recalling or recognizing the correct target. Thus, the accuracy of metamemory is a byproduct of the accuracy of memory: Memory is by and large accurate in the sense that what comes to mind is much more likely to be correct than wrong.
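The logic of this argument can be sketched in a toy simulation. All of the numbers below (clue counts, the 90% rate, the success criterion) are illustrative assumptions, not data from Koriat’s studies; the point is only that a heuristic tracking the sheer number of accessed clues, blind to their accuracy, still predicts memory success when most accessed clues happen to be correct.

```python
import random

random.seed(1)

def retrieval_attempt():
    """One simulated retrieval attempt (all parameters are assumptions)."""
    n_clues = random.randint(0, 8)                      # overall accessibility
    n_correct = sum(random.random() < 0.9 for _ in range(n_clues))
    fok = n_clues                                       # heuristic: sheer amount, accuracy-blind
    success = n_correct >= 3                            # recall needs enough correct fragments
    return fok, success

trials = [retrieval_attempt() for _ in range(10_000)]
fok_when_success = [f for f, s in trials if s]
fok_when_failure = [f for f, s in trials if not s]

# Although FOK here ignores clue accuracy entirely, it is markedly
# higher on trials where recall will succeed, because accessed clues
# are mostly correct.
print(sum(fok_when_success) / len(fok_when_success) >
      sum(fok_when_failure) / len(fok_when_failure))
```

Under these assumptions, mean FOK is reliably higher on to-be-recalled items, mirroring the claim that metamemory accuracy is a byproduct of memory accuracy.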

A third account assumes the combined operation of the familiarity and accessibility heuristics. According to this account, both heuristics contribute to FOK, but whereas the effects of familiarity occur early in the microgenesis of FOK judgments, those of accessibility occur later, and only when cue familiarity is sufficiently high to drive the interrogation of memory for potential answers (Koriat & Levy-Sadot, 2001; Vernon & Usher, 2003). This account assumes that familiarity, in addition to affecting FOK judgments directly, also serves as a gating mechanism: When familiarity is high, participants probe their memory for the answer, and the amount of information accessible then affects their FOK judgments. When familiarity is low, the effects of potential accessibility on FOK are more limited.

It should be noted, however, that results obtained by Schreiber and Nelson (1998) question the idea that FOK judgments are sensitive to the mere accessibility of partial clues about the target. Their results indicate that FOK decreases with the number of pre-experimental, neighboring concepts that are linked to a cue, suggesting that these judgments are sensitive to the competition between the activated elements.

Subjective confidence in the correctness of one’s answers has also been assumed to rest sometimes on mnemonic cues deriving from the process of recalling or selecting an answer. Thus, people express stronger confidence in answers that they retrieve more quickly, whether those answers are correct or incorrect (Nelson & Narens, 1990). Similarly, in a study by Kelley and Lindsay (1993), retrieval fluency was manipulated through priming. Participants were asked to answer general information questions and to indicate their confidence in the correctness of their answers. Prior to this task, participants read a series of words, some of which were correct answers and some of which were plausible but incorrect answers to the questions. This prior exposure increased the speed and probability with which those answers were provided in the recall test and, in parallel, enhanced confidence in the correctness of those answers. Importantly, these effects were observed for both correct and incorrect answers. These results support the view that retrospective confidence is based in part on a simple heuristic: Answers that come to mind easily are more likely to be correct than those that take longer to retrieve.

The imagination inflation effect also illustrates the heuristic basis of confidence judgments. Asking participants to imagine certain childhood events increased their confidence that these events had indeed happened in the past (Garry, Manning, Loftus, & Sherman, 1996). Merely asking about an event twice also increased subjective confidence. Possibly, imagining an event and attempting to recall it increase its retrieval fluency, which in turn contributes to the confidence that the event actually occurred (see also Hastie, Landsman, & Loftus, 1978).

In sum, although metacognitive judgments may be based on explicit inferences that draw upon a priori beliefs and knowledge, much of the recent evidence points to the heuristic basis of such judgments, suggesting that feelings of knowing are based on the application of nonanalytic heuristics that operate below conscious awareness. These heuristics rely on mnemonic cues pertaining to the quality of processing and result in a sheer noetic experience. Thus, it would seem that sheer subjective feelings, such as the feeling of knowing, which are at the core of subjective awareness, are the product of unconscious processes (Koriat, 2000b).

The distinction between information-based and experience-based processes has important implications that extend beyond metacognition. It shares some features with the old distinction between reason and emotion (see Damasio, 1994) but differs from it. It implies a separation between two components or states of consciousness: on the one hand, sheer subjective feelings and intuitions that have a perceptual-like quality and, on the other hand, reasoned cognitions that are grounded in a network of beliefs and explicit memories. It is a distinction between what one “feels” and “senses” and what one “knows” or “thinks.” Extensive research in both cognitive psychology and social psychology (e.g., Jacoby & Whitehouse, 1989; Strack, 1992) indicates that these two components of conscious awareness are not only dissociable but may actually conflict with each other, pulling judgments and behavior in opposite directions (Denes-Raj & Epstein, 1994). The conflict between these components is best illustrated by correction phenomena: When people realize that their subjective experience has been contaminated, they tend to change their judgments so as to correct for the assumed effects of that contamination (Strack, 1992).

Dissociations between Knowing and the Feeling of Knowing

The clearest evidence in support of the idea that metacognitive judgments are based on inference from cues rather than on direct access to memory traces comes from observations documenting a dissociation between subjective and objective indexes of knowing. Several such dissociations have been reported. These dissociations also bring to the fore the effects of specific mnemonic cues on metacognitive judgments.

With regard to JOLs, Begg et al. (1989) found that high-frequency words, presumably owing to their fluent processing, yielded higher JOLs but poorer recognition memory than low-frequency words (see also Benjamin, 2003). Narens, Jameson, and Lee (1994) reported that subthreshold priming of the target enhanced JOLs, perhaps because it facilitated the processing of the target, although it did not affect eventual recall.

Bjork (1999) described several conditions of learning that enhance performance during learning but impair long-term retention and/or transfer. According to Bjork and Bjork (1992), these manipulations facilitate “retrieval strength” but not “storage strength.” As a result, learners, fooled by their own performance during learning, may experience an illusion of competence, resulting in inflated predictions about their future performance. For example, massed practice typically yields better performance than spaced practice in the short term, whereas spaced practice yields considerably better performance than massed practice in the long term. Massed practice, then, has the potential of leading learners to overestimate their future performance. Indeed, Zechmeister and Shaughnessy (1980) found that words presented twice produced higher JOLs when their presentation was massed than when it was distributed, although the reverse pattern was observed for recall. A similar pattern was reported by Simon and Bjork (2001) using a motor-learning task: Participants asked to learn each of several movement patterns under blocked conditions predicted better performance than when the patterns were learned under random (interleaved) conditions, whereas actual performance exhibited the opposite pattern.

Benjamin et al. (1998) reported several experiments documenting a negative relation between recall predictions and actual recall performance, presumably deriving from reliance on retrieval fluency under conditions in which retrieval fluency was a misleading cue for future recall. For example, they had participants answer general information questions and assess the likelihood that they would be able to free recall each answer in a later test. The more rapidly participants retrieved an answer to a question, the higher was their estimate that they would be able to free recall that answer at a later time. In reality, however, the opposite was the case.

Another type of dissociation was reported by Koriat, Bjork, Sheffer, and Bar (2004). They speculated that, to the extent that JOLs are based on processing fluency at the time of study, they should be insensitive to the expected time of testing, because the processing fluency of an item at encoding should not be affected by when testing is expected. Indeed, when participants made JOLs for tests expected either immediately after study, a day after study, or a week after study, JOLs were entirely indifferent to the expected retention interval, although actual recall exhibited a typical forgetting function. This pattern resulted in a dissociation such that predicted recall matched actual recall very closely for immediate testing; for a week’s delay, however, participants predicted over 50% recall, whereas actual recall was less than 20%.

That study also demonstrated the importance of distinguishing between experience-based and theory-based JOLs: When a new group of participants was presented with all three retention intervals and asked to estimate how many words they would recall at each interval, their estimates closely mimicked the forgetting function exhibited by the first group’s actual recall. Thus, the effects of forgetting on recall performance seem to be reflected in memory predictions only under conditions that activate participants’ beliefs about memory.

Dissociations have also been reported between FOK judgments and actual memory performance. First are the findings in support of the cue-familiarity account reviewed above, which indicate that manipulations that enhance the familiarity of the terms of a question enhance the FOK judgments associated with that question without correspondingly affecting actual recall performance. A similar dissociation, inspired by the accessibility account, was demonstrated by Koriat (1995): The results of that study suggest that FOK judgments for general information questions tend to be accurate as long as these questions bring to mind more correct than incorrect partial information. Deceptive questions (Fischhoff, Slovic, & Lichtenstein, 1977), however, which bring to mind more incorrect than correct information, produce unduly high FOK judgments following recall failure and, in fact, yield a dissociation to the extent that FOK judgments are negatively correlated with subsequent recognition memory performance.

With regard to confidence judgments, Chandler (1994) presented participants with a series of target and non-target stimuli, each consisting of a scenic nature picture. In a subsequent recognition memory test, a dissociation was observed such that targets for which a similar stimulus existed in the non-target series were recognized less often, but were endorsed with stronger confidence, than targets for which no similar non-target counterpart was included. Thus, seeing a related stimulus seems to impair memory while enhancing confidence.

Busey, Tunnicliff, Loftus, and Loftus (2000) had participants study a series of faces under different luminance conditions. For faces that had been studied in a dim condition, testing in a bright condition reduced recognition accuracy but increased confidence, possibly because it enhanced fluent processing of the faces during testing.

In sum, several researchers, motivated by the cue-utilization view of metacognitive judgments, have deliberately searched for conditions that produce a dissociation between memory and metamemory. Interestingly, all of the manipulations explored act in one direction: inflating metacognitive judgments relative to actual memory performance. Some of the experimental conditions found to engender illusions of knowing are ecologically unrepresentative, even contrived. However, the demonstrated dissociations clearly speak against the notion that metacognitive judgments rest on privileged access to the contents of one’s own memory.

The Validity of Metacognitive Judgments

How valid are subjective feelings of knowing in monitoring actual knowledge? How accurate are people’s introspections about their memory? Earlier research sought to establish a correspondence between knowing and the feeling of knowing in an attempt to support the trace-access view of metacognitive judgments. Later studies, in contrast, inspired by the inferential view, concentrated on producing evidence for miscorrespondence and dissociation, as just reviewed. Although the conditions used in these studies may not be ecologically representative, the results nevertheless suggest that the accuracy of metacognitive judgments is limited. Furthermore, these results point to the need to clarify the reasons for accuracy and inaccuracy and to specify the conditions that affect the degree of correspondence between subjective and objective measures of knowing.

Two aspects of metacognitive accuracy must be distinguished. The first is calibration (Lichtenstein et al., 1982), also called “bias” or “absolute accuracy” (see Nelson & Dunlosky, 1991), which refers to the correspondence between mean metacognitive judgments and mean actual memory performance and reflects the extent to which metacognitive judgments are realistic. For example, if confidence judgments are elicited in terms of probabilities, then the mean probability assigned to all the answers in a list is compared to the proportion of correct answers. This comparison can indicate whether probability judgments are well calibrated or whether they disclose an overconfidence bias (inflated confidence relative to performance) or an underconfidence bias. Calibration or bias can also be assessed by eliciting global or aggregate predictions (Hertzog, Kidder, Powell-Moman, & Dunlosky, 2002; Koriat, Sheffer, & Ma’ayan, 2002; Liberman, 2004), for example, by asking participants to estimate how many answers they got right and comparing that estimate to the actual number of correct answers.
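In code, the calibration (bias) computation for probability judgments is simply a comparison of two means. The judgments and outcomes below are invented for illustration:

```python
# Hypothetical data: one confidence judgment (as a probability) per
# answer, and whether each answer was actually correct.
confidence = [0.90, 0.80, 0.95, 0.70, 0.85, 0.60]
correct    = [1,    0,    1,    0,    1,    1]

mean_confidence = sum(confidence) / len(confidence)    # 0.80
proportion_correct = sum(correct) / len(correct)       # ~0.67
bias = mean_confidence - proportion_correct            # > 0: overconfidence

print(round(mean_confidence, 2), round(proportion_correct, 2), round(bias, 2))
```

Here mean confidence (.80) exceeds the proportion correct (.67), so this hypothetical participant would be described as overconfident by .13.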

It should be stressed that calibration can be evaluated only when judgments and performance are measured on equivalent scales. Thus, for example, if confidence judgments are made on a rating scale, calibration cannot be evaluated unless some assumptions are made (e.g., Mazzoni & Nelson, 1995).

Such is not the case for the second aspect of metacognitive accuracy, resolution (or relative accuracy). Resolution refers to the extent to which metacognitive judgments are correlated with memory performance across items. This aspect is commonly indexed by a within-subject gamma correlation between judgments and performance (Nelson, 1984). For example, in the case of JOLs and FOK judgments, resolution reflects the extent to which a participant can discriminate between items that she will recall and those that she will not. In the case of confidence, it reflects the ability to discriminate between correct and incorrect answers.
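The gamma index advocated by Nelson (1984) is the Goodman-Kruskal gamma, computed from concordant and discordant item pairs. A minimal sketch, using invented JOL data, might look like this:

```python
from itertools import combinations

def gamma(judgments, outcomes):
    """Goodman-Kruskal gamma between metacognitive judgments (e.g., JOLs)
    and binary memory outcomes (1 = recalled, 0 = not). Tied pairs are
    ignored; returns (C - D) / (C + D) over untied pairs."""
    concordant = discordant = 0
    for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
        d = (j1 - j2) * (o1 - o2)
        if d > 0:
            concordant += 1
        elif d < 0:
            discordant += 1
    pairs = concordant + discordant
    return (concordant - discordant) / pairs if pairs else float("nan")

# A hypothetical learner whose higher JOLs mostly go with recalled items:
jols     = [90, 80, 60, 40, 20, 10]   # percent judgments, one per item
recalled = [1,  1,  0,  1,  0,  0]
print(round(gamma(jols, recalled), 2))   # 1.0 would be perfect resolution
```

In this example one discordant pair (the 60% item that was forgotten while the 40% item was recalled) pulls gamma below 1, illustrating imperfect but substantial resolution.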

The distinction between calibration and resolution is important. For example, in monitoring one’s own competence while preparing for an exam, calibration is pertinent to the decision of when to stop studying: Overconfidence may lead to spending less time and effort than are actually needed. Resolution, in turn, is relevant to the decision of how to allocate time between different parts of the material. Importantly, resolution can be high, even perfect, when calibration is very poor, and calibration and resolution may be affected differentially. For example, Koriat et al. (2002) observed that practice studying the same list of items improves resolution but impairs calibration, instilling underconfidence.

We should note that much of the experimental work on the accuracy of JOLs and FOK judgments has focused on resolution. In contrast, research on confidence judgments, primarily the work carried out within the judgment and decision tradition, has concentrated on calibration.

With regard to JOLs elicited during study, the results of several investigations indicate that, by and large, item-by-item JOLs are well calibrated on the first study-test trial (e.g., Dunlosky & Nelson, 1994; Mazzoni & Nelson, 1995). Judgments of comprehension, in contrast, tend to be highly inflated. One reason for this is that in monitoring comprehension people assess their familiarity with the general domain of the text instead of assessing the knowledge gained from that text (Glenberg, Sanocki, Epstein, & Morris, 1987).

Two interesting trends have been reported with regard to the calibration of JOLs. First is the aggregate effect. When learners are asked to provide an aggregate judgment (i.e., to predict how many items they will recall), their estimates, when transformed into percentages, are substantially lower than their item-by-item judgments. Whereas the latter judgments tend to be relatively well calibrated or even slightly inflated, aggregate judgments tend to yield underconfidence (Koriat et al., 2002, 2004; Mazzoni & Nelson, 1995). A similar effect has been observed for confidence judgments (Griffin & Tversky, 1992).

Second is the underconfidence-with-practice (UWP) effect (Koriat et al., 2002): When learners are presented with the same list of items for several study-test cycles, their JOLs exhibit relatively good calibration on the first cycle, with a tendency toward overconfidence. However, a shift toward marked underconfidence occurs from the second cycle on. The UWP effect has proven very robust across several experimental manipulations and was obtained even for a task involving the monitoring of memory for self-performed tasks.

Turning next to resolution, the within-person correlation between JOLs and subsequent memory performance tends to be relatively low, particularly when the studied material is homogeneous. For example, the JOL-recall gamma correlation averaged .54 across several studies that used lists of paired associates that included both related and unrelated pairs (Koriat et al., 2002). In contrast, in Dunlosky and Nelson’s (1994) study, in which all pairs were unrelated, the gamma correlation averaged .20.

Monitoring seems to be particularly poor when it concerns one’s own actions. When participants are asked to perform a series of minitasks (so-called self-performed tasks) and to judge the likelihood of recalling these tasks in the future, the accuracy of their predictions is poor, and much lower than that for the study of a list of words (Cohen et al., 1991). It has been argued that people sometimes have special difficulty in monitoring their own actions (e.g., Koriat, Ben-Zur, & Druch, 1991).

However, two types of procedures have been found to improve JOL resolution. The first is repeated practice studying the same list of items. As noted earlier, although repeated practice impairs calibration, it does improve resolution (King, Zechmeister, & Shaughnessy, 1980; Koriat, 2002; Mazzoni et al., 1990). Thus, in Koriat et al.’s (2002) analysis, in which the JOL-recall gamma correlation averaged .54 for the first study-test cycle, that correlation reached .82 by the third study-test cycle. Koriat (1997) produced evidence suggesting that resolution improves with practice because (a) with increased practice studying a list of items, the basis of JOLs shifts from reliance on pre-experimental intrinsic attributes of the items (e.g., perceived difficulty) toward greater reliance on mnemonic cues (e.g., processing fluency) associated with the study of those items, and (b) mnemonic cues tend to have greater validity than intrinsic cues, being sensitive to the immediate processing of the items during study. Rawson, Dunlosky, and Thiede (2000) also observed an improvement in judgments of comprehension with repeated reading trials.

A second procedure that has proved effective in improving JOL accuracy is soliciting JOLs not immediately after studying each item but a few trials later. In paired-associate learning, delaying JOLs has been found to enhance JOL accuracy markedly (Dunlosky & Nelson, 1994; Nelson & Dunlosky, 1991). However, the delayed-JOL effect occurs only when JOLs are cued by the stimulus term of a paired associate, not when they are cued by an intact stimulus-response pair (Dunlosky & Nelson, 1992). It would seem that the condition in which JOLs are delayed and cued by the stimulus alone approximates the eventual criterion test, which requires access to information in long-term memory in response to a cue. Indeed, Nelson, Narens, and Dunlosky (2004) reported evidence suggesting that, in making delayed JOLs, learners rely heavily on the accessibility of the target, which is an effective predictor of subsequent recall. When JOLs are solicited immediately after study, the target is practically always retrievable, and hence its accessibility has little diagnostic value. There is still controversy, however, over whether the delayed-JOL effect reflects improved metamemory (Dunlosky & Nelson, 1992) or improved memory (Kimball & Metcalfe, 2003; Spellman & Bjork, 1992).

Koriat and Ma’ayan (2005) reported evidence suggesting that the basis of JOLs changes with delay: As the solicitation of JOLs is increasingly delayed, a shift occurs from reliance on encoding fluency (the ease with which an item is committed to memory) toward greater reliance on retrieval fluency (the ease with which the target comes to mind in response to the cue). In parallel, the validity of retrieval fluency in predicting recall increases with delay and becomes much better than that of encoding fluency. These results suggest that metacognitive judgments may be based on the flexible and adaptive utilization of different mnemonic cues according to their relative validity in predicting memory performance.

The results of Koriat and Ma’ayan suggest that repeated practice and delay may contribute to JOL accuracy by helping learners overcome biases that are inherent in encoding fluency. Koriat and Bjork (2005) described an illusion of competence – foresight bias – that arises from an inherent discrepancy between the standard conditions of learning and the standard conditions of testing. On a typical memory test, people are presented with a question and asked to produce the answer. In the corresponding learning condition, in contrast, the question and the answer generally appear in conjunction, meaning that the assessment of one’s future memory performance occurs in the presence of the answer. This difference has the potential of creating unduly high feelings of competence that derive from the failure to discount what one now knows. The situation is similar to what has been referred to as the “curse of knowledge” – the difficulty of discounting one’s privileged knowledge in judging what a more ignorant other knows (Birch & Bloom, 2003). Koriat and Bjork produced evidence suggesting that learners are particularly prone to a foresight bias in paired-associate cue-target learning when the target (present during study) brings to the fore aspects of the cue that are less apparent when the cue is later presented alone (at test). Subsequent experiments (Koriat & Bjork, 2006) indicated that foresight bias, and the associated overconfidence, can be alleviated by conditions that enhance learners’ sensitivity to mnemonic cues pertaining to the testing situation, including study-test experience (particularly test experience) and delaying JOLs.

Another way in which JOLs can be made more sensitive to the processes that affect performance during testing was explored by Guttentag and Carroll (1998) and Benjamin (2003). They obtained the typical result in which learners predict superior recognition memory for common words than for uncommon words (although in reality the opposite is the case). However, when during the recognition test learners made postdictions about the words that they could not remember (i.e., judged the likelihood that they would have recognized the word had they studied it), they actually postdicted superior recognition of the uncommon words. Furthermore, the act of making postdictions for one list of items was found to rectify predictions made for a second list of items studied later.


As far as the accuracy of FOK judgments is concerned, these judgments are relatively well calibrated (Koriat, 1993) and are moderately predictive of future recall and recognition. Thus, participants who are unable to retrieve a solicited item from memory can estimate with above-chance success whether they will be able to recall it in the future, produce it in response to clues, or identify it among distractors (e.g., Gruneberg & Monks, 1974; Hart, 1967). In a meta-analysis, Schwartz and Metcalfe (1994) found that the accuracy of FOK judgments in predicting subsequent recognition performance increases with the number of test alternatives. The highest correlations were found when the criterion test was recall.

If metacognitive judgments are based on internal, mnemonic cues, then their accuracy should depend on the validity of the cues on which they rest. However, only a few studies have examined the validity of the mnemonic cues that are assumed to underlie FOK judgments. Koriat (1993) showed that the amount of partial information retrieved about a memory target (regardless of its accuracy) is a good predictor of eventual memory performance, and its validity is equal to that of FOK judgments. Whereas the overall accessibility of information about a target (inferred from the responses of one group of participants) predicted the magnitude of FOK judgments following recall failure, the output-bound accuracy of that information was predictive of the accuracy (resolution) of those FOK judgments (Koriat, 1995). In a similar manner, cue familiarity may contribute to the accuracy of FOK judgments because in the real world cues and targets (or questions and answers) typically occur in tight conjunction; familiarity with the cue should therefore predict familiarity with the target (Metcalfe, 2000).

Turning finally to retrospective confidence judgments, these have received a great deal of attention in the area of judgment and decision making. When participants are presented with general knowledge questions and asked to assess the probability that their chosen answer is correct, an overconfidence bias is typically observed, with mean probability judgments markedly exceeding the proportion of correct answers (Lichtenstein et al., 1982). This overconfidence has been claimed to derive from a confirmation bias (see Koriat et al., 1980; Nickerson, 1998; Trope & Liberman, 1996) – the tendency to build toward a conclusion that has already been reached by selectively gathering or utilizing evidence that supports it. However, it has also been argued that part of the observed overconfidence may be due to researchers’ biased sampling of items – the tendency to include too many deceptive items. Indeed, when items are drawn randomly, the overconfidence bias decreases or disappears (Gigerenzer et al., 1991).

More recently, attempts have been made to show that confidence in a decision is based on the sampling of events from memory, with overconfidence resulting from biased sampling (Winman & Juslin, 2005). Indeed, Fiedler and his associates (Fiedler, Brinkmann, Betsch, & Wild, 2000; Freytag & Fiedler, 2006) used a sampling approach to explain several biases in judgment and decision making in terms of the notion of metacognitive myopia. According to this approach, many environmental entities have to be inferred from the information given in a sample of stimulus input. Because samples are rarely representative, an important metacognitive requirement would be to monitor sampling biases and control for them. People’s responses, however, are finely tuned to the information given in the sample, and biased judgments, including overconfidence, derive from the failure to consider the constraints imposed on the generation of the information sample.

It is important to note that over-confidence is not ubiquitous: When itcomes to sensory discriminations, partici-pants exhibit underconfidence, thinking thatthey did worse than they actually did(Bjorkman, Juslin, & Winman, 1993). Also,whereas item-by-item confidence judg-ments yield overconfidence, aggregate (orglobal) judgments (estimating the num-ber of correct answers), as noted earlier,typically yield underconfidence (Gigerenzeret al., 1991; Griffin & Tversky, 1992). The

Page 325: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: KAE0521857437c11 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 16, 2007 19:24

metacognition and consciousness 307

The underconfidence for aggregate judgments may derive in part from a failure to make an allowance for correct answers likely to result from mere guessing (Liberman, 2004).
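Liberman's point can be made concrete with a little arithmetic. The sketch below uses made-up numbers, and the function name and parameters are illustrative rather than drawn from the literature; it simply shows why the expected number of correct answers exceeds the number of items a person actually knows once guessing is factored in:

```python
def expected_score(p_known, n_items, n_alternatives=2):
    """Expected number of correct answers when unknown items are
    answered by guessing among n_alternatives options."""
    guess_rate = 1 / n_alternatives
    return n_items * (p_known + (1 - p_known) * guess_rate)

# Knowing 60% of 100 two-alternative items yields about 80 correct on
# average, so an aggregate estimate of "about 60" looks underconfident.
print(expected_score(0.6, 100))
```

A participant who estimates the number of items known, rather than the number likely to be answered correctly, will thus appear underconfident in aggregate judgments even with perfectly calibrated knowledge.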

A great deal of research has also been carried out on the confidence-accuracy (C-A) relation, with variable results. The general pattern that emerges from this research is that the C-A relation is quite strong when calculated within each participant (which is what was referred to as resolution), but very weak when calculated between participants (see Perfect, 2004). Consider the latter situation first. Research conducted in the domain of eyewitness testimony, focusing on the ability of participants to recall a particular detail from a crime or to identify the perpetrator in a lineup, has yielded low C-A correlations (Wells & Murray, 1984). That research has typically focused on a between-individual analysis, which is, perhaps, particularly relevant in a forensic context: It is important to know whether eyewitnesses can be trusted better when they are confident in their testimony than when they express low confidence. Similarly, if there are several witnesses, it is important to know whether the more confident among them are likely to be the more accurate. Thus, in this context the general finding is that a person's confidence in his or her memory is a poor predictor of the accuracy of that memory.

On the other hand, research focusing on within-person variation has typically yielded moderate-to-high C-A correlations. Thus, when participants answer a number of questions and for each question report their confidence in the correctness of the answer, the cross-item correlation between confidence and accuracy tends to be relatively high (e.g., Koriat & Goldsmith, 1996a). The same is true when the questions concern the episodic memory for a previously experienced event (Koriat, Goldsmith, Schneider, & Nakash-Dura, 2001). Thus, people can generally discriminate between answers (or memory reports) that are likely to be correct and those that are likely to be false.
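The within-person C-A relation in this literature is typically indexed by the Goodman-Kruskal gamma correlation computed over a participant's items. As a concrete illustration (the confidence and accuracy data below are invented), gamma compares every pair of items and counts concordant versus discordant orderings of confidence and correctness, ignoring ties:

```python
from itertools import combinations

def gamma_correlation(confidence, accuracy):
    """Goodman-Kruskal gamma between per-item confidence ratings and
    correctness (1 = correct, 0 = wrong) - the usual index of
    within-person resolution."""
    concordant = discordant = 0
    for (c1, a1), (c2, a2) in combinations(zip(confidence, accuracy), 2):
        if c1 == c2 or a1 == a2:
            continue  # tied pairs are ignored by gamma
        if (c1 - c2) * (a1 - a2) > 0:
            concordant += 1
        else:
            discordant += 1
    if concordant + discordant == 0:
        return 0.0
    return (concordant - discordant) / (concordant + discordant)

# A participant who consistently gives higher confidence to correct answers:
conf = [90, 40, 70, 20, 85, 55]
acc = [1, 0, 1, 0, 1, 0]
print(gamma_correlation(conf, acc))  # → 1.0
```

Perfect resolution (gamma = 1.0) means every correct answer received higher confidence than every wrong one; it says nothing about calibration, since uniformly inflated confidence ratings would leave gamma unchanged.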

Why are the between-participant correlations very low? Several studies suggest that these low correlations stem from the low level of variability among witnesses in experimental laboratory studies. Such studies typically maintain the same conditions across participants. In contrast, under naturalistic conditions the correlation is generally much higher, and it is that type of correlation that would seem to be of relevance in a forensic context (Lindsay, Read & Sharma, 1998). A second reason, mentioned earlier, is that retrospective confidence judgments tend to be based in part on participants' preconceptions about their ability in the domain tested, and these preconceptions tend to be of low validity when they concern eyewitness memory (e.g., lineup identification).

Several studies explored the subjective mnemonic cues that may mediate the within-person C-A correlation. These cues include retrieval latency and the perception of effortless retrieval. The correlation was higher for recall than for recognition, presumably because recall provides more cues pertaining to ease of retrieval than recognition (Koriat & Goldsmith, 1996a; Robinson, Johnson, & Herndon, 1997). Robinson, Johnson, and Robertson (2000) found that ratings of vividness and detail for a videotaped event contributed more strongly to confidence judgments than processing fluency and were also more diagnostic of memory accuracy. Attempts to enhance the C-A relation in eyewitness identification by inducing greater awareness of the thoughts and reasoning processes involved in the decision have been largely ineffective or even counterproductive (Robinson & Johnson, 1998).

In sum, the accuracy of metacognitive judgments has attracted a great deal of interest because of its theoretical and practical implications. The results are quite variable, although by and large JOLs, FOK judgments, and confidence ratings are moderately predictive of item differences in actual memory performance.

The Control Function of Metacognition

As noted earlier, much of the work in metacognition is predicated on the assumption that consciousness is not a mere epiphenomenon. Rather, subjective feelings and subjective judgments exert a causal role on behavior. In metacognition research this idea has been expressed in terms of the hypothesis that monitoring affects control (Nelson, 1996). Indeed, several observations suggest a causal link between monitoring and control, such that the output of monitoring serves to guide the regulation of control processes.

With regard to the online regulation of learning, it has been proposed that JOLs affect the choice of which items to relearn and how much time to allocate to each item. Indeed, it has been observed that under self-paced conditions, when learners are given the freedom to regulate the amount of time spent on each item, they tend to allocate more time to items that are judged difficult to learn than to those that are judged easier (for a review see Son & Metcalfe, 2000). It has been proposed that the effects of item difficulty on study time allocation are mediated by a monitoring process in which learners judge the difficulty of each item and then invest more effort in studying the judged-difficult items to compensate for their difficulty (Nelson & Leonesio, 1988).

Dunlosky and Hertzog (1998; see also Thiede & Dunlosky, 1999) proposed a discrepancy-reduction model to describe the relation between JOLs and study time allocation. Learners are assumed to monitor online the increase in encoding strength that occurs as more time is spent studying an item and to cease study when a desired level of strength has been reached. This level, which is referred to as the "norm of study" (Le Ny, Denhiere, & Le Taillanter, 1972), is preset on the basis of various motivational factors, such as the stress on accurate learning versus fast learning (Nelson & Leonesio, 1988). Thus, in self-paced learning, study continues until the perceived degree of learning meets or exceeds the norm of study.
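A minimal simulation can make the discrepancy-reduction account concrete. The learning-rate parameters and the diminishing-returns update rule below are illustrative assumptions of mine, not part of the published model; the only claim carried over from the text is that study on an item continues until perceived strength reaches the norm of study, so judged-difficult items receive more time:

```python
def self_paced_study(items, norm_of_study, max_time=50):
    """Sketch of discrepancy-reduction study-time allocation.
    `items` maps item names to learning rates (higher = easier);
    each item is studied until perceived strength reaches the norm."""
    study_time = {}
    for name, rate in items.items():
        strength, t = 0.0, 0
        while strength < norm_of_study and t < max_time:
            strength += rate * (1 - strength)  # diminishing returns
            t += 1
        study_time[name] = t
    return study_time

times = self_paced_study({"easy": 0.5, "medium": 0.25, "hard": 0.1},
                         norm_of_study=0.8)
print(times)  # harder items accumulate more study time
```

Raising the norm of study (e.g., when instructions stress accuracy over speed) lengthens study time for every item, which is the pattern Nelson and Leonesio (1988) observed.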

In their review of the literature, Son and Metcalfe (2000) found that indeed, in 35 of 46 published experimental conditions, learners exhibited a clear preference for studying the more difficult materials. There are two exceptions to this rule, however.

First, Thiede and Dunlosky (1999) showed that when learners are presented with an easy goal (e.g., to learn a list of 30 items with the aim of recalling at least 10 of them), they tended to choose the easier rather than the more difficult items for restudy. Thiede and Dunlosky took these results to imply a hierarchy of control levels: At a superordinate level, learners may plan to invest more effort in studying either the easier or the more difficult items. This strategy is then implemented at the subordinate level to control the amount of time allocated to each item and to select items for restudy.

Second, Son and Metcalfe (2000) had participants learn relatively difficult materials with the option to go back to materials that had previously been studied. Under high time pressure, participants allocated more study time to materials that were judged easy and interesting. When the time pressure was not so great, however, they tended to focus on the more difficult items.

These results indicate that study time allocation is also affected by factors other than the output of online monitoring. Indeed, other studies indicated, for example, that learners invest more study time when they expect a recall test than when they expect a recognition test (Mazzoni & Cornoldi, 1993) and when the instructions stress memory accuracy than when they stress speed of learning (Nelson & Leonesio, 1988). Also, the allocation of study time to a given item varies according to the incentive for subsequently recalling that item and according to the expected likelihood that the item will be tested later (Dunlosky & Thiede, 1998).

Altogether, these results suggest that study time allocation is guided by an adaptive strategy designed to minimize effort and improve learning.

With regard to FOK judgments, several studies indicated that they predict how long people continue searching for a memory target before giving up: When people feel that they know the answer or that the answer is on the tip of the tongue, they search longer than when they feel that they do not know the answer (Barnes et al., 1999; Costermans et al., 1992; Gruneberg, Monks, & Sykes, 1977; Schwartz, 2001). FOK judgments are also predictive of the speed of retrieving an answer, so that in the case of commission responses the correlation between FOK judgments and retrieval latency is positive, whereas for omission responses the correlation between FOK and the latency of the decision to end search is negative (see Nelson & Narens, 1990).

Search time is also affected by factors other than FOK judgments: When participants are penalized for slow responding, they tend to retrieve answers faster but produce more incorrect answers (Barnes et al., 1999).

As noted earlier, Reder (1987) proposed that preliminary FOK judgments also guide the selection of strategies for solving problems and answering questions. In her studies, the decision whether to retrieve a solution to an arithmetic problem (Know) or to compute it was affected by manipulations assumed to influence cue familiarity. These studies suggest that FOK judgments that are misled by cue familiarity can misguide the decision to retrieve or compute the answer.

Retrospective monitoring can also affect behavior. When people make an error in performing a task, they can detect it without external feedback and can often immediately correct their response. Following the detection of an error, people tend to adjust their speed of responding to achieve a desirable level of accuracy (Rabbitt, 1966).

Confidence judgments have also been shown to affect choice and behavior, and do so irrespective of their accuracy. As noted earlier, people are often overconfident in their knowledge. Fischhoff et al. (1977) showed that people had sufficient faith in their confidence judgments that they were willing to stake money on their validity.

Consider the finding, mentioned earlier, that when judging how well they have done on a test, people tend to base their judgments on their preconceptions about their abilities in the domain tested. Ehrlinger and Dunning (2003) reasoned that because women tend to perceive themselves as less scientifically talented than men, they should be expected to rate their performance on a quiz of scientific reasoning lower than men rate themselves. Such was indeed the case, although in reality there was no gender difference in actual performance. When asked later if they would like to participate in a science competition, women were more likely to decline, and their reluctance correlated significantly with their self-rated performance on the quiz. Thus, their choices were affected by their confidence even when confidence was unrelated to actual performance.

A systematic examination of the control function of confidence judgments was conducted by Koriat and Goldsmith (1994, 1996a,b) in their investigation of the strategic regulation of memory accuracy. Consider the situation of a person on the witness stand who is sworn to "tell the whole truth and nothing but the truth." To meet this requirement, that person should monitor the accuracy of every piece of information that comes to mind before deciding whether or not to report it. Koriat and Goldsmith proposed a model that describes the monitoring and control processes involved. The rememberer is assumed to monitor the subjective likelihood that each candidate memory response is correct and then compare that likelihood to a preset threshold on the monitoring output to determine whether or not to volunteer that response. The setting of the control threshold depends on the relative utility of providing as complete a report as possible versus as accurate a report as possible. Several results provided consistent support for this model. First, the tendency to report an answer was very strongly correlated with subjective confidence in the correctness of the answer (the intra-subject gamma correlations averaged more than .95; Koriat & Goldsmith, 1996b, Experiment 1; see also Kelley & Sahakyan, 2003). This result suggests that people rely completely on their subjective confidence in deciding whether to volunteer an answer or withhold it. In fact, participants were found to rely heavily on their subjective confidence even when answering a set of "deceptive" general knowledge questions, for which subjective confidence was quite undiagnostic of accuracy (Koriat & Goldsmith, 1996b, Experiment 2). Second, participants given a high accuracy incentive (e.g., "you win one point for each correct answer but lose all of your winnings if even a single answer is incorrect") adopted a stricter criterion than participants given a more moderate incentive (a 1:1 penalty-to-bonus ratio), suggesting that the strategic regulation of memory reporting is flexibly adapted to the emphasis on memory accuracy. Third, the option to volunteer or withhold responses (which is often denied in traditional memory experiments) allowed participants to boost the accuracy of what they reported, in comparison with a forced-report test. This increase occurred by sacrificing some of the correct answers; that is, at the expense of memory quantity performance. This implies that eyewitnesses generally cannot both "tell the whole truth" and "tell nothing but the truth," but must find a compromise between the two requirements. Importantly, however, the extent of the quantity-accuracy tradeoff was shown to depend critically on monitoring effectiveness: When monitoring resolution is very high (that is, when a person can accurately discriminate between correct and wrong answers), the accuracy of what is reported may be improved significantly under free-report conditions at little or no cost in quantity performance. Thus, in the extreme case when monitoring is perfect, a person should be able to exercise a perfect screening process, volunteering all correct items of information that come to mind and withholding all incorrect items.
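The core of this monitoring-and-control model can be sketched in a few lines. The confidence values below are invented, but the mechanism follows the text: each candidate answer is volunteered only if confidence in it clears a control threshold, and raising that threshold (a stricter accuracy incentive) buys accuracy at the cost of quantity:

```python
def free_report(answers, threshold):
    """Threshold model of free-report memory: volunteer an answer only
    when its subjective confidence reaches the control threshold.
    `answers` is a list of (confidence, is_correct) pairs.
    Returns (report accuracy, quantity as a share of all items)."""
    volunteered = [(c, ok) for c, ok in answers if c >= threshold]
    n_correct = sum(ok for _, ok in volunteered)
    quantity = n_correct / len(answers)
    accuracy = n_correct / len(volunteered) if volunteered else None
    return accuracy, quantity

answers = [(0.95, 1), (0.9, 1), (0.8, 1), (0.7, 0),
           (0.6, 1), (0.5, 0), (0.3, 0), (0.2, 0)]

# Forced report is equivalent to a threshold of zero: every answer counts.
print(free_report(answers, 0.0))   # accuracy 0.5, quantity 0.5
# A strict accuracy incentive raises accuracy but drops a correct answer.
print(free_report(answers, 0.75))  # accuracy 1.0, quantity 0.375
```

Note that in this toy data set confidence is diagnostic but imperfect: the stricter criterion screens out all wrong answers yet also withholds one correct answer held with low confidence, which is the quantity-accuracy tradeoff described above.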

Koriat and Goldsmith’s model wasapplied to study the strategic regulation ofmemory accuracy by school-aged children(Koriat et al., 2001). Even second-to-third-grade children were effective in enhancingthe accuracy of their testimony when giventhe freedom to volunteer or withhold ananswer under a 1:1 penalty-to-bonus ratio,and they were able to enhance the accu-racy of their reports even further when givenstronger incentives for accurate reporting.However, both the children in this study(see also Roebers, Moga, & Schneider, 2001)and elderly adults in other studies (Kelley &

Sahakyan, 2003 ; Pansky, Koriat, Goldsmith,& Pearlman-Avnion, 2002) were found to beless effective than young adults (college stu-dents) in utilizing the option to withholdanswers to enhance their accuracy. Theseresults have implications for the dependabil-ity of children’s testimony in legal settings.

Interestingly, results suggest that the relationship between monitoring and control, what Koriat and Goldsmith (1996b) termed "control sensitivity," may be impaired to some extent in aging (Pansky et al., 2002) and in certain psychotic disorders, such as schizophrenia (Danion, Gokalsing, Robert, Massin-Krauss, & Bacon, 2001; Koren et al., 2004). In the Koren et al. study, for instance, the correlation between confidence judgments in the correctness of a response and the decision to volunteer or withhold that response was highly diagnostic of the degree of insight and awareness that schizophrenic patients showed concerning their mental condition – more so than traditional measures of executive control, such as the Wisconsin Card Sorting Task. Patients exhibiting low control sensitivity were also less able to improve the accuracy of their responses when given the option to choose which answers to volunteer and which to withhold.

The research reviewed above has direct bearing on the question of how people can avoid false memories and overcome the contaminating effects of undesirable influences. Using fuzzy-trace theory as a framework, Brainerd et al. (2003) proposed a mechanism for false-memory editing that allows children and adults to reject false but gist-consistent events. The model also predicts the occurrence of erroneous recollection rejection, in which true events are inappropriately edited out of memory reports.

Payne, Jacoby, and Lambert (2004) investigated the ability of participants to overcome stereotype-based memory distortions when allowed the option of free report. Reliance on subjective confidence allowed participants to enhance their overall memory accuracy, but not to reduce stereotype bias. The results suggested that whereas subjective confidence monitors the accuracy of one's report, stereotypes distort memory through an unconscious-accessibility bias to which subjective confidence is insensitive. Hence the effects of stereotypes are difficult to control.

The work of Johnson and her associates on source monitoring (see Johnson, 1997; Johnson, Hashtroudi, & Lindsay, 1993) also has important implications for the editing of memory reports. According to the source-monitoring framework, there are several phenomenal cues that can be used by a rememberer to specify the source of a mental record, including such mnemonic cues as vividness, perceptual detail, and spatial and temporal information. Because mental experiences from different sources (e.g., perception versus imagination) differ on average in their phenomenal qualities (e.g., visual clarity), these diagnostic qualities can support source monitoring by using either a heuristically based process or a more strategic, systematic process. Both types of processes require setting criteria for making a judgment, as well as procedures for comparing activated phenomenal information to the criteria.

The broader implication of the work on the strategic regulation of memory accuracy (Koriat & Goldsmith, 1996b) is that, to investigate the complex dynamics between (a) memory (the quality of the information that is available to the rememberer), (b) monitoring, (c) control, and (d) overt accuracy and quantity performance, one must include a situation in which participants are free to decide what to report and what not to report. In fact, in everyday life people have great freedom in reporting an event from memory: They can choose what perspective to adopt, what to emphasize and what to skip, how much detail to provide, and so forth. Such strategic regulation entails complex monitoring and control processes that go beyond the decision to volunteer or withhold specific items of information, and these, too, deserve systematic investigation.

In fact, the conceptual framework of Koriat and Goldsmith was extended to incorporate another means by which people normally regulate the accuracy of what they report: control over the grain size (precision or coarseness) of the information that is reported (Goldsmith & Koriat, 1999; Goldsmith, Koriat, & Pansky, 2005; Goldsmith, Koriat, & Weinberg-Eliezer, 2002). For example, when not completely certain about the time of an event, a person may simply report that it occurred "late in the afternoon" rather than "at four-thirty." Neisser (1988) observed that when answering open-ended questions, participants tend to provide answers at a level of generality at which they are not likely to be mistaken. Of course, more coarsely grained answers, although more likely to be correct, are also less informative. Thus, Goldsmith et al. (2002) found that when participants are allowed to control the grain size of their report, they do so in a strategic manner, sacrificing informativeness (degree of precision) for the sake of accuracy when their subjective confidence in the more precise, informative answer is low, and taking into account the relative payoffs for accuracy and informativeness in choosing the grain size of their answers. Moreover, the monitoring and control processes involved in the regulation of memory grain size appear to be similar to those underlying the decision to volunteer or withhold specific items of information, implying perhaps the use of common metacognitive mechanisms. A more recent study by Goldsmith et al. (2005), which examined changes in the regulation of grain size over different retention intervals, also yielded results consistent with this model: Starting with the well-known finding that people often remember the gist of an event though they have forgotten its details, Goldsmith et al. (2005) asked whether rememberers might exploit the differential forgetting rates of coarse and precise information to strategically regulate the accuracy of the information that they report over time. The results suggested that when given control over the grain size of their answers, people tend to provide coarser answers at longer retention intervals, in an attempt to maintain a stable level of report accuracy.

In sum, the few studies concerning the control function of metacognition suggest that people rely heavily on their subjective, metacognitive feelings and judgments in choosing their course of action. In addition to the monitoring output, however, they also take into account a variety of other considerations, such as the goals of learning and remembering, time pressure, the emphasis on accuracy versus quantity, and the emphasis on accuracy versus informativeness.

The Effects of Metacognitive Regulation on Memory Performance

Given the dynamics of monitoring and control processes discussed so far, it is of interest to ask: To what extent does the self-regulation of one's processing affect actual memory performance? Only a few studies have examined this issue systematically. As noted earlier, under self-paced learning conditions, when participants are free to allocate study time to different items, they tend to divide their time unevenly among the items. Does the self-allocation of study time affect actual memory performance? Nelson and Leonesio (1988) coined the phrase "labor-in-vain effect" to describe the phenomenon that large increases in self-paced study time yielded little or no gain in recall. Specifically, they observed that the amount of self-paced study time increased substantially under conditions that emphasized accuracy in comparison with a condition that emphasized speed. However, the increase in study time resulted in little or no gain in recall.

Metcalfe and her associates (Metcalfe, 2002; Metcalfe & Kornell, 2003) examined systematically the effectiveness of the policy of study time allocation for enhancing memory performance. They found, for example, that learners allocated most time to medium-difficulty items and studied the easiest items first (in contrast to what would be expected from the discrepancy-reduction model; Dunlosky & Hertzog, 1998). When study time was experimentally manipulated, the best performance resulted when most time was given to the medium-difficulty items, suggesting that the strategy that people use under self-paced conditions is largely appropriate. These and other results were seen to accord with the region of proximal learning framework, according to which learning proceeds best by attending to concepts and events that are nearest to one's current understanding and only later going on to integrate items that are more difficult.

Thiede, Anderson, and Therriault (2003) used a manipulation that affected the learner's monitoring accuracy in studying text. They found that improved accuracy resulted in a more effective regulation of study and, in turn, in overall better test performance. Thus, learners seem to rely on their metacognitive feelings in regulating their behavior, and to the extent that these feelings are accurate, such self-regulation helps improve memory performance.

With regard to confidence judgments, as noted earlier, the work of Koriat and Goldsmith (1994, 1996b) indicates that when given the option of free report, people enhance their memory accuracy considerably in comparison to forced-report testing and do so by relying on the subjective confidence associated with each item that comes to mind. Because confidence is generally predictive of accuracy, reliance on confidence judgments is effective in enhancing accuracy when accuracy is at stake. However, the effective regulation of memory accuracy comes at the cost of reduced memory quantity, and both the increase in memory accuracy achieved under the free-report option and the reduction in memory quantity depend heavily on monitoring effectiveness.

Koriat and Goldsmith (1996b) evaluated the effectiveness of the participants' control policies given their actual levels of monitoring effectiveness. The participants were found to be quite effective in choosing a control policy that would maximize their joint levels of free-report accuracy and quantity performance, compared to an "optimal" control policy that could be applied directly, based on the confidence judgments assigned to the individual answers under forced report. The effectiveness of the participants' control of grain size in the Goldsmith et al. (2002) study was much less impressive, however, perhaps because of the greater complexity of the incentive structure (differential payoffs for correct answers at different grain sizes, and a fixed penalty for incorrect answers regardless of grain size). In fact, one of the interesting findings of that study was that participants seemed to adopt a simple "satisficing" heuristic based on the payoff (whether explicit or implicit) and confidence for the more precise, informative answer alone, rather than to compare the expected subjective utility (confidence multiplied by subjective payoff) of potential answers at different grain sizes. Monitoring effectiveness for the correctness of the answers at different grain sizes was, however, also relatively poor (see also Yaniv & Foster, 1997). Thus, it may be that there are limits on the complexity and efficiency of both monitoring and control processes, which in turn place limits on the performance benefits that can be achieved through such control.
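The contrast between the satisficing heuristic and the full expected-subjective-utility comparison can be illustrated as follows. All numbers, the threshold value, and the function names are invented for illustration; only the two decision policies themselves are taken from the account above:

```python
def choose_grain(precise, coarse, satisfice_threshold=0.75):
    """Compare two grain-size control policies. Each candidate answer
    is a (confidence, payoff) pair, where payoff stands for the
    informativeness of reporting at that grain size.
    Returns the grain chosen by each policy."""
    # Satisficing: consider only the precise answer; report it iff its
    # confidence clears the threshold, otherwise fall back to coarse.
    satisficing = "precise" if precise[0] >= satisfice_threshold else "coarse"
    # Expected subjective utility: confidence x payoff for each
    # candidate grain size; report whichever is larger.
    eu = {"precise": precise[0] * precise[1],
          "coarse": coarse[0] * coarse[1]}
    utility_based = max(eu, key=eu.get)
    return satisficing, utility_based

# "At four-thirty" (precise, informative but uncertain) versus
# "late in the afternoon" (coarse, nearly certain but less informative):
print(choose_grain(precise=(0.4, 10), coarse=(0.95, 3)))
```

With these numbers the two policies diverge: satisficing retreats to the coarse answer because confidence in the precise one is low, whereas the utility comparison still favors the precise answer given its much higher informativeness payoff, which is why the two accounts are empirically distinguishable.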

In sum, only a few studies have explored the effectiveness of metacognitive monitoring and control processes in enhancing actual memory performance. More work in this vein is needed.

Metacognition and Consciousness: Some General Issues

In concluding this chapter, I would like to comment on how the research on metacognition relates to some of the fundamental issues regarding consciousness and its role in behavior. I discuss three issues: the determinants of subjective experience, the control function of subjective experience, and the cause-and-effect relation between consciousness and behavior.

The Genesis of Subjective Experience

The study of the bases of metacognitive judgments and their accuracy brings to the fore an important process that seems to underlie the shaping of subjective experience. The unique qualities of that process are best highlighted by contrasting experience-based and theory-based judgments. Similar contrasts have been proposed by researchers in both cognitive psychology and social psychology who drew a distinction between two general modes of cognition (see Chaiken & Trope, 1999), and each of these contrasts highlights a particular dimension. Thus, different researchers have conceptualized the distinction in terms of such polarities as Nonanalytic versus Analytic cognition (Jacoby & Brooks, 1984), Associative versus Rule-Based systems (Sloman, 1996), Experiential versus Rational systems (Epstein & Pacini, 1999), Impulsive versus Reflective processes (Strack & Deutsch, 2004), Experience-Based versus Information-Based processes (Kelley & Jacoby, 1996a; Koriat & Levy-Sadot, 1999), Heuristic versus Deliberate modes of thought (Kahneman, 2003), and Experiential versus Declarative information (Schwarz, 2004). Stanovich and West (2000) used the somewhat more neutral terms System 1 versus System 2, which have been adopted by Kahneman (2003) in describing his work on judgmental biases.

In this chapter I focused on the contrast between theory-based and experience-based judgments, which seems to capture best the findings in metacognition. As far as metacognitive judgments are concerned, the important assumption is that both experience-based and theory-based judgments are inferential in nature. They differ, however, in two respects. First, theory-based judgments draw upon the content of declarative (semantic and/or episodic) information that is typically stored in long-term memory. Experience-based judgments, in contrast, are assumed to rely on mnemonic cues stemming from the current processing of the task at hand. Such cues as fluency of processing or ease of access pertain to the quality and efficacy of object-level processes as revealed online. Hence, as Koriat (1993) argued, experience-based FOK judgments, for example, monitor the information accessible in short-term memory rather than the information available in long-term memory. It follows that the accuracy of theory-based judgments depends on the validity of the theories and knowledge on which they are based, whereas the accuracy of experience-based judgments should depend on the diagnosticity of the effective mnemonic cues.

Second, they differ in the nature of the underlying process. Theory-based judgments are assumed to rely on an explicitly inferential process: The process is assumed to be deliberate, analytic, slow, effortful, and largely conscious. In contrast, experience-based judgments involve a two-step process: A fast, unconscious, automatic inference results in a sheer subjective experience, and that subjective experience can then serve as the basis for noetic judgments. Therefore, as Koriat and Levy-Sadot (1999) argued, the processes that take off from subjective experience generally have no access to the processes that produced that experience in the first place.

It is experience-based metacognitive judgments that have attracted the attention of memory researchers who asked the question, How do we know that we know? (e.g., Hart, 1965; Tulving & Madigan, 1970). Experience-based judgments have the quality of immediate, direct impressions, similar to what would follow from the trace-access view of metacognitive judgments. However, as argued earlier, this phenomenal quality could be explained in terms of the idea that experience-based judgments are based on an inferential process that is not available to consciousness, and hence the outcome of that process has the phenomenal quality of a direct, self-evident intuition (see Epstein & Pacini, 1999).

Thus, the work on metacognition nicely converges on the proposals advanced by Jacoby and Kelley (see Kelley & Jacoby, 1993) and by Whittlesea (2002, 2004) on the shaping of subjective experience. These proposals also parallel ideas in the area of social psychology on the genesis of various subjective feelings (see Bless & Forgas, 2000; Strack, 1992). However, although it is theoretically comforting that the distinction between experience-based and theory-based metacognitive processes converges on similar distinctions that have emerged in other domains, a great deal can be gained by attempting to place the metacognitive distinction within a broader framework that encompasses other similar distinctions. For example, research in social psychology suggests that the interplay between declarative and experiential information is greater than has been realized so far (see Schwarz, 2004). However, little is known about the possibility that a similar interplay between the effects of theories and knowledge and those of mnemonic cues also occurs with regard to metacognitive judgments. Also, little research has been carried out that examines the possible effects of attribution and misattribution on metacognitive judgments. Furthermore, processing fluency has been shown to affect a variety of phenomenal experiences, such as liking, truth judgments, recognition decisions, and so on. Again, it is important to examine noetic feelings in the context of these other phenomenal experiences.

The Control Function of Subjective Experience

The issue of metacognitive control emerges most sharply when we ask, What is the status of metacognitive monitoring and control processes within the current distinction between implicit and explicit cognition? In light of the extensive research on both of these areas, one would expect the answer to be quite straightforward. However, such is not the case. In an edited volume on Implicit Memory and Metacognition (Reder, 1996), the discussions of the participants revealed a basic ambivalence: Kelley and Jacoby (1996b) claimed that “metacognition and implicit memory are so similar as to not be separate topics” (p. 287). Funnell, Metcalfe, and Tsapkini (1996), on the other hand, concluded that “the judgment of what and how much you know about what you know or will know is a classic, almost definitional, explicit task” (p. 172). Finally, Reder and Schunn (1996) stated, “Given that feeling of knowing, like strategy selection, tends to be thought of as the essence of a metacognitive strategy, it is important to defend our claim that this rapid feeling of knowing is


metacognition and consciousness 315

actually an implicit process rather than an explicit process” (p. 50).

Koriat (1998b, 2000b) argued that this ambivalence actually discloses the two faces of metacognition. He proposed a crossover model that assigns metacognition a pivotal role in mediating between unconscious and conscious determinants of information processing. Thus, metacognitive judgments were assumed to lie at the interface between implicit and explicit processes. Generally speaking, a rough distinction can be drawn between two modes of operation: In the explicit-controlled mode, which underlies much of our daily activities, behavior is based on a deliberate and conscious evaluation of the available options and on a deliberate and controlled choice of the most appropriate course of action. In the implicit-automatic mode, in contrast, various factors registered below full consciousness may influence behavior directly and automatically, without the mediation of conscious control (see Bargh, 1997; Wegner, 2002).

Metacognitive experiences are assumed to occupy a unique position in this scheme: They are implicit as far as their antecedents are concerned, but explicit as far as their consequences are concerned. Although a strong feeling of knowing or an unmediated subjective conviction is certainly part and parcel of conscious awareness, such feelings may themselves be the product of an unconscious inference, as reviewed earlier. Once formed, however, such subjective experiences can serve as the basis for the conscious control of information processing and action.

The crossover model may apply to other types of unmediated feelings (Koriat & Levy-Sadot, 1999). Thus, according to this view, sheer subjective feelings, which lie at the heart of consciousness, may themselves be the product of unconscious processes. Such feelings represent an encapsulated summary of a variety of unconscious influences, and it is in this sense that they are informative (see Schwarz & Clore, 1996): They contain information that is relevant to conscious control, unlike the implicit, unconscious processes that have given rise to these feelings. Koriat (2000b) speculated that the function

of immediate feelings, such as experience-based metacognitive feelings, is to augment self-control; that is, to allow some degree of personal control over processes that would otherwise influence behavior directly and automatically, outside the person’s consciousness and control.

The Cause-and-Effect Relation between Monitoring and Control

A final metatheoretical issue concerns the assumption underlying much of the work in metacognition (and adopted in the foregoing discussion) – that metacognitive feelings play a causal role in affecting judgments and behavior. However, the work of Jacoby and his associates (see Kelley & Jacoby, 1998) and of Whittlesea (2004) suggests a process that is more consistent with the spirit of the James-Lange view of emotion (see James, 1890): Subjective experience is based on an interpretation and attribution of one’s own behavior, so that it follows rather than precedes controlled processes. In fact, the assumption that metacognitive feelings monitor the dynamics of information processing implies that such feelings are sometimes based on the feedback from self-initiated object-level processes. For example, the accessibility model of FOK (Koriat, 1993) assumes that FOK judgments are based on the feedback from one’s attempt to retrieve a target from memory. Hence they follow, rather than precede, controlled processes. Thus, whereas discussions of the function of metacognitive feelings assume that the subjective experience of knowing drives controlled action, discussions of the bases of metacognitive feelings imply that such feelings are themselves based on the feedback from controlled action, and thus follow rather than precede behavior.

Recent work that addressed the cause-and-effect relation between metacognitive monitoring and metacognitive control (Koriat, in press; Koriat, Ma’ayan, & Nussinson, 2006; see Koriat, 2000b) suggests that the interplay between them is bidirectional: Although metacognitive monitoring can drive and guide metacognitive control, it may itself be based on the feedback from controlled operations. Thus, when control effort is goal driven, greater effort enhances metacognitive feelings, consistent with the “feelings-affect-behavior” hypothesis. For example, when different incentives are assigned to different items in a study list, learners invest more study time on the high-incentive items and, in parallel, make higher JOLs for these items than for the low-incentive items. This is similar to the idea that we run away because we are frightened, and therefore the faster we run away the safer we feel. In contrast, when control effort is data driven, increased effort is correlated with lower metacognitive feelings, consistent with the hypothesis that such feelings are based on the feedback from behavior. For example, under self-paced learning, the more effort learners spend studying an item, the lower is their JOL, and also the lower is their subsequent recall of that item. This is similar to the idea that we are frightened because we are running away, and therefore the faster we run the more fear we should experience. Thus, the study of metacognition can also shed light on the long-standing issue of the cause-and-effect relation between consciousness and behavior.

In sum, some of the current research in metacognition scratches the surface of metatheoretical issues concerning consciousness and its role in behavior and is beginning to attract the attention of philosophers of mind (see Nelson & Rey, 2000).

Acknowledgments

The preparation of this chapter was supported by a grant from the German Federal Ministry of Education and Research (BMBF) within the framework of German-Israeli Project Cooperation (DIP). The chapter was prepared when the author was a fellow at the Centre for Advanced Study, Norwegian Academy of Science, Oslo. I am grateful to Morris Goldsmith, Sarah Bar, and Rinat Gil for their help on this chapter.

References

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215.

Bargh, J. A. (1997). The automaticity of everyday life. In R. S. Wyer, Jr. (Ed.), Advances in social cognition (Vol. 10, pp. 1–61). Mahwah, NJ: Erlbaum.

Barnes, A. E., Nelson, T. O., Dunlosky, J., Mazzoni, G., & Narens, L. (1999). An integrative system of metamemory components involved in retrieval. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII: Cognitive regulation of performance: Interaction of theory and application (pp. 287–313). Cambridge, MA: MIT Press.

Begg, I., Duft, S., Lalonde, P., Melnick, R., & Sanvito, J. (1989). Memory predictions are based on ease of processing. Journal of Memory and Language, 28, 610–632.

Begg, I., Vinski, E., Frankovich, L., & Holgate, B. (1991). Generating makes words memorable, but so does effective reading. Memory and Cognition, 19, 487–497.

Benjamin, A. S. (2003). Predicting and postdicting the effects of word frequency on memory. Memory and Cognition, 31, 297–305.

Benjamin, A. S., & Bjork, R. A. (1996). Retrieval fluency as a metacognitive index. In L. Reder (Ed.), Implicit memory and metacognition (pp. 309–338). Hillsdale, NJ: Erlbaum.

Benjamin, A. S., Bjork, R. A., & Schwartz, B. L. (1998). The mismeasure of memory: When retrieval fluency is misleading as a metamnemonic index. Journal of Experimental Psychology: General, 127, 55–68.

Birch, S. A. J., & Bloom, P. (2003). Children are cursed: An asymmetric bias in mental-state attribution. Psychological Science, 14, 283–286.

Bjork, R. A. (1999). Assessing our own competence: Heuristics and illusions. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII: Cognitive regulation of performance: Interaction of theory and application (pp. 435–459). Cambridge, MA: MIT Press.

Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. In A. F. Healy, S. M. Kosslyn, & R. M. Shiffrin (Eds.), Essays in honor of William K. Estes, Vol. 1: From learning theory to connectionist


theory, Vol. 2: From learning processes to cognitive processes (pp. 35–67). Hillsdale, NJ: Erlbaum.

Bjorklund, D. F., & Douglas, R. N. (1997). The development of memory strategies. In N. Cowan (Ed.), The development of memory in childhood (pp. 201–246). Hove, England: Taylor & Francis.

Bjorkman, M., Juslin, P., & Winman, A. (1993). Realism of confidence in sensory discrimination: The underconfidence phenomenon. Perception and Psychophysics, 54, 75–81.

Bless, H., & Forgas, J. P. (Eds.). (2000). The message within: The role of subjective experience in social cognition and behavior. Philadelphia: Psychology Press.

Brainerd, C. J., Reyna, V. F., Wright, R., & Mojardin, A. H. (2003). Recollection rejection: False-memory editing in children and adults. Psychological Review, 110, 762–784.

Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 95–116). Hillsdale, NJ: Erlbaum.

Brown, J., Lewis, V. J., & Monk, A. F. (1977). Memorability, word frequency and negative recognition. Quarterly Journal of Experimental Psychology, 29, 461–473.

Brown, R., & McNeill, D. (1966). The “tip of the tongue” phenomenon. Journal of Verbal Learning and Verbal Behavior, 5, 325–337.

Burke, D. M., MacKay, D. G., Worthley, J. S., & Wade, E. (1991). On the tip of the tongue: What causes word finding failures in young and older adults? Journal of Memory and Language, 30, 542–579.

Busey, T. A., Tunnicliff, J., Loftus, G. R., & Loftus, E. (2000). Accounts of the confidence-accuracy relation in recognition memory. Psychonomic Bulletin & Review, 7, 26–48.

Carroll, M., Nelson, T. O., & Kirwan, A. (1997). Tradeoff of semantic relatedness and degree of overlearning: Differential effects on metamemory and on long-term retention. Acta Psychologica, 95, 239–253.

Chaiken, S., & Trope, Y. (Eds.). (1999). Dual process theories in social psychology. New York: Guilford Press.

Chandler, C. C. (1994). Studying related pictures can reduce accuracy, but increase confidence, in a modified recognition test. Memory & Cognition, 22, 273–280.

Clore, G. L. (1994). Can emotions be nonconscious? In P. Ekman & R. J. Davidson (Eds.), The nature of emotion: Fundamental questions. Series in affective science (pp. 283–299). London: Oxford University Press.

Cohen, R. L., Sandler, S. P., & Keglevich, L. (1991). The failure of memory monitoring in a free recall task. Canadian Journal of Psychology, 45, 523–538.

Costermans, J., Lories, G., & Ansay, C. (1992). Confidence level and feeling of knowing in question answering: The weight of inferential processes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 142–150.

Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684.

Damasio, A. R. (1994). Descartes’ error: Emotion, reason, and the human brain. New York: Putnam.

Danion, J. M., Gokalsing, E., Robert, P., Massin-Krauss, M., & Bacon, E. (2001). Defective relationship between subjective experience and behavior in schizophrenia. American Journal of Psychiatry, 158, 2064–2066.

Davidson, J. E., & Sternberg, R. J. (1998). Smart problem solving: How metacognition helps. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 47–68). Mahwah, NJ: Erlbaum.

De Corte, E. (2003). Transfer as the productive use of acquired knowledge, skills, and motivations. Current Directions in Psychological Science, 12, 142–146.

Denes-Raj, V., & Epstein, S. (1994). Conflict between intuitive and rational processing: When people behave against their better judgment. Journal of Personality and Social Psychology, 66, 819–829.

Dunlosky, J., & Hertzog, C. (1998). Training programs to improve learning in later adulthood: Helping older adults educate themselves. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 249–275). Mahwah, NJ: Erlbaum.

Dunlosky, J., & Nelson, T. O. (1992). Importance of the kind of cue for judgments of learning (JOL) and the delayed-JOL effect. Memory & Cognition, 20, 374–380.


Dunlosky, J., & Nelson, T. O. (1994). Does the sensitivity of judgments of learning (JOLs) to the effects of various study activities depend on when the JOLs occur? Journal of Memory and Language, 33, 545–565.

Dunlosky, J., & Thiede, K. W. (1998). What makes people study more? An evaluation of factors that affect self-paced study. Acta Psychologica, 98, 37–56.

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12, 83–87.

Ehrlinger, J., & Dunning, D. (2003). How chronic self-views influence (and potentially mislead) estimates of performance. Journal of Personality and Social Psychology, 84, 5–17.

Epstein, S., & Pacini, R. (1999). Some basic issues regarding dual-process theories from the perspective of cognitive-experiential self-theory. In S. Chaiken & Y. Trope (Eds.), Dual process theories in social psychology (pp. 462–482). New York: Guilford Press.

Fernandez-Duque, D., Baird, J., & Posner, M. (2000). Awareness and metacognition. Consciousness and Cognition, 9, 324–326.

Fiedler, K., Brinkmann, B., Betsch, T., & Wild, B. (2000). A sampling approach to biases in conditional probability judgments: Beyond base rate neglect and statistical format. Journal of Experimental Psychology: General, 129, 399–418.

Fischhoff, B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288–299.

Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3, 552–564.

Flavell, J. H. (1971). First discussant’s comments: What is memory development the development of? Human Development, 14, 272–278.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906–911.

Flavell, J. H. (1999). Cognitive development: Children’s knowledge about the mind. Annual Review of Psychology, 50, 21–45.

Flavell, J. H., & Wellman, H. M. (1977). Metamemory. In R. V. Kail & J. W. Hagen (Eds.), Perspectives on the development of memory and cognition (pp. 3–33). Hillsdale, NJ: Erlbaum.

Freytag, P., & Fiedler, K. (2006). Subjective validity judgements as an index of sensitivity to sampling bias. In K. Fiedler & P. Juslin (Eds.), Information sampling and adaptive cognition (pp. 127–146). New York: Cambridge University Press.

Funnell, M., Metcalfe, J., & Tsapkini, K. (1996). In the mind but not on the tongue: Feeling of knowing in an anomic patient. In L. M. Reder (Ed.), Implicit memory and metacognition (pp. 171–194). Hillsdale, NJ: Erlbaum.

Gardiner, J. M., & Richardson-Klavehn, A. (2000). Remembering and knowing. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 229–244). London: Oxford University Press.

Garry, M., Manning, C. G., Loftus, E. F., & Sherman, S. J. (1996). Imagination inflation: Imagining a childhood event inflates confidence that it occurred. Psychonomic Bulletin and Review, 3, 208–214.

Ghetti, S. (2003). Memory for nonoccurrences: The role of metacognition. Journal of Memory and Language, 48, 722–739.

Gigerenzer, G., Hoffrage, U., & Kleinbolting, H. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review, 98, 506–528.

Gilbert, D. T. (2002). Inferential correction. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 167–184). New York: Cambridge University Press.

Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. New York: Cambridge University Press.

Glenberg, A. M., Sanocki, T., Epstein, W., & Morris, C. (1987). Enhancing calibration of comprehension. Journal of Experimental Psychology: General, 116, 119–136.

Glucksberg, S., & McCloskey, M. (1981). Decisions about ignorance: Knowing that you don’t know. Journal of Experimental Psychology: Human Learning and Memory, 7, 311–325.

Goldsmith, M., & Koriat, A. (1999). The strategic regulation of memory reporting: Mechanisms and performance consequences. In D. Gopher & A. Koriat (Eds.), Attention and performance


XVII: Cognitive regulation of performance: Interaction of theory and application (pp. 373–400). Cambridge, MA: MIT Press.

Goldsmith, M., Koriat, A., & Pansky, A. (2005). Strategic regulation of grain size in memory reporting over time. Journal of Memory and Language, 52, 505–525.

Goldsmith, M., Koriat, A., & Weinberg-Eliezer, A. (2002). Strategic regulation of grain size in memory reporting. Journal of Experimental Psychology: General, 131, 73–95.

Griffin, D., & Tversky, A. (1992). The weighing of evidence and the determinants of confidence. Cognitive Psychology, 24, 411–435.

Gruneberg, M. M., & Monks, J. (1974). “Feeling of knowing” and cued recall. Acta Psychologica, 38, 257–265.

Gruneberg, M. M., Monks, J., & Sykes, R. N. (1977). Some methodological problems with feelings of knowing studies. Acta Psychologica, 41, 365–371.

Guttentag, R., & Carroll, D. (1998). Memorability judgments for high- and low-frequency words. Memory and Cognition, 26, 951–958.

Hacker, D. J., Dunlosky, J., & Graesser, A. C. (Eds.). (1998). Metacognition in educational theory and practice. Mahwah, NJ: Erlbaum.

Hart, J. T. (1965). Memory and the feeling-of-knowing experience. Journal of Educational Psychology, 56, 208–216.

Hart, J. T. (1967). Second-try recall, recognition, and the memory-monitoring process. Journal of Educational Psychology, 58, 193–197.

Hastie, R., Landsman, R., & Loftus, E. F. (1978). The effects of initial questioning on subsequent eyewitness testimony. Jurimetrics Journal, 19, 1–8.

Hertzog, C., Dunlosky, J., Robinson, A. E., & Kidder, D. P. (2003). Encoding fluency is a cue used for judgments about learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 22–34.

Hertzog, C., Kidder, D. P., Powell-Moman, A., & Dunlosky, J. (2002). Aging and monitoring associative learning: Is monitoring accuracy spared or impaired? Psychology and Aging, 17, 209–225.

Jacoby, L. L., & Brooks, L. R. (1984). Nonanalytic cognition: Memory, perception, and concept learning. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (pp. 1–47). New York: Academic Press.

Jacoby, L. L., & Whitehouse, K. (1989). An illusion of memory: False recognition influenced by unconscious perception. Journal of Experimental Psychology: General, 118, 126–135.

James, W. (1890). The principles of psychology. New York: Holt.

Johnson, M. K. (1997). Identifying the origin of mental experience. In M. S. Myslobodsky (Ed.), The mythomanias: The nature of deception and self-deception (pp. 133–180). Hillsdale, NJ: Erlbaum.

Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114, 3–28.

Jonsson, F. U., & Olsson, M. J. (2003). Olfactory metacognition. Chemical Senses, 28, 651–658.

Jost, J. T., Kruglanski, A. W., & Nelson, T. O. (1998). Social metacognition: An expansionist review. Personality and Social Psychology Review, 2, 137–154.

Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58, 697–720.

Kelley, C. M., & Jacoby, L. L. (1993). The construction of subjective experience: Memory attributions. In M. Davies & G. W. Humphreys (Eds.), Consciousness: Psychological and philosophical essays. Readings in mind and language (Vol. 2, pp. 74–89). Malden, MA: Blackwell.

Kelley, C. M., & Jacoby, L. L. (1996a). Adult egocentrism: Subjective experience versus analytic bases for judgment. Journal of Memory and Language, 35, 157–175.

Kelley, C. M., & Jacoby, L. L. (1996b). Memory attributions: Remembering, knowing, and feeling of knowing. In L. M. Reder (Ed.), Implicit memory and metacognition (pp. 287–308). Hillsdale, NJ: Erlbaum.

Kelley, C. M., & Jacoby, L. L. (1998). Subjective reports and process dissociation: Fluency, knowing, and feeling. Acta Psychologica, 98, 127–140.

Kelley, C. M., & Lindsay, D. S. (1993). Remembering mistaken for knowing: Ease of retrieval as a basis for confidence in answers to general knowledge questions. Journal of Memory and Language, 32, 1–24.

Kelley, C. M., & Sahakyan, L. (2003). Memory, monitoring, and control in the attainment of memory accuracy. Journal of Memory and Language, 48, 704–721.


Kentridge, R. W., & Heywood, C. A. (2000). Metacognition and awareness. Consciousness and Cognition, 9, 308–312.

Kimball, D. R., & Metcalfe, J. (2003). Delaying judgments of learning affects memory, not metamemory. Memory and Cognition, 31, 918–929.

King, J. F., Zechmeister, E. B., & Shaughnessy, J. J. (1980). Judgments of knowing: The influence of retrieval practice. American Journal of Psychology, 93, 329–343.

Klin, C. M., Guzman, A. E., & Levine, W. H. (1997). Knowing that you don’t know: Metamemory and discourse processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 1378–1393.

Koren, D., Seidman, L. J., Poyurovsky, M., Goldsmith, M., Viksman, P., Zichel, S., & Klein, E. (2004). The neuropsychological basis of insight in first-episode schizophrenia: A pilot metacognitive study. Schizophrenia Research, 70, 195–202.

Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100, 609–639.

Koriat, A. (1995). Dissociating knowing and the feeling of knowing: Further evidence for the accessibility model. Journal of Experimental Psychology: General, 124, 311–333.

Koriat, A. (1997). Monitoring one’s own knowledge during study: A cue-utilization approach to judgments of learning. Journal of Experimental Psychology: General, 126, 349–370.

Koriat, A. (1998a). Illusions of knowing: The link between knowledge and metaknowledge. In V. Y. Yzerbyt, G. Lories, & B. Dardenne (Eds.), Metacognition: Cognitive and social dimensions (pp. 16–34). London: Sage.

Koriat, A. (1998b). Metamemory: The feeling of knowing and its vagaries. In M. Sabourin, F. I. M. Craik, & M. Robert (Eds.), Advances in psychological science (Vol. 2, pp. 461–469). Hove, UK: Psychology Press.

Koriat, A. (2000a). Control processes in remembering. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 333–346). London: Oxford University Press.

Koriat, A. (2000b). The feeling of knowing: Some metatheoretical implications for consciousness and control. Consciousness and Cognition, 9, 149–171.

Koriat, A. (2002). Metacognition research: An interim report. In T. J. Perfect & B. L. Schwartz (Eds.), Applied metacognition (pp. 261–286). Cambridge: Cambridge University Press.

Koriat, A. (in press). Are we frightened because we run away? Some evidence from metacognitive feelings. In B. Uttl, N. Ohta, & A. L. Siegenthaler (Eds.), Memory and emotions: Interdisciplinary perspectives. Malden, MA: Blackwell.

Koriat, A., Ben-Zur, H., & Druch, A. (1991). The contextualization of memory for input and output events. Psychological Research, 53, 260–270.

Koriat, A., Ben-Zur, H., & Sheffer, D. (1988). Telling the same story twice: Output monitoring and age. Journal of Memory and Language, 27, 23–39.

Koriat, A., & Bjork, R. A. (2005). Illusions of competence in monitoring one’s knowledge during study. Journal of Experimental Psychology: Learning, Memory and Cognition, 31, 187–194.

Koriat, A., & Bjork, R. A. (2006). Illusions of competence during study can be remedied by manipulations that enhance learners’ sensitivity to retrieval conditions at test. Memory & Cognition, 34, 959–972.

Koriat, A., Bjork, R. A., Sheffer, L., & Bar, S. K. (2004). Predicting one’s own forgetting: The role of experience-based and theory-based processes. Journal of Experimental Psychology: General, 133, 643–656.

Koriat, A., & Goldsmith, M. (1994). Memory in naturalistic and laboratory contexts: Distinguishing the accuracy-oriented and quantity-oriented approaches to memory assessment. Journal of Experimental Psychology: General, 123, 297–315.

Koriat, A., & Goldsmith, M. (1996a). Memory metaphors and the real-life/laboratory controversy: Correspondence versus storehouse conceptions of memory. Behavioral and Brain Sciences, 19, 167–228.

Koriat, A., & Goldsmith, M. (1996b). Monitoring and control processes in the strategic regulation of memory accuracy. Psychological Review, 103, 490–517.

Koriat, A., Goldsmith, M., & Pansky, A. (2000). Toward a psychology of memory accuracy. Annual Review of Psychology, 51, 481–537.

Koriat, A., Goldsmith, M., Schneider, W., & Nakash-Dura, M. (2001). The credibility of children’s testimony: Can children control the accuracy of their memory reports? Journal of Experimental Child Psychology, 79, 405–437.


Koriat, A., & Levy-Sadot, R. (1999). Processes underlying metacognitive judgments: Information-based and experience-based monitoring of one’s own knowledge. In S. Chaiken & Y. Trope (Eds.), Dual process theories in social psychology (pp. 483–502). New York: Guilford Press.

Koriat, A., & Levy-Sadot, R. (2001). The combined contributions of the cue-familiarity and accessibility heuristics to feelings of knowing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 34–53.

Koriat, A., Levy-Sadot, R., Edry, E., & de Marcas, S. (2003). What do we know about what we cannot remember? Accessing the semantic attributes of words that cannot be recalled. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 1095–1105.

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107–118.

Koriat, A., & Lieblich, I. (1977). A study of memory pointers. Acta Psychologica, 41, 151–164.

Koriat, A., & Ma’ayan, H. (2005). The effects of encoding fluency and retrieval fluency on judgments of learning. Journal of Memory and Language, 52, 478–492.

Koriat, A., Ma’ayan, H., & Nussinson, R. (2006). The intricate relationships between monitoring and control in metacognition: Lessons for the cause-and-effect relation between subjective experience and behavior. Journal of Experimental Psychology: General, 135, 36–69.

Koriat, A., Sheffer, L., & Ma’ayan, H. (2002). Comparing objective and subjective learning curves: Judgments of learning exhibit increased underconfidence with practice. Journal of Experimental Psychology: General, 131, 147–162.

Krinsky, R., & Nelson, T. O. (1985). The feeling of knowing for different types of retrieval failure. Acta Psychologica, 58, 141–158.

Le Ny, J. F., Denhiere, G., & Le Taillanter, D. (1972). Regulation of study-time and interstimulus similarity in self-paced learning conditions. Acta Psychologica, 36, 280–289.

Leonesio, R. J., & Nelson, T. O. (1990). Do different metamemory judgments tap the same underlying aspects of memory? Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 464–470.

Liberman, V. (2004). Local and global judgments of confidence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 729–732.

Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306–334). New York: Cambridge University Press.

Lindsay, D. S., Read, D. J., & Sharma, K. (1998). Accuracy and confidence in person identification: The relationship is strong when witnessing conditions vary widely. Psychological Science, 9, 215–218.

Maki, R. H., & McGuire, M. J. (2002). Metacognition for text: Findings and implications for education. In T. Perfect & B. Schwartz (Eds.), Applied metacognition (pp. 39–67). Cambridge: Cambridge University Press.

Matvey, G., Dunlosky, J., & Guttentag, R. (2001). Fluency of retrieval at study affects judgments of learning (JOLs): An analytic or nonanalytical basis for JOLs? Memory and Cognition, 29, 222–233.

Mazzoni, G., & Cornoldi, C. (1993). Strategies in study time allocation: Why is study time sometimes not effective? Journal of Experimental Psychology: General, 122, 47–60.

Mazzoni, G., Cornoldi, C., & Marchitelli, G. (1990). Do memorability ratings affect study-time allocation? Memory and Cognition, 18, 196–204.

Mazzoni, G., & Nelson, T. O. (1995). Judgmentsof learning are affected by the kind of encodingin ways that cannot be attributed to the levelof recall. Journal of Experimental Psychology:Learning, Memory, and Cognition, 2 1, 1263–1274 .

Metcalfe, J. (1998a). Cognitive optimism:Self-deception or memory-based processingheuristics? Personality and Social PsychologyReview, 2 , 100–110.

Metcalfe, J. (Ed.). (1998b). Metacognition.[Special issue]. Personality and Social Psycho-logical Review, 2 .

Metcalfe, J. (2000). Metamemory: Theory anddata. In E. Tulving & F. I. M. Craik (Eds.),The Oxford handbook of memory (pp. 197–211).London: Oxford University Press.

Metcalfe, J. (2002). Is study time allocated selec-tively to a region of proximal learning? Journal

Page 340: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: KAE0521857437c11 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 16, 2007 19:24

32 2 the cambridge handbook of consciousness

of Experimental Psychology: General, 13 1, 349–363 .

Metcalfe, J., & Kornell, N. (2003). The dynam-ics of learning and allocation of study time toa region of proximal learning. Journal of Exper-imental Psychology: General, 132 , 530–542 .

Metcalfe, J., Schwartz, B. L., & Joaquim,S. G. (1993). The cue-familiarity heuris-tic in metacognition. Journal of ExperimentalPsychology: Learning, Memory, and Cognition,19, 851–864 .

Metcalfe, J., & Shimamura, A. P. (Eds.). (1994).Metacognition: Knowing about knowing. Cam-bridge, MA: MIT Press.

Miozzo, M., & Caramazza, A. (1997). Retrieval oflexical-syntactic features in tip-of-the tonguestates. Journal of Experimental Psychology:Learning, Memory, and Cognition, 2 3 , 1410–1423 .

Mitchell, K. J., & Johnson, M. K. (2000). Sourcemonitoring: Attributing mental experiences. InE. Tulving & F. I. M. Craik (Eds.), The Oxfordhandbook of memory (pp. 179–195). London:Oxford University Press.

Morris, C. C. (1990). Retrieval processes under-lying confidence in comprehension judgments.Journal of Experimental Psychology: Learning,Memory, and Cognition, 16, 223–232 .

Narens, L., Jameson, K. A., & Lee, V. A. (1994).Subthreshold priming and memory monitor-ing. In J. Metcalfe & A. P. Shimamura (Eds.),Metacognition: Knowing about knowing (pp. 71–92). Cambridge, MA: MIT Press.

Neisser, U. (1988). Time present and time past. InM. M. Gruneberg, P. Morris, & R. Sykes (Eds.),Practical aspects of memory: Current researchand issues (Vol. 2 , pp. 545–560). Chichester,England: Wiley.

Nelson, T. O. (1984). A comparison of currentmeasures of the accuracy of feeling-of-knowingpredictions. Psychological Bulletin, 95 , 109–133 .

Nelson, T. O. (1996). Consciousness andmetacognition. American Psychologist, 5 , 102–116.

Nelson, T. O., & Dunlosky, J. (1991). Whenpeople’s judgments of learning (JOLs) areextremely accurate at predicting subsequentrecall: The “delayed-JOL effect.” PsychologicalScience, 2 , 267–270.

Nelson, T. O., & Leonesio, R. J. (1988). Alloca-tion of self-paced study time and the “labor-in-vain effect.” Journal of Experimental Psychology:Learning, Memory, and Cognition, 14 , 676–686.

Nelson, T. O., & Narens, L. (1990). Metamemory:A theoretical framework and new findings. InG. Bower (Ed.), The psychology of learning andmotivation: Advances in research and theory (pp.125–173). New York: Academic Press.

Nelson, T. O., & Narens, L. (1994). Why investi-gate metacognition. In J. Metcalfe & A. P. Shi-mamura (Eds.), Metacognition: Knowing aboutknowing (pp. 1–25). Cambridge, MA: MITPress.

Nelson, T. O., Narens, L., & Dunlosky, J.(2004). A revised methodology for research onmetamemory: Pre-judgment Recall and Moni-toring (PRAM). Psychological Methods, 9, 53–69.

Nelson, T. O., & Rey, G. (Eds.). (2000). Metacog-nition and consciousness: A convergence ofpsychology and philosophy [Special issue].Consciousness and Cognition, 9(2).

Nickerson, R. S. (1998). Confirmation bias:A ubiquitous phenomenon in many guises.Review of General Psychology, 2 , 175–220.

Nhouyvanisvong, A., & Reder, L. M. (1998).Rapid feeling-of-knowing: A strategy selectionmechanism. In V. Y. Yzerbyt, G. Lories, & B.Dardenne (Eds.), Metacognition: Cognitive andsocial dimensions (pp. 35–52). London: Sage.

Oppenheimer, D. M. (2004). Spontaneous dis-counting of availability in frequency judgmenttasks. Psychological Science, 15 , 100–105 .

Pansky, A., Koriat, A., & Goldsmith, M. (2005).Eyewitness recall and testimony. In N. Brewer& K. D. Williams (Eds.), Psychology and law: Anempirical perspective (pp. 93–150). New York:Guilford.

Pansky, A., Koriat, A., Goldsmith, M., &Pearlman-Avnion, S. (March, 2002). Memoryaccuracy and distortion in old age: Cognitive,metacognitive, and neurocognitive determinants.Poster presented at the 30th Anniversary Con-ference of the National Institute for Psychobi-ology, Jerusalem, Israel.

Paris, S. G., & Winograd, P. (1990). How metacog-nition can promote academic learning andinstruction. In B. F. Jones & L. Idol (Eds.),Dimensions of thinking and cognitive instruction(pp. 15–51). Hillsdale, NJ: Erlbaum.

Payne, B. K., Jacoby, L. L., & Lambert, A. J.,(2004). Memory monitoring and the controlof stereotype distortion. Journal of Experimen-tal Social Psychology, 40, 52–64 .

Perfect, T. J. (2002). When does eyewitness con-fidence predict performance? In T. J. Perfect

Page 341: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: KAE0521857437c11 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 16, 2007 19:24

metacognition and consciousness 32 3

& B. L. Schwartz (Eds.), Applied metacognition(pp. 95–120). Cambridge: Cambridge Univer-sity Press.

Perfect, T. J. (2004). The role of self-rated abil-ity in the accuracy of confidence judgmentsin eyewitness memory and general knowledge.Applied Cognitive Psychology, 18, 157–168.

Perner, J., & Lang, B. (1999). Development of the-ory of mind and executive control. Trends inCognitive Sciences, 3 , 337–344 .

Posner, M. I., & Snyder, C. R. R. (1975). Atten-tion and cognitive control. In R. L. Solso(Ed.), Information processing and cognition: TheLoyola Symposium (pp. 55–85). Hillsdale, NJ:Erlbaum.

Pressley, M., Borkowski, J. G., & Schneider,W. (1987). Cognitive strategies: Good strat-egy users coordinate metacognition and knowl-edge. Annals of Child Development, 4 , 89–129.

Rabinowitz, J. C., Ackerman, B. P., Craik,F. I. M., & Hinchley, J. L. (1982). Aging andmetamemory: The roles of relatedness andimaginary. Journal of Gerontology, 37, 688–695 .

Rabbitt, P. M. A. (1966). Errors and error correc-tion in choice reaction tasks. Journal of Experi-mental Psychology, 71, 264–272 .

Rawson, K. A., Dunlosky, J., & Theide, K. W.(2000). The rereading effect: Metacomprehen-sion accuracy improves across reading trials.Memory and Cognition, 2 8, 1004–1010.

Reder, L. M. (1987). Strategy selection in ques-tion answering. Cognitive Psychology, 19, 90–138.

Reder, L. M. (1988). Strategic control of retrievalstrategies. In G. H. Bower (Ed.), The psychologyof learning and motivation: Advances in researchand theory (Vol. 22 , pp. 227–259). San Diego:Academic Press.

Reder, L. M. (Ed.). (1996). Implicit memory andmetacognition. Mahwah, NJ: Erlbaum.

Reder, L. M., & Ritter, F. E. (1992). What deter-mines initial feeling of knowing? Familiaritywith question terms, not with the answer. Jour-nal of Experimental Psychology: Learning, Mem-ory, and Cognition, 18, 435–451.

Reder, L. M. & Schunn, C. D. (1996). Metacogni-tion does not imply awareness: Strategy choiceis governed by implicit learning and mem-ory. In L. M. Reder (Ed.), Implicit memoryand metacognition (pp. 45–77). Mahwah, NJ:Erlbaum.

Robinson, M. D., & Johnson, J. T. (1998). How notto enhance the confidence-accuracy relation:

The detrimental effects of attention to theidentification process. Law and Human Behav-ior, 2 2 , 409–428.

Robinson, M. D., Johnson, J. T., & Herndon, F.(1997). Reaction time and assessments of cog-nitive effort as predictors of eyewitness mem-ory accuracy and confidence. Journal of AppliedPsychology, 82 , 416–425 .

Robinson, M. D., Johnson, J. T., & Robertson,D. A. (2000). Process versus content in eyewit-ness metamemory monitoring. Journal of Exper-imental Psychology: Applied, 6, 207–221.

Roebers, C. M., Moga, N., & Schneider, W.(2001). The role of accuracy motivation on chil-dren’s and adults’ event recall. Journal of Exper-imental Child Psychology, 78, 313–329.

Rosenthal, D. M. (2000). Consciousness, con-tent, and metacognitive judgments. Conscious-ness and Cognition, 9, 203–214 .

Scheid, K. (1993). Helping students become strate-gic learners: Guidelines for teaching. Cambridge,MA: Brookline Books.

Schneider, W. (1985). Developmental trendsin the metamemory-memory behaviorrelationship: An integrative review. In D.L. Forest-Pressley, G. E. MacKinnon, & T.G. Waller (Eds.), Metacognition, cognition,and human performance (Vol. 1, pp. 57–109).Orlando, FL: Academic Press.

Schneider, W., & Pressley, M. (1997). Memorydevelopment between two and twenty (2d ed.).Mahwah, NJ: Erlbaum.

Schreiber, T. A., & Nelson, D. L. (1998). Therelation between feelings of knowing and thenumber of neighboring concepts linked to thetest cue. Memory and Cognition, 2 6, 869–883 .

Schunn, C. D., Reder, L. M., Nhouyvanisvong,A., Richards, D. R., & Stroffolino, P. J. (1997).To calculate or not to calculate: A source activa-tion confusion model of problem familiarity’srole in strategy selection. Journal of Experimen-tal Psychology: Learning, Memory, and Cogni-tion, 2 3 , 3–29.

Schwartz, B. L. (1994). Sources of information inmetamemory: Judgments of learning and feel-ing of knowing. Psychonomic Bulletin & Review,1, 357–375 .

Schwartz, B. L. (1998). Illusory tip-of-the-tonguestates. Memory, 6, 623–642 .

Schwartz, B. L. (2001). The relation of tip-of-the-tongue states and retrieval time. Memory andCognition, 2 9, 117–126.

Page 342: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: KAE0521857437c11 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 16, 2007 19:24

32 4 the cambridge handbook of consciousness

Schwartz, B. L. (2002). Tip-of-the-tongue states:Phenomenology, mechanism, and lexicalretrieval. Mahwah, NJ: Erlbaum.

Schwartz, B. L., & Metcalfe, J. (1992). Cue famil-iarity but not target retrievability enhancesfeeling-of-knowing judgments. Journal ofExperimental Psychology: Learning, Memory,and Cognition, 18, 1074–1083 .

Schwartz, B. L., & Metcalfe, J. (1994). Method-ological problems and pitfalls in the study ofhuman metacognition. In J. Metcalfe & A.P. Shimamura (Eds.), Metacognition: Knowingabout knowing (pp. 93–113). Cambridge, MA:MIT Press.

Schwartz, B. L., & Smith, S. M. (1997). Theretrieval of related information influences tip-of-the-tongue states. Journal of Memory andLanguage, 36, 68–86.

Schwarz, N. (2004). Meta-cognitive experiencesin consumer judgment and decision making.Journal of Consumer Psychology, 14 , 332–348.

Schwarz, N., Bless, H., Strack, F., Klumpp, G.,Rittenauer-Schatka, H., & Simons, A. (1991).Ease of retrieval as information: Anotherlook at the availability heuristic. Journal ofPersonality and Social Psychology, 61, 195–202 .

Schwarz, N., & Clore, G. L. (1996). Feelings andphenomenal experiences. In E. T. Higgins & A.W. Kruglanski (Eds.), Social psychology: Hand-book of basic principles (pp. 433–465). NewYork: Guilford Press.

Schwarz, N., & Clore, G. L. (2003). Moodas information: 20 years later. PsychologicalInquiry, 14 , 296–303 .

Shaw, R. J., & Craik, F. I. M. (1989). Age differ-ences in predictions and performance on a cuedrecall task. Psychology and Aging, 4 , 13 1–135 .

Shimamura, A. P. (2000). Toward a cognitiveneuroscience of metacognition. Consciousnessand Cognition, 9, 313–323 .

Simon, D. A., & Bjork, R. A. (2001). Metacogni-tion in motor learning. Journal of ExperimentalPsychology: Learning, Memory, and Cognition,2 7, 907–912 .

Sloman, S. A. (1996). The empirical case for twosystems of reasoning. Psychological Bulletin, 119,3–22

Slovic, P., Finucane, M., Peters, E., & MacGre-gor, D. G. (2002). The affect heuristic. In T.Gilovich, D. Griffin, & Kahneman, D. (Eds.),Heuristics and biases: The psychology of intu-

itive judgment (pp. 397–420). New York: Cam-bridge University Press.

Son, L. K., & Metcalfe, J. (2000). Metacognitiveand control strategies in study-time allocation.Journal of Experimental Psychology: Learning,Memory, and Cognition, 2 6, 204–221.

Spehn, M. K., & Reder, L. M. (2000). The uncon-scious feeling of knowing: A commentary onKoriat’s paper. Consciousness and Cognition, 9,187–192 .

Spellman, B. A., & Bjork, R. A. (1992). Whenpredictions create reality: Judgments of learn-ing may alter what they are intended to assess.Psychological Science, 3 , 315–316.

Stanovich, K. E., & West, R. F. (2000). Individ-ual differences in reasoning: Implications forthe rationality debate. Behavioral and Brain Sci-ences, 2 3 , 645–665 .

Strack, F. (1992). The different routes to socialjudgments: Experiential versus informationalstrategies. In L. L. Martin & A. Tesser (Eds.),The construction of social judgments (pp. 249–275). Hillsdale, NJ: Erlbaum.

Strack, F., & Bless, H. (1994). Memory fornonoccurrences: Metacognitive and presuppo-sitional strategies. Journal of Memory and Lan-guage, 33 , 203–217.

Strack, F., & Deutsch R. (2004). Reflective andimpulsive determinants of social behavior. Per-sonality and Social Psychology Review, 8, 220–247.

Thiede, K. W., Anderson, M. C. M., & Therriault,D. (2003). Accuracy of metacognitive monitor-ing affects learning of texts. Journal of Educa-tional Psychology, 95 , 66–73 .

Thiede, K. W., & Dunlosky, J. (1999). Towarda general model of self-regulated study: Ananalysis of selection of items for study andself-paced study time. Journal of ExperimentalPsychology: Learning, Memory, and Cognition,2 5 , 1024–1037.

Trope, Y., & Liberman, A. (1996). Social hypoth-esis testing: Cognitive and motivational mech-anisms. In E. T. Higgins & A. W. Kruglan-ski (Eds.), Social psychology: Handbook of basicprinciples (pp. 239–270). New York: GuilfordPress.

Tulving, E., & Madigan, S. A. (1970). Memory andverbal learning. Annual Review of Psychology,2 1, 437–484 .

Underwood, B. J. (1966). Individual and grouppredictions of item difficulty for free learning.

Page 343: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: KAE0521857437c11 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw January 16, 2007 19:24

metacognition and consciousness 32 5

Journal of Experimental Psychology, 71, 673–679.

Vernon, D., & Usher, M. (2003). Dynamicsof metacognitive judgments: Pre- and postre-trieval mechanisms. Journal of ExperimentalPsychology: Learning, Memory, and Cognition,2 9, 339–346.

Wegner, D. M. (2002). The illusion of consciouswill. Cambridge, MA: MIT Press.

Wells, G. L., & Murray, D. M. (1984). Eyewit-ness confidence. In G. L. Wells & E. F. Loftus,(Eds.), Eyewitness testimony: Psychological per-spectives (pp. 155–170). New York: CambridgeUniversity Press.

Whittlesea, B. W. A. (2002). Two routes toremembering (and another to rememberingnot). Journal of Experimental Psychology: Gen-eral, 13 1, 325–348.

Whittlesea, B. W. A. (2004). The perception ofintegrality: Remembering through the valida-tion of expectation. Journal of ExperimentalPsychology: Learning, Memory, and Cognition,30, 891–908.

Winkielman, P., & Berridge, K. C. (2004). Uncon-scious emotion. Current Directions in Psycholog-ical Science, 13 , 120–123 .

Winkielman, P., Schwarz, N., Fazendeiro, T. A.,& Reber, R. (2003). The hedonic marking ofprocessing fluency: Implications for evaluative

judgment. In J. Musch & K. C. Klauer (Eds.),The psychology of evaluation: Affective processesin cognition and emotion (pp. 189–217). Mah-wah, NJ: Erlbaum.

Winman, A., & Juslin, P. (2005). “I’m m/n confi-dent that I’m correct”: Confidence in foresightand hindsight as a sampling probability. In K.Fiedler & P. Juslin (Eds.), Information samplingand adaptive cognition. Cambridge, UK: Cam-bridge University Press.

Yaniv, I., & Foster, D. P. (1997). Precision andaccuracy of judgmental estimation. Journal ofBehavioral Decision Making, 10, 21–32 .

Yaniv, I., & Meyer, D. E. (1987). Activa-tion and metacognition of inaccessible storedinformation: Potential bases for incubationeffects in problem solving. Journal of Experi-mental Psychology: Learning, Memory, and Cog-nition, 13 , 187–205 .

Yzerbyt, V. Y., Lories, G., & Dardenne, B.(Eds.). (1998). Metacognition: Cognitive andsocial dimensions. Thousand Oaks, CA: Sage.

Zakay, D., & Tuvia, R. (1998). Choice latencytimes as determinants of post-decisional con-fidence. Acta Psychologica, 98, 103–115 .

Zechmeister, E. B., & Shaughnessy, J. J. (1980).When you know that you know and when youthink that you know but you don’t. Bulletin ofthe Psychonomic Society, 15 , 41–44 .


Chapter 12

Consciousness and Control of Action

Carlo Umiltà

Abstract

Any voluntary action involves at least three stages: intention to perform an action, performance of the intended action, and perception of the effects of the performed action. In principle, consciousness may manifest itself at all three stages. Concerning the first stage, research suggests that intentions for carrying out voluntary actions may be generated unconsciously and retrospectively referred consciously to the action when it has been executed. There is a mechanism that binds together in consciousness the intention to act and the consequences of the intended action, thus producing the experience of free will.

Human beings consistently show visual illusions when they are tested with perceptual measures, whereas the illusions do not manifest themselves when they are tested with motor measures. These dissociations concern the stage of performing the intended action and recall blindsight and visual agnosia, in which patients can perform visually guided tasks without conscious visual experience. The explanation is that action execution depends on a sensorimotor or "how" system, which controls visually guided behavior without access to consciousness. The other is a cognitive or "what" system, which gives rise to perception, is used consciously in pattern recognition, and produces normal visual experience. The processes of this second stage do not have access to consciousness either.

In contrast, we are aware of some aspects of the current state of the motor system at the third stage in the sequence that leads to the execution of an action. When performing an action, we are aware of the prediction of its effects, which depend on the motor commands that were planned in the premotor and motor cortical areas.

Introduction: The Notion of Intentionality

In the present chapter, I am concerned exclusively with motor (i.e., bodily) actions.


Although actions usually manifest themselves as bodily movements, in accord with Marcel (2004) an action can be distinguished from a mere movement because the former has a goal and an effect, belongs to a semantic category (i.e., it has a content), and has some degree of voluntariness.

With the term "intentional action" we normally mean an action that one is conscious of; that is, an action that is performed consciously (Marcel, 2004). Other terms that can be used in place of "intentional" action are "deliberate" action or "volitional" action or "willed" action. As Zhu (2004, pp. 2–3) has maintained,

A central task for theories of action is to specify the conditions that distinguish voluntary and involuntary bodily movements. . . . A general way to understand the nature of action is to view actions as bodily movements preceded by certain forms of thought, such as appropriate combinations of beliefs, desires, intentions, and reasons. It is these particular forms of thought that characterize the voluntariness of human action.

The critical question in Zhu's view is, "[H]ow can a certain piece of thought bring about physical bodily movement?"

It is important to point out, however, that the terms "conscious" and "intentional" should not be used interchangeably when referring to action. In fact, consciously performed actions are not necessarily actions that are performed intentionally. One can be conscious of performing an action that is non-intentional, or automatic, in nature. This distinction between conscious and intentional applies to all cognitive processes. Research has found that the vast majority of human thinking, feeling, and behavior operates in automatic fashion with little or no need for intentional control (see, e.g., Dehaene & Naccache, 2001; and chapters in Umiltà & Moscovitch, 1994; also, see Prochazka, Clarac, Loeb, Rothwell, & Wolpaw, 2000, for a discussion of voluntary vs. reflexive behavior). Once certain triggering conditions occur, cognitive processes can proceed automatically and autonomously until their completion, independent of intentional initiation and online intentional control. That does not mean, however, that the observer is not conscious of the fact that those cognitive processes are in progress or, to be more precise, of their intermediate and final outputs (i.e., representations; in fact, we are never conscious of the inner workings of our cognitive processes, but only of their outputs). Very likely, therefore, the observer becomes conscious of the representations that are produced by the cognitive processes that operate automatically. Thus, the question concerning what aspects of action we are conscious of is relevant for both intentional and automatic actions.

In our daily life we quite often experience conscious intentions to perform specific actions and have the firm belief that those conscious intentions drive our bodily movements, thus producing the desired changes in the external world. In this view, which no doubt is shared by the vast majority of people, the key components of an intentional action constitute a causal chain that can be described as follows, though in an admittedly oversimplified way. At the beginning of the chain that leads to an intentional action there is a goal-directed conscious intention. As a direct consequence of the conscious intention, a series of movements occurs. Then, effects – that is, changes in the external world – manifest themselves and are consciously linked to the intention to act and to the series of performed movements.

However, an apparently goal-directed behavior does not necessarily signal intentionality. There can be little doubt that an organism may manifest goal-directed behavior without satisfying the criteria for performing an intentional action. Most lower-order organisms do exactly that. The characterization of an intentional action that I have adopted implies that, to define an action as intentional, one has to consciously experience a link, through the movements performed to achieve the goal, between the mental state of intention and the effects of the performed movements. That is, one can claim that a given goal-directed behavior satisfies the criteria for an intentional action only if the organism produces that behavior along with a conscious mental representation of its own internal state and of the state of the external world.

It is generally agreed that intentional actions engage processes different from those engaged by automatic actions (e.g., Prochazka et al., 2000). Here the fundamental principle is that consciousness is necessary for intentional action. In some cases, this principle is explicitly stated in one of a variety of different forms, whereas in some other cases it is simply implied. At any rate, the (explicit or implicit) accepted view is that conscious awareness of intentions, and of the motor and environmental consequences (effects) they cause, is required to construct the subjective experience of intentional action.

In summary, it seems that consciousness can manifest itself at three stages: intention to perform an action, performance of the intended action, and perception of the effects of the performed action. It is possible that the closeness in time of these three stages allows one to unify the conscious experiences that accompany them. This unification in turn is how we construct the strong subjective association among intentions, actions, and action effects (Haggard, Aschersleben, Gehrke, & Prinz, 2002a).

Note that the second stage, consciousness of performing the intended action, may be subdivided into two aspects (Marcel, 2004). One aspect is consciousness of the action we are actually performing; that is, the extent to which we are aware of what we are doing. The other aspect is consciousness of some events that take place during the action we are performing; that is, awareness of the nature of the specific components and of the precise details of our current action. In what follows, I am concerned almost exclusively with the first aspect because very little experimental evidence is available concerning the second aspect.

Also, it is worth noting that this traditional view of how we perform intentional actions may acquire a dangerously dualist connotation and thus becomes difficult to reconcile with the reductionism of neuroscience and cognitive science. At first sight, the toughest problem would be to provide an answer to the question Zhu (2004), among many others, asked above; that is, how a mental state (i.e., the observer's conscious intention) interacts with the neural events in the motor and premotor brain areas that produce body movements. That would no longer be a problem, however, if we accepted that the so-called mental state is in fact a specific neural state: It is a neural state representing or mediating intentionality that acts on another neural state mediating or representing movements and their consequences. Therefore, it is only phrasing the problem the way Zhu does that creates a dualistic separation where there may be none (also, see Wegner, 2005, and commentaries therein). The question I address in the first part of this chapter is quite different and is in fact concerned with the relative timing of two sets of neural events, those that represent (accompany?) the subjective experience of intentional action and those that represent (accompany?) the execution of the intended action.

Intention to Perform an Action (Unawareness of Intention)

Regardless of the difficulties of the traditional view and the danger of dualism it creates, there can be little doubt that very often we introspectively feel we can generate our actions; that is, we are conscious of our actions and of why they are initiated. Our experience of willing our own actions is so profound that it tempts us to believe that our actions are caused by consciousness. We first experience the conscious intention to perform a specific action; then, after a variable number of intermediate states, the desired action takes place. In spite of its apparent plausibility, this causal chain is likely incorrect.

It is important to keep in mind that there are two issues here, which should not be confounded. The first is whether the observer's conscious intention (i.e., a mental state) causes the neural events in the motor and premotor brain areas, which in turn produce body movements. This is a dualistic viewpoint that is at odds with current cognitive neuroscience. The other is whether the neural state representing conscious intentionality precedes or follows the neural states that produce movements. I am concerned almost exclusively with this latter issue, even if for simplicity I sometimes make recourse to mentalistic terms.

Concerning the dualistic issue, suffice it to say that, contrary to what introspection seems to suggest, conscious intention that leads to actions in fact arises as a result of brain activity, and not vice versa (Haggard & Eimer, 1999; Haggard & Libet, 2001; Haggard & Magno, 1999; Haggard et al., 2002a; Haggard, Clark, & Kalogeras, 2002b; Libet, 1985, 1999; Libet, Gleason, Wright, & Pearl, 1983). Or, as Wegner (2003, p. 65) puts it, "You think of doing X and then do X – not because conscious thinking causes doing, but because other mental processes (that are not consciously perceived) cause both the thinking and the doing." This sentence is phrased in mentalistic terms, but it is easy to rephrase by substituting "conscious thinking" with "neural events that represent conscious thinking" and "other mental processes" with "neural events that represent other mental processes."

The celebrated experiments by Libet and his collaborators (Libet, 1985, 1999; Libet et al., 1983) challenged the classical notion of conscious intention as action initiator and provided evidence that the conscious intention that is experienced does not correspond to causation. In a non-dualistic view, these experiments suggest that the neural events that represent conscious intentions, and that we think determine voluntary action, actually occur after the brain events that underlie action execution. Precisely, although the experience of conscious intention precedes the movement (flexing a finger, for example), it occurs well after the relevant brain events. This fact is taken to show that the experience of consciously willing an action begins after brain events that set the action into motion. Thus, the brain creates both the intention and the action, leaving the person to infer that the intention is causing the action.

Libet et al. (1983; also see McCloskey, Colebatch, Potter, & Burke, 1983) asked their participants to watch a clock face with a revolving hand and to report either the time at which they “felt the urge” to make a freely willed endogenous movement (W judgment) or the time the movement actually commenced (M judgment). The voluntary movement consisted in flexing the wrist at a time the participants themselves chose. Also, they were asked to note and to report the position of the hand when they first became conscious of wanting to move; that is, the moment at which they first consciously experienced the will to move. The W judgment was considered to be the first moment of conscious intention. The exact moment at which the action began was estimated by measuring the electrical activity in the muscles involved. The preparatory activity in the motor areas of the brain (the readiness potential, RP) was measured through the electrical activity recorded by a scalp electrode placed over the motor cortex. The RP is a gradual increase in electrical activity in the motor cortical regions, which typically precedes willed actions by 1 s or longer, and is known to be related closely to the cognitive processes required to generate the action.

Assuming that the cause precedes the effect, the temporal order of the W judgment and RP onset allows one to investigate which event is the cause and which is the effect. If the moment of the W judgment (i.e., the moment of conscious intention or, to be more precise, of the neural event representing conscious intention) precedes the onset of RP, then the idea that conscious intention can initiate the subsequent preparation of movement is tenable. In contrast, if the moment of the W judgment follows the onset of RP, then conscious intention (i.e., the neural state representing conscious intention) would be a consequence of activity in the motor areas, rather than the cause of it. It should be noted, however, that Dennett (1998) has made the interesting point that conscious experience does not occur instantaneously, but rather develops over time. That means that the conscious experience of the onset of the neural event that produces a movement may evolve along with the movement itself. Therefore, it is likely wrong to expect a precise point in time when one becomes conscious of the initiation of a movement.

The sequence of events observed by Libet et al. (1983) is as follows: RP began between 1,000 and 500 ms before onset of the actual body movement, participants only experienced a conscious intention (i.e., W judgments) about 200 ms before movement onset, and conscious intention was experienced between 500 and 350 ms after RP onset. That indicated that a brain process in the motor areas initiated the intentional process well before participants were aware of the intention to act. The conclusion is that the brain is preparing the action that is apparently caused by a conscious intention before the participant is aware that he or she intends to act. All this goes directly against the traditional view that the conscious intention to perform an action initiates the neural events in the motor areas of the brain, which in turn produce the desired action. Also, it challenges the folk notion of free will, implying that the feeling of having made a decision is merely an illusion (e.g., Eagleman & Holcombe, 2002).
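To keep the three reported numbers straight, the sequence can be laid out on a single timeline relative to movement onset. The specific values below are merely illustrative picks from the ranges quoted above, not data from the study:

```python
# Illustrative timeline of the Libet et al. (1983) sequence, in ms
# relative to movement onset (0). Values are hypothetical picks from
# the reported ranges, chosen only to show the ordering of events.
rp_onset = -550     # readiness potential begins (reported: 1,000-500 ms before movement)
w_judgment = -200   # reported moment of conscious intention (W judgment)
movement = 0        # actual movement onset, estimated from muscle activity

# The critical ordering: W comes after RP onset but before the movement.
assert rp_onset < w_judgment < movement

# Interval by which RP onset precedes the W judgment (350 ms for the
# values assumed here, the lower end of the reported 500-350 ms range).
print(w_judgment - rp_onset)
```

The whole argument rests on that single inequality: if the W judgment reliably falls between RP onset and movement onset, conscious intention arrives too late to have initiated the preparation of the movement.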

The notion that free will is a mere construct of our own minds, and that we are only aware in retrospect of what our brain had already started some time ago, was not accepted lightly. Several critiques of Libet et al.’s (1983) study were advanced in response to a target article by Libet (1985). The most damaging one was that participants are often poor at judging the synchrony of two events, and especially so if the two events occur in different perceptual streams. Even more importantly, the so-called prior entry phenomenon may occur, according to which events in an attended stream appear to occur earlier than simultaneous events in an unattended stream. Because participants in the Libet et al. study very likely attempted to divide their attention between the external clock and their own internal states in order to make the W judgment, the value of 200 ms as the interval by which the W judgment preceded movement onset is uncertain. However, even the largest estimates of the prior entry effect are much smaller than the gap between RP and W judgment that Libet et al. found (e.g., Haggard & Libet, 2001; Haggard et al., 2002b).

In conclusion, it seems that Libet et al.’s results, as well as those of Haggard and his colleagues (Haggard & Eimer, 1999; Haggard & Magno, 1999; Haggard et al., 2002a,b), are fully consistent with the view that consciousness of intention to perform an action (i.e., the brain events that represent the conscious intention to perform an action) is the consequence, rather than the cause, of activity in the cortical motor areas. That does not mean to deny that there must be an identifiable link between having an intention and the action that is subsequently performed to achieve the goal indicated by the intention. It simply means that the timing between the brain events involved is not necessarily the one corresponding to the subjective experience of the observer.

Note, however, that the studies by Haggard and his colleagues (Haggard & Eimer, 1999; Haggard & Magno, 1999; Haggard et al., 2002a,b) suggest that the processes underlying the so-called lateralized readiness potential (LRP) are more likely than those underlying the RP to cause awareness of movement initiation. The LRP measures the additional negativity contralateral to the actual movement that is being performed, over and above that in the ipsilateral motor cortex, and it can be considered to be an indicator of action selection. This is based on the reasoning that, once the LRP has begun, the selection of which action to make must have been completed. In the studies by Haggard and his colleagues the onset of the RP and W judgment did not covary, whereas the onset of the LRP and W judgment did. That is, the LRP for early W awareness trials occurred earlier than the LRP for late W awareness trials. This finding would seem to rule out the RP as the unconscious cause of the conscious state upon which W judgment depends, but it is consistent with LRP having that role. Therefore, there is a clear indication that (a) the initial awareness of a motor action is premotor, in the sense of deriving from the specifications for movement rather than the movement itself (i.e., a stage later than intention but earlier than movement itself; Marcel, 2004), and (b) awareness of initiating action relates to preparing a specific action, rather than a general abstract state of intending to perform an action of some kind.

In this connection, a study by Fried, Katz, McCarthy, Sass, Williamson, Spencer, and Spencer (1991) is of interest. They stimulated, through surface electrodes that were implanted for therapeutic reasons, the premotor areas (Brodmann’s area 6, BA6, in particular) of epileptic patients. Weak stimulation of some of the more anterior electrode sites caused the patients to report a conscious intention to move or a feeling that they were about to move specific body parts. Stronger stimulation evoked actual movements of the same body parts. Fried et al.’s results too are clearly consistent with a causation chain that goes from motor areas to intention and not vice versa.

The results of the studies that Haggard and his colleagues (Haggard & Eimer, 1999; Haggard & Magno, 1999; Haggard et al., 2002a,b) conducted by following Libet’s seminal work (also see Marcel, 2004) point to the stage of motor specifications as the stage at which that initial awareness of action arises. By the term “motor specifications” they refer to the operations of preparing, planning, and organizing forthcoming movements (including selection of the effectors), which Haggard and Magno (1999) attributed to a specific premotor area, the Supplementary Motor Area (SMA), and which Haggard and Eimer (1999) associated with LRPs. The stage of motor specifications is downstream of the stage of intention formation, but prior to the stage of activation of the primary motor cortex, which controls execution of the movements themselves.

It is worth pointing out that, having disproved the traditional concept of mind-brain causation, Libet (1985, 1999) salvaged an important consequence of conscious intention. He claimed that, although actions seem not to be initiated consciously, they may be inhibited or stopped consciously. This is because there is sufficient time between W judgment (i.e., W awareness) and movement onset for a conscious veto to operate. Thus, although conscious intention to act does not seem to initiate the intended action, consciousness might still allow the unconsciously initiated action to go to completion, or veto it and prevent the actual action from occurring. The view of consciously vetoing an unconsciously initiated action may be altered to one according to which the action is only consciously modified (Haggard & Libet, 2001). That clearly is not a critical alteration. The critical point is that the neural onset of a voluntary movement precedes the conscious experience of having had the intention to act, and the causal role of conscious intention is confined to the possibility of suppressing the movement.

It must be conceded that one does not easily accept the notion that intention of action follows its initiation; that is, that the activation of the neural mechanisms that give rise to the intention to act follows the activation of the neural mechanisms that cause initiation of the action. What seems to have been asked in all studies that have addressed this issue to date is whether perception of the time of intentionality precedes or follows action. Suppose, however, that we have a poor awareness of the time when we believe our intentionality occurs: Then, all the evidence collected so far becomes suspect.

Even if one accepts the notion that the neural state of intention follows the neural state of action, it is still necessary to indicate what distinguishes voluntary from involuntary actions. It seems we have not yet uncovered the neural correlate of intention formation, which should always be a prerequisite for voluntary actions, regardless of whether such a correlate follows or precedes the neural events that lead to an action. As we see in one of the following sections, a signal that is available to consciousness before a voluntary action is initiated is the prediction of the sensory consequences of the action; that is, the anticipatory representations of their intended and expected effects. Perhaps this is the direction in which to look for differences between voluntary and involuntary actions.

Performing the Intended Action (Unawareness of Action)

Studies on Neurologically Intact Participants

In what follows I summarize evidence that an unconscious visual system can accurately control visually guided behavior even when the unconscious visual representation on which it depends conflicts with the conscious visual representation on which perception depends.

In the human brain, visual information follows many distinct pathways that give rise to many distinct representations of the visual world (e.g., Milner & Goodale, 1995; Rossetti & Pisella, 2002). Of these, two are more important than the others. One produces conscious perceptual representations (i.e., perception) and governs object recognition, whereas the other produces non-conscious representations and controls visually guided behavior. Because of this organization, human beings can simultaneously hold two representations of the same visual display without becoming aware of the conflict when they are incongruent. One representation is perceptual and is perceived consciously and directly. The other representation is not consciously perceived, is unconscious, and manifests itself only through its behavioral effects. In spite of that, a healthy observer experiences one coherent visual world and performs fully appropriate motor actions to interact with that world, a conscious representation of which he or she is at the same time perceiving. This occurs because the outputs of the two visual pathways normally do not conflict, but rather lead to perceptual experiences and to actions that are consistent with one another. Therefore, demonstrating their dissociability requires studies that disrupt this congruence as a result either of experimental intervention in healthy participants or of certain types of brain injury in neurological patients. In recent years, many studies (for reviews, see Glover, 2002, 2004; Rossetti & Pisella, 2002) have shown that introspection, on the basis of which we are convinced of perceiving a coherent visual world that is identical to the visual world that is the object of our actions, is in error: At least two visual representations operate simultaneously and in parallel when we interact with the visual world. Several of those studies have exploited perceptual illusions to explore double dissociations between perceptual and sensorimotor visual systems in non-brain-damaged participants.

One of these illusions is a variation of the Roelofs effect (Roelofs, 1935), which has been studied extensively by Bridgeman and his colleagues (e.g., Bridgeman, 2002; Bridgeman, Peery, & Anand, 1997). The location of a rectangular frame shown off-center in the visual field is misperceived, so that a frame presented on the left side of the visual field, for example, will appear less eccentric than it is, and the right edge will appear somewhat to the right of the observer’s center. In a typical experiment, the target is presented within the asymmetrically located frame, and its location is misperceived in the direction opposite the offset of the frame. That is, misperception of frame position induces misperception of target position (i.e., an induced Roelofs effect). In their studies, Bridgeman and his colleagues asked participants to perform two tasks when faced with a display that induced the Roelofs effect. One task was to describe the target’s position verbally, and based on the verbal response, the Roelofs effect was reliably observed. In contrast, a jab at the target, performed just after it had disappeared from view, was not affected by the frame’s position: The direction of the jab proved accurate despite the consciously perceived (and consciously reported) perceptual mislocation. The result is different, however, if a delay is imposed between disappearance of the target and execution of the motor response. After a delay of just 4 s, participants have the tendency to jab in the direction of the perceptual mislocation.

Aglioti, DeSouza, and Goodale (1995) made use, for a similar purpose, of the Ebbinghaus illusion (also called the Titchener circle illusion). In it, a circle is presented in the center of a circular array composed of circles either smaller or larger than the central one. The circle in the center appears to be larger if it is surrounded by smaller than by larger circles. One can build displays with central circles of physically different sizes that appear perceptually equivalent in size. In a 3-D version of this illusion, Aglioti et al. required participants to grasp the central circle between thumb and index finger and measured the maximal grip aperture during the reaching phase of the movement. They found that grip size was largely determined by the true, physical size of the circle to be grasped and not by its illusory size. In a subsequent study, Haffenden and Goodale (1998) measured the circle illusion by asking participants either to indicate the apparent size of a circle or to pick it up, without vision of hand and target. In both tasks the dependent variable was the distance between thumb and forefinger, so that the output mode was controlled and only the source of information varied. The illusion appeared in both tasks, but was much smaller for the grasp response.

Similarly, Daprati and Gentilucci (1997; also see Gentilucci, Chieffi, Daprati, Saetti, & Toni, 1996) used the Mueller-Lyer illusion for contrasting grasp and perception. This illusion makes a line ended by outward- or inward-pointing arrows appear longer or shorter than it actually is. In the study by Daprati and Gentilucci, participants were required to reach and grasp a wooden bar that was superimposed over the line. Results showed that the illusion was smaller when measured with grasp than with perception, even though there was some illusion under both conditions. That is, hand shaping while grasping the bar with thumb and index finger was influenced by the illusion configurations on which the bar was superimposed. This effect, however, was smaller than that observed in two tasks in which participants were required to reproduce the length of the line with the same two fingers. As is the case for the Roelofs effect (see above), the difference between grasp and perception disappeared when participants were asked to delay their response. With the delay, the illusion in the motor condition became as large as in the perceptual condition.

Additional evidence of a dissociation between perception and action derives from other studies that made use of paradigms different from those based on visual illusions. The perturbation paradigm is one of these (see review in Desmurget, Pelisson, Rossetti, & Prablanc, 1998). It involves a task in which the participant is asked to reach and grasp a target. Then, often coincident with the onset of the movement by the participant, a characteristic of the target, typically its location and/or size, is changed suddenly. Many studies have demonstrated the ability of the sensorimotor system to adjust to a change in the characteristics of the target well before the perceptual system can even detect the change. For example, Paulignan, MacKenzie, Marteniuk, and Jeannerod (1991) placed three dowels on a table and, by manipulating the lighting of the dowels, were able to create the impression that the target had changed location on some trials. They found that the acceleration profile of the grasping movement changed only 100 ms after the perturbation. Castiello, Bennett, and Stelmach (1993) studied the effect of size perturbation on hand shaping in a thumb and finger grasp of a target object and found that hand shaping could respond to a size perturbation in as little as 170 ms.

In a series of experiments (Castiello & Jeannerod, 1991; Castiello, Paulignan, & Jeannerod, 1991; also see Jeannerod, 1999), participants made a simple vocal utterance to signal their awareness of the object perturbation in a version of the perturbation paradigm. Comparison of the hand motor reaction time and the vocal reaction time showed that the vocal response consistently took place after the motor correction had started. As in the Paulignan et al. (1991) study, the change in the hand trajectory occurred as early as 100 ms following the object’s perturbation, whereas the vocal response by which participants reported awareness of the perturbation was not observed until more than 300 ms later. The conclusion was that awareness of the perturbation lagged behind the motor action performed in response to this perturbation.

Because our perceptual system is able to disregard motion of images on the retina during eye movements, it is very difficult to detect small displacements (perturbations) that occur in the visual field during a saccade; that is, a rapid eye movement from one point to another. Often objects can be moved several degrees of visual angle during a saccade without the displacement being perceived. This phenomenon is known as saccadic suppression (see, e.g., Bridgeman, Hendry, & Stark, 1975, and Chekaluk & Llewelynn, 1992, for a review). A seminal study on this subject is the one by Bridgeman, Lewis, Heit, and Nagle (1979). It was instrumental in starting the whole line of research on the distinction between processing for action, which produces non-conscious representations, and processing for perception, which produces conscious representations. Participants were asked to point at a target that had been displaced during the saccade and then extinguished. It was found that the displacement often went undetected (saccadic suppression), but was not accompanied by corresponding visuomotor errors. That is, the pointing movement after a target jump remained accurate, irrespective of whether the displacement could be verbally reported or not. Not only did participants fail to detect the target displacement, but they also failed to detect their own movement corrections.

The paradigm just described is also called the double-step paradigm, in which the first step is target presentation and the second step is target displacement. It was further exploited by Goodale, Pelisson, and Prablanc (1986) and Pelisson, Prablanc, Goodale, and Jeannerod (1986). They confirmed that participants altered the amplitude of their movements to compensate for (most of) the target displacement, even though they were not able to detect either the target jump or their own movement corrections. Not even forced-choice guesses about the direction of the jump could discriminate between forward and backward target perturbations.

Visual masking too has been used to study the dissociation between motor control and conscious perception (e.g., Kunde, Kiesel, & Hoffman, 2003; Neumann & Klotz, 1994; Taylor & McCloskey, 1990; Vorberg, Mattler, Heinecke, Schmidt, & Schwarzbach, 2003; see Bar, 2000, and Price, 2001, for reviews). It seems that masking (and metacontrast) eliminates conscious perception of the stimulus, whereas the ability of the (non-perceived) stimulus to trigger an action (a motor response) remains largely intact. In the study by Neumann and Klotz, for example, the observer was unable to discriminate reliably the presence from the absence of the masked stimulus, but the masked (and undetected) stimulus affected the speed of voluntary responses, even in a two-choice situation that required integrating form information with position information. Thus, this study clearly confirmed that motor action in response to a visual stimulus can be dissociated from the verbal report about detection of that same stimulus. Similarly, Vorberg et al. (2003) showed that experimental manipulations that modify the subjective visual experience of invisible masked stimuli do not affect the speed of motor responses to those same stimuli. In addition, they found that, over a wide range of time courses, perception and unconscious behavioral effects of masked visual stimuli obey different temporal laws.

The studies I have summarized above used different procedures, but the results converge toward supporting a rather counter-intuitive conclusion: An unconscious visual system controls visually guided actions and operates more or less simultaneously with the conscious visual system. When the representations produced by the unconscious visual system conflict with the representations produced by the conscious visual system, the former prevail and guide action. Here, as well as in the studies that are summarized in the following section, two dissociations emerge. One is between (conscious) perceptual and (unconscious) sensorimotor visual systems. The other, more interesting dissociation is between awareness of the representations, produced by the perceptual system, which one believes guide action, and the lack of awareness of the representations, produced by the sensorimotor system, which actually guide action.

Studies on Brain-Damaged Patients

The deficits observed in some neuropsychological patients lend further (and perhaps more convincing) support to the notion of dissociability between perceptual and sensorimotor visual systems. In particular, optic ataxia and visual agnosia patients not only strongly support the case for a double dissociation between perceptual recognition of objects and reaching and grasping of the same objects but also suggest that the neurological substrates for these two systems are located selectively in the ventral (object perception) and dorsal (object-directed action) streams of the visual pathways (e.g., Milner & Goodale, 1995).

A patient, DF, first reported by Goodale, Milner, Jakobson, and Carey (1991; also see Milner & Goodale, 1995), perhaps provides the clearest evidence to date for dissociation between perception and action, showing a reciprocal pattern to optic ataxia (see below). DF developed a profound visual-form agnosia following a bilateral lesion of the occipito-temporal cortex. She was unable to recognize object size, shape, and orientation. She failed even when she was asked purposively to match her forefinger-thumb grip aperture to the size of visually presented target objects. In sharp contrast, when instructed to pick up objects by performing prehension movements, the patient was quite accurate, and the maximum size of her grip correlated normally with the size of the target object. Apparently, DF possessed the ability to reach out and grasp objects with remarkable accuracy and thus could process visual information about object features that she could not perceive accurately, if at all. However, although her ability to grasp target objects was truly remarkable, it had certain limitations. In normal participants, grip size still correlates well with object width even when a temporal delay of up to 30 s is interposed between disappearance of the target object and execution of the motor response. In DF, instead, evidence of grip scaling was no longer present after a delay of just 2 s.

If DF’s performance is compared with the performance of patients with posterior parietal lesions, impairments in perceptual recognition of objects and in object-directed action seem to be clearly dissociated. Jeannerod (1986) and Perenin and Vighetto (1988) reported patients with lesions to the posterior parietal lobe, who showed a deficit that is termed optic ataxia. Patients with optic ataxia have difficulties in directing actions to objects presented in their peripheral visual field. Their visually directed reaching movements are inaccurate, often systematically in one direction. In addition, these movements are altered kinematically, especially concerning duration, peak velocity, and deceleration phase. However, patients with optic ataxia, in contrast to visual agnosic patients, are not impaired in the recognition of the same objects that they are unable to reach correctly.

In addition to optic ataxia and visual agnosia, blindsight is another neurological deficit that provides support to the dissociation between conscious perception and non-conscious motor control. Patients with extensive damage to the primary visual area in the occipital lobe (area V1 or BA17) are regarded as cortically blind because they do not acknowledge seeing stimuli in the affected parts of their visual fields. It is possible, however, to demonstrate that their behavior can still be controlled by visual information (e.g., Farah, 1994; Milner & Goodale, 1995; Weiskrantz, 1986). The paradoxical term “blindsight” was initially coined by Sanders, Warrington, Marshall, and Weiskrantz (1974) to refer to all such non-conscious visual capacities that are spared in cortically blind parts of the visual field. The first of the many reports that patients with cortical blindness can use visuospatial information to guide their actions within the blind field showed this by asking patients to move their eyes toward a light that they insisted they could not see. Their movements were statistically correlated with the location of the light (Poeppel, Held, & Frost, 1973). Further, Zihl (1980; Zihl & von Cramon, 1985) found that the accuracy of their saccadic responses can be improved markedly as a consequence of training. Even more striking evidence was provided by Perenin and Jeannerod (1978), who showed accurate pointing within the blind fields in several patients. It is clear that considerable visual control of the direction and amplitude of both eye and arm movements is present in cortically blind patients.

In addition, it turns out that some patients with blindsight seem to have a residual ability to use shape and size information. Perenin and Rossetti (1996) asked a completely hemianopic patient to “post” a card into a slot placed at different angles within his blind field. His performance was statistically well above chance. Yet, when asked to make perceptual judgments of the slot’s orientation, either verbally or by manual matching, the patient was at chance level in his affected field. Perenin and Rossetti also demonstrated that the patient, when tested with the same tasks as those used to test agnosic patient DF (the task of “posting” the card was one of them), could reach out and grasp rectangular objects with a certain accuracy in his affected field: As in normal participants, the wider the object, the greater the anticipatory hand-grip size during reaching. Yet again the patient failed when asked to make perceptual judgments: With either verbal or manual response his attempts were uncorrelated with object size. As was the case with agnosic patient DF, orientation and size can be processed in cortically blind visual fields, but only when used to guide a motor action and not for perceptual tasks. Remember, however, that patients with optic ataxia show the converse dissociation. They are unable to use visual information to guide motor acts, such as reaching and grasping, but they still retain the ability to perceive consciously the objects on which they are unable to act.

The evidence summarized above strongly supports the notion that the human brain produces at least two functionally distinct representations of a given visual display, which, under some conditions, can differ. One originates from a conscious system that performs visual object (pattern) recognition. The other is a motor-oriented representation that is unconscious, can be in conflict with the conscious perceptual representation, and can accurately control visually guided behavior in spite of that potential conflict. The distinction between these two pathways originated nearly 40 years ago (Schneider, 1967; Trevarthen, 1968). Trevarthen named the two systems "focal" and "ambient": The focal system was postulated to be based on the geniculostriate pathway and to be devoted to pattern recognition; the ambient system was thought to be based on the superior colliculus and related brainstem structures and to be devoted to visually guided behavior. This anatomical and functional distinction became known as the now classical distinction between a system specialized for answering the question "what is it?" and a system specialized for answering the question "where is it?" (Schneider, 1969). Later on, both systems were shown also to have a cortical representation: The successor to the focal system (i.e., the "what" system) comprises an occipito-temporal pathway, whereas the ambient system (i.e., the "where" system) includes an occipito-parietal pathway as well as the superior colliculus (Ungerleider & Mishkin, 1982).

More precisely, the cortical pathway for the "what" system is the ventral occipito-temporal route that links striate cortex (V1, or BA17) to prestriate areas and from there reaches the inferotemporal cortex on both sides via callosal connections. Lesions to this pathway abolish object discrimination without damaging perception of spatial relations among objects (visual-form agnosia). The other, dorsal pathway diverges from the ventral one after the striate cortex and links the prestriate areas to the posterior part of the parietal lobe. Lesions to this pathway


338 the cambridge handbook of consciousness

produce visuospatial deficits characterized by errors in establishing the relative positions of spatial landmarks and by localization deficits during object-directed actions (optic ataxia).

As described above, cases of optic ataxia, visual-form agnosia, and blindsight, as well as a number of studies on healthy participants, have shown that the anatomical dorsal-ventral distinction maps more precisely onto a distinction between processing what an object is and how to direct action toward it, rather than where an object is located (Milner & Goodale, 1995).

The phenomenon of blindsight merits a few additional words (Milner & Goodale, 1995; Stoerig & Cowey, 1997). The dorsal stream has substantial inputs from several subcortical visual structures in addition to its input from V1. In contrast, the ventral stream depends almost entirely on V1 for its visual input. As a consequence, damage to V1, even though it affects both streams, deprives the ventral stream of all its input while leaving the dorsal stream with much of its input from its associated subcortical structures (the superior colliculus, in particular). It is thus possible to demonstrate that blindsight patients' behavior can be controlled by visual information provided by intact subcortical visual structures via the dorsal stream. What these patients are unable to do is process visual information through the ventral stream, which depends primarily on V1 for its visual input. In addition, these patients can give only the most rudimentary kind of perceptual report about their visual experiences (regions of brightness, motion, or change).

As already noted, differences between immediate and delayed actions have been reported in neurologically intact observers, as well as in neurological patients (see Rossetti & Pisella, 2002, for an extensive discussion of this issue). In general, what emerges is that inserting a temporal gap between target presentation and response execution renders the features of the representation on which the motor action depends more similar to those of the perceptual representation on which the verbal report depends. In particular, the delay seems to bring about a shift from reliance on a (non-conscious) representation formed online and used for immediate action control to a (conscious) representation retrieved from memory and used for verbal report. The former would be short-lived and would compute stimulus location in egocentric coordinates, whereas the latter would last much longer and would compute stimulus location in allocentric coordinates. Egocentric coordinates have their origin in the observer, often at the body midline. Allocentric coordinates have their origin outside the observer, often in an external object. While an action is being performed, the location of its target seems to be computed in egocentric coordinates. In contrast, if a delay is introduced between target presentation and verbal report, target location seems to be computed in allocentric coordinates.
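The two reference frames can be illustrated with a minimal sketch. The simple 2-D setup, the function names, and the numbers are illustrative assumptions, not part of the studies discussed here:

```python
import math

# Hypothetical sketch: the same target location expressed in an egocentric
# frame (origin at the observer, axes aligned with the body midline) and in
# an allocentric frame (origin at an external landmark object).

def to_egocentric(target_xy, observer_xy, observer_heading):
    """Express a world-frame target relative to the observer's body."""
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    # Rotate by the negative heading so the axes track the observer's body.
    c, s = math.cos(-observer_heading), math.sin(-observer_heading)
    return (c * dx - s * dy, s * dx + c * dy)

def to_allocentric(target_xy, landmark_xy):
    """Express the target relative to an external landmark."""
    return (target_xy[0] - landmark_xy[0], target_xy[1] - landmark_xy[1])

target = (2.0, 3.0)     # world coordinates of the target
observer = (0.0, 0.0)   # observer at the origin, heading 0
landmark = (1.0, 3.0)   # an external reference object

print(to_egocentric(target, observer, 0.0))  # → (2.0, 3.0)
print(to_allocentric(target, landmark))      # → (1.0, 0.0)
```

The point of the sketch is only that an egocentric description changes whenever the observer moves, whereas an allocentric description survives a delay because it is anchored to the world.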

Evidence is available that supports this distinction. In a study reported by Rossetti and Pisella (2002), for example, immediate and delayed pointing toward proprioceptive targets was tested in blindfolded participants. On each trial a target was presented at one of six possible locations lying on a circle centered on the starting point. In a preliminary session participants had been trained to point to these positions and to associate a number (from 1 to 6) with each target. When the task was to point to the target immediately, the participants' pointing distribution was unaffected by the target array and was elongated in the movement direction, indicating the use of an egocentric reference frame. In contrast, when pointing was delayed and/or was accompanied by a simultaneous verbal report of the target number, the participants' pointing distribution tended to align with the target array, perpendicular to the movement direction, indicating the use of an allocentric frame of reference. In the words of Rossetti and Pisella (2002, p. 86), "when action is delayed and the object has disappeared, the parameters of object position and characteristics that are used by the action system can only be accessed from a cognitive sustained representation. This type


consciousness and control of action 339

of representation . . . relies on different reference frames with respect to the immediate action system."

In conclusion, it is clear that the sensorimotor representations that support immediate action are not conscious and are short-lived. If a delay is introduced, long-term, conscious perceptual representations take over. The frames of reference on which immediate and delayed actions depend also differ, consistent with the respective needs of an action system and a perceptual representational system.

Perception of the Effects of the Performed Action

So far I have reviewed evidence suggesting that many aspects of action, from its initiation to the appreciation of the percepts that guide it, occur without awareness (i.e., unconsciously). Now I argue that one aspect of an action that is normally available to awareness is the sensory consequence(s) of that action, or, more precisely, the prediction of the sensory consequences of that action (Blakemore & Frith, 2003; also see Frith, Blakemore, & Wolpert, 2000; Jeannerod, Farrer, Franck, Fourneret, Posada, Daprati, & Georgieff, 2003). However, although there is only limited awareness of the actual sensory consequences of an action (i.e., of action effects) when they are successfully predicted in advance, we are very often aware of the actual action effects when they deviate from what we expect.

As was pointed out by Blakemore and Frith (2003), the important point here is to distinguish between the predicted action effects and the actual action effects. Normally, we are aware of the former but not of the latter, unless the latter do not conform to our expectations, in which case they too become conscious. In some circumstances, however, not even quite large deviations from the expected action effects reach consciousness (see, e.g., Fourneret & Jeannerod, 1998). When a task is overlearned and becomes automatic with practice, and thus can be carried out without the intervention of executive functions (see below), we are aware neither of the predicted consequences of our actions nor of the intended actions and the motor programs executed to achieve the actions' goals.

Representations of action effects have at least two functions in performing an action (a special issue of Psychological Research/Psychologische Forschung, edited by Nattkemper and Ziessler, 2004, was devoted to the role of action effects in the cognitive control of action). First, after having executed a particular action, one needs to compare the obtained effects with the effects the action was intended to accomplish. Hence, anticipatory effect representations are involved in the evaluation of action results. Second, one plans and executes actions with the aim of producing some desired effects. Hence, anticipations of action goals are involved in action control. Both functions require representations of an action goal and of action effects.
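The comparator idea can be sketched in a few lines. This is a toy illustration only: the feature names, numbers, and tolerance are hypothetical assumptions, not values drawn from the literature reviewed here.

```python
# Hypothetical comparator: predicted action effects are checked against the
# actual effects, and only a mismatch beyond some tolerance is flagged
# (i.e., on the view discussed above, would reach awareness).

def compare_effects(predicted, actual, tolerance=0.1):
    """Return the features whose actual effect deviates from prediction."""
    mismatches = {}
    for feature, expected in predicted.items():
        observed = actual.get(feature, 0.0)
        if abs(observed - expected) > tolerance:
            mismatches[feature] = (expected, observed)
    return mismatches

predicted = {"grip_aperture": 8.0, "contact_time": 0.45}
actual_ok = {"grip_aperture": 8.05, "contact_time": 0.47}   # as expected
actual_bad = {"grip_aperture": 6.0, "contact_time": 0.47}   # grip too small

print(compare_effects(predicted, actual_ok))   # → {} (nothing flagged)
print(compare_effects(predicted, actual_bad))  # → {'grip_aperture': (8.0, 6.0)}
```

The sketch captures both functions named above: the same anticipatory representation serves as the goal that drives the action and as the standard against which its outcome is evaluated.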

The notion that intentional actions are controlled by anticipatory representations of their intended and expected effects goes back at least to James (1890). It is termed the "ideomotor theory," which basically proposes that actions are represented in memory by their effects and that these effects are in turn used to control actions. In the last 15 years or so, the ideomotor approach has been reformulated within the framework of cognitive psychology by Prinz and his colleagues (e.g., Prinz, 1997; Hommel, Muesseler, Aschersleben, & Prinz, 2001), who used the term "common coding theory of perception and action."

The simplest idea would, of course, be that our awareness of initiating an action originates from sensory signals arising in the moving limbs. This seems unlikely, though, because such signals are not available until after the limbs have started to move. Instead, our awareness seems to depend on a signal that precedes the action. A signal that is available before an action is initiated is the prediction of the sensory consequences of the action; that is, the anticipatory representation of its intended and expected effects.


Haggard and Magno (1999), using Libet's paradigm (see above), found a dissociation between perceived onset and action initiation. They showed that the perceived time of action onset is slightly delayed (by about 75 ms) if the motor cortex is stimulated with transcranial magnetic stimulation (TMS), whereas this stimulation causes a greater delay (of about 200 ms) in the initiation of the actual movement. This finding is compatible with the notion that it is the prediction of the effects of the action that corresponds to the onset of the action; however, it is not evidence of such a relation. In a subsequent study, which also exploited Libet's paradigm, Haggard et al. (2002b; also see Eagleman & Holcombe, 2002; Frith, 2002) explored the time course of the binding of actions and their effects in consciousness by investigating the sensory consequences of an event being causally linked to an observer's action. Their aim was to clarify what happens to our subjective judgment of the timing of events when an event is causally linked to an observer's intentional action. They showed that the perceived time of intentional actions and the perceived time of their sensory consequences were attracted together in consciousness, so that participants perceived intentional movements as occurring later, and their sensory consequences as occurring earlier, than they actually did. In the voluntary condition, participants noted, by watching a revolving hand on a clock face, the time of onset of their intention to perform an action (i.e., a key press). In the TMS condition, they noted the time of a muscle twitch produced by magnetic stimulation of the motor cortex. In the sham TMS condition, they noted the time of an audible click made by the TMS apparatus in the absence of motor stimulation. In the auditory condition, they just noted the time of a tone.

When the first three conditions were followed, after 250 ms, by a tone, large perceptual shifts occurred in the time of awareness of the conscious intention, the TMS-induced twitch, and the click produced by the sham TMS. Only awareness of the voluntary key press and awareness of the tone were perceived as being closer in time than they actually were: Awareness of the voluntary key press was shifted later in time, toward the consequent tone, whereas awareness of the tone was shifted earlier in time, toward the action. The involuntary, TMS-induced movement produced perceptual shifts in the opposite direction, and the sham TMS showed minimal perceptual shifts. Based on these findings, Haggard et al. (2002b) suggested the existence of a mechanism that associates (or binds) together awareness of a voluntary action and awareness of its sensory consequences, bringing them closer in perceived time. In other words, "the brain would possess a specific mechanism that binds intentional actions to their effects to construct a coherent conscious experience of our own agency" (p. 385). Note that several other behavioral studies have confirmed that representations of actions and their effects tend to be integrated (see the chapters by Hazeltine, by Stoet & Hommel, and by Ziesser & Nattkemper in Prinz & Hommel, 2002), as has the observation of attraction effects between percepts of stimuli and percepts of the movements that might have caused them.

Even if ample empirical evidence suggests that motor actions are cognitively coded in terms of their sensory effects, it does not follow that they are initiated by consciously accessing those sensory effects. It is entirely possible that the binding between actions and their consequences, and/or action retrieval via action consequences, occurs unconsciously (Haggard et al., 2002a). For example, Kunde (2004) reasoned that, according to the ideomotor theory, initiating a certain action is mediated by retrieving its perceptual effects, which in turn activate the particular motor pattern that normally brings about these anticipated perceptual effects. The ideomotor theory also implies that actions (as motor patterns) should become activated by presenting the effect codes that represent their consequences; in other words, actions should be induced by perceiving their effects. Kunde found that responding to a visual target was faster and more accurate when the target was briefly preceded by the visual effect


of the required response. Interestingly, this response priming induced by the action effects was independent of prime perceptibility and occurred even when the prime was not consciously perceived. This indicates that consciousness is not a necessary condition for action effects to evoke their associated actions (motor patterns).

Regardless of whether the binding among intention, action, and consequence occurs consciously or unconsciously, or, more likely, in part consciously and in part unconsciously, it has both unconscious and conscious elements (Haggard et al., 2002a); inference based on timing plays a critical role in producing the illusion of consciously willed actions. The conscious experiences of intention, action, and action effects are compressed in time, and this unification is instrumental in producing the experience of voluntary action. Even though he overlooked the role of action effects, Wegner (2003, p. 67) has expressed this notion clearly: "When a thought appears in consciousness just before an action (priority), is consistent with the action (consistency), and is not accompanied by conspicuous alternative causes of the action (exclusivity), we experience conscious will and ascribe authorship to ourselves for the action."

Before leaving this section it is perhaps useful to touch briefly on the work on mirror neurons. Mirror neurons (see Rizzolatti & Craighero, 2004, for a recent and extensive review) are found in the premotor and parietal cortex of the monkey. They discharge selectively both when the monkey performs a given action and when the monkey observes another living being perform a similar action. Thus, mirror neurons code specific actions, whether performed by the agent or by others. Evidence exists that a mirror-neuron system similar to that of the monkey is present in humans. The notion that mirror neurons are involved in action understanding is corroborated by the observation that they discharge also in conditions in which the monkey does not see the action but is nonetheless provided sufficient clues to create a mental representation of what action is being performed (Kohler, Keysers,

Umilta, Fogassi, Gallese, & Rizzolatti, 2002; Umilta, Kohler, Gallese, Fogassi, Fadiga, & Rizzolatti, 2001). Visual clues to the action can trigger mirror neurons only if they allow the observer to understand the action. Auditory clues originating from an action performed behind a screen can replace visual clues if they convey the crucial information about the meaning of the action. However, an fMRI study (Buccino, Lui, Canessa, Patteri, Lagravinese, & Rizzolatti, 2004) showed that only actions belonging to the motor repertoire of the observer excite the human mirror system; actions that do not belong to the observer's motor repertoire are instead recognized through a different, purely visual mechanism.

Especially interesting in the present context are studies (e.g., Schubotz & von Cramon, 2002) showing that the human frontal mirror region is important not only for understanding goal-directed actions but also for recognizing predictable patterns of visual change. In conclusion, it seems clear that mirror neurons play a role in coding intended actions and their consequences.

Executive Functions

Another popular view is that the processes underlying intentional actions are closely related to executive functions (also termed "control processes"), such as those involved in planning, problem solving, inhibition of prepotent responses, and responding to novelty. If the notion is to be preserved that consciousness is necessary for intentional actions, and intentional actions belong to the realm of executive functions, then drawing on the Supervisory Attentional System (SAS) model (Norman & Shallice, 1986; Shallice, 1988, 1994) can be useful in this context.

It is widely accepted that the vast majority of our cognitive processes operate in an automatic fashion, with little or no need for conscious, intentional control: When specific conditions occur, automatic mental


processes are triggered and run autonomously to completion, independent of intentional initiation and conscious guidance (e.g., Dehaene & Naccache, 2001, and chapters in Umilta & Moscovitch, 1994). In the information-processing accounts of consciousness developed in the 1970s, the unitary nature and control functions of consciousness were explained in terms of the involvement of a limited-capacity, higher-level processing system (Mandler, 1975; Posner & Klein, 1973; Umilta, 1988). However, because of the diversification of processing systems that emerged from research in cognitive psychology, cognitive neuropsychology, and cognitive neuroscience, and because of the realization that processing systems are often informationally encapsulated (Fodor, 1983; Moscovitch & Umilta, 1990), it became less plausible to associate the unitary characteristics of consciousness with the operations of any single processing system. Shallice (Norman & Shallice, 1986; Shallice, 1988, 1994) put forward an alternative approach by proposing that a number of high-level systems have a set of special characteristics that distinguish them from the cognitive systems devoted to routine, informationally encapsulated processes. He maintained that the contrast between the operations of these special systems and those realizing informationally encapsulated processes corresponds, in phenomenological terms, to that between conscious and non-conscious processes.

His model is clearly concerned with action selection and is thus very relevant to the present discussion. It has three main processing levels. The lowest is that of special-purpose processing subsystems, each specialized for particular types of operations. At the next level there are a large number of action and thought schemas, one for each well-learned routine task or subtask. Schemas are selected for operation through a process involving mutually inhibitory competition (contention scheduling). To cope with non-routine situations, an additional system, the SAS, provides modulating activating input to schemas in contention scheduling. In later versions of the model, the SAS is held to

contain a variety of special-purpose subsystems localized in different parts of the prefrontal cortex and in the anterior cingulate.
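A toy simulation can make the contention-scheduling idea concrete: schema activations compete through mutual inhibition, and the SAS contributes only an extra activating bias that can tip the competition in non-routine situations. Everything here (schema names, update rule, parameter values) is an illustrative assumption, not the Norman-Shallice model itself:

```python
# Hypothetical sketch of contention scheduling: each schema's activation is
# boosted by its own strength and inhibited by the pooled activation of its
# competitors; the SAS merely adds a top-down activating bias.

def select_schema(triggers, sas_bias=None, inhibition=0.5, steps=50):
    """Return the schema that wins the mutually inhibitory competition."""
    act = dict(triggers)                       # bottom-up trigger strengths
    for name, bias in (sas_bias or {}).items():
        act[name] = act.get(name, 0.0) + bias  # top-down SAS modulation
    for _ in range(steps):
        total = sum(act.values())
        # Self-excitation minus inhibition from all competing schemas.
        act = {
            name: max(0.0, a + a - inhibition * (total - a))
            for name, a in act.items()
        }
    return max(act, key=act.get)

triggers = {"grasp_cup": 0.9, "point_at_cup": 0.6}
print(select_schema(triggers))                                  # → grasp_cup
print(select_schema(triggers, sas_bias={"point_at_cup": 0.5}))  # → point_at_cup
```

Without SAS input, the habitual schema wins on trigger strength alone; a modest SAS bias lets a weaker, task-appropriate schema prevail, which is the sense in which routine action needs no SAS involvement while non-routine action does.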

This model relates to consciousness of action if one assumes that conscious processes require the mediation of the SAS and lead directly to the selection in contention scheduling of a schema for thought or action. On this view, consciousness of a particular content causes the selection of a schema. Once a schema is selected, and provided that it does not conflict with a strongly established schema for a different action (as in cases requiring inhibition of a prepotent response), action may proceed without any transfer of information from the SAS. Therefore, the principle that consciousness is necessary for intentional action can be stated more precisely as the hypothesis that tasks involving intentional action recruit conscious processes, whereas automatic actions do not. Keep in mind, however, that, as Block (1995) argued, such attempts to account for the functional role of conscious information ("access consciousness") do not address the phenomenological properties of conscious experience ("phenomenal consciousness").

Some results are especially important for a precise specification of the relation between consciousness and intentional action. Studies on blindsight (see above) show that awareness of the location of the stimulus is not necessary for accurate performance on a simple pointing task when participants are asked to guess. As was maintained by Marcel (1988; also see Natsoulas, 1992), blindsight patients can learn to initiate goal-directed actions (e.g., reaching), which means that their actions can be visually guided in the absence of the conscious visual representations on which those actions are based. Very strikingly, patients protest that they are not performing the visually guided behavior that they are in fact performing. However, awareness of the presence of the visual stimulus has to be provided by an auditory cue for the initiation of the pointing action to occur (Weiskrantz, 1997). That seems to indicate that the blindsight patient requires input via the SAS to initiate a


pre-existing schema for pointing. Once that schema is initiated, non-conscious information held in special-purpose processing systems can serve to guide action.

Lesion studies in monkeys and humans indicate that prefrontal lesions (i.e., SAS lesions) have little effect on performance in automatic tasks but a strong effect on tasks that seem to depend crucially on conscious processes, among which is the spontaneous generation of intentional actions. A striking consequence of a deficit of the SAS is the so-called utilization behavior sign. It manifests itself as a component of a very grave dysexecutive syndrome and is typically observed in patients with a bilateral focal frontal lesion. If there is some object that can be used or manipulated within the patient's field of view and within reach, the patient will use it to perform actions appropriate to the object, even though he or she has been explicitly and insistently asked not to do so. It is clear that in utilization behavior the non-intended actions are environment-driven, with the mediation of contention scheduling; the SAS plays very little role, or no role at all, in them. In effect, most current explanations of this bizarre behavior suggest a weakening of whatever mechanism is responsible for ensuring the implementation of intended actions, a mechanism that under normal circumstances overrides automatic or environmentally driven actions. Apparently, utilization behavior is attributable to a failure to inhibit inappropriate actions, rather than a failure to select appropriate actions.

Environmentally driven actions are quite common in normal people under circumstances that suggest a diminished influence of the SAS, such as when we are in a distracted state or when we are engaged in another task; this is especially so when the level of arousal is low. In these cases, an apparently intentional action is initiated in the absence of full awareness. Examples include reaching for and drinking from a glass while talking, slips of highly routine actions that involve action lapses of the "capture" error type, and changing gears or braking while driving (Norman, 1981; Norman & Shallice, 1986; Reason, 1984; Shallice, 1988). It is not clear whether it would be correct to speak of these actions as completely unconscious; probably not. These anomalous cases are explained by distinguishing between the influence of the stimulus on contention scheduling and its influence on the SAS. The selection of well-learned and relatively undemanding schemas need not require SAS involvement. As already mentioned in the context of action effects, overlearned tasks are common examples of actions (schemas) that can be carried out with very limited awareness. With sufficient practice many tasks can become automatic and can be carried out without the intervention of the SAS; that is, without any need to consciously control the actions the task requires.

Deficits in the Control of Action That Co-Occur with Abnormalities in Consciousness

At first sight, one of the most striking abnormalities in the control of action would seem to be accompanied not by an abnormality of consciousness but rather by disownership of action. This abnormality is termed the "anarchic hand" sign, which was first described by Della Sala, Marchetti, and Spinnler (1991, 1994; also see Frith, Blakemore, & Wolpert, 2000; Marcel, 2004, for extensive discussions). It is often confused with the "alien hand" sign (Marchetti & Della Sala, 1998; Prochazka et al., 2000) or with the utilization behavior sign (e.g., Lhermitte, 1983; also see above), which are also abnormalities in the control of action. The anarchic hand sign is unimanual, and patients describe the anarchic hand as having a "will of its own," which, of course, terrifies them. The affected hand performs unintended (even socially unacceptable) but complex, well-executed, goal-directed actions that compete with those performed by the non-affected hand. Sometimes the patient talks to his or her anarchic hand, asking it to desist, and often the patient succeeds in stopping


it only by holding it down with the other hand. The patient seems to lack any sense of intention for the anarchic action performed by the affected hand.

It is important to keep in mind that the patient is aware that the anarchic hand belongs to him or her. What is alien, what the patient disowns, is the action the hand performs, not the hand itself. In contrast, in the case of the alien hand sign, the affected hand does not feel like one's own. This seems to be a sensory phenomenon and has little to do with motor action. As I have already noted in discussing the dysexecutive syndrome, patients with damage to the frontal lobe may show utilization behavior, which is characterized by the fact that the sight of an object elicits a stereotyped action that is inappropriate in the current context. The abnormal aspect of consciousness that accompanies utilization behavior is not so much a lack of awareness of the action or of the intention to act, but rather the erroneous experience that those inappropriate and unwanted actions are intended (also see Frith et al., 2000).

The anarchic hand sign is often associated with unilateral damage to the SMA contralateral to the hand affected by the pathological sign. Considering that the function of the anterior part of the SMA is likely to be essentially inhibitory, and that a movement can be initiated by the primary motor cortex (M1 or BA4) only when activity in the anterior part of the SMA declines, Frith et al. (2000) have suggested that the anarchic hand sign manifests itself when the anterior part of the SMA is damaged. In the absence of the inhibitory influence of the SMA on one side, appropriate stimuli would trigger the automatic action of the corresponding hand. In support of the inhibitory role of the SMA is the observation of its preferential activation when movements are imagined but their execution must be inhibited. This hypothesis clearly renders the anarchic hand sign very similar, if not identical, to the utilization behavior sign; indeed, the former would be a special case of the latter. Marcel (2004), however, has convincingly argued that that is not the case: His main point is that the patient who shows utilization behavior does not disown the hand that performs it. If the SMA is the source of the LRP (see above) and the LRP is correlated with voluntary action, it makes sense that damage to the SMA would be associated with lack of awareness, or distortions in awareness, of movement initiation and execution.

A possibility with which Marcel (2004) would not agree, though, is that disownership of action in the anarchic hand sign is attributable to a lack of awareness of the intention that guides the affected hand's behavior. In other words, the patient would disown the action (not the hand that performs it, as happens in the alien hand sign) because the intention that initiated that action is not available to consciousness. If that is so, then the anarchic hand sign would after all be a deficit in the control of action that co-occurs with an abnormality of consciousness. Perhaps it is caused by an abnormality of consciousness of the predicted action effects.

By following Frith et al. (2000), I have maintained (see above) that what normally reaches consciousness is the prediction of the action effects, rather than the actual action effects. Frith et al. suggest that the same is true of the state of the limbs after a movement; that is, the conscious experience of a limb would normally be based on its predicted state, rather than on its actual state. In effect, the predicted state would play a greater role than sensory feedback. That, after all, is not surprising, considering that one of the effects of an action is to bring about a new state of the limbs that were involved in performing the action.

As argued by Frith et al. (2000), the notion that the conscious experience of a limb originates from its predicted state is supported by the phenomenon of the “phantom limb.” After amputation of all or part of a limb, patients may report that, in spite of the fact that they know very well that there is no longer a limb, they still feel its presence. Although the limb is missing because of the amputation, the premotor and the motor cortex can still program a movement of it, which causes computation of the predicted state. Because the predicted state becomes available, the phantom limb will be experienced as moving. It is interesting to note that Ramachandran and Hirstein (1998) have proposed that the experience of the phantom limb depends on mechanisms located in the parietal lobes, and Frith et al. (2000) have independently suggested that the parietal lobes are involved in the representation of predicted limb positions.

The phantom limb may also manifest itself after deafferentation of a limb that in fact is still present. Patients may or may not be aware of the existing but deafferented limb. If they are, then the phantom limb is experienced as a supernumerary limb. One or more supernumerary limbs can be experienced even if the real limbs are not deafferented. Frith et al. (2000) have proposed that these phenomena are all attributable to the failure to integrate two independent sources of information concerning the position of the limbs. One derives from the motor commands, which are issued from the cortical premotor and motor areas independent of whether the limb is still present. The other source derives from sensory feedback and of course is available only if the limb is still present.
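This two-source account lends itself to a computational caricature. The sketch below is illustrative only: the function name, the prediction-dominant weighting, and all numerical values are assumptions invented for this example, not part of Frith et al.'s (2000) formulation.

```python
# Toy sketch of the two-source account of limb experience: a predicted limb
# state (derived from outgoing motor commands) is integrated with sensory
# feedback. When feedback is absent (amputation or deafferentation) but motor
# commands are still issued, experience falls back on the prediction alone,
# which is one way to gloss the phantom limb.

def experienced_limb_state(predicted, feedback, w_pred=0.75):
    """Return the consciously experienced limb state, or None if neither
    source of information is available.

    predicted -- state computed from motor commands (efference copy), or None
    feedback  -- state reported by sensory afferents, or None
    w_pred    -- weight given to the prediction (the chapter argues the
                 predicted state plays the greater role); value is arbitrary
    """
    if predicted is not None and feedback is not None:
        return w_pred * predicted + (1 - w_pred) * feedback
    if predicted is not None:
        # No afferent signal, yet commands were issued: a "phantom" percept.
        return predicted
    return feedback

# Intact limb: prediction-weighted integration of the two sources.
print(experienced_limb_state(predicted=10.0, feedback=8.0))   # 9.5
# Amputated limb: prediction alone, so the phantom is experienced as moving.
print(experienced_limb_state(predicted=10.0, feedback=None))  # 10.0
```

The point of the sketch is only the asymmetry: removing the feedback term changes nothing qualitative about the experienced state, whereas removing the prediction term leaves nothing for experience to be based on.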

After right-hemisphere damage leading to paralysis of the left side (usually associated with anesthesia of that same side), patients may show anosognosia for hemiplegia. This term means that they are unaware of the impairment that concerns the motor control of their left limb(s) (see, e.g., Pia, Neppi-Modona, Ricci, & Berti, 2004, for a review). Anosognosia for hemiplegia is often associated with unilateral neglect for the left side of space, and the location of the lesion is in the right parietal lobe. The interesting question here is why patients with this condition develop the false belief that there is nothing wrong with the paralyzed limb, even to the point of claiming to have moved it on command when in fact no movement has occurred. The explanation provided by Frith et al. (2000) once more makes recourse to the hypothesis that awareness of initiating a movement is based on a representation of the predicted action effects, rather than of the actual action effects. A representation of the predicted action effects can be formed as long as the motor commands can be issued. Thus, a patient with a paralyzed limb would have the normal experience of initiating a movement with that limb as long as the motor commands can be issued. The belief that the movement was performed is not contradicted by the discrepancy between the predicted action effects and the actual action effects because, as I have already noted, even healthy individuals may have a remarkably limited awareness of this discrepancy (Fourneret & Jeannerod, 1998). In addition, when patients have suffered a parietal lesion, damage to the parietal cortex is likely to impair awareness of the state of the motor system and cause a failure to detect the discrepancies between the actual and the predicted action effects.

Finally, according to Frith et al. (2000; see also Kircher & Leube, 2003), the same explanation is applicable to those patients with schizophrenia who describe experiences of alien control, in which actions (as well as thoughts or emotions) are performed by an external agent, rather than by their own will. In healthy individuals, self-monitoring systems enable one to distinguish the products of self-generated actions (or thoughts) from those of other-generated actions (or thoughts). It has been postulated that self-monitoring is normally based on a comparison between the intention underlying an action and its observed effects (Jeannerod, 1999; Jeannerod et al., 2003). The proposal of Frith et al. is that the experience of alien control arises from a lack of awareness of the predicted limb position (action effect). In particular, they suggest “that, in the presence of delusions of control, the patient is not aware of the predicted consequences of a movement and is therefore not aware of initiating a movement” (2000, p. 1784).

In conclusion, it would seem that the proposal according to which it is the prediction of the action effects, rather than the actual action effects, that reaches consciousness can explain some odd and apparently unrelated phenomena. In the case of the phantom limb, for example, the limb is of course missing because of the amputation. However, the premotor and motor cortical areas are intact and can still program a movement of the missing limb. The action effects are computed and reach consciousness, thus producing the conscious experience of a limb. Similarly, anosognosic patients may be unaware that their limb is paralyzed because they can still compute the predicted action effects of that limb. Conversely, in the case of the anarchic hand and of schizophrenic patients, the action effects would not be computed in advance, causing the feeling of lack of intention, which in turn would induce the patient to disown the action and attribute it to an external agent.
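The mapping between these phenomena and the predicted-effects account can be summarized as a small decision table. The sketch below is my own illustrative shorthand; the predicate names and output labels are invented for the example and do not appear in Frith et al. (2000).

```python
# Illustrative decision table: which phenomenon the chapter associates with
# each pattern of availability of (a) the predicted action effects and
# (b) sensory feedback, and of whether a mismatch between them is detected.

def classify(prediction_available, feedback_available, mismatch_detected):
    if prediction_available and not feedback_available:
        # Commands issued but no afferent signal: the limb is experienced
        # anyway (phantom limb), or the paralysis goes unnoticed (anosognosia).
        return "phantom limb / anosognosia for hemiplegia"
    if feedback_available and not prediction_available:
        # The action happens, but its predicted effects never reach
        # consciousness: it feels unintended or externally caused.
        return "anarchic hand / delusion of alien control"
    if prediction_available and feedback_available and not mismatch_detected:
        # Prediction and outcome agree, or the mismatch goes unnoticed
        # (as in Fourneret & Jeannerod, 1998).
        return "normal experience of willed action"
    return "detected mismatch: error awareness and correction"

print(classify(True, False, False))
print(classify(False, True, False))
```

Reading the table this way makes explicit that the four clinical pictures differ only in which signal is missing, not in the underlying comparator scheme.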

Conclusion

Although being aware of initiating and controlling actions is a major component of conscious experience, empirical evidence shows that much of the functioning of our motor system occurs without awareness. That is to say, it appears that, in spite of the contents of our subjective experience, we have only limited conscious access to the system by which we control our actions. This limited access is certainly not concerned with the detailed mechanisms of the motor system. Even higher-order processes seem to be denied access to consciousness. Awareness of initiating an action occurs after the movement has begun in the brain area devoted to motor processes. Similarly, awareness of choosing one action rather than another occurs after brain correlates indicate that the choice has already been made. Therefore, it is important to determine which (few) levels in the process of action generation and execution can be accessed consciously, while many more operate without awareness.

With much simplification, it can be said that voluntary actions, from the simplest to the most complex ones, involve the following three stages: intention to perform an action, performance of the intended action, and perception of the effects of the performed action. In principle, therefore, consciousness may manifest itself at all three stages. At the first stage, conscious intention to perform an action may arise. At the second stage, the intended action may be consciously performed. At the third stage, the effects of the performed action may be consciously perceived.

For the first stage, research suggests that intentions for carrying out voluntary actions are generated unconsciously and retrospectively referred consciously to the action when it has been executed. That is, the evidence is that consciousness of the intention to perform an action is the consequence, rather than the cause, of activity in the brain. There is a mechanism that binds together in consciousness the intention to act and the consequences of the intended action, thus producing the illusion of free will. That represents a paradox: An individual may accurately attribute the origin of an action to him- or herself and yet lack online consciousness of the events that have led to that action.

The interpretation outlined above is the most obvious one given the available evidence. However, there is still the possibility that there is something wrong with it. In particular, what seems to be missing in that interpretation is a clear indication of what differentiates willed (voluntary) actions from involuntary actions. Perhaps what is needed is an intention whose brain correlates begin to initiate an action before we can signal exactly when that intention began. The advance representations of the effects of an action might prove instrumental in contrasting a voluntary action with one that is truly automatic.

Human beings consistently show visual illusions when they are tested with perceptual measures, whereas the illusions do not manifest themselves when they are tested with motor measures. They can point accurately to targets that have been illusorily displaced by induced motion. A target can be moved substantially during a saccadic eye movement, and the observer will deny perceiving the displacement even while pointing correctly to the new, displaced position. These dissociations recall the situation in blindsight and visual agnosia, where patients can perform visually guided tasks without visual experience and without awareness of the accuracy of their behavior. The explanation is that action execution depends on one of two visual systems. There is a sensorimotor or “how” system, which controls visually guided behavior without access to consciousness. Its memory is very brief, only long enough to execute an act, but it possesses an egocentrically calibrated metric visual space that the other system lacks. The other is a cognitive or “what” system, which gives rise to perception and is used consciously in pattern recognition and normal visual experience. The processes that compose the second stage do not have access to consciousness either.

In contrast, we certainly are aware of some aspects of the current state of the motor system at the third stage in the sequence that leads to execution of an action. These, however, do not seem to be concerned with the perception of the action effects. Normally, we are not aware of the action effects if they match the effects we predicted the action would produce. Rather, we are aware of the stream of motor commands that have been issued to the system. Or, to be more precise, when performing an action, we are aware of the prediction of its effects, which, of course, depends on the motor commands that were planned in the premotor and motor cortical areas.

The conclusion that we are not aware of most of our own behavior is disturbing, but the evidence to date clearly indicates that very few aspects of action generation and execution are accessible to consciousness.

Acknowledgments

Preparation of this chapter was supported by grants from MIUR and the University of Padua. The author thanks Morris Moscovitch for very helpful suggestions on a previous version of the chapter.

References

Aglioti, S., DeSouza, J. F. X., & Goodale, M. A. (1995). Size-contrast illusions deceive the eye but not the hand. Current Biology, 5, 679–685.

Bar, M. (2000). Conscious and nonconscious processing of visual identity. In Y. Rossetti & A. Revonsuo (Eds.), Beyond dissociation: Interaction between dissociated implicit and explicit processing (pp. 153–174). Amsterdam: John Benjamins.

Blakemore, S.-J., & Frith, C. D. (2003). Self-awareness and action. Current Opinion in Neurobiology, 13, 219–224.

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18, 227–287.

Bridgeman, B. (2002). Attention and visually guided behavior in distinct systems. In W. Prinz & B. Hommel (Eds.), Attention and performance XIX: Common mechanisms in perception and action (pp. 120–135). Oxford: Oxford University Press.

Bridgeman, B., Hendry, D., & Stark, L. (1975). Failure to detect displacement of the visual world during saccadic eye movements. Vision Research, 15, 719–722.

Bridgeman, B., Lewis, S., Heit, F., & Nagle, M. (1979). Relation between cognitive and motor-oriented systems of visual perception. Journal of Experimental Psychology: Human Perception and Performance, 5, 692–700.

Bridgeman, B., Peery, S., & Anand, S. (1997). Interaction of cognitive and sensorimotor maps of visual space. Perception and Psychophysics, 59, 456–469.

Buccino, G., Lui, F., Canessa, N., Patteri, I., Lagravinese, G., & Rizzolatti, G. (2004). Neural circuits involved in the recognition of actions performed by non-conspecifics: An fMRI study. Journal of Cognitive Neuroscience, 16, 1–14.

Castiello, U., Bennett, K., & Stelmach, G. (1993). Reach to grasp: The natural response to perturbation of object size. Experimental Brain Research, 94, 163–178.

Castiello, U., & Jeannerod, M. (1991). Measuring time of awareness. Neuroreport, 2, 797–800.

Castiello, U., Paulignan, Y., & Jeannerod, M. (1991). Temporal dissociation of motor responses and subjective awareness. Brain, 114, 2639–2655.

Chekaluk, E., & Llewelynn, K. (1992). Saccadic suppression: A functional viewpoint. In E. Chekaluk & K. Llewelynn (Eds.), Advances in psychology, 88: The role of eye movements in perceptual processes (pp. 3–36). Amsterdam: Elsevier.

Daprati, E., & Gentilucci, M. (1997). Grasping an illusion. Neuropsychologia, 35, 1577–1582.

Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79, 1–37.

Della Sala, S., Marchetti, C., & Spinnler, H. (1991). Right-sided anarchic (alien) hand: A longitudinal study. Neuropsychologia, 29, 1113–1127.

Della Sala, S., Marchetti, C., & Spinnler, H. (1994). The anarchic hand: A fronto-mesial sign. In F. Boller & J. Grafman (Eds.), Handbook of neuropsychology (Vol. 9, pp. 204–248). Amsterdam: Elsevier.

Dennett, D. C. (1998). The myth of double transduction. In S. R. Hameroff, A. W. Kaszniak, & A. C. Scott (Eds.), Towards a science of consciousness II: The second Tucson discussions and debates (pp. 97–107). Cambridge, MA: MIT Press.

Desmurget, M., Pelisson, D., Rossetti, Y., & Prablanc, C. (1998). From eye to hand: Planning goal-directed movements. Neuroscience and Biobehavioral Reviews, 22, 761–788.

Eagleman, D. M., & Holcombe, A. O. (2002). Causality and perception of time. Trends in Cognitive Sciences, 6, 323–325.

Farah, M. J. (1994). Visual perception and visual awareness after brain damage: A tutorial review. In C. Umiltà & M. Moscovitch (Eds.), Attention and performance XV: Conscious and nonconscious information processing (pp. 37–76). Cambridge, MA: MIT Press.

Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press.

Fourneret, P., & Jeannerod, M. (1998). Limited conscious monitoring of motor performance in normal subjects. Neuropsychologia, 36.

Fried, I., Katz, A., McCarthy, G., Sass, K. J., Williamson, P., Spencer, S. S., & Spencer, D. D. (1991). Functional organization of human supplementary motor cortex studied by electrical stimulation. Journal of Neuroscience, 11, 3656–3666.

Frith, C. D. (2002). Attention to action and awareness of other minds. Consciousness and Cognition, 11, 481–487.

Frith, C. D., Blakemore, S.-J., & Wolpert, D. M. (2000). Abnormalities in the awareness and control of action. Philosophical Transactions of the Royal Society of London B, 355, 1771–1788.

Gentilucci, M., Chieffi, S., Daprati, E., Saetti, M. C., & Toni, I. (1996). Visual illusions and action. Neuropsychologia, 34, 369–376.

Glover, S. (2002). Visual illusions affect planning but not control. Trends in Cognitive Sciences, 6, 288–292.

Glover, S. (2004). Separate visual representations in the planning and control of action. Behavioral and Brain Sciences, 27, 3–78.

Goodale, M. A., Milner, A. D., Jakobson, L. S., & Carey, D. P. (1991). A neurological dissociation between perceiving objects and grasping them. Nature, 349, 154–156.

Goodale, M. A., Pelisson, D., & Prablanc, C. (1986). Large adjustments in visually guided reaching do not depend on vision of the hand or perception of target displacement. Nature, 320, 748–750.

Haffenden, A., & Goodale, M. A. (1998). The effect of pictorial illusion on prehension and perception. Journal of Cognitive Neuroscience, 10, 122–136.

Haggard, P., Aschersleben, G., Gehrke, J., & Prinz, W. (2002a). Action, binding, and awareness. In W. Prinz & B. Hommel (Eds.), Attention and performance XIX: Common mechanisms in perception and action (pp. 266–285). Oxford: Oxford University Press.

Haggard, P., Clark, S., & Kalogeras, J. (2002b). Voluntary action and conscious awareness. Nature Neuroscience, 5, 382–385.

Haggard, P., & Eimer, M. (1999). On the relation between brain potentials and awareness of voluntary movements. Experimental Brain Research, 126, 128–133.

Haggard, P., & Libet, B. (2001). Conscious intention and brain activity. Journal of Consciousness Studies, 11, 47–63.

Haggard, P., & Magno, E. (1999). Localising awareness of action with transcranial magnetic stimulation. Experimental Brain Research, 127, 102–107.

Hazeltine, E. (2002). The representational nature of sequence learning: Evidence for goal-based codes. In W. Prinz & B. Hommel (Eds.), Attention and performance XIX: Common mechanisms in perception and action (pp. 673–689). Oxford: Oxford University Press.

Hommel, B., Muesseler, J., Aschersleben, G., & Prinz, W. (2001). The theory of event coding (TEC): A framework for perception and action planning. Behavioral and Brain Sciences, 24, 849–937.

James, W. (1890). Principles of psychology. New York: Holt.

Jeannerod, M. (1986). The formation of finger grip during prehension: A cortically mediated visuomotor pattern. Behavioural Brain Research, 19, 99–116.

Jeannerod, M. (1999). To act or not to act: Perspectives on the representation of actions. Quarterly Journal of Experimental Psychology, 52A, 1–29.

Jeannerod, M., Farrer, M., Franck, N., Fourneret, P., Posada, A., Daprati, E., & Georgieff, N. (2003). Action recognition in normal and schizophrenic subjects. In T. Kircher & A. David (Eds.), The self in neuroscience and psychiatry (pp. 119–151). Cambridge: Cambridge University Press.

Kircher, T. T. J., & Leube, D. T. (2003). Self-consciousness, self-agency, and schizophrenia. Consciousness and Cognition, 12, 656–669.

Kohler, E., Keysers, C., Umiltà, M. A., Fogassi, L., Gallese, V., & Rizzolatti, G. (2002). Hearing sounds, understanding actions: Action representation in mirror neurons. Science, 297, 846–848.

Kunde, W. (2004). Response priming by supraliminal and subliminal action effects. Psychological Research/Psychologische Forschung, 68, 91–96.

Kunde, W., Kiesel, A., & Hoffmann, J. (2003). Conscious control over the content of unconscious cognition. Cognition, 88, 223–242.

Lhermitte, F. (1983). ‘Utilization behavior’ and its relation to lesions of the frontal lobes. Brain, 106, 237–255.

Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8, 529–566.

Libet, B. (1999). Do we have free will? Journal of Consciousness Studies, 6, 47–57.

Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106, 623–642.

Mandler, G. (1975). Mind and emotion. New York: Wiley.

Marcel, A. J. (1988). Phenomenal experience and functionalism. In A. J. Marcel & E. Bisiach (Eds.), Consciousness in contemporary science (pp. 121–158). Oxford: Oxford University Press.

Marcel, A. J. (2004). The sense of agency: Awareness and ownership of action. In J. Roessler & N. Eilan (Eds.), Agency and self-awareness (pp. 48–93). Oxford: Clarendon Press.

Marchetti, C., & Della Sala, S. (1998). Disentangling the alien and anarchic hand. Cognitive Neuropsychiatry, 3, 191–207.

McCloskey, D. I., Colebatch, J. G., Potter, E. K., & Burke, D. (1983). Judgements about onset of rapid voluntary movements in man. Journal of Neurophysiology, 49, 851–863.

Milner, A. D., & Goodale, M. A. (1995). The visual brain in action (Oxford Psychology Series 27). Oxford: Oxford University Press.

Moscovitch, M., & Umiltà, C. (1990). Modularity and neuropsychology: Implications for the organization of attention and memory in normal and brain-damaged people. In M. F. Schwartz (Ed.), Modular deficits in Alzheimer-type dementia (pp. 1–59). Cambridge, MA: MIT Press.

Natsoulas, T. (1992). Is consciousness what psychologists actually examine? American Journal of Psychology, 105, 363–384.

Nattkemper, D., & Ziessler, M. (Eds.). (2004). Cognitive control of action: The role of action effects. Psychological Research/Psychologische Forschung, 68(2/3).

Neumann, O., & Klotz, W. (1994). Motor responses to nonreportable, masked stimuli: Where is the limit of direct parameter specification? In C. Umiltà & M. Moscovitch (Eds.), Attention and performance XV: Conscious and nonconscious information processing (pp. 123–150). Cambridge, MA: MIT Press.

Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88, 1–15.

Norman, D. A., & Shallice, T. (1986). Attention to action: Willed and automatic control of behaviour. In R. J. Davidson, G. E. Schwartz, & D. Shapiro (Eds.), Consciousness and self-regulation (Vol. 4, pp. 189–234). New York: Plenum.

Paulignan, Y., MacKenzie, C. L., Marteniuk, R. G., & Jeannerod, M. (1991). Selective perturbation of visual input during prehension movements: I. The effect of changing object position. Experimental Brain Research, 83, 502–512.

Pelisson, D., Prablanc, C., Goodale, M. A., & Jeannerod, M. (1986). Visual control of reaching movements without vision of the limb: II. Evidence of fast unconscious processes correcting the trajectory of the hand to the final position of a double-step stimulus. Experimental Brain Research, 62, 303–311.

Perenin, M.-T., & Jeannerod, M. (1978). Visual function within the hemianopic field following early cerebral hemidecortication in man: I. Spatial localization. Neuropsychologia, 16, 1–13.

Perenin, M.-T., & Rossetti, Y. (1996). Grasping without form discrimination in a hemianopic field. Neuroreport, 7, 793–797.

Perenin, M.-T., & Vighetto, A. (1988). Optic ataxia. Brain, 111, 643–674.

Pia, L., Neppi-Modona, M., Ricci, R., & Berti, A. (2004). The anatomy of anosognosia for hemiplegia: A meta-analysis. Cortex, 40, 367–377.

Poeppel, E., Held, R., & Frost, D. (1973). Residual visual function after brain wounds involving the central visual pathways in man. Nature, 243, 295–296.

Posner, M. I., & Klein, R. M. (1973). On the function of consciousness. In S. Kornblum (Ed.), Attention and performance IV (pp. 21–35). New York: Academic Press.

Price, M. C. (2001). Now you see it, now you don't: Preventing consciousness with visual masking. In P. G. Grossenbacher (Ed.), Finding consciousness in the brain: A neurocognitive approach (Advances in Consciousness Research, Vol. 8, pp. 25–60). Amsterdam: John Benjamins.

Prinz, W. (1997). Perception and action planning. European Journal of Cognitive Psychology, 9, 129–154.

Prinz, W., & Hommel, B. (Eds.). (2002). Attention and performance XIX: Common mechanisms in perception and action. Oxford: Oxford University Press.

Prochazka, A., Clarac, F., Loeb, G. E., Rothwell, J. C., & Wolpaw, J. R. (2000). What do reflex and voluntary mean? Modern views on an ancient debate. Experimental Brain Research, 130, 417–432.

Ramachandran, V. S., & Hirstein, W. (1998). The perception of phantom limbs. Brain, 121, 1603–1630.

Reason, J. T. (1984). Lapses of attention. In R. Parasuraman, R. Davies, & J. Beatty (Eds.), Varieties of attention (pp. 151–183). Orlando, FL: Academic Press.

Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.

Roelofs, C. (1935). Optische Localisation. Archiv für Augenheilkunde, 109, 395–415.

Rossetti, Y., & Pisella, L. (2002). Several ‘vision for action’ systems: A guide to dissociating and integrating dorsal and ventral functions (tutorial). In W. Prinz & B. Hommel (Eds.), Attention and performance XIX: Common mechanisms in perception and action (pp. 62–119). Oxford: Oxford University Press.

Sanders, M. D., Warrington, E. K., Marshall, J., & Weiskrantz, L. (1974). ‘Blindsight’: Vision in a field defect. Lancet, 20, 707–708.

Schneider, G. E. (1967). Contrasting visuomotor functions of tectum and cortex in the golden hamster. Psychologische Forschung, 31, 52–62.

Schneider, G. E. (1969). Two visual systems. Science, 163, 895–902.

Schubotz, R. I., & von Cramon, D. Y. (2002). Predicting perceptual events activates corresponding motor schemes in lateral premotor cortex: An fMRI study. NeuroImage, 15, 787–796.

Shallice, T. (1988). From neuropsychology to mental structure. Cambridge: Cambridge University Press.

Shallice, T. (1994). Multiple levels of control processes. In C. Umiltà & M. Moscovitch (Eds.), Attention and performance XV: Conscious and nonconscious information processing (pp. 395–420). Cambridge, MA: MIT Press.

Stoerig, P., & Cowey, A. (1997). Blindsight in man and monkey. Brain, 120, 535–559.

Stoet, G., & Hommel, B. (2002). Interaction between feature binding in perception and action. In W. Prinz & B. Hommel (Eds.), Attention and performance XIX: Common mechanisms in perception and action (pp. 538–552). Oxford: Oxford University Press.

Taylor, T. L., & McCloskey, D. (1990). Triggering of preprogrammed movements as reactions to masked stimuli. Journal of Neurophysiology, 63, 439–446.

Trevarthen, C. B. (1968). Two mechanisms of vision in primates. Psychologische Forschung, 31, 299–337.

Umiltà, C. (1988). The control operations of consciousness. In A. J. Marcel & E. Bisiach (Eds.), Consciousness in contemporary science (pp. 334–356). Oxford: Oxford University Press.

Umiltà, C., & Moscovitch, M. (Eds.). (1994). Attention and performance XV: Conscious and nonconscious information processing. Cambridge, MA: MIT Press.

Umiltà, M. A., Kohler, E., Gallese, V., Fogassi, L., Fadiga, L., & Rizzolatti, G. (2001). “I know what you are doing”: A neurophysiological study. Neuron, 32, 91–101.

Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.

Vorberg, D., Mattler, U., Heinecke, A., Schmidt, T., & Schwarzbach, J. (2003). Different time courses for visual perception and action priming. Proceedings of the National Academy of Sciences USA, 100, 6275–6280.

Wegner, D. M. (2003). The mind's best trick: How we experience conscious will. Trends in Cognitive Sciences, 7, 65–69.

Wegner, D. M. (2005). Précis of The illusion of conscious will. Behavioral and Brain Sciences, 27, 649–659.

Weiskrantz, L. (1986). Blindsight: A case study and implications. Oxford: Oxford University Press.

Weiskrantz, L. (1997). Consciousness lost and found: A neuropsychological exploration. Oxford: Oxford University Press.

Zhu, J. (2004). Locating volition. Consciousness and Cognition, 13, 1–21.

Ziessler, M., & Nattkemper, D. (2002). Effect anticipation in action planning. In W. Prinz & B. Hommel (Eds.), Attention and performance XIX: Common mechanisms in perception and action (pp. 645–672). Oxford: Oxford University Press.

Zihl, J. (1980). ‘Blindsight’: Improvement of visually guided eye movements by systematic practice in patients with cerebral blindness. Neuropsychologia, 18, 71–77.

Zihl, J., & von Cramon, D. (1985). Visual field recovery from scotoma in patients with postgeniculate damage: A review of 55 cases. Brain, 108, 335–365.


P1: KAE0521857430c13 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 12 , 2007 22 :57

D. Linguistic Considerations


Chapter 13

Language and Consciousness

Wallace Chafe

Abstract

This chapter focuses on two distinct, linguistically oriented approaches to language and consciousness taken by Ray Jackendoff and Wallace Chafe. Jackendoff limits consciousness to uninterpreted imagery, and he presents evidence that such imagery is external to thought because it is too particular, does not allow the identification of individuals, and fails to support reasoning. If all we are conscious of is imagery and imagery does not belong to thought, it follows that we are not conscious of thought. Chafe distinguishes between immediate consciousness, involved in direct perception, and displaced consciousness, involved in experiences that are recalled or imagined. He sees the former as including not only the sensory experiences discussed by Jackendoff but also their interpretation in terms of ideas. Displaced consciousness includes sensory imagery that is qualitatively different from immediate sensory experience, but it too is accompanied by ideational interpretations that resemble those of immediate experience. Both the imagistic and the ideational components of consciousness are held to be central to thought as thought is usually understood. Both approaches are supported with linguistic data, but data that are different in kind.

Introduction

How are language and consciousness related? Can the study of language shed light on the nature of consciousness? Can an improved understanding of consciousness contribute to an understanding of language? Some scholars in the past have gone so far as to equate conscious experience with linguistic expression. Within philosophy, as one example, we find Bertrand Russell (1921, p. 31) writing, “A desire is ‘conscious’ when we have told ourselves that we have it.” In psychology we have Jean Piaget’s statement (1964/1967, p. 19) that “thought becomes conscious to the degree to which the child is able to communicate it.” Given our current knowledge, however, we might want to go beyond simply equating consciousness with language. This chapter focuses on some possible insights originating in linguistics, the discipline for which language is the primary focus, including the relation of language to broader aspects of humanness. It may be that a language-centered approach can provide some answers that have eluded those in other disciplines who have approached these questions from their own perspectives.

Consciousness-related questions were rarely asked within linguistics in the 20th century, a period during which most scholars in that field would have found them irrelevant or pointless. Stamenov (1997, p. 6) mentions correctly that “there is at present little research in linguistics, psycholinguistics, neurolinguistics and the adjacent disciplines which explicitly addresses the problem of the relationships between language and consciousness.” This neglect can be traced historically to the association of consciousness with “mentalism,” a supposedly unscientific approach to language that was forcefully rejected by Leonard Bloomfield and his followers, who dominated linguistics during the second quarter of the 20th century. Bloomfield often repeated statements such as the following, taken from his obituary of Albert Paul Weiss, a behaviorist psychologist who strongly influenced him: “Our animistic terms (mind, consciousness, sensation, perception, emotion, volition, and so on) were produced, among the folk or by philosophers, in pre-scientific times, when language was taken for granted” (Bloomfield, 1931, p. 219). Bloomfield’s enormous influence led his followers to regard such notions as pointless appeals to something akin to magic. Noam Chomsky, whose influence eclipsed that of Bloomfield in the latter half of the 20th century, departed from Bloomfield by expressing an interest in the mind, but his interests were restricted to an abstract variety of syntax that in basic respects remained loyal to Bloomfield’s constraints and that, by its very nature, precluded a significant role for consciousness.

Language Structure, Discourse and the Access to Consciousness, a book characterized by its editor, M. I. Stamenov, as “the first one dedicated to a discussion of some of the aspects of this topic” (1997, p. 6), includes various authors who show that at least some aspects of language structure lie outside of consciousness. Langacker (1997, p. 73), for example, concludes his chapter by saying that “it should be evident that grammar is shaped as much by what we are not consciously aware of as by what we are.” Although that may be true, the discussion to follow explores the question of whether and to what extent consciousness and the structure of language are mutually interactive.

This exploration can hardly proceed without taking into account ways in which both language and consciousness are related to other mental phenomena, and especially to those captured by such words as thought and imagery. These are, of course, words in the English language, and they can mean different things to different people. Natsoulas (1987) discusses six different ways of understanding consciousness, and exactly what is meant by thought and imagery is hardly uncontroversial. The underlying problem, of course, is that consciousness, thought, and imagery all refer to private phenomena that are inaccessible to direct public observation. Although their referents may seem obvious to introspection and may be pervasive ingredients of people’s mental lives, it can be frustratingly difficult to achieve agreement on what they include or where their boundaries lie.

It may seem presumptuous to suggest new solutions to problems that have occupied scholars for millennia, but continuing advances in both scholarship and supporting technology have encouraged new approaches to old puzzles in many areas, including the study of language and its relation to the mind. This chapter focuses on two partially different answers to the questions raised at the beginning of this introduction. One has been proposed by Ray Jackendoff, the other by Wallace Chafe. Both agree that people are conscious of imagery and affect. Beyond that, however, their conclusions differ. Jackendoff limits consciousness to imagery and excludes imagery from thought, above all because it is unable to account for inferences and other logical processes. It follows that people are not conscious of thought. Chafe suggests that imagery is inseparable from its interpretation in terms of ideas and orientations of ideas and that thought exhibits both imagistic and ideational components that are simultaneously available to consciousness. Both scholars support their suggestions with evidence from language, but the evidence is of different kinds.

Is Thought Unconscious?

We can consider first the view that people are conscious only of imagery and that imagery is not an element of thought, a view that was forcefully presented by Jackendoff in his book, Consciousness and the Computational Mind (1987) and subsequently refined in Chapter 8 of The Architecture of the Language Faculty (1997), which was in turn a somewhat modified version of Jackendoff (1996). He identifies three basic levels of information processing. At the outermost level, closest to “the outside world,” are brain processes of which we are totally unconscious; among these processes are such visual phenomena as fusing the retinal images from the two eyes into a perception of depth or stabilizing the visual field when the eyes are moving. We are conscious of the results of these processes – depth perception and a stabilized visual field – but not of how they happen. When it comes to auditory phenomena, we may be conscious of speech sounds, such as vowels, consonants, syllables, and pitch contours, but not of how raw sound comes to be interpreted in those ways. Mechanisms at the outer layer of perception do their work outside of consciousness.

What we are conscious of are the visual forms, speech sounds, and so on that belong to what Jackendoff calls the intermediate level of information processing. It is intermediate between unconscious perceptual processes and a deeper level inhabited by thought, of which we are also not conscious. In short, we are conscious of imagery associated with the various sense modalities, and we are also conscious of affect, but that is all. We certainly have thoughts, but it is only through their manifestations in imagery, nonverbal or verbal, that we can be conscious of what they are. We may hear the sounds of language overtly or in our heads, but the thoughts expressed by those sounds lie outside of consciousness. “We become aware of thought taking place – we catch ourselves in the act of thinking – only when it manifests itself in linguistic form, in fact phonetic form” (1997, p. 188). Nonverbal imagery provides other conscious manifestations of thought, but it too is distinct from thought itself.

Jackendoff discusses several kinds of evidence that thought is unconscious. For one thing, our minds sometimes seem to solve problems without our being conscious of them doing it: “We have to assume that the brain is going about its business of solving problems, but not making a lot of conscious noise about it; reasoning is taking place without being expressed as language” (1997, p. 188). He emphasizes the role of reasoning as a fundamental aspect of thought, suggesting that if he were to say to you Bill killed Harry, you would know from that statement that Harry died. You might at first think you knew it because you had an image of Bill stabbing Harry and Harry falling dead, but this image is too specific because “the thoughts expressed by the words kill and die, not to mention the connections between them, are too general, too abstract to be conveyed by a visual image.” Thus, the knowledge that killing entails dying must belong to unconscious thought. He also mentions the problem of identification. “How do you know that those people in your visual image are Bill and Harry respectively?” Beyond that, there are many concepts like virtue or social justice or seven hundred thirty-two for which no useful images may be available. Such words express elements of thought of which there is no way to be directly conscious.

Jackendoff takes pains to separate thought from language. A major ingredient of consciousness is inner speech, and we may for that reason be tempted to equate inner speech with thought. But, for one thing, “thinking is largely independent of what language one happens to think in” (1997, p. 183). Whether one is speaking English or French or Turkish, one can, he says, be having essentially the same thoughts. The form of a particular language is irrelevant to the thoughts it conveys, as shown by the fact that an English speaker can be having the same thoughts as a Japanese speaker, even though the English speaker puts the direct object after the verb and the Japanese speaker before it. Furthermore, one can be conscious of linguistic form that is dissociated from any thought at all, as with the rote learning of a ritual in an unfamiliar language. Conversely, in the tip-of-the-tongue phenomenon “one feels as though one is desperately trying to fill a void . . . One is aware of having requisite conceptual structure” but the phonological structure is missing (1987, p. 291). He cites various other examples to show that thought is unconscious and that all we are conscious of is the phonetic imagery that expresses thought, often accompanied by nonverbal imagery. “The picture that emerges from these examples is that although language expresses thought, thought itself is a separate brain phenomenon” (1997, p. 185).

The Relation of Language to Thought

In spite of this disconnect between conscious phonetic imagery and unconscious thought, Jackendoff discusses three ways in which “language helps us think.” First of all, language makes it possible to communicate thoughts:

Without language, one may have abstract thoughts, but one has no way to communicate them (beyond a few stereotyped gestures such as head shaking for affirmation and the like). . . . Language permits us to have history and law and science and gossip. . . . As a result of our having language, vastly more of our knowledge is collective and cumulative than that of nonlinguistic organisms . . . Good ideas can be passed on much more efficiently (1997, p. 194).

So, even though thought itself is unconscious, language provides an important way of sharing thoughts.

Second, “having linguistic expressions in consciousness allows us to pay attention to them” (1997, p. 200). Jackendoff views attention as a process for zeroing in on what consciousness makes available:

I am claiming that consciousness happens to provide the basis for attention to pick out what might be interesting and thereby put high-power processing to work on it. In turn, the high-power processing resulting from attention is what does the intelligent work; and at the same time, as a byproduct, it enhances the resolution and vividness of the attended part of the conscious field (p. 200).

Furthermore, “without the phonetic form as a conscious manifestation of the thought, attention could not be applied, since attention requires some conscious manifestation as a ‘handle’” (p. 201). Language is particularly useful because it “is the only modality of consciousness in which the abstract and relational elements of thought correspond even remotely to separable units” (p. 202). Units of that kind are not available to other components of consciousness, such as visual imagery.

Third, language gives access to what Jackendoff calls valuations of percepts. Valuations include judgments that something is novel or familiar, real or imagined, voluntary or involuntary, and the like. Language provides “words like familiar, novel, real, imaginary, self-controlled, hallucination that express valuations and therefore give us a conscious link to them. This conscious link permits us to attend to valuations and subject them to scrutiny” (p. 204). On awakening from a dream, for example, we can say, “It was just a dream,” but a dog cannot do that – it cannot bring the valuation into consciousness in that way.

Summary

Jackendoff finds thought to be totally unconscious, but he suggests that it is manifested in consciousness by way of language, which itself enters consciousness only by way of phonetic imagery. Nevertheless, language enhances the power of thought in three ways: by allowing thought to be communicated, by making it possible to focus attention on selected aspects of thought (particularly on its relational and abstract elements), and by providing access to valuations of thought.

Jackendoff has presented a serious and responsible challenge to those who would like to believe that we are conscious of thought. If his suggestion that we are not conscious of thought seems on the face of it to conflict with ordinary experience, it is a conclusion that follows inevitably from these two propositions:

(1) Consciousness is limited to phonetic and nonverbal imagery.

(2) Thought is independent of those two kinds of imagery.

Behind these propositions lie particular ways of understanding consciousness, thought, and imagery, and the subjective nature of these phenomena leaves room for other interpretations. The remainder of this chapter explores other possibilities, based in large part on Chafe (1994, 1996a,b), with occasional references to William James (1890), whose views on “the stream of thought” are still worth taking seriously (Chafe, 2000).

Preliminaries to an Alternative View

Because consciousness, thought, and imagery do refer to subjective experiences, further discussion can benefit from specifying how these words are used here. Consciousness is notoriously difficult to define in an objective way, but some of its properties will emerge as we proceed.

Some Properties of Consciousness

There is agreement by Jackendoff, Chafe, and others that conscious experiences have two obvious components that may be present simultaneously. One of them is related to sensory experience, and the other is affect. A case could be made for the affective component being in some ways the more basic. It is certainly the oldest in terms of evolution, and it still underlies much of human behavior. It is reflected in language in ways that have not been sufficiently studied (but see Chafe, 2002; Wierzbicka, 1999), and it is mentioned only in passing below.

It should also be apparent that consciousness may be focused on the environment immediately surrounding the conscious self, in which case we can speak of an immediate consciousness. However, it may also be focused on experiences remembered or imagined or learned from others, in which case we can characterize it as a displaced consciousness. Immediate consciousness involves direct perception, whereas displaced consciousness takes the form of imagery – an attenuated experiencing of visual, auditory, and/or other sensory information. Immediate and displaced consciousness are qualitatively different (Chafe, 1994).

Consciousness is dynamic, constantly changing through time, and it resembles vision in possessing both a focus and a periphery – a fully active and a semiactive range – analogous to foveal and peripheral vision. Its focus is severely limited in content and duration, typically being replaced every second or two, whereas its periphery changes at longer, more variable intervals. In that respect consciousness shares the properties of eye movements as they are monitored, for example, while people look at pictures (Buswell, 1935) and also while they talk about them (Holsanova, 2001). Language reflects the foci of active consciousness in prosodic phrases, whereas larger coherences of semiactive consciousness appear in hierarchically organized topics and subtopics (Chafe, 1994). This view sees the process termed attention as simply the process by which consciousness is deployed.

The Nature of Imagery

It is well known that direct perception of the environment is not a matter of simply registering what is “out there.” Information that enters the eyes and ears is always interpreted in ways that are shaped in part by genetic endowments, in part by cultural and individual histories. Imagery is no different. It is not an attenuated replaying or imagining of raw visual, auditory, or other sensory input. Sensory experiences do not appear in that form even in immediate consciousness, and images are subject to considerably more interpretive processing. Experiencing imagery is not like looking at a picture. But even pictures are interpreted as containing particular people, structures, trees, or whatever, which are oriented and related in particular ways.

Although images always involve interpretations, there may be some circumstances under which it is possible to be conscious of interpretive phenomena in the absence of accompanying sensory experience. It is thus useful to distinguish these two aspects of consciousness. The word imagery is restricted here to sense-related phenomena, real or imagined, whereas the interpretive experiences that are often (but need not be) accompanied by imagery are termed ideational.

The imagistic component of consciousness may or may not include the sounds of language. On that basis one can distinguish verbal from non-verbal consciousness, the latter being focused on non-verbal aspects of experience. One may experience non-verbal imagery without language, one may experience language as well as non-verbal imagery, or one may experience language alone, either overtly or covertly. There are evidently individual differences in this regard (Poltrock & Brown, 1984). William James, having asked people about the images they had of their breakfast table, reported that

an exceptionally intelligent friend informs me that he can frame no image whatever of the appearance of his breakfast-table. When asked how he then remembers it at all, he says he simply ‘knows’ that it seated four people, and was covered with a white cloth on which were a butter-dish, a coffee-pot, radishes, and so forth. The mind-stuff of which this ‘knowing’ is made seems to be verbal images exclusively (James, 1890, p. 265).

There are two possible interpretations of the phrase verbal images in this quote. James probably concluded that his friend experienced the sounds of language and nothing more. More interesting is the possibility that his friend was not restricted to the sounds of language, but had conscious access to ideational interpretations of those sounds devoid of sensory accompaniments. That is the more interesting interpretation because it suggests that one can indeed experience ideational consciousness while at the same time imagery is restricted (in some people) to the sounds of language. In other words, an absence of non-verbal imagery need not deprive individuals like James’s friend of a consciousness of ideas. This consciousness of ideas, with or without accompanying imagery, can plausibly be considered a major component of thought.

Components of Language

Figure 13.1 lays out some basic stages that intervene between a person’s interaction with the outside world and the utterance of linguistic sounds. These phenomena interact in ways that are obscured by their assignment to separate boxes. In the final analysis they are realized in structures and processes within the brain, distributed in ways that are surely not separated so neatly. Nevertheless, there are certain distinguishable principles of organization that do lend themselves to discussion in these terms.

The boxes on the far left and right, labeled reality and sound, represent phenomena external to the human mind. We need not concern ourselves with problems that might be associated with the word reality; here it is shorthand for whatever people think and talk about. It includes events and states and people and objects encountered in the course of living that may in some way affect a person’s thoughts, but that would exist whether they were processed by a mind or not. Sound on the right represents external, physical manifestations of the sounds of language: their articulation by the vocal organs, their acoustic properties outside the human body, and their reception in the ears of a listener.

The boxes labeled thought on the left and phonology on the right represent immediate results of the interpretive processes that the mind applies to those external phenomena. Each has its own patterns of organization.

reality → thought → semantics → syntax → phonology → sound

(Brace labels in the original figure mark spans of these stages labeled “mental processing” and “language.”)

Figure 13.1. Stages in the production of language.

Thought need not be as directly related to external reality as Figure 13.1 suggests, because a great deal of it involves remembering one’s own earlier experiences or experiences learned from others, or imagining things for which no direct contact with reality ever existed. Linguists have devoted a great deal of time and effort to understanding phonological organization, but have concerned themselves much less with the organization of thought, for reasons that have more to do with the history of the discipline than with the ultimate relevance of thought to language.

The two remaining boxes, labeled semantics and syntax, represent ways in which elements of thought are adjusted to fit the requirements of language so that they can be associated with sounds, manipulated symbolically, and communicated to other minds. If languages never changed, semantic structures could be submitted directly to phonology. But languages do change, and semantic structures are reshaped through processes of lexicalization and grammaticalization to form syntactic structures that are no longer related to thoughts fully or directly. It is these syntactic structures that then proceed to be associated with sounds (Chafe, 2005). Missing from Figure 13.1 is consciousness, and its place in this picture is our chief concern.

Consciousness of Phonology

It can be instructive to consider briefly what is involved in consciousness of the sounds of language, because what happens on the right side of Figure 13.1 may be simpler and easier to comprehend than what happens on the left. The important lesson is that the mind does not operate exclusively with raw sound. Jackendoff describes consciousness of what he calls either phonological structure or phonetic form, pointing out that “we experience the inner speech stream as segmented into words and possibly further into syllables or individual segments. In addition, the rhythm, stress pattern, and intonation of inner speech must come from phonological units as well” (1987, p. 288). In educational psychology there have been numerous studies of children’s awareness of segments (“phonemes”) and syllables and the relevance of such awareness to the acquisition of reading (e.g., Anthony & Lonigan, 2004; Castles & Coltheart, 2004). It is clear that we are conscious of interpreted sounds, not physical sounds alone. Sounds are organized by the mind into linguistically relevant elements of which we can be conscious, although their physical manifestations can have a place in consciousness too. We organize sound into syllables, words, intonational patterns, and the rest, in association with their sensory manifestations. The point is that phonological consciousness has both imagistic and interpretive components. Can the same be said of thought, our focus in this chapter?

Consciousness of Thought

It is helpful at this point to consider an example of actual speech. The three lines below were taken from a conversation among three women, in the course of which they discussed some local forest fires. As they developed this topic, one of them said,

(a) You’d look at the sun,
(b) it just looked red;
(c) I mean you couldn’t see the sun.

She was talking about something she had experienced 2 days earlier, when she had direct access to the unusual appearance of the sun. At that time her perception of it was in her immediate consciousness. It must have been primarily a visual experience, but it was probably accompanied by a feeling of awe or wonder at this rather extreme departure from normality. It was not just a registering of sensory information, but an interpretation that included a selection of certain salient ideas from those that must have been potentially available. Among them was the idea of an event she verbalized in line (a) as look and an object she verbalized as the sun. The visual input to her eyes had been interpreted by her mind, and she was conscious of both the interpretation and its sensory correlates. If the word thought means anything at all, this immediate experience, with its ideational and imagistic components, was at that moment present in her thought. It was an immediate perceptual experience, it had both ideational and imagistic components, she was conscious of it, and she was thinking about it.

When, 2 days later, she uttered the language above, the earlier experience was once more active in her consciousness. This time, however, it was no longer immediate but displaced. It was probably experienced in part as imagery, but her language expressed an interpretation in terms of ideas. By uttering this language she must have intended to activate a partially similar experience in the consciousness of her listeners. They, for their part, must have experienced their own displaced consciousness, but for them it was twice displaced. We can note a difference between the relative success of communicating the ideational elements and communicating their sensory accompaniments. The speaker’s ideas as such could be communicated more or less intact, but their sensory accompaniments necessarily differed for her listeners from what she was experiencing. The ideas expressed as the sun and looked red could survive the communication process, but their sensory manifestations could only be reshaped by the listeners’ imaginations.

Experiential and communicative events of this kind are ubiquitous in everyday speech, and indeed in everyday life. They are so common as to seem trivial, but they shed light on the basic nature of consciousness, thought, imagery, and communication by providing evidence for properties of immediate and displaced consciousness, for the mutually supportive roles of ideational and imagistic consciousness, and for the relation of consciousness to thought.

The Organization of Thought

The dynamic quality of consciousness is evident in the constantly changing content of its focus. The prosody of language divides speech into a sequence of prosodic phrases, each expressing a focus of the speaker’s consciousness. Each focus is replaced by another at intervals in a typical range of 1 or 2 s (cf. Pöppel, 1994). This progression through time was described by James in his often-quoted statement regarding the stream of thought, where, it may be noted, he equated thought and consciousness:

As we take, in fact, a general view of the wonderful stream of our consciousness, what strikes us first is this different pace of its parts. Like a bird’s life, it seems to be made of an alternation of flights and perchings. The rhythm of language expresses this, where every thought is expressed in a sentence, and every sentence closed by a period. The resting-places are usually occupied by sensorial imaginations of some sort, whose peculiarity is that they can be held before the mind for an indefinite time, and contemplated without changing; the places of flight are filled with thoughts of relations, static or dynamic, that for the most part obtain between the matters contemplated in the periods of comparative rest (James, 1890, p. 243).

The example cited above shows three foci of consciousness expressed by the prosodic phrases transcribed in the three lines:

(a) You’d look at the sun,
(b) it just looked red;
(c) I mean you couldn’t see the sun.

Within these foci, language shows the mind organizing experience into elements that can be called ideas, among which are ideas of events and states, typically expressed as clauses. This woman expressed first the idea of a remembered event, you’d look at the sun, including the idea of the looking event itself along with ideas of its participants – unspecified individuals expressed by the word you and the object expressed as the sun. She then proceeded to verbalize the idea of a state with it just looked red, including the idea of looking red and once more the sun, this time expressed as it. Finally, in I mean you couldn’t see the sun she rephrased her experience. Language provides copious evidence that the processing of experience into ideas of events and states and their participants is fundamental to the mind’s organization of thought. Outside linguistics the segmentation of experience into ideas of events has been investigated experimentally by Newtson, Engquist, and Bois (1977) and more recently with brain imaging (Zacks et al., 2001).

The thoughts of this woman may have focused on these ideas, but there was clearly an affective accompaniment that was in fact foreshadowed by an earlier statement describing the most weird day I’ve ever seen in my entire adult life. In the above, affect is most noticeable in line (b) in a rise-fall pitch contour on the word red, expressive of a feeling engendered by the striking quality of the sun’s appearance, and in line (c) in a rise-fall contour on see, conveying an emotionally laden contrast with what one would normally see.

Language, in summary, shows thought being organized into successive foci of consciousness, each typically activating ideas of events and states and their participants, often with accompanying affect. It is at least plausible to suppose that people are conscious of these ideational elements together with their imagistic correlates and that both the ideas and the images contribute to the flow of thought.

Pronouns

The idea of the sun was verbalized in three different ways in this example, although only two are visible in the transcription.

In line (a), you’d look at the sun, it was expressed with the words the sun, which were given prosodic prominence. They were spoken slowly, relatively loudly, and with a high falling pitch. In line (b), it just looked red, the same idea was expressed with the pronoun it, which was spoken rapidly with low volume and low pitch. The prosodically most prominent word in (b) was red. In line (c), I mean you couldn’t see the sun, the same idea was expressed once more as the sun, but this time these words were spoken softly with the pitch deteriorating into creaky voice. The most prominent words in (c) were couldn’t see.

The function of pronouns like the it in line (b) is well explained in terms of consciousness. Pronouns are used when an idea (like the idea of the sun) is in a speaker’s fully active consciousness and is assumed to be in the listeners’ fully active consciousness as well. This idea was activated for the listeners during the utterance of line (a). Because in (b) it was assumed to be already active in their consciousness, it could be expressed with a minimum of phonological material. There was no need to assign it the prominence it had in (a). English and many other languages typically express such already conscious ideas with unaccented pronouns. In Asian languages there is a tendency to give such ideas no phonological representation whatsoever.

In line (c) there was a reversion to the words the sun, but this time they were spoken without the prominence they had in (a). Because the idea of the sun was presumably still active in the consciousness of the speaker and listeners, why was it not again expressed with it? Line (c) is discussed further below, but here it can be noted that it involved a second attempt at verbalizing the speaker’s consciousness, a kind of starting over. If she had said I mean you couldn’t see it, she would have tied this act of verbalization too closely to what preceded. With its lack of prominence the idea of the sun was presented as still in active consciousness, but with the words the sun, rather than it, it was presented as part of a new attempt at verbalizing this experience.


364 the cambridge handbook of consciousness

Harry’s Death Revisited

We can now return to reasons cited by Jackendoff for excluding imagery and consciousness from thought. As noted, he imagined a situation in which he said to someone, Bill killed Harry. (Linguists often use violent examples like this because they highlight in the starkest terms the properties of transitive verbs.) It is an example that conflicts with observations of actual speech, where it is only under special circumstances that nouns like Bill and Harry are spelled out in this way. In a more realistic context one might have said, for example, he killed him, with pronouns conveying an assumption that the ideas of Bill and Harry were already in the consciousness of the speaker and listener.

But let us suppose that this sentence was actually uttered and that the speaker and listener both experienced their own images of the event in question. The question is whether those images belonged to the speaker’s and listener’s thoughts. Jackendoff says they did not, because an image would have to be particular. It would have to “depict Bill stabbing or strangling or shooting or poisoning or hanging or electrocuting Harry. . . . And Harry had to fall down, or expire sitting down, or die hanging from a rope” (1997, p. 188). How could any of those particular images be the general concept of killing? But of course the thought expressed by this sentence was not a thought of killing in general; it was a thought of a particular event for which a particular image was entirely appropriate. The thought and the image were thus not at odds.

Jackendoff’s question, however, goes beyond the relation of a particular image to a particular thought to ask how a listener would know Harry was dead, knowledge that would depend on a general knowledge of killing and its results. It is important at this point to distinguish between the idea of an event, an element of thought, and the way an idea is categorized for expression in language. That distinction is captured in Figure 13.1 by the placement of thought and semantics in separate boxes. Whoever said Bill killed Harry decided that the idea of the event could be appropriately verbalized as an instance of the kill category. He might have decided to categorize it differently – as an instance of the murder, stab, or poison categories, for example – each with its own entailments. Choosing the kill category accomplished two things: It allowed the speaker to use the word kill, but at the same time it associated the idea of the event with expectations that would apply to instances of the category, and among them was the expectation that the victim would be dead. Categories give speakers words for their ideas, but they also carry expectations that are thereby associated with those ideas. Categorization is an essential step in the conversion of thoughts into sounds because of the obvious impossibility of associating every unique thought with a unique sound, but categories are also the locus of entailments.

Jackendoff asks further how a particular image could allow one to know the identities of Bill and Harry. That question, however, seems to be based on the view that an image resembles an uninterpreted picture. If images are always accompanied by interpretations, the identities of Bill and Harry would necessarily accompany the sense-related experience.

The Priority of Thought over Phonology

Let us suppose that one can be conscious of both thoughts and sounds. Language provides evidence of several kinds that consciousness of thoughts has priority over consciousness of sounds in ordinary mental life.

Familiar and Unfamiliar Languages

It is instructive to compare the experience of listening to one’s own language with that of listening to an unfamiliar language. In the latter case it is only the sounds of which one can be conscious. They may be recognized as the sounds of a language, but that is all. Listening to one’s own language is a very different experience. Normally one is hardly conscious of the sounds at all, but rather of the thoughts the language conveys, thoughts that may or may not appear in consciousness as imagery but that always have an ideational component.

Spoken language provides the best examples of this distinction, but it can be mimicked with written symbols. To anyone who is not familiar with the Japanese language and writing system the symbols below may at best be recognized as examples of such writing, but otherwise they are incomprehensible marks. To a literate Japanese they elicit consciousness of the sounds ame ga furu, but consciousness is primarily focused on a thought, a thought that might be expressed in English as it’s raining.

Linguists who conduct fieldwork with little-studied languages sometimes ask a consultant to repeat a certain word so they can focus on its phonetic properties. It is not unusual for the consultant to fail to cooperate, preferring to discuss at length what the word means. Uppermost in the consultant’s consciousness is the thought behind the word, and questions regarding its sound are regarded as irrelevant and intrusive. This same priority in consciousness of thoughts over sounds may explain why it is difficult to teach writing to speakers of previously unwritten languages. To write accurately it is necessary to shift one’s consciousness from thoughts to sounds, and that is not always an easy thing to do.

The same preference for thoughts can be instructive in another way. Although thoughts may be uppermost in consciousness, they can with prodding and effort be replaced by consciousness of sounds. Consciousness thus has priorities that are presumably established by their salience within the manifold varieties of human experience. Some experiences enter consciousness more readily or more naturally than others. We are not usually conscious of breathing or blinking our eyes, but we can become so. It follows that questions regarding the content of consciousness should not be phrased categorically. Availability to consciousness can be a graded matter, as people are conscious more easily, immediately, or readily of some things than others, but are able under the right circumstances to shift consciousness to less accustomed phenomena.

Rote Learning

As mentioned by Jackendoff (1996, pp. 6–7), one may have learned “by heart” the sounds of a poem, ritual, or song with little or no consciousness of thoughts associated with those sounds. In such rote learning consciousness of thought is excluded, and consciousness of sound is all that is available. This experience has something in common with listening to an unfamiliar language, but in this case one is, oneself, actually speaking. A basic element of linguistic processing is missing, however, and one recognizes that it is unusual to produce sounds with no accompanying thoughts. The fact that we have such experiences is indirect evidence that we are, under normal circumstances, conscious of thoughts and know when they are absent.

Ambiguity

One other linguistic phenomenon that suggests a consciousness of thoughts is ambiguity: the fact that certain words can be phonologically identical but associated with different thoughts. Chafe (1996b, p. 184) mentions the word discipline, which may categorize an idea similar to that categorized as academic field or may alternatively involve harsh training to obey a set of rules. Because the sound and spelling are the same, there must be consciousness of a difference in thought. Beyond that, one can consciously compare one of the ideas expressed as discipline with the idea expressed as academic field and judge the closeness of fit, a judgment that can only be accomplished in the realm of thought. Ambiguity can extend beyond lexical expressions like these to include aspectual distinctions, as when I’m flying to Washington may mean that the speaker is in the middle of the flight, is planning to undertake it, or is doing it generically these days. Consciousness is capable of focusing on such distinctions. Without conscious access to thought, the experiences mentioned in this paragraph would be difficult to explain.

Distinguishing Thought Organization From Semantic Organization

In Figure 13.1 thought and semantics were assigned to separate boxes, with the semantics box representing ways in which thoughts are organized to fit the requirements of language. Language often shows a consciousness of this distinction, a fact which suggests that thought and its semantic organization are both consciously available.

Inadequacies of Language as an Expression of Thought

We can return to our example:

(a) You’d look at the sun,
(b) it just looked red;
(c) I mean you couldn’t see the sun.

In line (c) the phrase I mean is a device made available in the English language to show that one is having difficulty expressing thoughts with available semantic resources. All three lines of this example recorded the speaker’s attempt to verbalize her memory of what she had experienced 2 days earlier, and evidently by the end of line (b) she was not fully satisfied with what she had said. Line (c) was a further attempt to put what she was thinking into words. In some logical sense, line (c) contradicted what she said in (a) and (b), but the total effect of all three lines was to convey in different ways the manner in which the sun was obscured by smoke, the larger idea on which her thoughts were focused. Line (c) provides one kind of evidence of a mismatch between thought and possible linguistic choices.

When people talk, their speech often exhibits disfluencies: hesitations, false starts, and rewordings that are evidence that people experience difficulty “turning thoughts into words.” People are sometimes quite explicit about this difficulty: I don’t know quite how to say it, or that’s not exactly what I meant. Noam Chomsky wrote,

Now what seems to me obvious by introspection is that I can think without language. In fact, very often, I seem to be thinking and finding it hard to articulate what I am thinking. It is a very common experience at least for me and I suppose for everybody to try to express something, to say it and to realize that is not what I meant and then to try to say it some other way and maybe come closer to what you meant; then somebody helps you out and you say it in yet another way. That is a fairly common experience and it is pretty hard to make sense of that experience without assuming that you think without language. You think and then you try to find a way to articulate what you think and sometimes you can’t do it at all; you just can’t explain to somebody what you think (Chomsky, 2000, p. 76).

This kind of experience implies an ability to compare what one is thinking with possible ways of organizing it linguistically, ways that depend in the first instance on the semantic resources of one’s language. Again, it seems that people are conscious of both thoughts and semantic options because they are able to evaluate differences between them.

The Tip-of-the-Tongue Experience

In the familiar tip-of-the-tongue phenomenon (A. S. Brown, 1991; Brown & McNeill, 1966; Caramazza & Miozzo, 1997), one experiences a thought and its categorization, but has difficulty accessing a word or name. What is missing is the connection to phonology. Suppose one is thinking of a relatively unfamiliar object like an astragal but is unable to retrieve that sound. One can reflect on many aspects of the idea, categorize it, and thus attribute to it a variety of traits, but the phonological representation that is usually provided by categorization is absent. Jackendoff (1987, p. 291) mentions this experience as evidence for the inability to be conscious of thought, but it can easily be seen as evidence for just the opposite: consciousness of a thought but only a glimmering of phonology at best. The experience may or may not include imagery, but it always has an ideational component. Chafe (1996b, p. 184) mentions temporary consciousness of a thought expressed by the word inconclusiveness without access to that sound, and probably in that case without imagery. This experience, when imagery is lacking, can be the purest kind of evidence for “imageless thought,” a major issue for the Würzburg school a century ago (Humphrey, 1951).

Categories Lacking a Sound Altogether

There are certain categories that play a useful role in the organization of thought but lack any phonological representation. In some cases a word may be available to specialists, but it is a word of which most people are ignorant. An example might be the small sheath on the end of a shoelace that allows the lace to be passed through a small hole. One might or might not learn the word aglet sooner or later, but probably most who are totally familiar with both the thought and its categorization have never done so. There is a resemblance here to the tip-of-the-tongue experience, but in this case the phonological representation is wholly unavailable, not simply difficult to retrieve.

Repeated Verbalizations

One promising way to compare thoughts with their semantic organization is to examine repeated verbalizations of what can be assumed to be more or less the same thoughts experienced by the same person on different occasions (Chafe, 1998). Data of this kind arise only fortuitously in the recording of actual speech, but instances can be elicited by giving people something to talk about and recording what they say at different times. Bartlett (1932) pioneered this method with a written stimulus and written responses. A similar procedure was followed by Chafe and his associates using a short film and spoken responses (Chafe, 1980). Descriptions of what happened in the film were recorded shortly after it was viewed, and some of the viewers returned on one or more occasions to talk about it again (Chafe, 1991). The film depicted a theft of pears, and the following excerpts show how one person talked about certain events in three successive versions:

Version 1 (after approximately 15 minutes)

(a) And he ended up swiping one of his baskets of pears,
(b) and putting it on his bicycle.
(c) On the front of his bicycle,
(d) and riding off.

Version 2 (after 6 weeks)

(e) And so,
(f) he finally gets it,
(g) on his bike,
(h) and he starts riding away.

Version 3 (after a year)

(i) And just put it right on his . . . on the front of his bicycle,

(j) and rode off with it.

One cannot expect thoughts to remain constant over time, of course, but these three versions have enough in common to suggest the retention of certain ideas in contrast to the rather different language that was chosen to express those ideas each time. All three versions conveyed ideas of two events: the boy’s placing a basket of pears on his bicycle followed by his departure from the scene. The participants in those events included the boy, verbalized as he or simply omitted; the basket of pears, verbalized as it; and the bicycle, verbalized as his bicycle or his bike. Probably the speaker experienced imagery each time, most vividly in version 1 and least vividly in version 3. A listener (or reader of the above) might experience imagery too, but it would necessarily be different from that of someone who saw the film.

Data of this kind highlight the distinction between relatively constant thoughts and the less constant language used to express them. The idea of positioning the basket of pears was expressed as put it on his bicycle or put it on the front of his bicycle in versions 1 and 3, but get it on his bike in version 2. The idea of the boy’s departure was expressed as ride off or start riding away or ride off with it. These are not just partially different words, but partially different ways of organizing thoughts semantically.

As with the pronoun it that expressed the idea of the sun in the earlier example, the pronouns he and it in this example show again that speakers are conscious not only of imagery but also of ideas. One may experience imagery of a boy placing a basket of pears on his bicycle, but it is interpreted in terms of ideas of the boy, the basket, and the bicycle, and consciousness of those ideas is essential for the production of language. In this case the ideas themselves were relatively constant across time and across the communicative act. Their imagistic accompaniments may have had some constancy for the speaker, but they were left to the listeners’ imaginations.

Language Differences

Different languages provide their speakers with different semantic resources, including different inventories of categories, different orientations, and different combinatory patterns. It is possible for thoughts to be more or less the same regardless of the language used to verbalize them, but they can be molded differently by different semantic resources. Semantic choices may then feed back on the thoughts themselves.

The film just mentioned was shown to a number of German speakers, one of whom expressed her thoughts concerning the events described above as follows:

(a) er wuchtet den fast so großen Korb wie er selbst es auch ist,
(b) auf das große Fahrrad,
(c) und fährt dann davon.

German provides a semantic category that allows the idea of an event to be verbalized with the verb wuchten. That choice associates the idea of lifting with weight, entailing in this case that the weight of the basket caused the boy to have difficulty lifting it onto his bicycle. There is no corresponding category in English. One could say he lifted it with difficulty but the effect is not the same, lacking the association with weight and with a broader range of circumstances under which the German category might be employed. Line (a), furthermore, combines its elements in ways that are foreign to English. The entire sequence might be translated he has trouble lifting the basket, which is almost as big as he is, onto his big bicycle, and rides off, but neither this nor any other translation can capture the effect of the original with any precision.

Translation is an attempt to join two languages in the area of thought, and a completely successful translation would be one in which exactly the same thoughts were conveyed by both the source and target languages. Because the semantic resources of different languages are never identical, it is a goal that can never be fully achieved. Because the goal is to connect the languages in the realm of thought, translating cannot be accomplished at all without a consciousness of thoughts. One does the best one can to express the thoughts that were originally expressed with the semantic resources of one language while being unavoidably constrained by the semantic resources of another.

This German example provides a further illustration of the way repeated tellings can triangulate on constancies of thought. As this speaker watched the film she was evidently impressed with how difficult it was for the boy to transfer the basket from the ground to his bicycle. With the language quoted above she chose the wuchten category to express that difficulty. Later she used the following language:

(d) (er) nahm einen der Körbe,
(e) und hiefte ihn mit großer Anstrengung auf sein Fahrrad,
(f) und fuhr davon.

A possible translation is he took one of the baskets, and heaved it with great effort onto his bicycle, and rode off. In line (e) the idea of lifting was expressed with the verb hiefte, similar to English heaved, capturing again an impression of great effort that was then made explicit with the phrase mit großer Anstrengung. But this time her choice focused on the difficulty that was suggested by the boy’s way of moving, not by the basket’s weight. Every choice of a semantic category is a way of molding thought with a unique complex of associations, using resources that often differ from language to language.

Manipulations of Consciousness in Literature

Examining how people talk is not the only linguistic avenue to an understanding of consciousness. Authors of fiction have discovered various ways to involve their readers in a fictional consciousness, and studying such devices can lead to understandings of consciousness that might otherwise be difficult to achieve (Chafe, 1994; Cohn, 1978). The examples below are taken from Edith Wharton’s novel The House of Mirth, first published in 1905 (Wharton, 1987).

There are several ways in which literature highlights qualitative differences between an immediate and a displaced consciousness. The most obvious difference may be in the richness of detail that is available to immediate consciousness. When something is available to direct perception, even though consciousness can focus on only a small portion at a time, a wealth of information is potentially available. With a displaced consciousness, as with imagery, detail is necessarily impoverished. The act of writing gives writers a freedom to verbalize details of a kind appropriate to an immediate consciousness, allowing readers to share in a fictional immediacy:

Seating herself on the upper step of the terrace, Lily leaned her head against the honeysuckles wreathing the balustrade. The fragrance of the late blossoms seemed an emanation of the tranquil scene, a landscape tutored to the last degree of rural elegance. In the foreground glowed the warm tints of the gardens. Beyond the lawn, with its pyramidal pale-gold maples and velvety firs, sloped pastures dotted with cattle; and through a long glade the river widened like a lake under the silver light of September (p. 79).

Language like this allows readers to experience vicariously what was passing through the consciousness of the protagonist at the very time she was sitting and leaning her head, with a succession of olfactory and visual experiences. The impression conveyed by such language is that these experiences were not being reported from the perspective of a later displaced consciousness, but that they were immediate.

They were, nevertheless, presented in the past tense, and in ordinary speech the past tense is compatible with a displaced consciousness, with experiences recalled from a time preceding the time the language was produced. Its use here, as well as the third-person references to herself and her, imply the existence of a narrating consciousness that is separate from the protagonist’s. With relation to that consciousness the time of Lily’s experiences and her identity were displaced – thus the past tense and third person. Chafe (1994) termed this artifice displaced immediacy. The experiences of the protagonist, although displaced with tense and person, achieve immediacy through Lily’s actions and the sensory detail. Her consciousness is available to the reader as an immediate consciousness while the narrating consciousness responsible for the tense and person remains unacknowledged, providing little more than a tense and person baseline.

There was also a consciousness of affect-laden judgments and comparisons:

Lily smiled at her classification of her friends. How different they had seemed to her a few hours ago! Then they had symbolized what she was gaining, now they stood for what she was giving up. That very afternoon they had seemed full of brilliant qualities; now she saw that they were merely dull in a loud way. Under the glitter of their opportunities she saw the poverty of their achievement. It was not that she wanted them to be more disinterested; but she would have liked them to be more picturesque. And she had a shamed recollection of the way in which, a few hours since, she had felt the centripetal force of their standards (pp. 88–89).

This passage, expressing thoughts that passed through Lily’s consciousness, retains the quality of an immediate consciousness combined with the displacement expressed by the past tense and the third-person she. But the passage is noteworthy, in addition, for the fact that its adverbial expressions – a few hours ago, then, now, that very afternoon – have their baseline in the time of the immediate consciousness, not of the displaced consciousness responsible for tense and person. The now, for example, was the now of Lily’s immediate experiences, not the now of the language production. Although tense and person are anchored in what Chafe (1994) has called the representing consciousness – the consciousness producing the language – temporal adverbs like now are anchored in the represented immediate consciousness. In their treatments of consciousness novelists thus manage to throw light on the separate functioning of tense and person as opposed to the functioning of adverbs, a distinction that might not otherwise be apparent.

Is There Thought Outside of Consciousness?

If we are conscious of what we are thinking in ways suggested above, that need not imply that all of thought is conscious. Whether there is unconscious thought is a question that calls first for agreement on the meaning of thought. To consider and reject one extreme possibility, it would not be useful for this word to encompass everything the brain does, including totally unconscious processes like those involved in the regulation of body chemistry, or semi-involuntary processes like breathing, none of which would belong within the domain of thought as generally conceived. At the other extreme the word could be restricted arbitrarily to conscious experiences alone, so that thought would be conscious by definition. That alternative need not be rejected out of hand, but it excludes various phenomena that many would prefer to include within the realm of thought. We can briefly examine three of them.

Reasoning

One such phenomenon is reasoning. Jackendoff (1997), in fact, presents reasoning as a core ingredient of thought and uses its supposed unavailability to consciousness as crucial evidence that thought is unconscious. The form that reasoning takes in ordinary experience, however, is far from well understood, and traditional logic is of little help. People do very often take advantage of things they have experienced in order to make inferences about things they have not experienced, such as distal events that have occurred or might hypothetically occur. All languages have devices of some sort for expressing the inferential nature of ideas, although different languages do it in different ways (Aikhenvald & Dixon, 2003; Chafe & Nichols, 1986; Palmer, 2001). A simple example from English is the use of must as in you must have had a haircut, where direct experience of your appearance has led to the idea of a displaced event. Whether inferential reasoning of this sort operates outside of consciousness is an open question, but it does seem that people use words like must without any primary awareness of what they are doing.

Orientations

Within the realm of thought, ideas are positioned in a multidimensional matrix of orientations. They may be positioned, for example, in time, space, epistemology (as above with must), affect, social interaction, and with relation to the context provided by neighboring ideas. These orientations affect the shape of languages in many ways, and the semantic resources of different languages give different prominence to different ones. English, for example, pays a great deal of attention to tense. In the English retelling quoted earlier, the first version was in the past tense:

(a) And he ended up swiping one of his baskets of pears,
(b) and putting it on his bicycle.
(c) On the front of his bicycle,
(d) and riding off.

whereas the second was in the present:

(e) And so,
(f) he finally gets it,
(g) on his bike,
(h) and he starts riding away.

Although English is semantically constrained to orient events in this way and so to anchor ideas with relation to the time the language is produced, these examples show that the choice of a particular tense varies easily from one telling to the next. It thus seems to be a choice that is made during the act of producing language, not a property of underlying thought. The thoughts in these two excerpts may have been more or less the same, but the speaker chose to orient them with two different tenses as she adjusted them to the semantic constraints of English. Other languages may pay less attention to tense and be more preoccupied, say, with aspectual or epistemological distinctions. But whatever a language does, such choices are not likely to be in the forefront of consciousness.

As another example, the ubiquitous English word the, which orients an idea as identifiable to the listener, has been responsible for numerous articles and books and is still an object of controversy (e.g., Lyons, 1999). If its semantic contribution has proved so difficult to specify and if, furthermore, it is so hard for Japanese, Korean, and other learners of English as a second language to assimilate, its availability to consciousness may be questioned.

The general question is whether or to what extent people are conscious, not just of ideas, but also of the ways in which those ideas are oriented. On the one hand, if specific orientations express variable semantic choices and not elements of thought per se, they may not bear directly on consciousness of thought but only on the semantic, linguistically imposed organization of thought. On the other hand, semantic choices may often feed back into thought itself, so that rigidly separating semantics from thought may in the end be misleading. Whether people are or can be conscious of orientations may turn out to depend on the particular orientations involved – some being more available to consciousness than others – and even perhaps on the varying semantic sensitivities of different individuals.

Relations

The elements of consciousness are not independent of each other, but are interrelated in a variety of ways, and there is a question as to whether at least some of these relations lie outside of consciousness. A simple example is provided by the first two lines of the example quoted above:

(a) You’d look at the sun,
(b) it just looked red.

The foci of consciousness expressed in lines (a) and (b) bear a relation that might be described as conditionality. There is an understanding that the event that was verbalized in (a) took place one or more times and that each time it occurred it was a necessary condition for the state that was verbalized in (b). For this person to experience the appearance of the sun, she first had to look at it. Relations like this are often verbalized overtly with little words like if and when, but in this case the conditional relation was only implied (Chafe, 1989). As with orientations, the question of whether speakers of a language are conscious of such relations, marked or not, remains open. Are they so integral to the succession of conscious ideas that they are conscious too, or do speakers and thinkers employ them without being conscious they are doing so? Language itself does not provide clear answers to such questions.

Summary

This chapter has focused on two different approaches to the relation between language and consciousness, both linguistically oriented. Both agree that this relation cannot be explored without taking other mental phenomena into account, in particular thought and imagery. They agree, furthermore, that conscious experience includes both imagery and affect, whether or not those experiences can be considered elements of thought.

Jackendoff sees consciousness as limited to uninterpreted imagery, whose qualities mirror those of uninterpreted visual, auditory, or other raw sensory information. He presents evidence that such imagery is external to thought, in part because it is too particular, in part because it does not allow the identification of individuals, and in part because it fails to support reasoning. If all we are conscious of is imagery and imagery does not belong to thought, it follows that we are not conscious of thought.

Chafe distinguishes between immediate and displaced consciousness, the former engaged in direct perception and the latter in experiences that are recalled or imagined. Immediate consciousness includes not only sensory experiences but also their interpretation in terms of ideas, which are positioned within a complex web of orientations and relations. Displaced consciousness includes sensory imagery that is different in quality from immediate sensory experience, but it is always accompanied by ideational interpretations that resemble those of immediate experience. Both the imagistic and the ideational components of consciousness are held to be central components of thought, as thought is ordinarily understood.

Acknowledgments

I am especially grateful to Ray Jackendoff and the editors of this volume for their helpful comments on this chapter. If Jackendoff and I do not always agree, his work has always provided a stimulating balance to tendencies that might otherwise have been weighted too strongly in a different direction.

References

Aikhenvald, A. Y., & Dixon, R. M. W. (Eds.). (2003). Studies in evidentiality. Amsterdam: John Benjamins.

Anthony, J. L., & Lonigan, C. J. (2004). The nature of phonological awareness: Converging evidence from four studies of preschool and early grade school children. Journal of Educational Psychology, 96, 43–55.

Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.

Bloomfield, L. (1931). Obituary of Albert Paul Weiss. Language, 7, 219–221.

Brown, A. S. (1991). A review of the tip-of-the-tongue experience. Psychological Bulletin, 109, 204–223.

Brown, R., & McNeill, D. (1966). The “tip of the tongue” phenomenon. Journal of Verbal Learning and Verbal Behavior, 5, 325–337.

Buswell, G. T. (1935). How people look at pictures: A study of the psychology of perception in art. Chicago: University of Chicago Press.

Caramazza, A., & Miozzo, M. (1997). The relation between syntactic and phonological knowledge in lexical access: Evidence from the “tip-of-the-tongue” phenomenon. Cognition, 64, 309–343.

Castles, A., & Coltheart, M. (2004). Is there a causal link from phonological awareness to success in learning to read? Cognition, 91, 77–111.

Chafe, W. (Ed.). (1980). The pear stories: Cognitive, cultural, and linguistic aspects of narrative production. Norwood, NJ: Ablex.

Chafe, W. (1989). Linking intonation units in spoken English. In J. Haiman & S. A. Thompson (Eds.), Clause combining in grammar and discourse (pp. 1–27). Amsterdam: John Benjamins.

Chafe, W. (1991). Repeated verbalizations as evidence for the organization of knowledge. In W. Bahner, J. Schildt, & D. Viehweger (Eds.), Proceedings of the Fourteenth International Congress of Linguists, Berlin 1987 (pp. 57–68). Berlin: Akademie-Verlag.

Chafe, W. (1994). Discourse, consciousness, and time: The flow and displacement of conscious experience in speaking and writing. Chicago: University of Chicago Press.

Chafe, W. (1996a). How consciousness shapes language. Pragmatics and Cognition, 4, 35–54.

Chafe, W. (1996b). Comments on Jackendoff, Nuyts, and Allwood. Pragmatics and Cognition, 4, 181–196.

Chafe, W. (1998). Things we can learn from repeated tellings of the same experience. Narrative Inquiry, 8, 269–285.

Chafe, W. (2000). A linguist’s perspective on William James and the stream of thought. Consciousness and Cognition, 9, 618–628.

Chafe, W. (2002). Prosody and emotion in a sample of real speech. In P. Fries, M. Cummings, D. Lockwood, & W. Sprueill (Eds.), Relations and functions within and around language (pp. 277–315). London: Continuum.

Chafe, W. (2005). The relation of grammar to thought. In C. S. Butler, M. de los Ángeles Gómez-González, & S. M. Doval-Suárez (Eds.), The dynamics of language use: Functional and contrastive perspectives. Amsterdam: John Benjamins.

Chafe, W., & Nichols, J. (Eds.). (1986). Evidentiality: The linguistic coding of epistemology. Norwood, NJ: Ablex.

Chomsky, N. (2000). The architecture of language. New York: Oxford University Press.

Cohn, D. (1978). Transparent minds: Narrative modes for presenting consciousness in fiction. Princeton, NJ: Princeton University Press.

Holsanova, J. (2001). Picture viewing and picture description: Two windows on the mind. Lund University Cognitive Studies 83. Lund: Lund University Cognitive Science.

Humphrey, G. (1951). Thinking. New York: Wiley.

Jackendoff, R. (1987). Consciousness and the computational mind. Cambridge, MA: MIT Press.

Jackendoff, R. (1996). How language helps us think. Pragmatics and Cognition, 4, 1–34.

Jackendoff, R. (1997). The architecture of the language faculty. Cambridge, MA: MIT Press.

James, W. (1890). The principles of psychology. New York: Henry Holt.

Langacker, R. W. (1997). Consciousness, construal, and subjectivity. In M. I. Stamenov (Ed.), Language structure, discourse and the access to consciousness (pp. 49–75). Amsterdam: John Benjamins.

Lyons, C. (1999). Definiteness. Cambridge: Cambridge University Press.

Natsoulas, T. (1987). The six basic concepts of consciousness and William James’s stream of thought. Imagination, Cognition and Personality, 6, 289–319.

Newtson, D., Engquist, G., & Bois, J. (1977). The objective basis of behavior units. Journal of Personality and Social Psychology, 35, 847–862.

Palmer, F. R. (2001). Mood and modality (2nd ed.). Cambridge: Cambridge University Press.

Piaget, J. (1967). Six psychological studies (D. Elkind, Ed. & A. Tenzer, Trans.). New York: Vintage. (Original work published 1964)

Poltrock, S. E., & Brown, P. (1984). Individual differences in visual imagery and spatial ability. Intelligence, 8, 93–138.

Pöppel, E. (1994). Temporal mechanisms in perception. In O. Sporns & G. Tononi (Eds.), Selectionism and the brain: International review of neurobiology (Vol. 37, pp. 185–201). San Diego: Academic Press.

Russell, B. (1921). The analysis of mind. London: Allen & Unwin.

Stamenov, M. I. (Ed.). (1997). Language structure, discourse and the access to consciousness. Amsterdam/Philadelphia: John Benjamins.

Wharton, E. (1987). The house of mirth. New York: Macmillan. (Original work published 1905)

Wierzbicka, A. (1999). Emotions across languages and cultures: Diversity and universals. Cambridge: Cambridge University Press.

Zacks, J. M., Braver, T. S., Sheridan, M. A., Donaldson, D. I., Snyder, A. Z., Ollinger, J. M., Buckner, R. L., & Raichle, M. E. (2001). Human brain activity time-locked to perceptual event boundaries. Nature Neuroscience, 4, 651–655.


Chapter 14

Narrative Modes of Consciousness and Selfhood

Keith Oatley

Abstract

Beyond mere awareness, human consciousness includes the reflexive idea of consciousness of self as a centre of agency and experience. This consciousness of self has been thought to involve narrative: a distinct mode of thinking about the plans and actions of agents (self and others), about vicissitudes encountered, about attempts to solve problems posed by these vicissitudes, and about the emotions that arise in these attempts. Philosophical discussion of this idea has included the question of to what extent this narrative-of-consciousness is epiphenomenal and to what extent it may have causal effects on action. To the extent that the self takes narrative forms and is constructive, it will tend to assimilate narratives encountered in the public space: stories that occur in conversation and elsewhere. Both the style and content of stories that circulate in a culture will potentially contribute to the extent and contents of consciousness, and therefore to the development of selfhood. A narratizing consciousness, in which self is a unifying centre of agency in relation to others, has emerged gradually during evolution, during cultural development, and during individual development. It functions importantly in social interaction and allows integrative understanding both of oneself and of others.

Introduction

Consciousness of the kind we value – of our surroundings, of our thoughts and emotions, of our selves, and of other people – often takes narrative forms. However, there is debate about how to define narrative. A recent discussion of the question is given by Wilson (2003). A minimalist definition is that narrative must include at least two events with some indication of their temporal ordering. The difficulty with such definitions is that on the one hand they are unenlightening, and on the other that even the seemingly unobjectionable ones seem to invite contention and counter-example. I do not base this chapter, therefore, on a definition of narrative. Instead, I adopt the more psychological stance of Bruner (1986), who writes that narrative “deals in human or human-like intention and action and the vicissitudes and consequences that mark their course” (p. 13).

The theory of narrative is known as narratology (Bal, 1997; Groden & Kreiswirth, 1994). Literary theorists such as Booth (1988), as well as cognitive scientists such as Rumelhart (1975) and Schank (1990), have contributed to it, as have theorists of identity and biography, such as Brockmeier & Carbaugh (2001). Following Bruner’s proposal, here is a sketch of prototypical narrative and its elements. I label some as interaction-type elements. They are typical of narrated sequences of intended social interaction. I label a further set as story-type elements that are typical of those additional features that make narratives stories.

Interaction-Type Elements of Narrative

Interaction-type elements include the following:

• There are agents who may be called characters.
• There is a focus on intentions of the characters.
• Events occur in a causal, time-ordered sequence. These events include mental, interpersonal, and physical actions, some of which flow from the characters’ emotions and intentions.
• Vicissitudes of intentions and actions occur and affect characters emotionally.
• Outcomes include further physical, mental, and social events, which affect characters emotionally.

Story-Type Elements of Narrative

Story-type elements of narrative include the following:

• A story is a narrative account that uses interaction-type elements (as indicated above), and that is related or depicted by an explicit or implicit narrator in some medium (talk, text, film, etc.).
• Characters have conscious awareness of vicissitudes – that is to say, problems – as well as of their own problem-solving attempts and outcomes.
• Readers and listeners are affected by the characters’ problem-solving attempts and by the inferred experiences of the characters (emotional and otherwise).
• A story offers implications of meanings, often moral meanings, in relation to the culture in which the story arose.

Narratives include utterances of one person telling another what happened at the office that day. Stories are structured more consciously. They include anecdotes that may be told to friends, newspaper articles, films, novels, and so forth. When I use the term “narrative framework,” I mean the kind of prototype indicated above, with or without characteristics of a story. One may notice that such frameworks are not purely syntactic. They include both structure and content.

The analyses I offer here are Western in their provenance. Although I believe that many of the themes I discuss are universal (see, e.g., Hogan, 2003), there would be differences in this kind of argument if it were elaborated within, say, Chinese or Indian culture.

One must of course distinguish between narratives as cultural objects and narrative thinking (see, e.g., Goldie, 2003). One must also distinguish between a life lived and a life thought about in narrative terms as elaborated, for instance, by a biographer. Given these distinctions, however, an advantage of adopting Bruner’s (1986) view of narrative as a specific mode of thought about agents and their intentions is that it is accompanied by his parallel proposal of the mode of thought that he calls paradigmatic, which “attempts to fulfill the ideal of a formal, mathematical system of description and explanation. It employs categorization or conceptualization and the operations by which categories are established, instantiated, idealized, and related one to another to form a system” (p. 12). Bruner’s starting point is helpful for this chapter because narrative seems to be the mode in which important aspects of human consciousness occur and because it offers a basis for making meaningful sense of ourselves and of our interactions with others. By contrast, the paradigmatic mode is that by which science, including a science of consciousness, is expressed most typically.

Although consciousness used to be regarded almost as a non-topic because it was unanalyzable, it has recently become the object of considerable interest; witness this handbook. Likewise, the topic of narrative is drawing increasing scholarly attention not just within narratology but also, for instance, in narrative analyses of people’s lives and identities, with the idea that thinking within narrative frameworks in the social sciences amounts to a paradigm shift (Brockmeier & Harré, 2001).

The idea of specifically narrative consciousness is also growing (see, e.g., Fireman, McVay, & Flanagan, 2003) and can be thought of as the intersection between interest in consciousness and interest in narrative. In itself this intersection is by no means new; in the theory of literary narratives, for instance, consciousness and its transformations have been topics of scholarly attention for many years (see Auerbach, 1953; Watt, 1957). In addition, for a hundred years, in areas touched by psychoanalysis, this intersection has been central, as the formative idea of psychoanalysis is that consciousness is reclaimed from the unconscious precisely by turning it into language within narrative frameworks (see, e.g., Edelson, 1992; Spence, 1982). What is relatively new is that this intersection has become of interest for cognitive psychology and for the philosophical theory of mind. It is within these frameworks that this chapter is written, although with links to other disciplinary areas.

Given that this is a handbook, a reader’s expectation will properly be of a system of categories and explanations in what Bruner has called the paradigmatic mode. A list of section headings is therefore given below. The clauses that succeed the main headings in this list are pointers to the contents of each categorized section. Those readers who wish to see what work has been done in a particular area may turn to the corresponding section for discussion of representative work and references.

• The question of whether consciousness has causal properties: Does consciousness affect action, or does it occur only as retrospective commentary?
• Four aspects of consciousness: (i) Simple awareness, (ii) the stream of inner consciousness, (iii) conscious thought as it may affect decisions and action, (iv) consciousness of self-with-other.
• Narrative and mutuality: Personal and interpersonal functions of consciousness in humans as highly social beings who depend on mutuality with others.
• The evolution of narrative: Thinking and consciousness in hominid species and humans among whom both self and others are known as individuals with intentions; questions of mental models, emotions, and imagination.
• The developmental psychology of narrative consciousness: Individual development of narratizing consciousness in children, the role of language, and the idea of narrative consciousness in psychotherapy.
• The rise of consciousness in Western imaginative literature: Changes that have occurred in the depiction of consciousness from the earliest narrative writings to the present.
• Coda: The relation of conscious to unconscious actions and thoughts.

For those who want something more like a story, please read on. I have argued elsewhere (Oatley, 1992a) that psychology is a subject in which we expect both an approach directed to the attainment of insight (characteristic of the narrative mode) and information of the technical kind (in the paradigmatic mode). Styles of writing in psychology need typically to address both these expectations. In something more like a story framework, therefore, I have conceived this chapter partly as a debate of a protagonist (me) and an antagonist. The protagonist puts forth the idea that consciousness has causal properties. The antagonist is sceptical of this claim. At the same time, I tell a story (a true story, I hope) about the evolution and development of consciousness.

The Question of Whether Consciousness Has Causal Properties

I propose that narrative consciousness has functions. Each of us is, at least in part, a centre of consciousness: a unified narrative agent. Many though not all of our decisions, thoughts, and actions flow from this centre. By means of a narrative consciousness, we can begin to understand ourselves and others in relation to the societies in which we live and to make sense of what would otherwise be disconnected. My proposal derives from the idea that, to understand anything satisfactorily in psychology, one must understand its functions. In this chapter I begin to answer the question of what the functions of narrative consciousness might be.

Intuitively, we think that we act on our conscious perceptions, beliefs, and desires. But this intuition that consciousness is functional is an expression of folk theory. The opposition in the debate is provided by a sceptic, such as Daniel Dennett (e.g., 1991). Dennett is representative of many cognitive scientists who believe that scientific understandings of the brain have replaced, or will replace, explanations based in folk theory. Neurons cause behavior, they say. Conscious thoughts do not. Folk theory – which includes ideas of beliefs, desires, and emotions; the idea that we act for reasons; and the idea of a conscious self that functions as a unified agent – is false, say such scientists and philosophers (e.g., Churchland, 1986). It is false not just in its details but altogether, just as were notions that stars and planets revolve in perfect circles round the earth because they were stuck to crystalline spheres. Although Churchland’s brand of radical eliminativism (elimination of all folk-theoretical terminology from cognitive science) is on the wane, scepticism about the functionality of consciousness remains strong.

Thus, although emotions are salient in consciousness (as discussed below), LeDoux (1996), prominent for his work on emotions and the amygdala, has written, “The brain states and bodily responses are the fundamental facts of an emotion, and the conscious feelings are the frills that have added icing to the emotional cake” (p. 302). This is the kind of view that is represented by Dennett, not just for emotional consciousness, but for consciousness generally. In this view the brain computes what it needs to compute. It recruits the neurons necessary to produce behavior. What we experience as consciousness is an extra, with no causal effects. Human behavior could occur without it. Consciousness is a rather narrow summary in narrative form of what has already happened. The real workings of mind occur elsewhere.

All varieties of perception – indeed all varieties of thought or mental activity – are accomplished in the brain by parallel, multitrack processes of interpretation of sensory inputs (Dennett, 1991, p. 111).

Humphrey and Dennett (2002) offer an analogy with a termite colony, which “builds elaborate mounds, gets to know its territory, organizes foraging expeditions” (p. 28). But for all this apparent purposefulness, it is just a collection of termites with various roles, influenced by each other but not by any master plan.

Dennett calls the idea of a unifying spectacle that constitutes our point of view the Cartesian Theater. The epithet comes from Descartes’ (1649/1911) idea of a soul housed in the brain. The idea of a soul witnessing and forming meanings is too thoroughly old fashioned. There is no Cartesian Theater, says Dennett, no “Oval Office in the brain, housing a Highest Authority” (1991, p. 428). The conscious narratizing process is mistaken about how the mind works. Instead, Dennett says, think of the conscious self as “a center of narrative gravity” (1991, p. 418). He wants us to realize that a centre of gravity is an abstraction, not real. He writes,


Our tales are spun, but for the most part we don’t spin them; they spin us. Our human consciousness, and our narrative selfhood, is their product, not their source (p. 418).

I suppose Dennett wants us to understand the terms “spun” and “spin” in the foregoing quotation as the output of a spin doctor. In another paper he has extended this idea: Not only does consciousness have no causal status but it is also mischievously misleading:

[W]e are virtuoso novelists, who find ourselves engaged in all sorts of behavior, more or less unified, but sometimes disunified, and we always put the best “faces” on it we can. We try to make all of our material cohere into a single good story. And that story is our autobiography (Dennett, 1986, p. 114).

The chief fictional character at the centre of that autobiography is one’s self. And if you still want to know what the self really is, you are making a category mistake (Dennett, 1992, p. 114).

So a narratizing consciousness does not just happen after the fact; it’s a phoney. Compelling, for Dennett, are the data of simultaneous but disparate brain processes. For instance, in a group of people who have had operations to sever their left and right hemispheres to relieve epilepsy (split-brain patients), the hemispheres can know contradictory things (Gazzaniga, 1998). Only the left side has language, and this side can also recount narratives. The right side is silent. Gazzaniga (1992) says his favorite split-brain phenomenon was illustrated by flashing to the left hemisphere of a split-brain patient, PS, a picture of a chicken claw and to the right hemisphere a picture of a snow scene. Then PS was asked to choose by hand which of a set of pictures in full and continuous view was associated with the pictures he had seen flashed previously. His left hand (activated by the right side of his brain and prompted by the snow scene) chose a picture of a shovel, and the right hand (activated by the left side of the brain) chose a picture of a chicken. When asked why he made these choices, he said, “Oh, that’s simple. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed” (p. 90). The left (language-equipped) side of the brain had seen the claw and chose the chicken picture. But this same left side had no consciousness of the snow scene; instead it saw the left hand choosing the shovel, and it produced a reason without knowing it was – as one might say – a piece of fiction to fit what it knew. Also there are people who, because of brain damage, say they see nothing on one side of the visual field and yet react perfectly well to objects there. The phenomenon is called blindsight. Humphrey (2002) writes, “I have met such a case: a young man who maintained that he could see nothing at all to the left of his nose, and yet could drive a car through busy traffic without knowing how he did it” (p. 69). And, in multiple personality disorders, there is not one self but several (Humphrey & Dennett, 2002).

Further along the spectrum of doubters of the value of narrativity is Galen Strawson (2004), who argues that the idea of a narrative self is regrettable. There are plenty of people, he says, whose selves are what he calls “episodic,” with episodes being separate rather than bound together in any narrative way. He remembers, for instance, having fallen out of a boat into the water when he was younger and can recall details of this incident, but claims that this did not happen to the self who was writing his article. Strawson’s intuition of disconnectedness in life’s episodes is accompanied by the claim that to argue for narrative selves is to endorse falsehood: to tell stories is (as Dennett and Gazzaniga also argue) invariably to select, to revise, to confabulate, and thereby to falsify. Because of this falsification, the idea that we live narrative lives is therefore not just fallacious but ethically flawed.

Should the idea of a self with causal pow-ers that involve a narratizing consciousnesstherefore be replaced? I argue that it shouldnot. If we may take Dennett, again, as thecentral antagonist to the idea of a functionalnarrative consciousness, we can say thatalthough he is thoroughly immersed in cog-nitive science, the adjacent area of human-computer interaction is also important. In


P1: KAE0521857430c14 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 12 , 2007 22 :58

380 the cambridge handbook of consciousness

that area there occurs the idea of the interface. In personal computers nowadays we are offered the interface of a simulated desktop, which is supported by layers of computer programs. Although computers work by means of electron flows and semiconductors or, at another level, by binary operations, users would not find any inspection of them useful. They want to do such things as create documents and spreadsheets, make mathematical calculations, compose musical collections, or look at their photographs. Modern systems offer interfaces in such forms as enhanced virtual papers and other objects on a virtual desk, which enable users to do such things easily. Similarly, as users of our bodies, we do not want to know what our neurons are up to, nor yet the state of our hippocampus. We live in a world of agents somewhat like ourselves with whom we interact. We want to know what and who is in the world in terms of possibilities of interactions with them. We need an interface on our brain states in terms of intentions, beliefs, and emotions. That is what consciousness offers us: a functional conception of ourselves as agents, with certain memories, plans, and commitments. Narrative is, as Bruner has proposed, that way of thinking that connects actions to agents and their plans.

Confabulations certainly occur, and not just by split-brain patients. But one can also demonstrate the experience of light when one presses the side of the closed eye in the dark. The pressure triggers receptors in the retina. This does not mean that a world seen via light rays detected by the light-sensitive retina is an illusion. Certainly human consciousness has some properties of post-hoc rationalization, as Dennett claims. Certainly its properties are generated by brain processes of which, in the ordinary course of events, we know nothing, just as the interface on the computer on which I am writing this chapter is produced by semiconductors whose workings I do not know in detail. But the conclusion is not that consciousness is a fraud, any more than the icons on my computer screen are hallucinations. A conscious understanding of self is the means by which I keep track of my memories, my plans, my commitments, and my interactions with others. With it, I know about those activities that make me a member of the society in which I live.

In this chapter I propose that we accept not only Dennett's metaphor of self-as-novelist but also that, as proposed by Richard Velleman (2002), different conclusions may be drawn than those offered by Dennett. Following Velleman's argument, I explore the idea of a conscious unitary self, based on functional properties of narrative.

Four Aspects of Consciousness

Consciousness has many meanings. To bring out some of the properties of consciousness that we value, I propose four aspects (Oatley, 1988), each named after an originator: Helmholtz, Woolf, Vygotsky, and Mead.

Helmholtzian Consciousness

The first and most basic conception of consciousness in modern cognitive science was formulated by Hermann von Helmholtz (1866). He proposed that perception was the drawing of unconscious conclusions by analogy (see, e.g., Oatley, Sullivan, & Hogg, 1988). He pointed out that we are not conscious of the means by which we reach cognitive conclusions (neural and cognitive computations). We are only conscious of the conclusions themselves.

Dennett's (1991) theory is a version of Helmholtz's idea of perception as unconscious inference to reach certain conscious conclusions. Dennett says distributed processes of neural computation deliver such perceptual and other mental conclusions, which are neither necessarily veridical nor authoritative. Instead, as he argues, it is as if iterative drafts of such conclusions are produced, which change in certain respects as a function of other input.

So far so good. I agree. Helmholtz's proposal is, I believe, a deep truth about how the brain works. Information flows from sensory systems to perceptual and other kinds


narrative modes of consciousness and selfhood 381

of interpretation, and from the operations of neural systems to experience. Although I did not include this aspect in my 1988 article, I think one must include in this proposal the consciousness of emotions (Panksepp, 2005), which I discuss further below. Much of Helmholtzian awareness is a one-way flow. For example, when one is looking at a visual illusion, even when one knows that it is an illusion, one cannot, by taking thought, alter the conclusion that one's visual processes reach.

Woolfian Consciousness

Helmholtz's theory is applied principally to perception, in which we receive input from excitations of the sense organs that provide cues to objects and events in the world that produce these excitations. We are, however, also conscious of images that occur verbally and visually, with no perceptual input. These images can be thought of as described by William James (1890) in his metaphor, the "stream of consciousness," or as the states depicted by Virginia Woolf in Mrs Dalloway (1925): a changing kaleidoscope of thoughts, ideas, and images. The verbal and visual images depicted by Woolf have as much to do with preoccupations, memories, emotions, and trains of inward thought as they do with perceptual input.

As with Helmholtzian consciousness, in Woolfian consciousness there is a principal direction of flow: from neural process to mental image. What Woolf did, starting with her novel Mrs Dalloway, was to make inner consciousness recognizable in the form of a narrative, a novel. Here we are, for instance, inside the mind of Clarissa Dalloway as she walks up Bond Street in London on a June morning, a year or two after the end of World War I.

. . . a roll of tweed in the shop where her father had bought his suits for fifty years; a few pearls; salmon on an iceblock.

"That is all," she said, looking at the fishmongers. "That is all," she repeated, pausing for a moment at the window of a glove shop where, before the War, you could buy almost perfect gloves. And her old Uncle William used to say a lady is known by her shoes and her gloves. He had turned on his bed one morning in the middle of the War. He had said, "I have had enough." Gloves and shoes; she had a passion for gloves, but her own daughter, Elizabeth, cared not a straw for either of them (p. 12).

No longer based in perception of the purely Helmholtzian kind, Mrs Dalloway does not just see a tailor's shop and fishmongers. Her consciousness is an admixture of perceptions, memories (where her father bought his suits), and judgements ("That is all"). Associations occur. Gloves remind her of her uncle, and they prompt memories. And "That is all" has a mental association with her uncle saying, "I have had enough." The thought of gloves also stirs her preoccupation with her daughter Elizabeth, from whom she is on the verge of painful estrangement. Underlying this are other emotional themes: her relationship with her father, her status as a lady, repercussions of the war, the fact of death.

In Woolfian consciousness, thought breaks free of the immediacy of perception. No doubt Dennett would be pleased to acknowledge the hints in Woolf's depiction that consciousness is not entirely unified.

Vygotskyan Consciousness

A third aspect is the influence of consciousness on thoughts, decisions, and actions. Mrs Dalloway is about a single day on which Clarissa Dalloway gives a party. The novel's opening line is, "Mrs Dalloway said she would buy the flowers herself." If Clarissa Dalloway were a real person, Dennett would say a set of neurocomputational processes generated her utterance: "I'll buy the flowers myself." She gives a reason: "For Lucy [her servant] had her work cut out for her," preparing for the party. Part of that work included arrangements with Rumpelmayer's men, who were coming to take the doors off their hinges to allow the guests at the party to move about more freely. According to Dennett, all such thoughts of reasons for going out to buy the flowers would be post-hoc elaborations, perhaps confabulations.


Lev Vygotsky (a literary theorist before he became a psychologist) proposed that thoughts could affect actions. His basic conception was of children developing until about the age of 2 years much as apes develop, immersed in their relationships with parents and siblings and solving as well as they might the problems that confront them. But then, with the entry of each child into language, individual resources are augmented by the resources of culture. Vygotsky says the social world becomes internalized as mind. So it is not that the mind is a container that holds thoughts or that conscious thoughts are solely the output of neural processes. Rather, the mind is an internal social world, which one can consult and in which one can take part.

A typical example of Vygotsky's thinking is from a study by his colleague Levina (Vygotsky, 1930). She was studying a little girl who had been given the task of trying to retrieve some candy, with potential tools of a stool and a stick available. The child talks to herself. But she does not just say aloud what she has done, as one might think if one had been reading Dennett. She makes suggestions to herself: "'No that doesn't get it,'" she says. "'I could use the stick.' (Takes stick, knocks at the candy.) 'It will move now.' (Knocks candy.)" Moreover, the child reflects on the problem and analyses its solution: "'It moved. I couldn't get it with the stool, but the, but the stick worked'" (p. 25).

Consciousness, here, is not an equivocal post-hoc account, but a mobilization of the resources of human culture, which become potentially available to each of us. In this case the child instructs herself as she has been instructed by adults. According to Vygotsky, this is how mind becomes mind. Consciousness is not just a result; it can be a cause.

Among the most interesting proposals about the functions of a unifying consciousness is one from the early days of artificial intelligence: a program called Hacker (Sussman, 1975), which would learn skills of building structures with (virtual) children's building blocks. (The name "Hacker" comes from a time that seems now almost prehistoric when it meant something like "computer nerd.") When it makes mistakes, the program learns from them and rewrites parts of itself. For instance, it might want to put a block on top of one that is on the floor. But say the block on the floor already has something on it. The program cannot complete its plan. Prompted by the discrepancy (the problem) between its current state and its goal, it constructs a new piece of plan (program). It draws from a library of mistakes (bugs) and tries to generalize: For instance, it might conclude, "If I want to put a block on another block which is on the floor, I must always clear off anything that is on the top of the block on the floor."

Sussman's program has two kinds of code. The first is code comparable to that in any program: detailed plans made up of sequences of actions that will achieve goals. The second kind is not represented explicitly in most programs. It is an explicit account of the goals of each procedure (e.g., to put one block on top of another). Only with such a representation could a program analyze a mismatch between an intended plan and an outcome that was not intended. With such a representation the program can reason backwards about the effects of each action in relation to a goal. With such a representation it can write new pieces of program – patches – to solve problems that are encountered and thereby achieve the goal.

Here is the analogy with consciousness. When introducing a patch to the program (itself), Hacker runs in what Sussman calls "careful mode," in which the patch is compared line by line with the range of possibly interacting subgoals and goals. Here is a unified agency, trying out imagined actions in a model of the world before committing to action. Hacker itself is not conscious. But Sussman's account contains the idea that consciousness is a unifying process that includes a model of goals and the possible results of our actions in relation to them. A unifying consciousness is needed to learn anything new that will change the substance of self.
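
Sussman's two kinds of representation, and the patching they enable, can be sketched in a few lines. The following is a minimal, hypothetical illustration of the idea only, not Sussman's actual Hacker program (which was far richer): a plan is a bare sequence of actions, while a separate, explicit goal description lets the system notice a mismatch between intention and situation and write itself a patch.

```python
# A toy blocks world: each block maps to what it rests on.
world = {"A": "floor", "B": "floor", "C": "A"}  # block C sits on block A

def clear(block):
    """True if nothing rests on `block`."""
    return block not in world.values()

def put_on(x, y):
    world[x] = y

# Kind 1: a plan is a bare sequence of actions.
plan = [("put_on", "B", "A")]

# Kind 2: an explicit, inspectable statement of what the plan is *for*.
goal = ("on", "B", "A")

def goal_satisfied(goal):
    _, x, y = goal
    return world.get(x) == y

def run_carefully(plan, goal):
    """Execute the plan, inserting a patch when a precondition fails."""
    patched_plan = []
    for op, x, y in plan:
        if not clear(y):
            # Mismatch between the intended action and the situation:
            # construct a patch (clear the blocker) and remember it.
            blocker = next(b for b, support in world.items() if support == y)
            patched_plan.append(("put_on", blocker, "floor"))
            put_on(blocker, "floor")
        patched_plan.append((op, x, y))
        put_on(x, y)
    assert goal_satisfied(goal)  # only checkable with an explicit goal
    return patched_plan

new_plan = run_carefully(plan, goal)
# new_plan now records the learned patch: clear C off A, then put B on A.
```

Without the second kind of representation (the `goal` tuple), the final assertion could not even be stated; that is the point of Sussman's design.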

Dennett (1992) compares a storytelling self with a character whose first words, in Melville's Moby Dick, are "Call me Ishmael." But, says Dennett, Ishmael is no more the author of that novel than we are the authors of stories we tell about ourselves. Dennett develops the idea further by imagining a robot, Gilbert:

"Call me Gilbert," it says. What follows is the apparent autobiography of this fictional Gilbert. Now Gilbert is a fictional, created self, but its creator is no self. Of course there were human designers who designed the machine, but they did not design Gilbert. Gilbert is a product of a process in which there are no selves at all . . . the robot's brain, the robot's computer, really knows nothing about the world; it is not a self. It's just a clanky computer. It doesn't know what it's doing. It doesn't even know that it's creating this fictional character. (The same is just as true of your brain: it doesn't know what it's doing either.) (pp. 107–108).

Velleman (2002) has contested these points. Imagine Gilbert getting locked in a closet by mistake. It has to call for help to be released. The narratizing robot could then give an account of these matters. Velleman takes two important steps. In the first, he argues that such a robot as Gilbert would have subroutines that enabled him to avoid and escape from danger. Being locked in a closet would be such a danger, and if it occurred, subroutines would be activated. Let us suppose these subroutines were labeled "fear." Balleine and Dickinson (1998) argue that access to such affective states is a core process in identity.

Velleman says that if Gilbert were to make an attribution to such a state, his narrative autobiography might include the statement: "I'm locked in the closet and I'm starting to get frightened" (Velleman, p. 7). If the fear module recommended breaking down the door, then in his autobiographical narrative he might say, "I broke down the door, because I was frightened of being locked in for the whole weekend." Gilbert thereby would have taken a step to being an autonomous agent. He would be acting for a reason (his fear) and in a way that was intelligible to other people (us). Notice, too, that to act for a reason, the robot must be not just responsive to stimuli (internal or external) that prompt it to do this or that. It must have a representation of goal states (such as avoiding danger) of the kind that Sussman's program had. A narrative processor – based on goals that can be represented explicitly and that can generate plans of action – is exactly the kind of representation needed by an autonomous self, able to explain action in terms of goals and other reasons.

Claparède's (1934) law of awareness is that people become conscious of an action when it is disrupted. Then consciousness is flooded with emotion. Why should this be? It is because emotions occur with the unexpected, when what is familiar, what is habitual, what is well practiced, no longer works. Negative emotions occur, as Peterson (1999) has proposed, when an anomaly occurs or, in narrative terms, when a vicissitude arises.

Emotions are central to narrative because they are the principal processes in which selfhood is constructed. One falls in love, and the self expands to include the new person. One is thwarted, and one forms vengeful plans to overcome the purposes of the antagonist. One suffers a severe loss, and one's consciousness searches to find what in one's theory of self was mistaken, perhaps perseverating in denial and blaming others or perhaps recognizing what should be changed in oneself. Consciousness of this urgent problem-solving kind functions in the kind of way that Sussman has proposed. If we are to learn from mistakes and change implicit theories in which we are lodged, a unifying consciousness is necessary.

The second step in Velleman's argument is that Gilbert's very abilities at telling stories also allow him to plan actions. Suppose Gilbert works in a university: He does useful things to achieve goals that he is given by members of the faculty. He might have been given the goal of going to the library to fetch Dennett's book, Consciousness Explained. If at the same time his batteries had started to run down, Gilbert might notice this internal state and plan to go to the closet to get some new batteries. He might say, "I'm going into the closet." Then, argues Velleman, balanced between the possibilities of the library and the closet, "the robot now goes into the closet partly because of having said so" (2002, p. 6; see also Velleman, 2000).


Narrative is based on plans that flow from intentions. A computer sophisticated enough to construct its autobiography could run the narrative processor either forward in planning mode or backward in autobiographical mode. It would, in other words, achieve what I have called the Vygotskyan state of consciousness, of being able to direct its own actions. Gilbert is able to act not only in terms of reasons such as internal states but also – running the narrative/planning processor forward – to plan to act in coherence with the story he is telling about himself.
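
The idea that one processor can run in two directions can be sketched as follows. This is an illustrative assumption, not an implementation from Velleman or Dennett: Gilbert, his goals, and his reasons are invented here, and the "processor" is reduced to a single table linking goals to actions and reasons.

```python
# One shared representation: each goal is linked to the action that
# achieves it and the reason behind it.
knowledge = {
    "have_fresh_batteries": ("go to the closet", "my batteries are low"),
    "have_the_dennett_book": ("go to the library", "I was asked to fetch it"),
}

def plan_forward(goal):
    """Planning mode: from a goal, derive the next action."""
    action, _reason = knowledge[goal]
    return action

def narrate_backward(goal):
    """Autobiographical mode: from the same entry, explain a past action."""
    action, reason = knowledge[goal]
    return f"I decided to {action}, because {reason}."

print(plan_forward("have_fresh_batteries"))
# → go to the closet
print(narrate_backward("have_fresh_batteries"))
# → I decided to go to the closet, because my batteries are low.
```

The point of the sketch is that planning and autobiography draw on one and the same goal-and-action structure, traversed in opposite directions.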

Velleman's idea, here, is that with coherence-making processes a central organizing agency comes into being. Oatley (1992b) has argued that narrative is a mode by which we humans give meaning to life events beyond our control, to our human limitations, to mortality. It would be an illusion to think that all such matters can be made meaningful, but we become human in the ways that we value by making some such matters so, and these we share with others in a meaning-drenched community. We are meaning-making beings who, by means of narrative, make some aspects of disorderly reality comprehensible, and in some ways tractable. Coherence in the computer generation of stories has been achieved by Turner (1994), and Oatley (1999) has argued that coherence is a principal criterion of successful narrative. By adhering to this criterion, then, the narratizing agency becomes capable of influencing action and creating a self who, in retrospect, could achieve an autobiography that has some meaningful coherence. Art stops merely imitating life. Life starts to imitate art. The Woolfian storyteller, aware of inner processes, becomes Vygotskyan. Coherent with the story in which she is the principal character, Mrs Dalloway instructs herself, "I'll buy the flowers myself." Having said so, she does.

Meadean Consciousness

The fourth aspect of consciousness is social. As described by George Herbert Mead (1913), it is a consciousness of voices in debate: "If I were to say this, she might say that." This form of consciousness rests on an awareness of self and other. It is in this form that the possibility arises of being able to know what other people are thinking.

Mead describes how children at about the age of 4 years play games that require them to take roles and to experiment with changing these roles. In hide-and-seek, you can't have much fun hiding unless you imagine seekers looking for you. Developmental psychologists have discovered that, about the age of 4, children develop a theory of mind (Astington, Harris, & Olson, 1988). They begin to understand that other people might know quite different things than they do. The hider has a representation of herself; she knows who and where she is. She knows, too, that the seeker does know who she is but does not know where she is.

Once a person has reached this stage of consciousness, the idea of self as narrator comes fully into its own. Not many stories are autobiographies. The fully developed narrator does not simply offer the output of some autobiographical module. Nor does he or she only use the narrating module to instruct him- or herself. Such a narrator has a theory of other minds and a theory of his or her own mind persisting through time. Such a narrator is able thereby to act in a world with other beings constituted in a similar way. Saying "I will do x" counts as a commitment to other people to do x, and other people organize their actions around this commitment. So if I were to extend the story of Gilbert begun by Dennett, and continued by Velleman, to include interaction with others, it would stop being autobiography. It would become more like an episode in a story, something like this:

Gilbert was nearing the end of the day, but like all good robots he was conscientious. He was about to set off to the library to fetch Consciousness Explained, when he noticed his batteries were low. He felt anxious. If there were too long a wait at the library elevator, he might not make it back to the Department.

"I'll go to the library to get the Dennett book in a few minutes," he said to Keith. "First I have to go to the closet for some new batteries."

Keith was staring at his computer screen. "OK," he said, without paying much attention.

The phone rang. It was a teacher at his daughter's school. His daughter had had an accident while playing basketball. Could he come immediately? Keith had a sudden image of his daughter lying on the floor, immobile. He quickly put his stuff in his book bag, shut down his computer, locked the door to the closet, and ran down the hallway.

Gilbert heard the closet door being locked. In the middle of changing his batteries, he couldn't move and didn't have enough power to call out.

“Carbon-based life forms,” he thought.

In his writings on consciousness, Dennett has worried about whether a conscious self might really be autonomous. Perhaps he should set aside his worries. The self is much more. It is a model of self-with-others: first the self-with-parent or other caregiver, then self-with-friends and suchlike others, then self with the generalized other of a certain society, then perhaps self with significant other, and so forth, each with a basis of emotions. Self is not just a kernel of autonomy. Self is self-in-relation-to-other, an amalgam of the implicit theories we inhabit, suffused with the emotions that prompt our lives and are generative of our actions in the social world. The very processes that allow us to make inferences about others' minds, about what another person may be thinking and feeling, that allow us to give a coherent account of that person as an autonomous agent, are the same as those that enable us to form models of our own selfhood, goals, and identity and to project ideas of ourselves into the future.

Narrative and Mutuality

Having reviewed the aspects of consciousness that I proposed (Oatley, 1988; Helmholtzian, Woolfian, Vygotskyan, and Meadean) and their role in the construction of narratives, let me turn to the crux of why consciousness is a narratizing process that must be based in folk theory. Selves are social. In our species we as individuals can't manage much on our own. Together we have built the communities in which we live and all the accoutrements of the modern world. Our accomplishments depend on culture, on being able to propose and carry out mutual plans in which people share objectives and do their parts to ensure they are accomplished, and then converse in a narrative way about outcomes. Oatley and Larocque (1995) have found that people typically initiate something of the order of ten explicitly negotiated new joint plans each day. We (Oatley & Larocque, 1995; Grazzani-Gavazzi & Oatley, 1999) had people keep diaries of joint plans that went wrong. Errors occurred in only about 1 in 20 new joint plans. People succeed in meeting friends for coffee, succeed in working together on projects, succeed in fulfilling contractual arrangements. If it were not for an interface of folk theory that enables each of us to discuss and adopt goals, exchange relevant beliefs about the world, and elaborate parts of plans to ensure mutual goals are reached (Power, 1979), joint plans could not be constructed. And if we were not able to narrate them to each other afterward, we could scarcely understand each other. If those psychologists, neuroscientists, and philosophers were correct who maintain that beliefs, desires, and emotions are figments of a radically false folk theory, we would not find that 19 out of 20 joint plans were accomplished. The number would be zero. And when things went wrong, confabulating narrators could never know why. Humans would be able to do certain things together, but we would perhaps live in much the same way as do the chimpanzees.

When joint plans go wrong, errors occur for such reasons as memory lapses or because a role in a joint plan has been specified inexactly. When such a thing occurs, the participants experience a strong consciousness of emotion, most frequently anger but sometimes anxiety or sadness. People typically give narrative accounts of their own intentions and often of the failure of the other. When a person who has not shown up for a meeting is a loved one, vivid fears of accidental death may occur; but if there is a failure of a plan made with someone who is not an intimate, a mental model is elaborated of that person as unreliable and untrustworthy. Here is a piece of narrative recorded in a joint error diary (from Oatley & Larocque, 1995):

My co-worker was measuring some circumferences of pipes, converting them to diameters and reporting them to me. I recorded the figures and used them to drill holes later. The drilled holes were incorrect for diameters. It could have been the conversion or measurement. I had to modify the holes.

Continuing the story, our participant elaborated a mental model of his co-worker and formulated a plan that was coherent with what had happened, in a way that depended on his analysis: "My co-worker is not as careful about numbers as I am – maybe I should do this kind of task with someone else." Further plans were elaborated – continuing the narrative in the forward direction – concerning the relationship with the co-worker: "I need to and want to do something about this kind of thing with him."

Personal and Interpersonal Functions of Consciousness

The novelist constructs characters as virtual people who typically have an ensemble of emotion-based goals (intentions), from which flow plans of interaction (Oatley, 2002). In daily life, we each construct our actions (actions, not just behavior) in a similar way, in the light of what is possible, by coherence with a mental model of our goals, of our resources, and of our commitments. The person (character) whom each of us constructs, improvising as we go along, is not virtual, but embodied. This person accomplishes things in the world and interacts with others whom we assume are constituted in a way that is much like our self.

The functions of consciousness include an ability to follow, to some extent, the injunction written long ago in the temple at Delphi: "Know thyself." In doing so, we each form a mental model of our goals, inner resources, limitations, emotions, commitments, and values that is no doubt inaccurate, but that is somewhat serviceable. In the course of life, we come to know how far and in what ways we can rely on ourselves. With such knowledge we can then also be dependable for others.

Humphrey (1976, 2002) has argued that a principal function of introspective consciousness of ourselves is the understanding of others as having selfhood and attributes that are similar to our own. By using our model of ourselves to imagine what others are thinking and feeling, we become what Humphrey calls "natural psychologists." Although this method is not very accurate, it equips us far better to undertake social interaction than if we were to rely solely on behavioral methods. By elaborating mental models of others based on our own introspection, on our experience with them in joint plans, and on what they and others say in conscious narrative accounts about them, we come to know how far and in what ways we can rely on them in the joint plans of our extended relationships.

Here, then, I believe is the principal reason for thinking narrative consciousness to be functional. It is a reason that Dennett neglects because he, like most of us Westerners who work in the brain and behavioral sciences, tends to think in terms of individual selves, individual minds, and individual brains. But if our species is predominantly social and depends for its being on mutuality and joint planning, we need to consider also such interfaces as the interface of language, along with its conscious access to what we take to be our goals and plans, by which we arrange our lives with others. The language is not just one of describing, in the fashion of post-hoc autobiography, as Dennett suggests. It is based in what one might call a language of action (mental models, goals, plans, outcomes), of speech acts to others (see, e.g., Searle, 1969) as we establish mutuality, create joint plans, and subsequently analyze outcomes in shared narrative terms. The language is also one of explanation in terms of agents' intentions and the vicissitudes they meet. Narrative is an expression of this language.

The Evolution of Narrative

For the period up to about 30,000 years ago, the palaeontological record of hominid evolution is largely of fossilized bone fragments and stone tools, with nothing in writing until 5,000 years ago. Judicious comparisons with evidence of our living primate relatives have allowed a number of important inferences.

Intentions and Consciousness of Intentions

Narrative is about intentions. Its sequences involve chains of both human and physical causation (Trabasso & van den Broek, 1985). A surprising finding from primatology is that our closest relatives, chimpanzees and bonobos, have only a very limited understanding of causality in both the physical and the social world. Although they certainly have intentions, they do not seem conscious that they have them. Although they are good at interacting with others, they do not seem conscious, in the way that we are, that other primates have intentions. These conclusions have been reached independently by two research groups who have conducted extensive series of experiments with chimpanzees: Tomasello (1999) and Povinelli (2000).

Chimpanzees are successful instrumentally in the wild, and they are very social. Their brains generate intentions, which they carry out. If they were equipped with post-hoc autobiography constructors, they would be the creatures whom Dennett describes. They seem to have no autonomous selves. Although they solve problems and even use primitive tools such as sticks and leaves, their lack of any sense of their own intentions, or of plans mediated by tools, makes them incapable of instructing other animals in a new technique or, in the wild at least, of receiving technical instruction. The occurrence of sticks near termite mounds, together with chimpanzees' interest in food and in manipulating objects, has led some groups of them in eastern Africa to use sticks as tools to poke into termite mounds to fish out termites, which they eat (McGrew, 1992). A stick left near a termite mound by a mother chimpanzee may suggest to her daughter that she pick it up and poke it into the mound. What chimpanzees don't do – can't do, according to Tomasello (1999) – is for one (say the mother) to show another (say her daughter) how, by using a stick, she can fish for termites. The daughter, although she may see from the mother's activity that there is something interesting to look at, and although she may work out for herself how to fish for termites with a stick, does not understand that her mother is intentionally using a tool. She does not thereby see quickly how to use it herself. Chimpanzees thus lack an essential step for forming true cultures, in which useful technical innovations are preserved and passed on in a ratchet-like process.

In an experiment by Povinelli and O'Neill (2000), two of a group of seven chimpanzees were each trained separately, by the usual techniques of reinforcement-based learning, to pull on a rope to bring toward them a weighted box with fruit on it. When they had become proficient at this task, their training changed. The box was made heavier so that one chimpanzee could not move it. Now both chimpanzees were trained by reinforcement techniques to work together to pull the box and retrieve the food. All seven of the chimpanzees knew each other well, and all had previously taken part in other experiments that involved lone pulling on ropes to retrieve food. Here is the question. If, now, one who had not been taught to pull the rope in this apparatus were paired with a chimpanzee who was experienced in cooperative pulling, would the experienced one show the naïve one what to do? The answer was no. The experienced chimpanzee would pick up its rope, perhaps pull, and wait for a bit, looking over toward the other. One of the five naïve chimpanzees, Megan, discovered how to pull the rope independently of her experienced partner. She thereby managed to work with both of the experienced animals to retrieve the heavy box with the food on it. But for the most part the experienced and the other naïve partners failed in the joint task. On no occasion did either of the experienced chimpanzees make a gesture or attempt to direct the naïve one's attention to relevant features of the task. On no occasion did the experienced animal pick up and offer the rope to the naïve one. The experienced chimpanzee did not seem to infer that the naïve chimpanzee lacked the proper intentions or knowledge in the task.

Another striking finding is that neurons have been discovered in the premotor area of monkeys' brains that respond both when a human hand is seen picking up a raisin and when the monkey itself reaches intentionally to pick up a raisin. Rizzolatti, Fogassi, and Gallese (2001) call them "mirror neurons." They do not respond to a human hand when it is moving without any intention to pick up the raisin. They do not respond when a human hand picks up the raisin with a pair of pliers. Rizzolatti et al. argue that the brain recognizes action not in terms of a purely visual analysis, but in a process of analysis by synthesis, by means of its own motor programs for carrying out the action. Rizzolatti and Arbib (1998) have further suggested that this mirror system is a preadaptation for learning language based on a case grammar (Fillmore, 1968) around verbs of intention. Gallese, Keysers, and Rizzolatti (2004) have proposed that mirror neurons enable simulation of the intentions and emotions of other individuals in the social world, and hence afford a neural basis of social cognition.

The issue remains mysterious, as indicated by recent experiments by Tomasello and his group. For instance, Behne et al. (2005) found that 18-month-old human infants, but not 6-month-olds, could tell the difference between an experimenter who was able but unwilling to give them a toy (teasing) and one who was willing but unable (e.g., because the toy dropped out of reach). Call et al. (2004) found that chimpanzees could also make this same distinction, thus demonstrating that they know more about others' intentions than had previously been thought. We might say that monkeys understand a bit about intention (in terms of mirror neurons) and that chimpanzees half-understand it in a more explicit way. They still, however, lack a grasp of full mutual intention, which is the centre of narrative consciousness.

Although they intend and feel, monkeys and apes do not fully know that they and their fellows are beings who can intend and feel. As John Donne put it: "The beast does but know, but the man knows that he knows" (1615–1631/1960, p. 225). This seemingly small extra step has made a great difference. It is an essential step to culture, to narrative consciousness, and to selfhood.

Mental Models of Others

In her ethological work with chimpanzees in the wild, Jane Goodall (1986) learned to recognize each one as an individual. Chimpanzees recognize each other as individuals, and this is the basis for their elaborate social life. They know who the alpha animal is and who everyone's friends and allies are. Because chimpanzees are promiscuous in their mating, no one knows who the father of any youngster is, but everyone knows other aspects of kinship. Only when Goodall had started to recognize individuals did the social lives and actions of the chimpanzees start to make sense.

Chimpanzees know everyone in their social group, which can reach a maximum size of about 50 individuals. Each one forms a mental model of each other one, which includes something of that one's history, habits, and allies. Dunbar (1993, 2003) and Aiello and Dunbar (1993) have found that the size of the brain in primate species has a close correlation with the maximum size of its social group.

Each chimpanzee spends some 20% of its time sitting with others, one at a time, and grooming: taking turns in sorting through their fur, removing twigs and insects. It is a relaxed activity. It is the way primates sustain affectionate friendships with each other. Dunbar (1996) shows a graph in which the data points are separate hominid species (Australopithecus, Homo erectus, etc.), with an x-axis of time over the past 3 million years and a y-axis of the amount of grooming time required according to the species' brain size and the inferred size of its social group. Humans maintain individual mental models of about 150 others. If we used the same procedures as chimpanzees, we would have to spend about 40% of our time grooming to maintain our friendships. As group size increased, a threshold was reached, according to Dunbar, between 250,000 and 500,000 years ago, of about 30% of time spent grooming. This is the maximum any primate could afford and still have time for foraging and sleeping. It was at this point, argues Dunbar, that language emerged as conversation: a kind of verbal grooming.

Conversation is something one can do while gathering food or performing other tasks, and one can do it with several others. What do we human beings talk about in friendly conversation? Dunbar (1996) has found that, in a university refectory, about 70% of talk is about the doings of ourselves and others: conversation, including gossip, most typically the elaboration of mental models of self and others and of people's goals in the social group. Dunbar might have added that the incidents about which people talk are recounted in narrative form.

Consciousness and Emotions

Before conversation and narrative consciousness could emerge, preadaptations were necessary. I have mentioned one suggested by the findings of Rizzolatti and his colleagues. In this subsection and in the next I sketch two more preadaptations, the first of which concerns emotions. Panksepp (1998, 2001, 2005) has argued that the most parsimonious explanation of a range of data is that emotion is the basic form of consciousness. He calls it primary process affective consciousness. We share it with other mammals, and it is subserved by a homologous region of the brain, the limbic system, in different mammalian species. Thus when a baby mammal is separated from its mother, it utters distress calls. This behavior is generated in a specific limbic area, and according to Panksepp's conjecture, the animal's distress is much the same as a human infant feels when separated from its mother. It is also the core of the distress that we might feel as adults if we had arranged to meet our partner and, after a half-hour, he or she did not show up, and we began to worry that he or she had suffered an accident. Notice that the consciousness involved is vividly salient. Its concern is interpersonal.

Consider another emotion: interpersonal fear. De Waal (1982) describes how, in the group of 25 or so chimpanzees who lived in a park-like enclosure at Arnhem Zoo, Yeroen was an alpha male until he was deposed by the then-beta male, Luit. As alpha, Yeroen received between 75% and 90% of the ritual submissive greetings made by individuals in the troop. This submissive greeting is well marked in chimpanzees. Typically it includes bowing, making short panting grunts, sometimes making offerings such as a leaf or stick, or sometimes giving a kiss on the feet or neck. In the early summer of 1976, Luit stopped making this kind of greeting to Yeroen. On 12 June, he mated with a female just 10 metres from Yeroen, something of which Yeroen was normally extremely intolerant. On this occasion Yeroen averted his eyes. Later that afternoon, Luit made angry aggressive displays toward Yeroen. De Waal said he thought at first that Yeroen was ill. But this was not so. Only later did he realize that these were the first moves of a take-over that took 2 1/2 months to accomplish and required much interindividual manoeuvring. It involved Luit inducing the adult females in the group to abandon their allegiance to Yeroen and enter an alliance with him. The take-over was completed on day 72 of the sequence, when Yeroen made his first submissive greeting to Luit. When de Waal describes the events of this dominance take-over, he recounts a narrative that imposes meaning on the events for us humans. It is not, of course, a narrative that the chimpanzees could construct. De Waal says that 12 June was the first time he ever saw Yeroen scream and yelp and the first time he saw him seek support and reassurance. We may imagine that on that day, Luit felt angry toward Yeroen – his hair stood on end, and though he had normally looked small, he now looked the same size as the alpha male Yeroen. We may imagine, too, that this was the first time Yeroen felt afraid of him. The Panksepp conjecture is that Luit did indeed feel angry and Yeroen did indeed feel afraid, in the way that we humans would when our position was threatened.

What we may say about this in terms of consciousness is that, for the chimpanzees in this group, events and emotions would have unfolded in a sequence of present-tense happenings. Among them, only what I have called the interaction-type elements of narrative would be present. My conjecture is that immediate emotions – anger, fear, friendly alliance, deference – conferred the structure on each episode of interaction, but the whole sequence would not have, could not have, a plot of the kind one expects in a story (either prospectively or retrospectively) as far as the animals were concerned.

Among humans, however, sequences of such events not only lend themselves to narratization; we are also unable to avoid turning them into story form (Oatley & Yuill, 1985). At some time since the line that would lead to humans split, some 6 million years ago, from that which would lead to the chimpanzees, the perception of interaction-type elements of narrative acquired the added elements of story-type narratization. Thus when de Waal (1982) describes the events of Luit taking over the alpha position from Yeroen in 1976, he can only do so in terms of a humanly recognizable story told, inevitably, by a narrator, a kind of novelist not of the self, as Dennett has postulated, but of others.

My hypothesis, then, is that first, as Panksepp has proposed, emotion is a primary form of Helmholtzian consciousness (with sensory awareness being another). Second, emotions are frames or scripts for interindividual relationships. Anger (such as Luit's when he displayed aggressively to Yeroen) sets up a script for conflict. Emotions structure relationships so that sequences of interaction are prompted. Third, as Paulhan (1887/1930) proposed, emotions are caused by disruptions of action or expectancy (vicissitudes), and they can completely fill consciousness. Fourth, only with the coming of language (or perhaps the concepts of prelanguage) do the interindividual scripts or frames of emotions become the bases of episodes in stories. At this point, a new kind of consciousness can emerge that sees itself and others as instigators of intended actions and as experiencers of the emotions that result from these actions. Oatley and Mar (2005) have argued that social cognition is based on narrative-like simulations, the conclusions of which can (as Helmholtz insisted) become conscious. The elaborated story-type consciousness, in which agents act for reasons and are responsible for their actions, is built on preadapted bases (including those of mirror neurons and emotion-based relating) that were in place in hominid ancestors.

Mimesis, Metaphor, and Imagination

Homo erectus emerged about 1.9 million years ago and was our first ancestor to look more humanlike than apelike. Although simple stone tools pre-existed this species, it was with these beings that the elaboration of such tools began. With them, also, came the first strong evidence for the importance of meat in the diet, and perhaps even, according to Wrangham (2001), the control of fire and hence cooking. With them came a second important preadaptation to language (following that of consciousness of emotions). Donald (1991) calls it mimesis: a non-verbal representation of action and an ability to reproduce actions. It enabled fundamental cognitive developments of cultural forms of group enactment, some of which are still with us, such as dance and ritual.

Donald says that the next important evolutionary transformation did involve language. It was to myth, which I take to derive from an early form of narrative consciousness. He dates myth as far back as the time that Dunbar has postulated for the emergence of conversation, but his emphasis is on narrative aspects. Myth, says Donald, is the preliterate verbal way of understanding how the world works. It can pervade every aspect of people's lives. For instance, among the !Kung of the Kalahari (Lee, 1984), illnesses are typically seen as caused by people who died a short time before (parents and grandparents). While they were alive these people were good, but many say that once dead they generally became malevolent (notice the narrative structure). Others say they are harmful only when the living don't behave properly. Living people are by no means powerless, however. The means of maintaining health, and of healing the sick, have to do with interpreting the interventions of these spirits and sometimes combating them. So, says Donald,

Myth is the authoritative version, the debated, disputed, filtered product of generations of narrative interchange about reality . . . the inevitable outcome of narrative skill and the supreme organizing force in Upper Paleolithic society (p. 258).

Myth is a narrative form of social exchange, used for thinking and arguing about how the world works. It is active today, and not just in the Kalahari. "We are on the side of good, dedicated to the fight against evil," for instance, and "You can do whatever you really want to do," are mythic sentiments of some vitality in North America. (Notice, again, the narrative elements of goals and actions.) Myths have pragmatically important properties. By casting a matter into a symbolic form, they make it potentially conscious and an object of cultural consideration. It becomes a potent means of organizing both individual and societal behavior.

It was between 50,000 and 30,000 years ago that the first art began to appear in the human record. Caves in south-east France contain paintings of animals on their walls from 31,000 years ago (Chauvet, Deschamps, & Hillaire, 1996). Several cultural developments, in addition to paintings and the production of ornamental artifacts, occurred around the same time. One was treating the dead in a special way, burying them with ceremony. Another was a sudden proliferation in the types of tools. Mithen (1996) has proposed that these developments were related. Rather than simply having separate domain-specific areas of knowledge – social knowledge of the group, technical knowledge of how to make stone tools, natural historical knowledge of plants and animals – the people of those times began to relate one kind of knowledge to another. Here began imagination and metaphor: A this (in one domain) is a that (in another domain). This charcoal mark on the wall of a cave is a rhinoceros; this person who is dead lives on in another kind of existence; this animal bone can be shaped to become a harpoon tip. During this period, argues Mithen, human culture began to accelerate. Imagination is the type of consciousness that makes narrative possible. It offers us stories of possibility and of people not currently present in situations that are not directly visible.

The Developmental Psychology of Consciousness

Although ontogeny may not exactly recapitulate phylogeny, there are parallels between the rise of narrative consciousness during hominid evolution and the development of narrative skills and consciousness in children. Zelazo and Sommerville (2001) and Zelazo (2004) have proposed a set of levels of consciousness, reached progressively during the preschool years (see Chapter 15). The earliest, which they call minimal consciousness, corresponds to Helmholtzian consciousness (described above). Subsequent levels include recursive representations. The second level includes minimal consciousness of a thing plus the name of that thing. Higher levels include further recursions. So reflective consciousness of the kinds I have called Woolfian and Vygotskyan (discussed above) includes not just actions but also the consciousness of self-in-action. Finally a level is reached at about the age of 4 years that includes the social attributes of theory of mind (Astington, Harris, & Olson, 1988), in which one can know what another knows, even when the other person's knowledge is different from what one knows oneself. It corresponds to Meadean consciousness (discussed above), in which the social self can be explicitly represented and thought about. Tomasello and Rakoczy (2003), however, argue that at a much earlier stage, occurring at about the age of 1 year, children come to understand themselves and others as intentional agents. They call this the "real thing": it is what separates us from other animals. They argue that it enables skills of cultural learning and shared intentions to occur and that this stage must precede any acquisition of theory of mind, which would need to be based recursively on this ability.

Recursion also occurs in a way that extends the scheme proposed by Zelazo. Dunbar (2004) points out that, from school age onward, a person who takes part in a conversation or who tells a story must know (level 1) that a listener can know (level 2) what a person depicted in the conversation or story knows (level 3). Dunbar argues that skilled storytellers can work with about five recursive levels of consciousness. Thus in Othello, Shakespeare writes (level 1) so that audience members know (level 2) that Iago contrives (level 3) that Othello believes (level 4) that Desdemona is in love with (level 5) Cassio. All this, argues Dunbar, depends on neural machinery present in human brains that is not present in chimpanzee brains.

Recursion is an idea that became important in cognitive science in the 1960s. It is the idea that representations may include representations of themselves. Productively, then, the idea of successive steps of recursion has been proposed by Zelazo as successive levels of consciousness, achieved in successive stages of development. Although each level of consciousness depends on a previous one, and each emerges at a certain stage during development, the earlier level is not superseded. It continues to be available.

Tomasello's (1999) studies of the development of language indicate that soon after single words (at Zelazo's second level) come verb islands, in which different words can be put into slots for the agent of the action, for the object, and for the outcome. Thus in the verb island of "throw" we get "Sam throws the ball." This is close to the idea of case grammar, as postulated by Fillmore (1968), for which the preadaptation suggested by Rizzolatti and Arbib (1998) is the basis. As Tomasello points out, the child symbolizes exactly the kind of intention that chimpanzees enact, but do not know that they know. The symbolization can then be used as a tool to affect the attention of the person to whom the communication is made. Infant communications are sometimes requests, like "more," or even "more juice," but conversational priorities early become evident: Parents and children draw each other's attention, in conversation, to things in the world.

Then in development comes a phase that, on palaeontological grounds, would be unexpected: the appearance of monologue. Children start talking out loud to themselves. Nelson (1989) arranged for the parents of a small child, Emily, to place a tape recorder by her bed before she went to sleep. After some bedtime conversation with a parent, the parent would leave and Emily would often enter into monologues that, with the help of her mother, have been transcribed.

Nelson (1996) has argued that narrative binds memories of autobiographical events together in the meaningful form that we think of as selfhood. Here is an example from Emily, aged 21 months: "The broke. Car broke, the . . . Emmy can't go in the car. Go in green car" (Nelson, 1989, p. 64). This monologue illustrates verb island constructions of the kind identified by Tomasello around the verbs "break" and "go." It also offers an agent, "Emmy," and the narrative structuring of an autobiographical event. The car was broken, and therefore Emmy had to go in a different car. Here, already, we see some of the elements of story-type narration: the telling of a story by a narrator, a self (character) persisting through time, a vicissitude, and the overall possibility of making meaningful sense of events (although not yet with a theory of other minds).

From the age of 2 or so, children are able to run the narrative process not just backward, to link memories with an agent, but also forward. Here is Emily in another monologue, at the age of 2 years and 8 months.

Tomorrow when we wake up from bed, first me and Daddy and Mommy, you . . . eat breakfast eat breakfast, like we usually do, and then we're going to p-l-a-y, and then soon as Daddy comes, Carl's going to come over, and then we're going to play a little while. And then Carl and Emily are both going down the car with somebody, and we're going to ride to nursery school . . . (Nelson, 1989, pp. 68–69).

Here, again, is a recognizable self, "Emily," who is a protagonist in a plan-like account with many of the attributes of a narrative structure in story form.

Fivush (2001), in pursuit of similar questions, has studied children discussing pieces of autobiographical narrative with their parents and other adults. Children as young as 3 years make such discussion an important part of their social activity. They use it to evaluate experience. For instance, a child of 3 years and 4 months said to an interviewer, "There was too much music, and they play lots of music, and when the circus is over we went to get some food at the food place" (p. 40). Part of the autobiographical point is to assert selfhood and subjectivity via things that the narrator liked or didn't like ("too much music"). By the age of 4, children become aware, in a further step to individuality, that their experience might be different from that of others.

Nelson's narratives from the crib suggest how the inner chattering that is a feature of everyday Woolfian consciousness develops from spoken monologue. This idea is strengthened by Baddeley's (1993) account of PET scanning of his own brain while he was engaged in quiet inner speech. It showed involvement of those brain areas that are typically involved in both speech production and speech understanding.

Nelson (2003) concludes that consciousness is one of the functions that develops as an emergent property with the development of language, and most specifically with the development of narrative language. It is with this ability that a move occurs from simple Helmholtzian awareness of the here and now to the idea of a self. Several things are accomplished in this move. The self is experienced as unified, not merely as a disparate bundle of reflexes. From psychoanalytic thought comes the somewhat ironic but nonetheless suggestive idea that the moment of this realization occurs when a child can recognize him- or herself in a mirror (Lacan, 1949/1977). Other, more empirically minded psychologists have taken the infant's ability to touch a patch of rouge on his or her forehead, when the image of the infant bearing the patch of rouge is seen by him- or herself in the mirror, as an indication of the dawning of selfhood (Lewis et al., 1989). Nelson's proposal is that, with the dawning of narrative consciousness, the self is experienced in a world of widening possibilities, of other people and of other minds. Zelazo's (2004) proposal is that this ability to represent the self explicitly is fundamental to the development of consciousness and of a unified, as opposed to a fragmented, control of action.

Harris (2000) has argued that a major developmental accomplishment for children occurs when they start to construct, in their imagination, things that are not immediately present. They make such constructions not just in stories but also in pretend role play (discussed above). They start imagining what others might know (other minds). The idea of specifically narrative consciousness has been extended in the recent work of developmental psychologists on the question of how children conceptualize a self as persisting through time. Barresi (2001) and Moore and Macgillivray (2004) have shown that the imagination by which one can anticipate future experiences of the self (which they call prudence) is likely to involve the same processes as those by which one can become interested in others (prosocial behavior). This same ability of the imagination is the central core of understanding narrative. Arguably, stories nurture it. As Vygotsky (1962) has proposed, language-based culture offers children resources that they do not innately possess but also do not have to invent for themselves. Children in modern times, aged 4 and 5 years, have reached the point at which the imagination can take wing in the way that Woolf depicted and that Mithen (2001) argued had been reached in evolution by our ancestors 30,000 years ago, when one may suppose, with Donald (1991), that story-type accounts of meaning-making had become routine.

Conscious selfhood, in these kinds of accounts, is a meaning-making function that depends on, but has properties beyond, those of the separate neural processes and modalities of the kind on which Dennett concentrates. Although in this new field there is debate about how best to characterize phases of child development, it seems that the systems pass through emergent stages of consciousness similar to those I have postulated – Helmholtzian, Woolfian, Vygotskyan, and Meadean – or through those that Zelazo and Sommerville write of in terms of progressively recursive representations. By adulthood, consciousness can often be of the minimal Helmholtzian perceptual kind, or consciousness of an emotion, but it can also easily switch to narrative consciousness of self and of self in relation to others.

What developmental psychologists have shown is that, with each level of emergence of new abilities, there arrives a set of functions that these abilities subserve and that are unavailable to creatures without these abilities. So, from the first movements of naming and of drawing the attention of others to things, infants enter into a world of shared folk-theoretical understanding. They become creatures of a different kind than any wild-living chimpanzee. They have taken the first step toward constructing a narrative consciousness of themselves and others as actors in the world, who cause intended effects and struggle with the vicissitudes of life.

A kind of culmination of the work on the development of narrative by Nelson, Fivush, and others has been offered by Pennebaker et al. (Pennebaker, Kiecolt-Glaser, & Glaser, 1988; Pennebaker & Seagal, 1999). The basic finding is that adults who wrote narratives for as little as 20 minutes a day for 3 days on topics that were emotionally important to them (vicissitudes), as compared with people who wrote about neutral subjects, made fewer visits to doctors’ offices subsequently. They also underwent improvements in immune function. The argument is that traumatic and stressful events that are not integrated by the meaning-making processes of narrative can impair health. Making such events conscious and integrating them in story form has beneficial effects.

The Rise of Consciousness in Western Imaginative Literature

Writers have been fascinated by consciousness. The great book on the subject in Western literature is by Auerbach (1953). Its 20 chapters span 3,000 years, from Genesis to Virginia Woolf. On the one hand, Western literature involves the “representation of reality.” On the other, it offers chronicles at successive moments in a history of mind turning round, recursively, to reflect upon itself. Each chapter in Auerbach’s book starts with a quotation, ranging in length from a paragraph to several pages, from a particular writer in a particular time and society. For each, Auerbach analyzes the subject matter, the words, and the syntax. Immediately the reader is in the middle of a scene, with knights in medieval times, or with Dante and Virgil descending into the Inferno, or in La Mancha with Don Quixote, or in Paris with Proust’s narrator Marcel. In each there is a society with certain understandings. In each, the characters inhabit a certain implicit theory of consciousness.

The opening sequences of the Bible, written by the Hebrews about 3,000 years ago, narrate the story of God’s creation of the world. When the first human beings, Adam and Eve, enter the scene (attributed to the writer J., Genesis, Chapter 3), they eat of the fruit of the tree of knowledge of good and evil, and they become ashamed. It is hard to avoid the interpretation that in the moment of becoming conscious of good and evil they became self-conscious.

In the Greek tradition, The Iliad (Homer, 850 bce/1987) was written from previously oral versions about the same time as Genesis. Jaynes (1976) has made several provocative proposals that further Auerbach’s idea that a different kind of consciousness from our own is found in Genesis and The Iliad. His theory is that consciousness is not only a narrative and unifying agency, the basis for modern selfhood, but that in early writings such as The Iliad one sees the fading of an older mentality. He argues that before about 3,000 years ago, human beings had what he called bicameral minds. Bicameral means having two chambers, like two chambers of government, a senate and a house of representatives. One chamber was of species-typical behavior: mating, attacking if one is attacked, and so on. The second was of obedience to the injunctions of a ruler. These injunctions tended to be heard in acoustic form – do this, don’t do that – and they allowed hierarchical societies to live in cities beyond the immediacy of face-to-face contact. One feature of this second chamber of government, argued Jaynes, was that when a ruler died, his injunctions could still be heard. The phenomena were referred to in terms of gods. Each new ruler would take on the mantle of command and be translated into the role of god. So it was not that the figurines that archaeologists have unearthed from Mesopotamia were statues of gods. They were gods. When one visited their shrines in a house or public building and looked at a god, one could hear his or her words in one’s mind’s ear.

Modern narratizing consciousness arose, argued Jaynes, as people started to travel beyond the rather simply governed city-states and began to encounter others, to take part in trade, and in wars. The simple two-chamber mental government broke down. It was inadequate for dealing with people from different cultures. At first a few and then more individuals started to entertain thoughts and reasons that were neither instinctive responses nor obedience to authority. They began to tell more elaborate stories about themselves and to reason in inner debate.

In the opening sequences of The Iliad, Achilles draws his sword in anger. He is responding instinctively to an insult from the Greek army’s commander-in-chief, Agamemnon. At that moment a goddess appears to Achilles. No one else sees her. It is the goddess Athene, who utters to Achilles the tribal injunction: “Do not kill the commander-in-chief.” The two chambers of mental government are here in conflict. The outcome: Achilles obeyed Athene, but went into a sulk. His refusal to fight resulted in the Greeks almost losing the Trojan War. Jaynes and a number of other scholars have noted that in The Iliad there is no word for mind. Emotions occur not as conscious preoccupations prompting actions, but as physiological perturbations: thumos (livingness, which may include agitation), phrenes (breathing, which can become urgent), kradie (the heart, which may pound), and etor (guts, which may churn). In The Iliad the word psuche (from which later derived such concepts as soul and mind) was an insubstantial presence that may persist after death. The word noos did not mean mind, but was an organ of visual images.

For the preclassical Greeks, there were bodily agitations of thumos, phrenes, and so on, and there were voices of gods offering tribal injunctions but – according to Jaynes – no narrative self-consciousness. Plans were not decided by mortals but by immortals. In the opening sequence the question is asked by the narrator: Whose plan (boule) set Achilles and Agamemnon to contend? The answer: the plan of a god, Zeus. The readers of this chapter might sense something familiar. Here were people whose important actions were determined not by themselves but in some other way, by what they called gods. The familiarity is that Dennett offers an echo of this same idea. For him the actions of all of us are determined not by human agents qua agents, but by something impersonal: brain processes.

By the time of The Odyssey, something else was beginning to enter mental life. It was human cunning. But still the conscious, reflective mind had not emerged, or at least had not emerged in its modern form. It had, as Snell (1953/1982) put it, to be invented. Its invention perhaps began 200 years after Homer with the poet Sappho, as she began to reflect on the idea that her continual falling in love with young women had something repetitive about it. It was brought more fully into being by Aeschylus and Sophocles in their narrative plays about how we human beings are responsible for our actions, although we cannot foresee and do not necessarily intend all their consequences. It reached fully modern form with Socrates in his teachings about how we can choose to think and decide how to act for the good.

Not all scholars agree that large-scale changes of the kind described by Jaynes and Snell did occur in a progression of consciousness from Homeric to classical times (see, e.g., Williams, 1993). If they did, however, it seems likely that they were cultural changes, rather than the neurological ones postulated by Jaynes.

The beginnings of the modern world are set by many scholars (following Burckhardt, 1860/2002) with Dante and the Renaissance. At this time the literary idea of character came into its own, including the concept that some of character is hidden: People not only act as they did in Homer, but reflect consciously on what kind of person they would be to act in such and such a way. In the 20th century, literary consciousness took an inward turn. Now we read to recognize the images and thought-sequences of our own minds, of the kind that Woolf portrayed.

Following Auerbach, Watt (1957) traced some of the movements of consciousness in English novels of the 18th century in response to social and economic changes. So Defoe’s (1719) Robinson Crusoe emphasizes the individualism of the times, particularly in economic matters, as well as the trait that would be characteristic of the novel, the reflective examination of the self and its doings, which Watt traces to Puritanism.

Romanticism, the literary era that we still inhabit, with its emphases on emotions and on style, started around 1750. Although there has been postmodern debate about whether language can represent anything at all outside itself, Abrams (1953) makes clear that throughout the Romantic period such representation has not really been the intention of literary writers. The dominant metaphor has not been the mirror held up to nature, but the lamp that illuminates. Just as the young child, by pointing and saying “fire truck,” is directing the attention of her or his companion, so dramatists and novelists direct the attention of their readers.

It is appropriate, perhaps, that the growing elaboration of consciousness in the great bourgeois European novels of the 19th century should reach a kind of culmination with Freud’s cases, in which the gaps in the subject’s consciousness of intentions are filled precisely by elaborating a story (see Freud, 1905/1979; Marcus, 1974; Oatley, 1990).

In recent times, the most influential literary theorist has been Bakhtin (1963/1984), who has moved beyond analyses of single narrators, such as Robinson Crusoe. With the fully developed novel, for which Bakhtin takes Dostoyevsky as the paradigmatic author, we do not so much listen to the monological voice of an author or narrator. We take part in something more like a conversation. In such a novel, there are several centres of consciousness, and if the author does it right, one of these is the reader’s.

Bakhtin’s idea of the dialogical basis of social life as depicted in the novel is present in practical affairs in the West in the elaborate procedures by which justice is administered in courts of law (Bruner, 2002). In a criminal trial, two narratives are related, one by the prosecution and one by the defense. The critical narrative, however, is constructed by others (judge and jury) who have listened and supplied a narrative ending in terms of one of a small number of prescribed outcomes of morality stories that have been told and retold in our society. Such completions are known as verdicts (Pennington & Hastie, 1991).

There are universals in the telling of stories. All cultures use narrative for purposes that are somewhat didactic (Schank & Berman, 2002). That is to say, all stories have attributes that Donald attributed to myths. They explain human relationships to a problematic world, to the gods, to society, as well as to individual emotions and the self. Some kinds of stories are universal (Hogan, 2003), the love story for instance. Its typical plot is of two lovers who long to be united. Their union is prevented, typically by a father. In the comic version, after vicissitudes the lovers are united. They live happily together, and the previously oppositional father rejoices in the union. In the tragic version, the union is prevented and the lovers die, perhaps to unite on some non-material plane.

In Western literature, something changes as we move from the mentality of Achilles and Abraham to that of Mrs Dalloway just after World War I, or more recently to that of Jacques Austerlitz, whose life was fractured by World War II (Sebald, 2001). Perhaps literature has changed in response to changes in society. In the West, these changes have included a growing individualism and a growing faculty of inwardness and reflection. Perhaps, too, literary narrative has itself been partly responsible for some of these changes: a workshop of the mind. We can link Dennett to this series. To be thoroughly Western, thoroughly modern (perhaps postmodern), he says, we must give up the idea of mind and accept that there is only the brain. Meanwhile, however, others, such as Flanagan (2002), reject the dichotomy of mind and brain: Perhaps we can accept brain science and still retain the idea of a soul.

A recent addition to the series of literary analyses begun by Auerbach is by novelist and literary theorist David Lodge, who has become interested in the comparison of literary and scientific approaches to consciousness. In the title essay of Consciousness and the Novel (2002), Lodge describes how, while writing Consciousness Explained, Dennett came across one of Lodge’s novels, Nice Work (1988), and found in it a satirical portrait of literary postmodernism very like the idea of the absent self that Dennett was proposing.

Lodge concludes that, although cognitive scientists have started to devote themselves to the question of consciousness, they may neglect both the substance of what has been discovered by writers of literature – which allows us to explore and understand our consciousness – and the humanism that is thereby promoted. In Lodge’s (2001) novel, Thinks, the literary idea is represented in a talk given by the female protagonist, novelist Helen Reed, at a cognitive science conference on consciousness. She presents Marvell’s (1637–1678/1968) poem, “The Garden,” and ends her talk with the stanza in which occur these lines:

My soul into the boughs does glide:
There like a Bird, it sits, and sings . . .

(p. 50)

We can smile at Descartes’ or Marvell’s prescientific idea of the soul. Or we can see it as a metaphor of what is most human in us: Perhaps like a bird, we are able to take to the wings of imagination toward what may yet be possible for our species.

No matter how we conceptualize these issues, we might agree that stories depend on mental simulation, which in Victorian times was spoken of in terms of imagination and of dreams. To simulate, make models, derive analogies, project metaphors, form theories . . . this is what mind does. The concept of simulation is used in several senses in psychology. In one sense, the idea of simulation is used by Barsalou (2003) to conceptualize how multimodal and multisensory mappings between conceptualization and situated action are unified. A second sense is that of Harris (2000), who argues principally in the context of child development for simulation as the means by which we may read another mind and more generally project ourselves into situations not immediately present. I describe a third, related sense (Oatley, 1999): that stories are simulations that run not on computers but on minds. Although our human minds are good at understanding single causes and single intentions, we are not good at understanding several of them interacting. A simulation (in Oatley’s sense) is a means we use to understand such complex interactions: It can be run forward in time for planning and backward for understanding. In their constructions of characters’ intentions and actions, novels and dramas explore the what-if of problems we humans face (vicissitudes) when repercussions of our actions occur beyond the horizon of habitual understanding. Multiple centres of consciousness can be set up in literary simulations and enable us to enter the dialogues they afford.

Coda

If selfhood and the conscious understanding of ourselves and others are the work of the novelist in each of us, we may have learned some of this work from writers. If, furthermore, we take Vygotsky’s view, stories are conduits by which thinking about the self, about human plans with others and their vicissitudes, and about human emotions has become explicitly available for our use via culture. Imaginative literature is a means by which a range of human situations and predicaments are explored so that they can be made part of ourselves. It is a range we could not hope to encounter directly.

I have argued that some of who we are flows from a unifying sense of ourselves as a character whom we improvise. Within narrative-like constraints we come to accept who we are and consciously to create ourselves to be coherent with it. We commit ourselves to certain others, and to a certain kind of life. But Dennett is also right. Plenty of our brain processes – those that produce an emotion or mood here, a lack of attention there, a piece of selfishness when we ought to be thinking of someone else – can proceed not from any conscious or narratively coherent self, but from ill-understood brain processes and unintegrated impulses. At present we have few estimates of what proportion of our acts and thoughts is, in the terms of this chapter, consciously decided within a unifying narrative frame and what proportion derives from unintegrated elements of what one might call the Dennettian unconscious. In terms of making arrangements with other people, it looks as if as many as 95% of actions based on explicitly mutual plans can be made as consciousness decides, based on folk-theoretical categories of goals and beliefs. In terms of certain emotional processes, however, we may be closer to the unconscious end of the spectrum. Certainly Shakespeare, that most accomplished of all narrators and one who has perhaps prompted more insightful consciousness than any other, put it to us that a fair amount of our judgement may not be as conscious as we might like to believe, especially when we are in the grip of an emotion. In A Midsummer Night’s Dream, love induced by the administration of juice of “a little western flower” (a mere neurochemical substance, as one might nowadays say) is followed not so much by conscious rationality as by rationalization. It is the emotion rather than any consciously narrated choice that sets the frame of relationship, as it did, I think, among our hominid ancestors. The one so dosed with the juice gazes upon the new loved one, and language comes from the love, not love from the language-based decisions. Here in such a condition is Titania as she speaks to Bottom (the weaver), who has been changed into an ass. He is the first individual she sees when she opens her eyes after sleeping.

I pray thee gentle mortal sing again.
Mine ear is much enamoured of thy note;
So is mine eye enthralled to thy shape;
And thy fair virtue’s force perforce doth move me
On the first view, to say, to swear, I love thee.

(3, 1, 124–128)

The question for us humans is how to integrate the various phenomena, caught as we are between emotional urgencies and a meaning-making consciousness. Such integrative capacities are a recent emergence in human beings. Perhaps they are not yet working as well as they might.

Acknowledgments

I am very grateful to Robyn Fivush, Keith Stanovich, Richard West, Philip Zelazo, and two anonymous referees whose suggestions much helped my thoughts and understanding in revising this chapter.


References

Abrams, M. H. (1953). The mirror and the lamp: Romantic theory and the critical tradition. Oxford: Oxford University Press.

Aiello, L. C., & Dunbar, R. I. M. (1993). Neocortex size, group size, and the evolution of language. Current Anthropology, 34, 184–193.

Astington, J. W., Harris, P. L., & Olson, D. R. (Eds.). (1988). Developing theories of mind. New York: Cambridge University Press.

Auerbach, E. (1953). Mimesis: The representation of reality in Western literature (W. R. Trask, Trans.). Princeton, NJ: Princeton University Press.

Baddeley, A. D. (1993). Verbal and visual subsystems of working memory. Current Biology, 3, 563–565.

Bakhtin, M. (1984). Problems of Dostoevsky’s poetics (C. Emerson, Trans.). Minneapolis: University of Minnesota Press. (Original work published 1963.)

Bal, M. (1997). Narratology: Introduction to the theory of narrative (2nd ed.). Toronto: University of Toronto Press.

Balleine, B., & Dickinson, A. (1998). Consciousness – the interface between affect and cognition. In J. Cornwell (Ed.), Consciousness and human identity (pp. 57–85). Oxford: Oxford University Press.

Barresi, J. (2001). Extending self-consciousness into the future. In C. Moore & K. Lemmon (Eds.), The self in time: Developmental perspectives (pp. 141–161). Mahwah, NJ: Erlbaum.

Barsalou, L. W. (2003). Situated simulation in the human conceptual system. Language and Cognitive Processes, 18, 513–562.

Behne, T., Carpenter, M., Call, J., & Tomasello, M. (2005). Unwilling versus unable: Infants’ understanding of intentional action. Developmental Psychology, 41, 328–337.

Booth, W. C. (1988). The company we keep: An ethics of fiction. Berkeley, CA: University of California Press.

Brockmeier, J., & Carbaugh, D. (2001). Introduction. In J. Brockmeier & D. Carbaugh (Eds.), Narrative and identity: Studies in autobiography, self, and culture (pp. 1–22). Amsterdam: Benjamins.

Brockmeier, J., & Harré, R. (2001). Problems and promises of an alternative paradigm. In J. Brockmeier & D. Carbaugh (Eds.), Narrative and identity: Studies in autobiography, self, and culture (pp. 39–58). Amsterdam: Benjamins.

Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.

Bruner, J. S. (2002). Making stories: Law, literature, life. New York: Farrar, Straus and Giroux.

Burckhardt, J. (2002). The civilization of the Renaissance in Italy (S. G. C. Middlemore, Trans.). New York: Random House. (Original work published 1860.)

Call, J., Hare, B., Carpenter, M., & Tomasello, M. (2004). ‘Unwilling’ versus ‘unable’: Chimpanzees’ understanding of human intentional action. Developmental Science, 7, 488–498.

Chauvet, J.-M., Deschamps, E., & Hillaire, C. (1996). Dawn of art: The Chauvet cave. New York: Abrams.

Churchland, P. S. (1986). Neurophilosophy. Cambridge, MA: MIT Press.

Claparède, E. (1934). La genèse de l’hypothèse [The genesis of the hypothesis]. Geneva: Kundig.

Defoe, D. (1719). Robinson Crusoe. London: Penguin.

Dennett, D. C. (1991). Consciousness explained. Boston: Little, Brown & Co.

Dennett, D. C. (1992). The self as a center of narrative gravity. In F. S. Kessel, P. M. Cole, & D. L. Johnson (Eds.), Self and consciousness: Multiple perspectives (pp. 103–115). Hillsdale, NJ: Erlbaum.

Descartes, R. (1911). Passions of the soul. In E. L. Haldane & G. R. Ross (Eds.), The philosophical works of Descartes. New York: Dover. (Original work published 1649.)

de Waal, F. (1982). Chimpanzee politics. New York: Harper & Row.

Donald, M. (1991). Origins of the modern mind. Cambridge, MA: Harvard University Press.

Donne, J. (1960). The sermons of John Donne, Vol. 8 (G. R. Potter & E. M. Simpson, Eds.). Berkeley: University of California Press. (Original work published 1615–1631.)

Dunbar, R. I. M. (1993). Coevolution of neocortical size, group size, and language in humans. Behavioral and Brain Sciences, 16, 681–735.

Dunbar, R. I. M. (1996). Grooming, gossip and the evolution of language. London: Faber & Faber.

Dunbar, R. I. M. (2003). The social brain: Mind, language, and society in evolutionary perspective. Annual Review of Anthropology, 32, 163–181.

Dunbar, R. I. M. (2004). The human story: A new history of mankind’s evolution. London: Faber.


Edelson, M. (1992). Telling and enacting stories in psychoanalysis. In J. W. Barron, M. N. Eagle, & D. L. Wolitzky (Eds.), Interface of psychoanalysis and psychology (pp. 99–123). Washington, DC: American Psychological Association.

Fillmore, C. J. (1968). The case for case. In E. Bach & R. T. Harms (Eds.), Universals in linguistic theory (pp. 1–88). New York: Holt, Rinehart & Winston.

Fireman, G. D., McVay, T. E., & Flanagan, O. J. (Eds.). (2003). Narrative and consciousness: Literature, psychology, and the brain. New York: Oxford University Press.

Fivush, R. (2001). Owning experience: Developing subjective perspective in autobiographical narratives. In C. Moore & K. Lemmon (Eds.), The self in time: Developmental perspectives (pp. 35–52). Mahwah, NJ: Erlbaum.

Flanagan, O. (2002). The problem of the soul: Two visions of mind and how to reconcile them. New York: Basic Books.

Freud, S. (1979). Fragment of an analysis of a case of hysteria (Dora). In J. Strachey & A. Richards (Eds.), The Pelican Freud library, Vol. 9: Case histories, II (Vol. 7, pp. 7–122). London: Penguin.

Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of the basis of social cognition. Trends in Cognitive Sciences, 8, 396–403.

Gazzaniga, M. S. (1992). Brain modules and belief formation. In F. S. Kessel, P. M. Cole, & D. L. Johnson (Eds.), Self and consciousness: Multiple perspectives (pp. 88–102). Hillsdale, NJ: Erlbaum.

Gazzaniga, M. S. (1998). The split brain revisited. Scientific American, 279(1), 35–39.

Goldie, P. (2003). One’s remembered past: Narrative thinking, emotion, and the external perspective. Philosophical Papers, 32, 301–309.

Goodall, J. (1986). The chimpanzees of Gombe: Patterns of behavior. Cambridge, MA: Harvard University Press.

Grazzani-Gavazzi, I., & Oatley, K. (1999). The experience of emotions of interdependence and independence following interpersonal errors in Italy and Anglophone Canada. Cognition and Emotion, 13, 49–63.

Groden, M., & Kreisworth, M. (1994). The Johns Hopkins guide to literary theory and criticism. Baltimore: Johns Hopkins University Press.

Harris, P. L. (2000). The work of the imagination. Oxford: Blackwell.

Helmholtz, H. (1962). Treatise on physiological optics, Vol. 3. New York: Dover. (Original work published 1866.)

Hogan, P. C. (2003). The mind and its stories. Cambridge: Cambridge University Press.

Homer. (1987). The Iliad (M. Hammond, Ed. & Trans.). Harmondsworth: Penguin. (Original work ca. 850 bce.)

Humphrey, N. (1976). The social function of the intellect. In P. Bateson & R. Hinde (Eds.), Growing points in ethology. Cambridge: Cambridge University Press.

Humphrey, N. (2002). The mind made flesh: Frontiers of psychology and evolution. Oxford: Oxford University Press.

Humphrey, N., & Dennett, D. (2002). Speaking for ourselves: An assessment of multiple personality disorder. In N. Humphrey (Ed.), The mind made flesh: Frontiers of psychology and evolution (pp. 19–48). Oxford: Oxford University Press.

James, W. (1890). The principles of psychology. New York: Holt.

Jaynes, J. (1976). The origin of consciousness in the breakdown of the bicameral mind. London: Allen Lane.

Lacan, J. (1977). The mirror stage as formative of the function of the I as revealed in psychoanalytic experience. In A. Sheridan (Ed.), Jacques Lacan: Écrits, a selection (pp. 1–7). London: Tavistock. (Original work published 1949.)

LeDoux, J. (1996). The emotional brain: The mysterious underpinnings of emotional life. New York: Simon & Schuster.

Lee, R. B. (1984). The Dobe !Kung. New York: Holt, Rinehart & Winston.

Lewis, M., Sullivan, M. W., Stanger, C., & Weiss, M. (1989). Self development and self-conscious emotions. Child Development, 60, 146–156.

Lodge, D. (1988). Nice work. London: Secker & Warburg.

Lodge, D. (2001). Thinks. London: Secker & Warburg.

Lodge, D. (2002). Consciousness and the novel. Cambridge, MA: Harvard University Press.

Marcus, S. (1984). Freud and Dora: Story, history, case history. In S. Marcus (Ed.), Freud and the culture of psychoanalysis (pp. 42–86). New York: Norton.

Marvell, A. (1968). Complete poetry (George deF. Lord, Ed.). New York: Modern Library. (Original work 1637–1678.)


McGrew, W. (1992). Chimpanzee material culture. Cambridge: Cambridge University Press.

Mead, G. H. (1913). The social self. Journal of Philosophy, Psychology and Scientific Methods, 10, 374–380.

Mithen, S. (1996). The prehistory of the mind: The cognitive origins of art and science. London: Thames and Hudson.

Mithen, S. (2001). The evolution of imagination: An archaeological perspective. SubStance, 94–95, 28–54.

Moore, C., & Macgillivray, S. (2004). Social understanding and the development of prudence and prosocial behavior. In J. Baird & B. Sokol (Eds.), New directions for child and adolescent development. New York: Jossey-Bass.

Nelson, K. (1989). Narratives from the crib. New York: Cambridge University Press.

Nelson, K. (1996). Language in cognitive development: Emergence of the mediated mind. New York: Cambridge University Press.

Nelson, K. (2003). Narrative and the emergence of consciousness of self. In G. D. Fireman, T. E. McVay, & O. Flanagan (Eds.), Narrative and consciousness: Literature, psychology, and the brain (pp. 17–36). New York: Oxford University Press.

Oatley, K. (1988). On changing one’s mind: A possible function of consciousness. In A. J. Marcel & E. Bisiach (Eds.), Consciousness in contemporary science (pp. 369–389). Oxford: Oxford University Press.

Oatley, K. (1990). Freud’s psychology of intention: The case of Dora. Mind and Language, 5, 69–86.

Oatley, K. (1992a). Best laid schemes: The psychology of emotions. New York: Cambridge University Press.

Oatley, K. (1992b). Integrative action of narrative. In D. J. Stein & J. E. Young (Eds.), Cognitive science and clinical disorders (pp. 151–170). San Diego: Academic Press.

Oatley, K. (1999). Why fiction may be twice as true as fact: Fiction as cognitive and emotional simulation. Review of General Psychology, 3, 101–117.

Oatley, K. (2002). Character. Paper presented at the Society for Personality and Social Psychology Annual Convention, Savannah, GA.

Oatley, K., & Larocque, L. (1995). Everyday concepts of emotions following every-other-day errors in joint plans. In J. Russell, J.-M. Fernández-Dols, A. S. R. Manstead, & J. Wellenkamp (Eds.), Everyday conceptions of emotions: An introduction to the psychology, anthropology, and linguistics of emotion (NATO ASI Series D 81, pp. 145–165). Dordrecht: Kluwer.

Oatley, K., & Mar, R. A. (2005). Evolutionary pre-adaptation and the idea of character in fiction. Journal of Cultural and Evolutionary Psychology, 3, 181–196.

Oatley, K., Sullivan, G. D., & Hogg, D. (1988). Drawing visual conclusions from analogy: A theory of preprocessing, cues and schemata in the perception of three dimensional objects. Journal of Intelligent Systems, 1, 97–133.

Oatley, K., & Yuill, N. (1985). Perception of personal and interpersonal actions in a cartoon film. British Journal of Social Psychology, 24, 115–124.

Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal emotions. Oxford: Oxford University Press.

Panksepp, J. (2001). The neuro-evolutionary cusp between emotions and cognitions: Implications for understanding consciousness and the emergence of a unified mind science. Evolution and Cognition, 7, 141–163.

Panksepp, J. (2005). Affective consciousness: Core emotional feelings in animals and humans. Consciousness and Cognition, 14, 30–80.

Paulhan, F. (1930). The laws of feeling. London: Kegan Paul, Trench, Trubner & Co. (Original work published 1887.)

Pennebaker, J. W., Kiecolt-Glaser, J. K., & Glaser, R. (1988). Disclosure of traumas and immune function: Health implications of psychotherapy. Journal of Consulting and Clinical Psychology, 56, 239–245.

Pennebaker, J. W., & Seagal, J. D. (1999). Forming a story: The health benefits of narrative. Journal of Clinical Psychology, 55, 1243–1254.

Pennington, N., & Hastie, R. (1991). A cognitive theory of juror decision making: The story model. Cardozo Law Review, 13, 519–557.

Peterson, J. (1999). Maps of meaning. New York: Routledge.

Povinelli, D. J. (2000). Folk physics for apes: The chimpanzee’s theory of how the world works. New York: Oxford University Press.

Povinelli, D. J., & O’Neill, D. K. (2000). Do chimpanzees use their gestures to instruct each other? In S. Baron-Cohen, H. Tager-Flusberg, & D. Cohen (Eds.), Understanding other minds: Perspectives from developmental cognitive neuroscience (pp. 459–487). Oxford: Oxford University Press.

Power, R. (1979). The organization of purposeful dialogues. Linguistics, 17, 107–152.

Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in Neurosciences, 21, 188–194.

Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews: Neuroscience, 2, 661–670.

Rumelhart, D. E. (1975). Notes on a schema for stories. In D. G. Bobrow & A. M. Collins (Eds.), Representation and understanding: Studies in cognitive science. New York: Academic Press.

Schank, R. C. (1990). Tell me a story: A new look at real and artificial memory. New York: Scribner.

Schank, R. C., & Berman, T. R. (2002). The pervasive role of stories. In M. C. Green, J. J. Strange, & T. C. Brock (Eds.), Narrative impact: Social and cognitive foundations (pp. 287–313). Mahwah, NJ: Erlbaum.

Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.

Sebald, W. G. (2001). Austerlitz. Toronto: Knopf Canada.

Shakespeare, W. (1997). The Norton Shakespeare (S. Greenblatt, Ed.). New York: Norton. (Original work published 1623).

Snell, B. (1982). The discovery of the mind in Greek philosophy and literature. New York: Dover. (Original work published 1953).

Spence, D. P. (1982). Historical truth and narrative truth. New York: Norton.

Strawson, G. (2004). Against narrativity. Ratio, 17, 428–452.

Sussman, G. J. (1975). A computer model of skill acquisition. New York: American Elsevier.

Tomasello, M. (1999). The cultural origins of human cognition. Cambridge, MA: Harvard University Press.

Tomasello, M., & Rakoczy, H. (2003). What makes human cognition unique? From individual to shared to collective intentionality. Mind and Language, 18, 121–147.

Trabasso, T., & van den Broek, P. (1985). Causal thinking and the representation of narrative events. Journal of Memory and Language, 24, 612–630.

Turner, S. R. (1994). The creative process: A computer model of storytelling and creativity. Hillsdale, NJ: Erlbaum.

Velleman, J. D. (2000). The possibility of practical reason. Oxford: Oxford University Press.

Velleman, J. D. (2002). The self as narrator. First of the Jerome Simon Lectures in Philosophy, October 2002, University of Toronto. Retrieved from www-personal.umich.edu/∼velleman/Work/Narrator.html.

Vygotsky, L. (1962). Thought and language (E. Hanfmann & G. Vakar, Trans.). Cambridge, MA: MIT Press.

Vygotsky, L. (1978). Tool and symbol in child development. In M. Cole, V. John-Steiner, S. Scribner, & E. Souberman (Eds.), Mind in society: The development of higher mental processes (pp. 19–30). Cambridge, MA: Harvard University Press. (Original work published 1930).

Watt, I. (1957). The rise of the novel: Studies in Defoe, Richardson, and Fielding. London: Chatto & Windus.

Williams, B. (1993). Shame and necessity. Berkeley, CA: University of California Press.

Wilson, G. M. (2003). Narrative. In J. Levinson (Ed.), The Oxford handbook of aesthetics (pp. 392–407). New York: Oxford University Press.

Woolf, V. (1925). Mrs. Dalloway. London: Hogarth Press.

Wrangham, R. (2001). Out of the Pan, into the fire: How our ancestors’ evolution depended on what they ate. In F. B. M. de Waal (Ed.), Tree of origin: What primate behavior can tell us about human social evolution (pp. 121–143). Cambridge, MA: Harvard University Press.

Zelazo, P. D. (2004). The development of conscious control in childhood. Trends in Cognitive Sciences, 8, 12–17.

Zelazo, P. D., & Sommerville, J. A. (2001). Levels of consciousness of the self in time. In C. Moore & K. Lemmon (Eds.), The self in time: Developmental perspectives (pp. 229–252). Mahwah, NJ: Erlbaum.



E. Developmental Psychology







Chapter 15

The Development of Consciousness

Philip David Zelazo, Helena Hong Gao, and Rebecca Todd

Abstract

This chapter examines the extent to which consciousness might develop during ontogeny. Research on this topic is converging on the suggestion that consciousness develops through a series of levels, each of which has distinct consequences for the quality of subjective experience, the potential for episodic recollection, the complexity of children’s explicit knowledge structures, and the possibility of the conscious control of thought, emotion, and action. The discrete levels of consciousness identified by developmental research are useful for understanding the complex, graded structure of conscious experience in adults, and they reveal a fundamental dimension along which consciousness varies: the number of iterations of recursive reprocessing of the contents of consciousness.

Introduction

Despite the explosion of scientific interest in consciousness during the past two decades, there is a relative dearth of research on the way in which consciousness develops during ontogeny. This may be due in part to a widespread belief that it goes without saying that children are conscious in the same way as adults. Indeed, most people probably believe that newborn infants – whether protesting their arrival with a vigorous cry or staring wide-eyed and alert at their mother – are conscious in an essentially adult-like fashion. So, although there are dramatic differences between infants and toddlers and between preschoolers and adolescents, these differences are often assumed to reflect differences in the contents of children’s consciousness, but not in the nature of consciousness itself.

There is currently considerable debate, however, concerning when a fetus first becomes capable of conscious experience (including pain). This debate has been instigated by proposed legislation requiring physicians in the United States to inform women seeking abortions after 22 weeks gestational age (i.e., developmental age plus 2 weeks) that fetuses are able to experience pain (Arkansas, Georgia, and Minnesota have already passed similar laws). Professor Sunny Anand, a paediatrician and an expert on neonatal pain, recently testified before the U.S. Congress (in relation to the Unborn Child Pain Awareness Act of 2005; Anand, 2005) that the “substrate and mechanisms for conscious pain perception” develop during the second trimester. Others (e.g., Burgess & Tawia, 1996; Lee et al., 2005) have suggested that consciousness develops later, during the third trimester (at around 30 weeks gestational age) – because that is when there is first evidence of functional neural pathways connecting the thalamus to sensory cortex (see Chapter 27).

Inherent in these claims is the assump-tion that consciousness is an “all-or-nothing”phenomenon – one is either conscious ornot, and capable of consciousness or not(cf. Dehaene & Changeux, 2004). Thisassumption is also reflected, for exam-ple, in information-processing models (e.g.,Moscovitch, 1989; Schacter, 1989) in whichconsciousness corresponds to a single sys-tem and information is either available tothis system or not. From this perspective,it is natural to think of consciousness assomething that emerges full-blown at a par-ticular time in development, rather thansomething that itself undergoes transforma-tion – something that develops, perhaps grad-ually. Although the current debate concern-ing pain and abortion has centered on theprenatal period, there are those who believethat consciousness emerges relatively late.Jerome Kagan (1998), for example, writesthat “sensory awareness is absent at birthbut clearly present before the second birth-day” (p. 48). It should be noted that Kaganbelieves that consciousness does developbeyond the emergence of sensory aware-ness, but if you’ve ever met a toddler (say,a 14-month-old), it may be difficult to imag-ine that children at this age lack sensoryawareness – which Kagan (1998) defines as“awareness of present sensations” (p. 46).

The implications of Kagan’s claim are profound. For example, if we follow Nagel (1974), who asserts that the essence of subjective experience is that it is “like something” to have it (see Chapter 3), we might conclude, as Carruthers (1989) does, that it does not feel like anything to be an infant. Carruthers (1996) further tests our credulity when he argues that children are not actually conscious until 4 years of age (because it is not until then that children can formulate beliefs about psychological states). Kagan and Carruthers characterize infants and/or young children essentially as unconscious automata or zombies (cf. Chalmers, 1996; see Chapter 3) – capable of cognitive function but lacking sentience.

Although many theorists treat consciousness as a single, all-or-nothing phenomenon, others distinguish between first-order consciousness and a meta-level of consciousness. For example, they may distinguish between consciousness and meta-consciousness (Schooler, 2002), primary consciousness and higher-order consciousness (Edelman & Tononi, 2000), or core consciousness and extended consciousness (Damasio, 1999). In the developmental literature, this dichotomous distinction is usually described as the difference between consciousness and self-consciousness (e.g., Kagan, 1981; Lewis, 2003). In most cases, the first-order consciousness refers to awareness of present sensations (e.g., an integrated multimodal perceptual scene; Edelman & Tononi, 2000), whereas the meta-level consciousness is generally intended to capture the full complexity of consciousness as it is typically experienced by healthy human adults. Edelman and Tononi (2000), for example, suggest that higher-order consciousness is, according to their model, “accompanied by a sense of self and the ability . . . explicitly to construct past and future scenes” (p. 102). Developmentally, the implication is that infants are limited to relatively simple sensory consciousness until an enormous transformation (usually presumed to be neurocognitive in nature and involving the acquisition of language or some degree of conceptual understanding) occurs that simultaneously adds multiple dimensions (e.g., self-other, past-future) to the qualitative character of experience. This profound metamorphosis has typically been hypothesized to occur relatively early in infancy (e.g., Stern, 1990; Trevarthen & Aitken, 2001) or some time during the second year (e.g., Kagan, 1981; Lewis, 2003; Wheeler, 2000), depending on the criteria used for inferring higher-order self-consciousness.

In this chapter, we propose that discussions of the development of consciousness have been hampered by a reliance on relatively undifferentiated notions of consciousness. Indeed, we argue that developmental data suggest the need for not just two, but many dissociable levels of consciousness; information may be available at one level but not at others (see Morin, 2004, 2006, for a review of recent models of consciousness that rely on the notion of levels; see also Cleeremans & Jiménez, 2002, for a related perspective on consciousness as a graded phenomenon). Consideration of these levels and of their utility in explaining age-related changes in children’s behavior has implications for our understanding of consciousness in general, including individual differences in reflectivity and mindfulness (see Chapter 19), but the focus here is on development during childhood. Many of the arguments regarding when consciousness emerges – for example, at 30 weeks gestational age (e.g., Burgess & Tawia, 1996), at 12 to 15 months after birth (Perner & Dienes, 2003), at the end of the second year (Lewis, 2003), or around the fourth birthday (Carruthers, 2000) – have merit, and we propose that some of the most salient discrepancies among these accounts can be reconciled from the perspective that consciousness has several levels. Different theorists have directed their attention to the emergence of different levels of consciousness; a developmental perspective allows us to integrate these levels into a more comprehensive model of consciousness as a complex, dynamic phenomenon.

Early Accounts of the Development of Consciousness

For early theorists such as Baldwin (e.g., 1892), Piaget (1936/1952), and Vygotsky (1934/1986), consciousness was the problem to be addressed by the new science of psychology, and these theorists made major contributions by showing how children’s consciousness, including the way in which children experience reality, is changed during particular developmental transformations. That is, they all understood that the structure of consciousness itself – and not just the contents of consciousness – develops over the course of childhood. For Baldwin, infants’ experience can initially be characterized as a state of adualism – meaning that they are unaware of any distinctions that might be implicit in the structure of experience (e.g., subject vs. object, ego vs. alter; e.g., Baldwin, 1906). During the course of development, however, children proceed through a series of “progressive differentiations between the knower and the known” (Cahan, 1984, p. 131) that culminates in transcending these dualisms and recognizing their origin in what Baldwin calls the dialectic of personal growth (by personal, Baldwin refers both to oneself and to persons). Baldwin, therefore, suggests that consciousness develops through a circular process of differentiation and then integration (cf. Eliot’s poem, “Little Gidding”: “And the end of all our exploring / Will be to arrive where we started / And know the place for the first time.”).

Imitation plays a key role in this dialectic, which starts when an infant observes behavior that is (at least partially) outside of his or her behavioral repertoire. At this point, the infant cannot identify with the behavior or the agent of the behavior, so the behavior is viewed solely in terms of its outward or projective aspects. By imitating this behavior, however, the infant discovers the subjective side of it, including, for example, the feeling that accompanies it. Once this happens, the infant automatically ejects this newly discovered subjectivity back into his or her understanding of the original behavior. Baldwin (1894) gives the example of a girl who watches her father prick himself with a pin. Initially, she has no appreciation of its painful consequence. When she imitates the behavior, however, she will feel the pain and then immediately understand that her father felt it too. Subsequently, she will view the behavior of pin pricking in a different light; her understanding of the behavior will have been transformed from projective to subjective to ejective. In effect, she will have brought the behavior into the scope of her self- and social-understanding, expanding the range of human behavior with which she can identify. Baldwin (1897, p. 36) writes, “It is not I, but I am to become it,” a formulation that seems to capture the same fundamental insight about the development of consciousness as Freud’s (1933/1940, p. 86) famous “Wo Es war, soll Ich werden” (“Where it was, there I shall be”).

Piaget similarly saw “increasing self-awareness of oneself as acting subject” (Ferrari, Pinard, & Runions, 2001, p. 207) – or decreasing egocentrism – as one of the major dimensions of developmental change, and he tied this development to the emergence of new cognitive structures that allowed for new ways of knowing or experiencing reality. Indeed, for Piaget, consciousness (the experience of reality) is dependent on one’s cognitive structures, which are believed to develop through a series of stages primarily as a result of a process of equilibration, whereby they become increasingly abstract (from practical to conceptual) and reflect more accurately the logic of the universe. Consciousness also develops in a characteristic way regardless of children’s developmental stage; at all stages, from practical to conceptual, consciousness “proceeds from the periphery to the center” (Piaget, 1974/1977, p. 334), by which Piaget meant that one first becomes aware of goals and results and then later comes to understand the means or intentions by which these results are accomplished. For older (formal operational) children, Piaget (1974/1977) noted, this development from periphery to center can occur quite quickly via the reflexive abstraction of practical sensorimotor knowledge. This process, which corresponds to conceptualization or reflective redescription, allows children more rapidly to transform knowledge-in-action into articulate conceptual understanding. In all cases, however, we see consciousness developing from action to conceptualization. Piaget’s (1974/1977) emphasis on the role of action in the development of consciousness was summarized concisely at the end of his key volume on the topic, The Grasp of Consciousness, where he wrote, “The study of cognizance [i.e., consciousness] has led us to place it in the general perspective of the circular relationship between subject and object. The subject only learns to know himself when acting on the object, and the latter can become known only as a result of progress of the actions carried out on it” (p. 353).

Vygotsky (1934/1986), in contrast to both Baldwin and Piaget, noted that children’s consciousness was transformed mainly via the appropriation of cultural tools, chiefly language. Vygotsky, and then Luria (e.g., 1959, 1961), proposed that thought and speech first develop independently but then become tightly intertwined as a result of internalization – a process whereby the formal structure inherent in a cultural practice, such as speaking, is first acquired in overt behavior and then reflected in one’s private thinking. Initially, speech serves a communicative purpose, but later it also acquires semantic, syntactic, and regulatory functions. The emergent regulatory function of speech is inherently self-conscious, and it allows children to organize and plan their behavior, essentially rendering them capable of consciously controlled behavior (Luria, 1961; Vygotsky, 1934/1986). Vygotsky (1978) wrote, “With the help of speech children, unlike apes, acquire the capacity to be both the subjects and objects of their own behavior” (p. 26). For Vygotsky, then, consciousness was transformed by language, with important consequences for action.

Contemporary theorists have elaborated on some of these seminal ideas about the development of consciousness – although they have not always explicitly addressed the implications for the character of children’s subjective experience. Barresi and Moore (1996), for example, offered a model of the development of perspective taking that builds on Baldwin’s (1897) dialectic of personal growth. According to Barresi and Moore, young children initially take a first-person, present-oriented perspective on their own behavior (e.g., “I want candy now”) and a third-person perspective on the behavior of others – seeing that behavior from the outside, as it were. Because simultaneous consideration of first- and third-person perspectives is required for a representational understanding of mental states (e.g., “I know there are sticks in the box, but he thinks there is candy”), young children have difficulty understanding false beliefs – both their own and those of others (see Wellman, Cross, & Watson, 2001, for a review). With development, however, children are better able to adopt a third-person perspective on their own behavior, imagine a first-person perspective on the behavior of others, and coordinate these perspectives into a single schema.

As another example, Karmiloff-Smith (1992) builds on Piaget’s (1974/1977) idea of reflexive abstraction with her model of Representational Redescription, in which consciousness develops as a function of domain-specific experience. According to this model, knowledge is originally represented in an implicit, procedural format (Level I), but, with sufficient practice, behavioral mastery of these procedures is achieved and the knowledge is automatically redescribed into a more abstract, explicit format (Level E1). This representational format reveals the structure of the procedures, but is still not conscious: Consciousness comes with yet additional levels of redescription or ‘explicitation,’ which occur “spontaneously as part of an internal drive toward the creation of intra-domain and inter-domain relationships” (1992, p. 18). Level E2 is conscious but not verbalizable, whereas Level E3 is both conscious and verbalizable.

Finally, Zelazo and his colleagues have expounded a model of consciousness, the Levels of Consciousness (LOC) model (e.g., Zelazo, 1999, 2004; Zelazo & Jacques, 1996; Zelazo & Zelazo, 1998), that builds on the work of Baldwin, Piaget, and Vygotsky and Luria – but especially Vygotsky and Luria. Because this model is relatively comprehensive and addresses explicitly the potential implications of neurocognitive development for children’s subjective experience, we describe it in some detail. In what follows, we first provide an overview of the model and then show how it aims to provide an account of the way in which consciousness develops during the first 5 years of life (and potentially beyond). Empirical and theoretical contributions to our understanding of the development of consciousness are reviewed in the context of the LOC model.

Overview of the Levels of Consciousness (LOC) Model

The Levels of Consciousness (LOC) model describes the structure of consciousness and attempts to show the consequences that reflection has on the structure and functions of consciousness, including the key role that reflection plays in the conscious control of thought, action, and emotion via explicit rules. In what follows, we consider the implications of the LOC model for (1) the structure of consciousness, (2) cognitive control via the use of rules at different levels of complexity, (3) the functions of prefrontal cortex, and (4) the development of consciousness in childhood.

The Structure of Consciousness

According to the LOC model, consciousness can operate at multiple discrete levels, and these levels have a hierarchical structure – they vary from a first-order level of consciousness (minimal consciousness) to higher-order reflective levels that subsume lower levels. Higher levels of consciousness are brought about through an iterative process of reflection, or the recursive reprocessing of the contents of consciousness via thalamocortical circuits involving regions of prefrontal cortex. Each degree of reprocessing results in a higher level of consciousness, and this in turn allows for the integration of more information into an experience of a stimulus before a new stimulus is experienced; it allows the stimulus to be considered relative to a larger interpretive context. In this way, each additional level of consciousness changes the structure of experience, and the addition of each level has unique consequences for the quality of subjective experience: The addition of higher levels results in a richer, more detailed experience and generates more “psychological distance” from stimuli (e.g., Carlson, Davis, & Leach, 2005; Dewey, 1931/1985; Sigel, 1993). But the addition of new levels also has implications for the potential for episodic recollection (because information is processed at a deeper level; Craik & Lockhart, 1972), the complexity of children’s explicit knowledge structures, and the possibility of the conscious control of thought, emotion, and action.
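The iterative architecture described above can be caricatured in a few lines of code. This is a toy illustration of our own, not an implementation from the chapter: only the level names (minC, recC, selfC, refC1, refC2) come from the LOC model, while the dictionary-wrapping scheme and the function `reprocess` are hypothetical simplifications.

```python
# Toy sketch: each iteration of "reflection" re-enters the current contents
# of consciousness together with additional interpretive context, yielding
# a higher level of consciousness. Level names follow the LOC model.

LEVELS = ["minC", "recC", "selfC", "refC1", "refC2"]

def reprocess(stimulus, iterations):
    """Recursively reprocess a stimulus `iterations` times (0 to 4)."""
    contents = {"level": "minC", "experience": stimulus}
    for i in range(iterations):
        contents = {
            "level": LEVELS[i + 1],
            "experience": contents,       # prior contents re-entered whole
            "context_depth": i + 2,       # more information integrated each pass
        }
    return contents

experience = reprocess("bell sound", 2)
print(experience["level"])  # -> selfC
```

The point of the nesting is simply that each level subsumes, rather than replaces, the level below it, which is the hierarchical property the model attributes to reflection.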

Control by Rules at Various Levels of Complexity

According to the LOC model, conscious control is accomplished, in large part, by the ability to formulate, maintain in working memory, and then act on the basis of explicit rule systems at different levels of complexity – from a single rule relating a stimulus to a response, to a pair of rules, to a hierarchical system of rules that allows one to select among incompatible pairs of rules, as explained by the Cognitive Complexity and Control (CCC) theory (e.g., Frye, Zelazo, & Palfai, 1995; Zelazo, Müller, Frye, & Marcovitch, 2003). On this account, rules are formulated in an ad hoc fashion in potentially silent self-directed speech. These rules link antecedent conditions to consequences, as when we tell ourselves, “If I see a mailbox, then I need to mail this letter.” When people reflect on the rules they represent, they are able to consider them in contradistinction to other rules and embed them under higher-order rules, in the same way that we might say, “If it’s before 5 p.m., then if I see a mailbox, then I need to mail this letter; otherwise, I’ll have to go directly to the post office.” In this example, the selection of a simple conditional statement regarding the mailbox is made dependent on the satisfaction of another condition (namely, the time). More complex rule systems, like the system of embedded if-if-then rules in this example, permit the more flexible selection of certain rules for acting when multiple conflicting rules are possible. The selection of certain rules then results in the amplification and diminution of attention to potential influences on thought (inferences) and action when multiple possible influences are present.
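The embedded if-if-then structure just described maps directly onto nested conditionals. The sketch below is ours, not the authors': the mailbox scenario comes from the chapter, but the function `choose_action`, its parameters, and the fallback action for the no-mailbox case are hypothetical illustrations.

```python
# A minimal sketch of the chapter's mailbox example as an embedded
# if-if-then rule system: a higher-order condition (the time) selects
# between two incompatible lower-order courses of action.

def choose_action(before_5pm, see_mailbox):
    """Return an action under the embedded rule system."""
    if before_5pm:                  # higher-order condition
        if see_mailbox:             # embedded rule: mailbox -> mail letter
            return "mail letter in mailbox"
        return "keep walking"       # antecedent not satisfied (our addition)
    return "go directly to the post office"   # incompatible alternative rule

print(choose_action(before_5pm=True, see_mailbox=True))
# -> mail letter in mailbox
```

The nesting makes the theory's point concrete: the mailbox rule is never evaluated on its own terms; whether it applies at all is controlled by the superordinate condition, which is what a hierarchical rule system adds over a flat pair of rules.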

According to the LOC model, increases in rule complexity – whether age-related (see below) or in response to situational demands – are made possible by corresponding increases in the extent to which one reflects on one’s representations: They are made possible by increases in level of consciousness. Rather than taking rules for granted and simply assessing whether their antecedent conditions are satisfied, reflection involves making those rules themselves an object of consideration and considering them in contradistinction to other rules at that same level of complexity.

Figure 15.1 contrasts relatively automatic action at a lower level of consciousness (a) with relatively deliberate action at a higher level of consciousness (b). The former type of action (a) is performed in response to the most salient, low-resolution aspects of a situation, and it is based on the formulation of a relatively simple rule system – in this case, nothing more than an explicit representation of a goal maintained in working memory. The more deliberate action (b) occurs in response to a more carefully considered construal of the same situation, brought about by several degrees of reprocessing the situation. The higher level of consciousness depicted in Figure 15.1b allows for the formulation (and maintenance in working memory) of a more complex and more flexible system of rules or inferences (in this case, a system of embedded rules considered against the backdrop of the goal that occasions them).

The tree diagram in Figure 15.2 illustrates the way in which hierarchies of rules can be formed through reflection – the way in which one rule can first become an object of explicit consideration at a higher level of consciousness and then be embedded under another higher-order rule and controlled by it. Rule A, which indicates that response 1 (r1) should follow stimulus 1 (s1), is incompatible with rule C, which connects s1 to r2. Rule A is embedded under, and




[Figure 15.1 appears here: a two-panel schematic whose labels include semantic LTM, working memory, procedural LTM, and the levels minC, recC, selfC, refC1, and refC2; see the caption below.]

Figure 15.1. The implications of reflection (levels of consciousness) for rule use. (a, top): Relatively automatic action on the basis of a lower level of consciousness. An object in the environment (objA) triggers an intentional representation of that object (IobjA) in semantic long-term memory (LTM); this IobjA, which is causally connected (cc) to a bracketed objA, becomes the content of consciousness (referred to at this level as minimal consciousness or minC). The contents of minC are then fed back into minC via a re-entrant feedback process, producing a new, more reflective level of consciousness referred to as recursive consciousness or recC. The contents of recC can be related (rel) in consciousness to a corresponding description (descA) or label, which can then be deposited into working memory (WM) where it can serve as a goal (G1) to trigger an action program from procedural LTM in a top-down fashion. (b, bottom): Subsequent (higher) levels of consciousness, including self-consciousness (selfC), reflective consciousness 1 (refC1), and reflective consciousness 2 (refC2). Each level of consciousness allows for the formulation and maintenance in WM of more complex systems of rules. (Reprinted with permission from Zelazo, P. D. (2004). The development of conscious control in childhood. Trends in Cognitive Sciences, 8, 12–17.) (See color plates.)


Figure 15.2. Hierarchical tree structure depicting formal relations among rules. Note: c1 and c2 = contexts; s1 and s2 = stimuli; r1 and r2 = responses. (Adapted from Frye, D., Zelazo, P. D., & Palfai, T. (1995). Theory of mind and rule-based reasoning. Cognitive Development, 10, 483–527.)

controlled by, a higher-order rule (rule E) that can be used to select rule A or rule B, and rule E, in turn, is embedded under an even higher-order rule (rule F) that can be used to select the discrimination between rules A and B as opposed to the discrimination between rules C and D. This higher-order rule makes reference to setting conditions or contexts (c1 and c2) that condition the selection of lower-order rules, and that would be taken for granted in the absence of reflection. Higher-order rules of this type (F) are required in order to use bivalent rules in which the same stimulus is linked to different responses (e.g., rules A and C). Simpler rules like E suffice to select between univalent stimulus-response associations – rules in which each stimulus is uniquely associated with a different response, as when making discriminations within a single stimulus dimension.

To formulate a higher-order rule such as F and deliberate between rules C and D, on the one hand, and rules A and B, on the other, one has to be aware of the fact that one knows both pairs of lower-order rules. Figuratively speaking, one has to view the two rule pairs from the perspective of (F). This illustrates how increases in reflection on lower-order rules are required for increases in embedding to occur. Each level of consciousness allows for the formulation and maintenance in working memory of a more complex rule system. A particular level of consciousness (selfC) is required to use a single explicit rule such as (A); a higher level of consciousness (refC1) is required to select between two univalent rules using a rule such as (E); a still higher level (refC2) is required to switch between two bivalent rules using a rule such as (F).
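The embedded rule hierarchy described above can be made concrete with a small illustrative sketch. This is not part of the LOC model itself – merely one way to write the formal structure of Figure 15.2 as nested conditionals, with the highest-order rule (F) using the context to select a rule pair, and the lower-order rules (A–D) mapping stimuli to responses:

```python
# Illustrative sketch of the rule hierarchy in Figure 15.2
# (hypothetical function name; not from the original chapter).

def respond(context, stimulus):
    """Higher-order rule F: use the context to select a rule pair."""
    if context == "c1":
        # Rule E selects between rules A and B.
        if stimulus == "s1":
            return "r1"  # rule A: s1 -> r1
        else:
            return "r2"  # rule B: s2 -> r2
    else:  # context c2
        # Discrimination between rules C and D.
        if stimulus == "s1":
            return "r2"  # rule C: s1 -> r2 (bivalent with rule A)
        else:
            return "r1"  # rule D: s2 -> r1

# The same stimulus yields different responses depending on context,
# which is why bivalent rules require the higher-order rule F.
print(respond("c1", "s1"))  # r1
print(respond("c2", "s1"))  # r2
```

Note that rules A and B alone (the `c1` branch) are univalent and need only rule E, whereas handling both branches requires representing the contexts explicitly – the structural analogue of the claim that refC2 is needed to switch between bivalent rules.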

The Role of Prefrontal Cortex in Higher Levels of Consciousness

The potential role of prefrontal cortex in reflection is arguably revealed by work on the neural correlates of rule use (see Bunge, 2004). Bunge and Zelazo (2006) summarized a growing body of evidence that prefrontal cortex plays a key role in rule use and that different regions of prefrontal cortex are involved in representing rules at different levels of complexity – from learned stimulus-reward associations (orbitofrontal cortex; Brodmann's area [BA] 11), to sets of conditional rules (ventrolateral prefrontal cortex [BA 44, 45, 47] and dorsolateral prefrontal cortex [BA 9, 46]), to an explicit consideration of task sets (rostrolateral prefrontal cortex [or frontopolar cortex; BA 10]; see Figure 15.3).

Figure 15.3 illustrates the way in which regions of prefrontal cortex correspond to rule use at different levels of complexity. Notice that the function of prefrontal cortex is proposed to be hierarchical in a way that corresponds, roughly, to the hierarchical complexity of the rule use underlying conscious control. As individuals engage in reflective processing, ascend through levels of consciousness, and formulate more complex rule systems, regions of lateral prefrontal cortex are recruited and integrated into an increasingly elaborate hierarchy of prefrontal cortical function via thalamocortical circuits. As the hierarchy unfolds, information is first processed via circuits connecting the thalamus and orbitofrontal cortex. This information is then reprocessed and fed forward to ventrolateral prefrontal

[Figure 15.2 (hierarchical tree diagram of rules A–F) appears here.]


[Figure 15.3 (hierarchical model of rule representation in lateral prefrontal cortex) appears here; see caption below.]

Figure 15.3. A hierarchical model of rule representation in lateral prefrontal cortex. A lateral view of the human brain is depicted at the top of the figure, with regions of prefrontal cortex identified by the Brodmann areas (BA) that comprise them: orbitofrontal cortex (BA 11), ventrolateral prefrontal cortex (BA 44, 45, 47), dorsolateral prefrontal cortex (BA 9, 46), and rostrolateral prefrontal cortex (BA 10). The prefrontal cortex regions are shown in various colors, indicating which types of rules they represent. Rule structures are depicted below, with darker shades of blue indicating increasing levels of rule complexity. The formulation and maintenance in working memory of more complex rules depend on the reprocessing of information through a series of levels of consciousness, which in turn depends on the recruitment of additional regions of prefrontal cortex into an increasingly complex hierarchy of prefrontal cortex activation. Note: S = stimulus; check = reward; cross = nonreward; R = response; C = context, or task set. Brackets indicate a bivalent rule that is currently being ignored. (Reprinted with permission from Bunge, S., & Zelazo, P. D. (2006). A brain-based account of the development of rule use in childhood. Current Directions in Psychological Science, 15, 118–121.) (See color plates.)

cortex via circuits connecting the thalamus and ventrolateral prefrontal cortex. Further processing occurs via circuits connecting the thalamus to dorsolateral prefrontal cortex. Thalamocortical circuits involving rostrolateral prefrontal cortex play a transient role in the explicit consideration of task sets at each level in the hierarchy.

Developmental Increases in Children’s Highest Level of Consciousness

According to the LOC model, there are four major age-related increases in the highest level of consciousness that children are able to muster (although children may operate at different levels of consciousness


in different situations). These age-related increases in children’s highest level of consciousness correspond to the growth of prefrontal cortex, which follows a protracted developmental course that mirrors the development of the ability to use rules at higher levels of complexity. In particular, developmental research suggests that the order of acquisition of rule types shown in Figure 15.3 corresponds to the order in which corresponding regions of prefrontal cortex mature. Gray matter volume reaches adult levels earliest in orbitofrontal cortex, followed by ventrolateral prefrontal cortex, and then by dorsolateral prefrontal cortex (Giedd et al., 1999; Gogtay et al., 2004). Measures of cortical thickness suggest that dorsolateral prefrontal cortex and rostrolateral prefrontal cortex (or frontopolar cortex) exhibit similar, slow rates of structural change (O’Donnell, Noseworthy, Levine, & Dennis, 2005). With development, children are able to engage neural systems involving the hierarchical coordination of more regions of prefrontal cortex – a hierarchical coordination that develops in a bottom-up fashion, with higher levels in the hierarchy operating on the products of lower levels through thalamocortical circuits.

Minimal consciousness. The LOC model starts with the assumption that simple sentience is mediated by minimal consciousness (minC; cf. Armstrong, 1980), the first-order consciousness on the basis of which more complex hierarchical forms of consciousness are constructed (through degrees of reprocessing). MinC is intentional in Brentano’s (1874/1973) sense – any experience, no matter how attenuated, is experience of something (see Brentano’s description of presentations, p. 78 ff.), and it motivates approach and avoidance behavior, a feature that is essential to the evolutionary emergence of minC (e.g., Baldwin, 1894; Dewey, 1896; Edelman, 1989). However, minC is unreflective and present-oriented and makes no reference to an explicit sense of self; these features develop during the course of childhood. While minimally conscious, one is conscious of what one sees (the object of one’s experience) as pleasurable (approach)

or painful (avoid), but one is not conscious of seeing what one sees or that one (as an agent) is seeing what one sees. And because minC is tied to ongoing stimulation, one cannot recall seeing what one saw. MinC is hypothesized to characterize infant consciousness prior to the end of the first year of life.

In adults, this level of consciousness corresponds to so-called implicit information processing, as when we drive a car without full awareness, perhaps because we are conducting a conversation at a higher level of consciousness (in this example, we are operating at two different levels of consciousness simultaneously). Our behavioral routines are indeed elicited directly and automatically, but they are elicited as a function of consciousness of immediate environmental stimuli (cf. Perruchet & Vinter, 2002). It follows that implicit processing does not occur in a zombie-like fashion; it is simply unreflective (because the contents of minC are continually replaced by new stimulation) and, as a result, unavailable for subsequent recollection.

Consider how minC figures in the production of behavior according to the LOC model (Fig. 15.1a). An object in the environment (objA) triggers a “description” from semantic long-term memory. This particular description (or IobjA, for “intentional object”) then becomes an intentional object of minC and automatically triggers an associated action program that is coded in procedural long-term memory. A telephone, for example, might be experienced by a minC baby as ‘suckable thing,’ and this description might trigger the stereotypical motor schema of sucking. Sensorimotor schemata are modified through practice and accommodation (i.e., learning can occur; e.g., DeCasper et al., 1994; Kisilevsky et al., 2003; Siqueland & Lipsitt, 1966; Swain, Zelazo, & Clifton, 1993), and they can be coordinated into higher-order units (e.g., Cohen, 1998; Piaget, 1936/1952), but a minC infant cannot represent these schemata in minC (the infant is only aware of the stimuli that trigger them). In the absence of reflection and a higher level of consciousness, the contents of minC are continually replaced


by new intero- and exteroceptor stimulation and cannot be deposited into working memory.

Thus, minC infants exhibit learning and memory and may well perceive aspects of themselves and their current state implicitly, but they have no means by which they can consciously represent past experiences or states or entertain future-oriented representations. That is, they cannot engage in conscious recollection, although they provide clear behavioral evidence of memory, and they cannot entertain conscious expectations or plans, although their behavior is often future-oriented (e.g., Haith, Hazan, & Goodman, 1988; see Reznick, 1994, for a discussion of different interpretations of future-oriented behavior). At present, there is no behavioral evidence that young infants are capable of conscious recollection (as opposed to semantic memory; Tulving, 1985) or explicit self-awareness; their experience of events seems to be restricted to the present (see Figure 15.4a) – including objects in the immediate environment and current physical states.

Within the constraints of minC, however, infants may come to learn quite a bit about their bodies in relation to the world (e.g., Gallagher, 2005; Meltzoff, 2002; Rochat, 2001). Rochat (2003), for example, proposes five levels of self-understanding that, from the perspective of the LOC model, can be seen to unfold within particular levels of consciousness. The levels of self-understanding that develop in early infancy are characterized by somatic sensation and expectancies about the world, but they are not accompanied by explicit, higher-order representations of self and other. For example, the emergence of level 1, or the differentiated self, begins at birth, as infants learn to distinguish their own touch from that of another. Level 2, which refers to what Rochat calls the situated self, emerges at around 2 months and involves implicit awareness of the self as an agent situated in space. In this model, the differentiated and situated selves emerge from the development of (a) expectations of contingency between different sensory modalities (intermodal contingency) and (b) a sense of self-agency that arises from interaction with the world.

Rochat has emphasized the role of proprioception in the experience of self, and several studies have explored the role of contingency between visual and proprioceptive information in the process of distinguishing self from the world between the ages of 2 and 5 months (Rochat & Morgan, 1995). For example, in one study, infants were shown split-screen images of their legs that were either congruent or incongruent with the view they would normally have of their own legs (i.e., the incongruent images were shown from a different angle). They looked significantly longer at the unfamiliar, incongruent view of their own legs, especially if the general direction of depicted movement conflicted with the direction of the actual, felt movement. The authors concluded that the infants have expectancies about what constitutes self-movement, that self is specified by the temporal and dynamic contingency of sensory information in different modalities, and that by 3 months of age, infants have an intermodal body schema that constitutes an implicit bodily self.

Based on a series of empirical studies examining infants’ actions, then, Rochat has described the self-differentiation process as the systematic exploration of perceptual experience, scaffolded by dyadic interaction, that allows the emergence of an implicit sense of self and other (Rochat & Striano, 2000). For Rochat, such an implicit, interactive, and somatically based sense of self provides the foundation for the more explicit integration of first- and third-person information that will come later in childhood (see below). And in terms of the LOC model, all of this implicit learning about the self takes place at the level of minC.

Given this characterization of minC – as the simplest, first-order consciousness on the basis of which more complex consciousness is constructed, but one that allows an implicit understanding of self – we might return to the question of first emergence: When does minC emerge? According to this account, the onset of minC may be tied to a series of anatomical and behavioral changes


[Figure 15.4 appears here as five panel diagrams, (a) through (e); see caption below.]

Figure 15.4. Levels of consciousness and their implications for the experience of events in time. (a) MinC. The contents of minimal consciousness are restricted to present intero- and exteroceptor stimulation (Now). (b) RecC. Past and future events can now be considered, but toddlers cannot simultaneously represent the present when representing past or future events. When descriptions of past experiences become the contents of recursive consciousness, they will feel familiar. Future-oriented states (goals) may be accompanied by a feeling of desire. (c) SelfC. Children can consider descriptions of past or future-oriented events in relation to a present experience. For example, while conscious of their current state (Now), 2-year-olds can appreciate that Yesterday they went to the zoo. This creates the conditions for a subjective experience of self-continuity in time, but it does not allow simultaneous consideration of events occurring at two different times. (d) RefC1. From this higher level of consciousness, which allows for a temporally decentered perspective, children can consider two events occurring at two different times, including an event occurring in the present. For example, they can consider that Now, EventA is occurring, but Yesterday, EventB occurred. This is an important advance in the development of episodic memory, but at this point, the history of one’s own subjective experiences (history of self) and the history of the world are confounded – there is no means of conceptualizing the history of the world as independent of one’s own experiences of the world. (e) RefC2. From a temporally decentered perspective, children can coordinate two series, the history of the self and the history of the world. (Reprinted from Zelazo, P. D., & Sommerville, J. (2001). Levels of consciousness of the self in time. In C. Moore & K. Lemmon (Eds.), Self in time: Developmental issues (pp. 229–252). Mahwah, NJ: Erlbaum.)


that occur during the third trimester of prenatal development – between about 24 and 30 weeks gestational age. First, and perhaps foremost, is the development of thalamocortical fibres that show synaptogenesis in sensory cortex (and not in prefrontal cortex – the provenance of higher levels of consciousness). These thalamocortical connections are established as early as 24 weeks gestational age, but the first evidence of functionality does not occur until about 30 weeks, as indicated by sensory-evoked potentials recorded in preterm infants (Hrbek, Karlberg, & Olsson, 1973; Klimach & Cooke, 1988). A number of other neural events also occur at this time, including the emergence of bilaterally synchronous electroencephalographic (EEG) patterns of activation (bursts) and the emergence of EEG patterns that distinguish between sleep and wakefulness (Torres & Anderson, 1985). Fetal behavior also changes at this age in ways that may indicate the onset of minC. For example, fetuses begin to show clear heart rate increases to vibroacoustic stimuli (Kisilevsky, Muir, & Low, 1992), evidence of habituation to vibroacoustic stimuli (Groome, Gotlieb, Neely, & Waters, 1993), and sharp increases in coupling between their movement and their heart rate (DiPietro, Caulfield, Costigan, et al., 2004). There are also good reasons to believe that fetuses at this age are capable of pleasure and pain (e.g., Anand & Hickey, 1987; Lipsitt, 1986). So, on this account, we agree with those who hold that consciousness first emerges during the third trimester of fetal development, but we emphasize the relatively simple nature of this initial level of consciousness.

According to this account, attribution of minC manages to explain infant behavior until the end of the first year, when numerous new abilities appear within just a few months, from about 9 to 13 months of age. For example, during this period, most infants speak their first words, begin to use objects in a functional way, point proto-declaratively, and start searching in a more flexible way for hidden objects (e.g., passing Piaget’s A-not-B task), among other milestones. According to the LOC model, these changes can all be explained by the emergence of the first new level of consciousness – recursive consciousness (recC). This new level of consciousness allows for recollection and the maintenance of a goal in working memory, key functional consequences.

Recursive consciousness. The term ‘recursive’ is used here in the sense of a computer program that calls itself (see Chapter 21). In recC (see Fig. 15.1a), the contents of minC at one moment are combined with the contents of minC at another moment via an identity relation (rel), allowing the toddler to label the initial object of minC. The 1-year-old toddler who sees a dog and says, “dog,” for example, combines a perceptual experience of a dog with a label from semantic long-term memory, effectively indicating, “That [i.e., the object of minC] is a dog.” Similarly, pointing effectively indicates, “That is that.” There must be two things, the experience and the label, for one of them, the experience interpreted in terms of the label, to become an object of recC.

Whereas the contents of minC are continually replaced by new perceptual stimulation, recC allows for conscious experience in the absence of perceptual stimulation. Because a label can be decoupled from the experience labelled, the label provides an enduring trace of that experience that can be deposited into both long-term memory and working memory. The contents of working memory (e.g., representations of hidden objects) can then serve as explicit goals to trigger action programs indirectly so that the toddler is no longer restricted to responses triggered directly by minC of an immediately present stimulus. Now when objA triggers IobjA (see Figure 15.1a) and becomes the content of minC, IobjA does not trigger an associated action program directly, but rather IobjA is fed back into minC (called recC after one degree of reflection) where it can be related to a label (descA) from semantic long-term memory. This descA can then be decoupled and deposited in working memory where it can serve as a goal (G1) that triggers an action program even in the absence of objA and even if IobjA would otherwise trigger a different


action program. For example, when presented with a telephone, the recC toddler may activate a specific semantic association and put the telephone to her ear (functional play) instead of putting the telephone in her mouth (a generic, stereotypical response). The toddler responds mediately to the label in working memory rather than immediately to an initial, minC gloss of the situation.
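The contrast between the direct (minC) and mediated (recC) routes to action described in the telephone example can be sketched as a toy model. This is purely illustrative and not part of the published LOC model; the dictionary contents and function names are hypothetical stand-ins for semantic long-term memory, procedural long-term memory, and working memory:

```python
# Toy sketch (hypothetical names) of the two routes to action in the LOC model.

semantic_ltm = {"telephone": "suckable thing"}  # objA -> description (IobjA)
procedural_ltm = {
    "suckable thing": "suck",       # stereotypical sensorimotor schema
    "telephone-label": "put to ear" # label-specific functional association
}

def minc_route(obj):
    """minC: the intentional object directly triggers an action program."""
    iobj = semantic_ltm[obj]      # objA triggers IobjA (content of minC)
    return procedural_ltm[iobj]   # IobjA triggers the action immediately

def recc_route(obj):
    """recC: reflection relates the content of minC to a label, which is
    deposited in working memory and serves as a goal (G1) triggering
    an action program top-down."""
    iobj = semantic_ltm[obj]       # objA triggers IobjA, as before
    label = obj + "-label"         # IobjA is fed back and related to a label (descA)
    working_memory = [label]       # descA is decoupled into working memory
    goal = working_memory[0]       # the label serves as a goal (G1)
    return procedural_ltm[goal]    # the goal triggers the action mediately

print(minc_route("telephone"))  # suck
print(recc_route("telephone"))  # put to ear
```

The key design point mirrors the text: in `recc_route`, the response is selected by the label held in working memory rather than by the initial minC gloss, so the same object can yield a different, more flexible response.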

Despite these advances, recursively conscious toddlers still cannot explicitly consider the relation between a means and an end (e.g., Frye, 1991) and hence cannot follow arbitrary rules (i.e., rules linking means and ends or conditions and actions). Moreover, although they are no longer exclusively present-oriented, their experience of events in time is limited because they have no way to consider relations among two or more explicit representations. As a result, they cannot consider past- or future-oriented representations from the perspective of the present (i.e., from the perspective of an explicit representation of the present, or Now), because this would require an additional element to be represented (namely, a description of Now). Therefore, it should be impossible for these toddlers to appreciate past or future representations as such because the concepts of both past and future are only meaningful when considered in relation to a perception of the present circumstances. This situation is depicted in Figure 15.4b. As shown in the figure, recursively conscious infants are no longer restricted to Now, but they cannot explicitly consider events as occurring in the future from the perspective of the present. Similarly, they cannot explicitly consider past events as occurring in the past from the perspective of the present.

Perner and Dienes (2003) present an account of the emergence of consciousness (i.e., when children “become consciously aware of events in the world,” p. 64) that seems to be congruent with this account of recC. These authors first distinguish between “unconscious awareness” and “conscious awareness” and illustrate the distinction in terms of blindsight (e.g., Weiskrantz, Sanders, & Marshall, 1974).

Patients with lesions to striate cortex may deny that they can see anything in a particular part of their visual field. Nonetheless, if they are asked to guess, they are often quite good at locating objects in that field or even describing features of the objects. Perner and Dienes suggest that the normal healthy visual perception of objects involves conscious awareness, whereas the impaired perception displayed by blindsight patients involves unconscious awareness. In terms of the LOC model, this distinction would seem to map onto the distinction between minC and recC.

Given this distinction, Perner and Dienes (2003) then consider three behaviors for which consciousness seems necessary in adults: verbal communication, executive function (or voluntary control over action), and explicit memory (i.e., conscious recollection). They note that most babies say their first words at about 12 to 13 months of age and that the earliest signs of executive function also appear at about this age. They also argue, on the basis of work with amnesic patients (McDonough, Mandler, McKee, & Squire, 1995), that delayed imitation requires explicit memory. Meltzoff’s work (1985, 1988) suggests that infants first exhibit delayed imitation sometime between 9 and 14 months (although see Meltzoff & Moore, 1994).

In addition to these potential behavioral indices of consciousness, Perner and Dienes (2003) consider when children might be said to possess the cognitive capabilities that would allow them to entertain higher-order thoughts about their experiences – consistent with higher-order thought theories of consciousness (e.g., Armstrong, 1968; Rosenthal, 1986; see Chapter 3). Higher-order thought theories claim that consciousness consists in a belief about one’s psychological states (i.e., a psychological state is conscious when one believes that one is in that state), which would seem to require a fairly sophisticated conceptual understanding of one’s own mind. According to one version of higher-order thought theory (Rosenthal, 2005), however, the relevant higher-order thoughts may be relatively


simple thoughts that just happen to be about one’s psychological state. Perner and Dienes observe that children start referring to their own mental states between about 15 and 24 months of age, but they caution that reliance on a verbal measure may lead us to underestimate the abilities of younger children.

Another version of these theories (Carruthers, 2000) holds fast to the suggestion that children will not be conscious until they are capable of meta-representation – in particular, appreciating the distinction between appearance and reality, or the notion of subjective perspective on independent reality, as assessed by measures of false belief understanding. It is fairly well established that children do not understand these concepts explicitly until about 4 years of age (e.g., Flavell, Flavell, & Green, 1983; Wellman, Cross, & Watson, 2001), and it is on these grounds that Carruthers (2000) suggests that children do not have consciousness until this age. Perner and Dienes, however, raise the intriguing possibility that perhaps higher-order thoughts do not require an explicit understanding of subjective perspective, but rather simply a procedural grasp of the notion – as might be manifested in children’s pretend play during the second year of life (e.g., Harris & Kavanaugh, 1993).

Evidently, it remains unclear exactly what kinds of higher-order thoughts might be required for conscious experience. Perner and Dienes attempt to clarify this issue in terms of Dienes and Perner’s (1999) framework of explicit knowledge, and they end up adopting a stance that resembles Carruthers’ (2000) view more than Rosenthal’s (2000) view. According to Perner and Dienes (2003), “If one saw [an] object as a circle, but only unconsciously, one might minimally represent explicitly only a feature, e.g., ‘circle.’ A minimal representation, ‘circle’ would not provide conscious awareness or conscious seeing since it does not qualify as a full constituent of a higher order thought” (p. 77). Other explicit representations that capture some fact about the object also fail to qualify: “The object in front of me is a circle” and “It is a fact that the object

in front of me is a circle.” Rather, to be consciously aware of the circle, on this view, one must represent one’s psychological attitude toward the factuality of the representation: “‘I see that [it’s a fact that (the object in front of me is a circle)]’” (p. 78).

This version of a higher-order thought theory is hardly compelling from a developmental perspective, however. For example, it is by no means inconceivable that a 3-year-old could be conscious of a fact (“There are pencils in the Smarties box”) without being conscious of her attitude (belief) or being conscious that she herself is entertaining the attitude toward the fact. Indeed, this is exactly what the LOC model maintains: RecC allows for conscious experiences that can persist in the absence of perceptual stimulation, but this conscious experience is still simpler than the complex conscious state described by Perner and Dienes. From the perspective of the LOC model, a relatively high level of consciousness (see below), effected by several degrees of reflection, is required to represent one’s psychological attitude toward a fact about an object.

As Perner and Dienes (2003) imply, the developmental conclusions to be drawn on the basis of higher-order thought theories are not entirely clear. These authors suggest, however, that the balance of the evidence suggests that children become consciously aware between 12 and 15 months (plus or minus 3 months). In terms of the LOC model, the changes occurring in this age range (or slightly earlier) do not correspond to the emergence of consciousness per se, but they do correspond to an important developmental change in the character of experience – one that allows the (singular) contents of consciousness to be made available to the child in more explicit fashion. Consider again the example of long-distance driving. The difference between MinC and RecC is the difference between the fleeting, unrecoverable awareness of a stop sign that is responded to in passing (without any elaborative processing) and the recoverable awareness that occurs when one not only sees the stop sign but also labels it as such.


P1: KAE0521857430c15 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 22, 2007 11:39

420 the cambridge handbook of consciousness

Although the neural correlates of the behavioral changes at the end of the first year are still relatively unknown, there are several reasons to believe that these changes correspond to important developments in prefrontal cortical function. For example, in a pioneering study using positron emission tomography (PET), Chugani and Phelps (1986) assessed resting glucose metabolism in the brains of nine infants. Although there was activity in primary sensorimotor cortex in infants as young as 5 weeks of age, and there were increases in glucose metabolism in other areas of cortex at about 3 months of age, it was not until about 8 months of age that increases were observed in prefrontal cortex.

As another example, Bell and Fox (1992) measured EEG activity longitudinally in infants between 7 and 12 months of age and found a correlation between putative measures of frontal function and performance on Piaget’s A-not-B task. In the A-not-B task, infants watch as an object is hidden at one location (location A), and then they are allowed to search for it. Then infants watch as the object is hidden at a new location (location B). When allowed to search, many 9-month-old infants proceed to search at A, but performance on this task develops rapidly at the end of the first year of life and seems to provide a good measure of keeping a goal in mind and using it to control behavior despite interference from prepotent response tendencies (see Marcovitch & Zelazo, 1999, for a review). The putative measures of frontal function were frontal EEG power (in the infant “alpha” range; 6–9 Hz) and frontal/parietal EEG coherence. Frontal EEG power reflects the amount of cortical neuronal activity as measured at frontal sites on the scalp, whereas EEG coherence reflects the correlation between signals within a particular frequency band but measured at different scalp sites. Bell and Fox (1992, 1997) have suggested that changes in power may be associated with increased organization and excitability in frontal brain regions and that increases in coherence may indicate that more posterior regions are coming to be controlled by frontal function.
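The two EEG measures described here – band-limited power and between-site coherence – can be illustrated with a toy computation. The sketch below is illustrative only (real EEG analyses use windowed, artifact-corrected spectral estimators, not a naive DFT), and all function names and parameter values are invented for the example:

```python
import cmath
import math

def dft_bin(segment, k):
    """Single discrete Fourier transform coefficient at bin k."""
    n = len(segment)
    return sum(segment[t] * cmath.exp(-2j * math.pi * k * t / n)
               for t in range(n))

def band_power(signal, fs, lo_hz, hi_hz):
    """Mean power of the DFT bins falling inside [lo_hz, hi_hz].

    A crude stand-in for the band-limited spectral power EEG studies
    report (e.g., power in the infant "alpha" range, 6-9 Hz).
    """
    n = len(signal)
    powers = [abs(dft_bin(signal, k)) ** 2 / n
              for k in range(n // 2 + 1)
              if lo_hz <= k * fs / n <= hi_hz]
    return sum(powers) / len(powers) if powers else 0.0

def coherence(x, y, fs, lo_hz, hi_hz, seg_len):
    """Magnitude-squared coherence |Pxy|^2 / (Pxx * Pyy) between two
    channels, estimated by summing cross- and auto-spectra over
    consecutive segments, then averaged over the bins inside
    [lo_hz, hi_hz]. Values near 1 mean the channels are strongly
    correlated in that band, regardless of a fixed phase offset."""
    n_seg = len(x) // seg_len
    band_vals = []
    for k in range(seg_len // 2 + 1):
        if not (lo_hz <= k * fs / seg_len <= hi_hz):
            continue
        pxx = pyy = 0.0
        pxy = 0j
        for s in range(n_seg):
            xs = x[s * seg_len:(s + 1) * seg_len]
            ys = y[s * seg_len:(s + 1) * seg_len]
            cx, cy = dft_bin(xs, k), dft_bin(ys, k)
            pxx += abs(cx) ** 2
            pyy += abs(cy) ** 2
            pxy += cx * cy.conjugate()
        if pxx * pyy > 1e-9:  # skip bins with essentially no power
            band_vals.append(abs(pxy) ** 2 / (pxx * pyy))
    return sum(band_vals) / len(band_vals) if band_vals else 0.0
```

For instance, an 8-Hz component shared by a “frontal” and a “parietal” channel yields coherence near 1 in the 6–9 Hz band even when the two channels differ in phase, which is the sense in which coherence captures correlation within a frequency band across scalp sites.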

More recently, Baird and colleagues (2002) used near-infrared spectroscopy (NIRS) to compare blood flow in prefrontal cortex in infants who reliably searched for hidden objects (i.e., keeping a goal in mind) and those who did not. These authors found that infants who reliably searched for hidden objects showed an increase in frontal blood flow after the hiding of the object, whereas those who failed to search showed a decrease.

Self-consciousness. Although a 12-month-old behaves in a way that is considerably more controlled than, say, a 6-month-old, there is currently no convincing evidence that children are explicitly self-conscious (e.g., at Rochat’s third level of self-awareness) until midway through the second year of life, at which point they begin to use personal pronouns, first appear to recognize themselves in mirrors, and first display self-conscious emotions like shame and embarrassment (see Kagan, 1981, and Lewis & Brooks-Gunn, 1979, for reviews). In the famous mirror self-recognition paradigm, an experimenter may surreptitiously put rouge on a toddler’s nose and then expose children to a mirror. It is well established that most children first exhibit mark-directed behavior in this situation between about 18 and 24 months of age (e.g., Amsterdam, 1972; Lewis & Brooks-Gunn, 1979), and this has been taken to reflect the development of an objective self-concept (e.g., Lewis & Brooks-Gunn, 1979) or the sense of a ‘me’ as opposed to the sense of an ‘I’ (James, 1890/1950).

Kagan (1981) also noted the way in which 2-year-olds respond when shown a complex series of steps in the context of an imitative routine. Kagan found that infants at this age (but not before) sometimes exhibited signs of distress, as if they knew that the series of steps was beyond their ken and was not among the means that they had at their disposal. According to Kagan, this implies that children are now able to consider their own capabilities (i.e., in the context of a goal of imitating the experimenter). Consideration of a means relative to the goal that occasions it is a major advance that allows children consciously to follow rules linking means to ends. According to the LOC model, the further development of prefrontal cortex during the second year of life allows children to engage in a higher level of consciousness – referred to as SelfC. This new level of consciousness is what allows children to use a single rule to guide their behavior.

As shown in Figure 15.1b, a self-conscious toddler can take as an object of consciousness a conditionally specified self-description (SdescA) of their behavioral potential – they can consider conditionally specified means to an end. This SdescA can then be maintained in working memory as a single rule (R1, including a condition, C, and an action, A), considered against the background of a goal (G1). Keeping a rule in working memory allows the rule to govern responding regardless of the current environmental stimulation, which may pull for inappropriate responses.

Among the many changes in children’s qualitative experience will be changes in their experience of themselves, and of themselves in time – changes in conscious recollection. Unlike recursively conscious infants, self-conscious children can now consider descriptions of events as past- or future-oriented, relative to a present experience (see Fig. 15.4c). For example, while conscious of their current state (Now), 2-year-olds can appreciate that yesterday they went to the zoo (Friedman, 1993). The concepts yesterday and tomorrow are intrinsically relational because they are indexed with respect to today. Thus, for children to comprehend that an event occurred yesterday (or will occur tomorrow), children must be conscious of Now and consider two linked descriptions: a description of the event and a further description of the event as occurring yesterday (or tomorrow). Doing so corresponds in complexity to the use of a single rule considered against the backdrop of a goal that occasions its use. That is, G1; C1 → A1, from Figure 15.1b, is instantiated as follows: Now; Tomorrow → EventA.

When children can consider past or future events as such, they will have a subjective experience of self-continuity in time. As a result, they should now be able to engage in episodic recollection, which, according to Tulving (e.g., 1985), involves consciously recalling having experienced something in the past and so depends, by definition, on self-consciousness (or autonoetic [self-knowing] consciousness, in Tulving’s terms). The close relation between the changes in children’s self-consciousness that are indexed, for example, by mirror self-recognition, and the onset of episodic recollection has been noted by several authors (e.g., Howe & Courage, 1997; Wheeler, 2000), although others, such as Perner and Ruffman (1995), believe that genuine episodic recollection does not emerge until later, coincident with changes in children’s theory of mind (see below).

Although the changes occurring during the second half of the second year are remarkable – so remarkable that Piaget (1936/1952) imagined they reflected the emergence of symbolic thought – there continues to be considerable room for development. For example, an additional degree of recursion is required for children to consider simultaneously two different events occurring at two different times (e.g., EventA, further described as occurring Now, considered in contradistinction to EventB, further described as occurring Tomorrow).

This characterization of the limitations on 2-year-old children’s sense of themselves in time is similar in some respects to that offered by Povinelli (1995, 2001) and McCormack and Hoerl (1999, 2001), although it differs in others. Povinelli (1995) suggests that, although children between 18 and 24 months of age can pass mirror self-recognition tasks (e.g., Amsterdam, 1972; Lewis & Brooks-Gunn, 1979), they do not yet possess an objective and enduring self-concept. Instead, Povinelli (1995) suggests that children at this age maintain a succession of present-oriented representations of self (termed present selves; Povinelli, 1995, p. 165), and they cannot compare these representations or “integrate previous mental or physical states with current ones” (p. 166). Consequently, their sense of self-continuity in time is temporally restricted, and they might still be said to live in the present. In support of these claims, Povinelli and colleagues (1996) found that even 3-year-olds perform poorly on measures of delayed self-recognition. In their studies, children played a game during which an experimenter surreptitiously placed a sticker on their heads. About 3 minutes later, children were presented with a video image of the marking event. Whereas the majority of older children (4-year-olds) reached up to touch the sticker, few of the younger children (2- and 3-year-olds) did so.

Zelazo, Sommerville, and Nichols (1999) argued that children perform poorly on measures of delayed self-recognition not because they lack a subjective experience of self-continuity in time, but rather because they have difficulty adjudicating between conflicting influences on their behavior (which requires the use of higher-order rules and a higher level of consciousness). More specifically, children in Povinelli et al.’s (1996) experiment have a strong expectation that they do not have a sticker on their head (because they do not see it placed there and cannot see it directly at the time of testing). When they are provided with conflicting information via a dimly understood representational medium (e.g., video; Flavell, Flavell, Green, & Korfmacher, 1990), the new, conflicting information may be ignored or treated as somehow irrelevant to the situation. Empirical support for this suggestion comes from a study showing that although 3-year-olds can use delayed-video (and delayed-verbal) representations to guide their search for a hidden object in the absence of a conflicting expectation about the object’s location, they have difficulty doing so in the presence of a conflicting expectation (Zelazo et al., 1999, Exp. 3). Because children have difficulty managing conflicting delayed representations in general, poor performance on tests of delayed self-recognition does not necessarily indicate an immature self-concept, although it may well reflect more general limitations on the highest level of consciousness that children are able to adopt.

Reflective consciousness 1. The LOC model holds that, in contrast to 2-year-olds, 3-year-olds exhibit behavior that suggests an even higher LOC, reflective consciousness 1 (refC1). For example, they can systematically employ a pair of arbitrary rules (e.g., things that make noise vs. things that are quiet) to sort pictures – behavior hypothesized to rely on lateral prefrontal cortex. According to the model, 3-year-olds can now reflect on a SdescA of a rule (R1) and consider it in relation to another Sdesc (SdescB) of another rule (R2). Both of these rules can then be deposited into working memory where they can be used contrastively (via a rule like E in Figure 15.2) to control the elicitation of action programs. As a result, unlike 2-year-olds, 3-year-olds do not perseverate on a single rule when provided with a pair of rules to use (Zelazo & Reznick, 1991).

Of course, there are still limitations on 3-year-olds’ executive function, as seen in their perseveration in the Dimensional Change Card Sort (DCCS). In this task, children are shown two bivalent, bidimensional target cards (e.g., depicting a blue rabbit and a red boat), and they are told to match a series of test cards (e.g., red rabbits and blue boats) to these target cards first according to one dimension (e.g., color) and then according to the other (e.g., shape). Regardless of which dimension is presented first, 3-year-olds typically perseverate by continuing to sort cards by the first dimension after the rule is changed. In contrast, 4-year-olds seem to know immediately that they know two different ways of sorting the test cards. Zelazo et al. (2003) have argued that successful performance on this task requires the formulation of a higher-order rule like F in Figure 15.2 that integrates two incompatible pairs of rules into a single structure.
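The structure of such a higher-order rule can be rendered as a toy program. The sketch below is a hypothetical illustration of the rule-hierarchy idea, not the authors’ formal model: a setting condition (the announced game) selects which pair of incompatible lower-order condition–action rules governs sorting, whereas a sorter lacking the higher-order rule keeps applying the first-learned pair. The tray names and function names are invented for the example:

```python
# Toy sketch of the rule hierarchy hypothesized for the DCCS
# (illustrative names and structure; not the authors' formal notation).

# Target cards: a blue rabbit on one tray, a red boat on the other.
# Lower-order rule pairs map a feature (condition) to a tray (action).
COLOR_RULES = {"red": "red-boat tray", "blue": "blue-rabbit tray"}
SHAPE_RULES = {"rabbit": "blue-rabbit tray", "boat": "red-boat tray"}

def sort_card(card, game):
    """Higher-order rule (like rule F): the current game selects which
    of the two incompatible rule pairs controls responding."""
    color, shape = card.split()
    if game == "color":
        return COLOR_RULES[color]
    return SHAPE_RULES[shape]

def perseverative_sort(card, first_game, current_game):
    """A sorter lacking the higher-order rule: the first-learned rule
    pair keeps controlling responding even after the game changes."""
    return sort_card(card, first_game)
```

On this rendering, a red rabbit goes to the red-boat tray in the color game but to the blue-rabbit tray in the shape game; the perseverative sorter continues to place it on the red-boat tray after the switch, as 3-year-olds typically do.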

Performance on measures such as the DCCS is closely related to a wide range of metacognitive skills studied under the rubric of theory of mind (e.g., Carlson & Moses, 2001; Frye et al., 1995). In one standard task, called the representational change task (Gopnik & Astington, 1988), children are shown a familiar container (e.g., a Smarties box) and asked what it contains. Subsequently, the container is opened to reveal something unexpected (e.g., string), and children are asked to recall their initial incorrect expectation about its contents: “What did you think was in the box before I opened it?” To answer the representational change question correctly, children must be able to recollect (or reconstruct) their initial false belief. Most 3-year-olds respond incorrectly, stating (for example) that they initially thought that the box contained string.

Three-year-old children’s difficulty on this type of task has proven remarkably robust. Zelazo and Boseovski (2001), for example, investigated the effect of video reminders on 3-year-olds’ recollection of their initial belief in a representational change task. Children in a video support condition viewed videotapes of their initial, incorrect statements about the misleading container immediately prior to being asked to report their initial belief. For example, they watched a videotape in which they saw a Smarties box for the first time and said, “Smarties.” They were then asked about the videotape and about their initial belief. Despite correctly acknowledging what they had said on the videotape, children typically failed the representational change task. When asked what they initially thought was in the box (or even what they initially said), they answered, “String.”

At 3 years of age, then, children are able to consider two rules in contradistinction (i.e., they can consider a single pair of rules) from a relatively distanced perspective – even if they still cannot adopt the level of consciousness required for such measures as the DCCS and the representational change task. The relatively psychologically distanced perspective made possible by refC1 and the consequent increase in the complexity of children’s rule representations allow for a richer qualitative experience than was possible at SelfC. For example, children can now conceptualize Now from a temporally decentered perspective (McCormack & Hoerl, 1999, 2001; see Figure 15.4d). From this perspective, children are now able to consider two events occurring at two different times. For example, they can consider that Now, EventA is occurring, but Yesterday, EventB occurred.

This important developmental advance allows children to make judgments about history (e.g., now vs. before). For example, in a control task used by Gopnik and Astington (1988, Exp. 1), most 3-year-olds were able to judge that Now there is a doll in a closed toy house but Before there was an apple. At this level of consciousness, however, children cannot differentiate between the history of the world and the history of the self. That is, the objective series and the subjective series remain undifferentiated; the two series are conflated in a single dimension. As a result, 3-year-olds typically fail Gopnik and Astington’s (1988) representational change task, where they must appreciate that they themselves changed from thinking Smarties to thinking string, even while the contents of the box did not change. According to the LOC model, this failure to differentiate between the history of the world and the history of the self occurs because children who are limited to refC1 are only able to use a single pair of rules, which allows them to make a discrimination within a single dimension, but prevents them from making comparisons between dimensions (e.g., between shape and color in the DCCS).

Reflective consciousness 2. Research has revealed that between 3 and 5 years of age, there are important changes in children’s executive function and theory of mind, assessed by a wide variety of measures, and these changes tend to co-occur in individual children (e.g., Carlson & Moses, 2001; Frye et al., 1995; Perner & Lang, 1999). For the measures of executive function and theory of mind that show changes in this age range, children need to understand how two incompatible perspectives are related (e.g., how it is possible to sort the same cards first by shape and then by color; how it is possible for someone to believe one thing when I know something else to be true). According to the LOC model, this understanding is made possible by the further growth of prefrontal cortex and the development of the ability to muster a further level of consciousness – reflective consciousness 2 (refC2). At refC2, the entire contents of refC1 can be considered in relation to a Sdesc of comparable complexity. This perspective allows children to formulate a higher-order rule that integrates the two incompatible perspectives (e.g., past and present self-perspectives in the representational change task, or color vs. shape rules in the DCCS) into a single coherent system and makes it possible to select the perspective from which to reason in response to a given question. (In the absence of the higher-order rule, children will respond from the prepotent perspective.) In terms of Figure 15.3, refC2 allows children to formulate and use a rule like F.

Being able to reflect on their discriminations within a dimension (e.g., shape) and to consider two (or more) dimensions in contradistinction allows children to conceptualize dimensions qua dimensions (see also Smith, 1989). In terms of their understanding of the self in time, this ability to consider dimensions qua dimensions (or series qua series) allows children to differentiate and coordinate two series, the history of the self and the history of the world, from a temporally decentered perspective (see Figure 15.4e). As Bieri (1986) notes, to have a genuine temporal awareness, one must differentiate the progression of the self from the progression of events in the world and then understand the former relative to the latter. (The latter corresponds to the objective series, which ultimately serves as the unifying temporal framework.) In Bieri’s (1986, p. 266, italics in the original) words: “In order to have a genuine temporal awareness, a being must be able to distinguish between the history of the world and the history of its encounters with this world. And the continuously changing temporal perspective . . . is nothing but the continuous process of connecting these two series of events within a representation of one unified time.”

Behavioral evidence of children’s ability to differentiate and yet coordinate the history of the self and the history of the world can be seen in 4- and 5-year-olds’ success on Gopnik and Astington’s (1988) representational change task. In this task, children now appreciate that they themselves changed from thinking Smarties to thinking string, but that the contents of the box did not change. Thus, against the backdrop of Now, children appreciate the history of the world, on the one hand; that is, they appreciate that in the past, EventA (string in the box) occurred, and Now, EventA is still occurring. However, they also appreciate the history of the self, on the other hand: In the past, EventA (believed Smarties in the box) occurred, and Now, EventB is occurring (believe string in the box).

Because refC2 allows children to integrate two incompatible pairs of rules within a single system of rules, it allows them to understand that they can conceptualize a single thing in two distinct ways. For example, they understand that they can conceptualize a red rabbit as a red thing and as a rabbit in the DCCS, and they understand that they can acknowledge that a sponge rock looks like a rock even though it is really a sponge (Flavell, Flavell, & Green, 1983). When applied to time, this understanding permits children potentially to appreciate multiple temporal perspectives on the same event (e.g., that time present is time past in time future). This acquisition, at about 4 or 5 years of age, corresponds to the major developmental change identified in McCormack and Hoerl’s (1999) account of temporal understanding: At a higher level of temporal decentering, children appreciate that multiple temporal perspectives are perspectives onto the same temporal reality, and they acquire the concept of particular times (i.e., that events occur at unique, particular times).

Children’s ability to conceptualize a single thing in multiple ways can also be applied to their understanding of themselves in time, where it allows children potentially to conceptualize themselves from multiple temporal perspectives – to understand themselves as exhibiting both continuity and change in time. Muller and Overton (1998) discuss this understanding in terms of Stern’s (1934/1938) notion of mnemic continuity: “I am the same one who now remembers what I then experienced” (p. 250; italics in the original), and they note that Stern described this understanding as emerging around the fourth year of life.

Work with children during this period of development – the transition to refC2 – has been useful in revealing one of the key roles that language can play in fostering the adoption of higher levels of consciousness; namely, that it can promote reflection within developmental constraints on the highest level of consciousness that children are able to obtain. In particular, labeling one’s subjective experiences helps make those experiences an object of consideration at a higher level of consciousness. Increases in level of consciousness, in turn, allow for the flexible selection of perspectives from which to reason. Therefore, for children who are capable in principle of adopting a particular higher level of consciousness, labeling perspectives at the next lower level will increase the likelihood that they will in fact adopt this higher level of consciousness, facilitating cognitive flexibility.

The effect of labeling on levels of consciousness and flexibility can be illustrated by work by Jacques, Zelazo, Lourenco, and Sutherland (2007), using the Flexible Item Selection Task (see also Jacques & Zelazo, 2005). On each trial of the task, children are shown sets of three items designed so that one pair matches on one dimension and a different pair matches on a different dimension (e.g., a small yellow teapot, a large yellow teapot, and a large yellow shoe). Children are first told to select one pair (i.e., Selection 1) and then asked to select a different pair (i.e., Selection 2). To respond correctly, children must represent the pivot item (i.e., the large yellow teapot) according to both dimensions. Four-year-olds generally perform well on Selection 1 but poorly on Selection 2, indicating inflexibility (Jacques & Zelazo, 2001). According to the LOC model, although 4-year-olds may not do so spontaneously, they should be capable of comprehending two perspectives on a single item (as indicated, e.g., by successful performance on the Dimensional Change Card Sort and a variety of measures of perspective taking; Carlson & Moses, 2001; Frye et al., 1995). Therefore, the model predicts that asking 4-year-old children to label their perspective on Selection 1 (e.g., “Why do those two pictures go together?”) should cause them to make that subjective perspective an object of consciousness, necessarily positioning them at a higher level of consciousness from which it is possible to reflect on their initial perspective. From this higher level of consciousness (i.e., the perspective of Rule F in Figure 15.2), it should be easier to adopt a different perspective on Selection 2, which is exactly what Jacques et al. (2007) found. This was true whether children provided the label themselves or whether the experimenter generated it for them.

In general, the adoption of a higher-order perspective allows for both greater influence of conscious thought on language and greater influence of language on conscious thought. On the one hand, it allows for more effective selection and manipulation of rules (i.e., it permits the control of language in the service of thought). On the other hand, it permits children to respond more appropriately to linguistic meaning despite a misleading context – allowing language to influence thought. An example comes from a recent study of 3- to 5-year-olds’ flexible understanding of the adjectives “big” and “little” (Gao, Zelazo, & DeBarbara, 2005). When shown a medium-sized square together with a larger one, 3-year-olds had little difficulty answering the question, “Which one of these two squares is a big one?” However, when the medium square was then paired with a smaller one, and children were asked the same question, only 5-year-olds reliably indicated that the medium square was now the big one. This example shows an age-related increase in children’s sensitivity to linguistic meaning when it conflicts with children’s immediate experience, and it reveals that interpretation becomes decoupled, to some degree, from stimulus properties.

Another example of the same phenomenon comes from a study by Deak (2000), who examined 3- to 6-year-olds’ use of a series of different predicates (“looks like a . . . ,” “is made of . . . ,” or “has a . . . ”) to infer the meanings of novel words. He found that 3-year-olds typically used the first predicate appropriately to infer the meaning of the first novel word in a series, but then proceeded to use that same predicate to infer the meanings of subsequent words despite what the experimenter said. In contrast, older children used the most recent predicate cues. Again, children are increasingly likely to use language to restrict their attention to the appropriate aspects of a situation (or referent).

Notice that language and conscious thought become increasingly intertwined in a complex, reciprocal relation, as Vygotsky (1934/1986) observed. Thus, language (e.g., labeling) influences thought (e.g., by promoting a temporary ascent to a higher level of consciousness), which in turn influences language, and so on. This reciprocal relation can be seen in the growing richness of children’s semantic understanding and in the increasing subtlety of their word usage. Consider, for instance, children’s developing understanding of the semantics of the verb hit. Children first understand hit from its use to depict simple accidental actions (e.g., an utterance by a child at 2;4.0: Table hit head; Gao, 2001, p. 220). Usage is initially restricted to particular contexts. Eventually, however, reflection on this usage allows children to employ the word in flexible and creative ways (e.g., I should hit her with a pencil and a stick, uttered metaphorically by the same child at 3;8.6; Gao, 2001, p. 219).

Summary and Topics for Future Research

According to the LOC model, there are at least four age-related increases in the highest level of consciousness that children can muster, and each level has distinct consequences for the quality of subjective experience, the potential for episodic recollection, the complexity of children’s explicit knowledge structures, and the possibility of the conscious control of thought, emotion, and action. Higher levels of consciousness in this hierarchical model are brought about by the iterative reprocessing of the contents of consciousness via thalamocortical circuits involving regions of prefrontal cortex. Each degree of reprocessing recruits another region of prefrontal cortex and results in a higher level of consciousness, and this in turn allows for a stimulus to be considered relative to a larger interpretive context.

This model aims to provide a comprehensive account of the development of consciousness in childhood that addresses extant data on the topic and establishes a framework from which testable predictions can be derived. Naturally, however, there is considerable work to be done. Among the many questions for future research, a few seem particularly pressing. First, future research will need to explore the possibility that there are further increases in children’s highest level of consciousness beyond the refC2 level identified in the LOC model. Compared to early childhood, relatively little is known about the development of consciousness in adolescence, although it is clear that the conscious control of thought, action, and emotion develops considerably during this period. Indeed, to the extent that these functions are dependent on prefrontal cortex, which continues to develop into adulthood (e.g., Giedd et al., 1999), further age-related increases in the highest level of consciousness seem likely.

Second, future research should continue to search for more precise neural markers of the development of consciousness. Among the possible indices are increases in neural coherence, dimensional complexity, and/or the amount or dominant frequency of gamma EEG power. Such increases could be associated with the binding together of the increasingly complex hierarchical networks of prefrontal cortical regions that we have proposed are associated with higher levels of consciousness. Dimensional complexity (DCx), for example, is a non-linear measure of global dynamical complexity (for review, see Anokhin et al., 2000) that can be derived from EEG data. In a cross-sectional study of children and adults (ages 7 to 61 years), Anokhin et al. (1996) found that whereas raw alpha and theta power only changed until early adulthood, structural DCx continued to increase across the life span. Other research indicates that DCx may show particularly prominent increases during adolescence (Anokhin et al., 2000; Farber & Dubrovinskaya, 1991; Meyer-Lindenberg, 1996). In a study comparing children and adolescents (mean ages 7.5, 13.8, and 16.4 years), Anokhin et al. (2000) found that both resting and task-related complexity (in visual and spatial cognitive tasks) increased with age, as did the difference between resting and task-related DCx.

Finally, although formulated to explain developmental data, the LOC model suggests a framework for understanding the vagaries of human consciousness across the life span, and future research should explore the extent to which this framework is useful for understanding the role of consciousness in adult behavior. One application is to research on mindfulness (e.g., Brown & Ryan, 2003; see Chapter 19). Acting mindfully (and "super-intending" one's behavior) may involve adopting higher levels of consciousness and coordinating these levels so that they are all focused on a single thing: a single object of consciousness. This coordination of levels of consciousness on a single object would result in an experience that differs dramatically from the kind of multitasking observed, for example, when driving a car at the level of MinC but carrying on a conversation at a higher level of consciousness. Conceptualizing mindfulness in terms of the LOC model yields predictions regarding the effects of mindfulness meditation on behavior (e.g., attentional control) and neural function (e.g., increasingly elaborated hierarchies of prefrontal cortical regions). From this perspective, mindfulness meditation practice can be seen as a type of training that may increase an individual's ability to enter a more coherent (coordinated) hierarchy of levels of consciousness.

Conclusion

Discussions regarding the development of consciousness have focused on the question of when consciousness emerges, with different authors relying on different notions of consciousness and different criteria for determining whether consciousness is present. In this chapter, we presented a comprehensive model of consciousness and its development that we believe helps clarify the way different aspects of consciousness do indeed emerge at different ages. Our hope is that this model provides a useful framework for thinking about consciousness as a complex, dynamic phenomenon that is closely tied to neural function, on the one hand, and cognitive control, on the other.

Acknowledgments

Preparation of this article was supported by grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Canada Research Chairs Program.

References

Amsterdam, B. (1972). Mirror self-image reactions before age two. Developmental Psychobiology, 5, 297–305.

Anand, K. J. S. (2005). A scientific appraisal of fetal pain and conscious sensory perception. Report to the Constitution Subcommittee of the U.S. House of Representatives (109th United States Congress).

Anand, K. J. S., & Hickey, P. R. (1987). Pain and its effects in the human neonate and fetus. New England Journal of Medicine, 317, 1321–1329.

Anokhin, A. P., Birnbaumer, N., Lutzenberger, W., Nikolaev, A., & Vogel, F. (1996). Age increases brain complexity. Electroencephalography and Clinical Neurophysiology, 99, 63–68.

Anokhin, A. P., Vedeniapin, A. B., Sirevaag, E. J., Bauer, L. O., O'Conner, S. J., Kuperman, S., Porjesz, B., Reich, T., Begleiter, H., Polich, J., & Rohrbaugh, J. W. (2000). The P300 brain potential is reduced in smokers. Psychopharmacology, 149, 409–413.

Armstrong, D. M. (1968). A materialist theory of the mind. London: Routledge.

Armstrong, D. M. (1980). The nature of mind and other essays. Ithaca, NY: Cornell University Press.

Baird, A. A., Kagan, J., Gaudette, T., Walz, K. A., Hershlag, N., & Boas, D. A. (2002). Frontal lobe activation during object permanence: Data from near-infrared spectroscopy. NeuroImage, 16, 1120–1126.

Baldwin, J. M. (1892). Origin of volition in childhood. Science, 20, 286–288.

Baldwin, J. M. (1894). Imitation: A chapter in the natural history of consciousness. Mind, 3, 25–55.

Baldwin, J. M. (1897). Social and ethical interpretations in mental development: A study in social psychology. New York: Macmillan.

Baldwin, J. M. (1906). Thought and things: A study of the development and meaning of thought, or genetic logic (Vol. 1). London: Swan Sonnenschein & Co.

Barresi, J., & Moore, C. (1996). Intentional relations and social understanding. Behavioral and Brain Sciences, 19, 104–154.

Bell, M. A., & Fox, N. A. (1992). The relations between frontal brain electrical activity and cognitive development during infancy. Child Development, 63, 1142–1163.

Bell, M. A., & Fox, N. A. (1997). Individual differences in object permanence performance at 8 months: Locomotor experience and brain electrical activity. Developmental Psychology, 31, 287–297.

Bieri, P. (1986). Zeiterfahrung und Personalität. In H. Burger (Ed.), Zeit, Natur und Mensch (pp. 261–281). Berlin: Arno Spitz Verlag.

Brentano, F. (1973). Psychology from an empirical standpoint. London: Routledge & Kegan Paul. (Original work published 1874)

Brown, K. W., & Ryan, R. M. (2003). The benefits of being present: The role of mindfulness in psychological well-being. Journal of Personality and Social Psychology, 84, 822–848.

Bunge, S. A. (2004). How we use rules to select actions: A review of evidence from cognitive neuroscience. Cognitive, Affective, and Behavioral Neuroscience, 4, 564–579.

Bunge, S. A., & Zelazo, P. D. (2006). A brain-based account of the development of rule use in childhood. Current Directions in Psychological Science, 15, 118–121.

Burgess, J. A., & Tawia, S. A. (1996). When did you first begin to feel it? Locating the beginning of consciousness. Bioethics, 10, 1–26.

Cahan, E. (1984). The genetic psychologies of James Mark Baldwin and Jean Piaget. Developmental Psychology, 20, 128–135.

Carlson, S. M., Davis, A. C., & Leach, J. G. (2005). Less is more: Executive function and symbolic representation in preschool children. Psychological Science, 16, 609–616.

Carlson, S. M., & Moses, L. J. (2001). Individual differences in inhibitory control and theory of mind. Child Development, 72, 1032–1053.

Carruthers, P. K. (1989). Brute experience. Journal of Philosophy, 86, 258–269.

Carruthers, P. K. (1996). Language, thought, and consciousness: An essay in philosophical psychology. New York: Cambridge University Press.

Carruthers, P. K. (2000). Phenomenal consciousness: A naturalistic theory. New York: Cambridge University Press.

Chalmers, D. J. (1996). The conscious mind. Oxford: Oxford University Press.

Chugani, H. T., & Phelps, M. E. (1986). Maturational changes in cerebral function in infants determined by 18FDG positron emission tomography. Science, 231, 840–843.

Cleeremans, A., & Jimenez, L. (2002). Implicit learning and consciousness: A graded, dynamic perspective. In R. M. French & A. Cleeremans (Eds.), Implicit learning and consciousness: An empirical, philosophical and computational consensus in the making (pp. 1–40). Hove, England: Psychology Press.

Cohen, L. B. (1998). An information-processing approach to infant perception and cognition. In F. Simion & G. Butterworth (Eds.), The development of sensory, motor, and cognitive capacities in early infancy (pp. 277–300). East Sussex: Psychology Press.

Craik, F., & Lockhart, R. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684.

Damasio, A. R. (1999). The feeling of what happens. New York: Harcourt Press.

Deak, G. O. (2000). The growth of flexible problem-solving: Preschool children use changing verbal cues to infer multiple word meanings. Journal of Cognition and Development, 1, 157–192.

DeCasper, A., Lecanuet, J.-P., Busnel, M.-C., Granier-Deferre, C., & Mangeais, R. (1994). Fetal reactions to recurrent maternal speech. Infant Behavior and Development, 17, 159–164.

Dehaene, S., & Changeux, J.-P. (2004). Neural mechanisms for access to consciousness. In M. Gazzaniga (Ed.), The cognitive neurosciences (3rd ed., pp. 1145–1157). Cambridge, MA: MIT Press.

Dewey, J. (1896). Review of Studies in the evolutionary psychology of feeling. Philosophical Review, 5, 292–299.

Dewey, J. (1985). Context and thought. In J. A. Boydston (Ed.) & A. Sharpe (Textual Ed.), John Dewey: The later works, 1925–1953 (Vol. 6: 1931–1932, pp. 3–21). Carbondale, IL: Southern Illinois University Press. (Original work published 1931)

Dienes, Z., & Perner, J. (1999). A theory of implicit and explicit knowledge (target article). Behavioral and Brain Sciences, 22, 735–755.

DiPietro, J. A., Caulfield, L. E., Costigan, K. A., Merialdi, M., Nguyen, R. H., Zavaleta, N., & Gurewitsch, E. D. (2004). Fetal neurobehavioral development: A tale of two cities. Developmental Psychology, 40, 445–456.

Edelman, G. M. (1989). Neural Darwinism: The theory of neuronal group selection. Oxford: Oxford University Press.

Edelman, G. M., & Tononi, G. (2000). A universe of consciousness. New York: Basic Books.

Farber, D. A., & Dubrovinskaya, N. V. (1991). Organization of developing brain functions: Age-related differences and some general principles. Human Physiology, 19, 326–335.

Ferrari, M., Pinard, A., & Runions, K. (2001). Piaget's framework for a scientific study of consciousness. Human Development, 44, 195–213.

Flavell, J. H., Flavell, E. R., & Green, F. L. (1983). Development of the appearance-reality distinction. Cognitive Psychology, 17, 99–103.

Flavell, J. H., Flavell, E. R., Green, F. L., & Korfmacher, J. E. (1990). Do young children think of television images as pictures or real objects? Journal of Broadcasting and Electronic Media, 34, 399–419.

Freud, S. (1940). Neue Folge der Vorlesungen zur Einführung in die Psychoanalyse [New introductory lectures on psychoanalysis]. In A. Freud, E. Bibring, & E. Kris (Eds.), Gesammelte Werke: XV (Whole volume). London: Imago Publishing. (Original work published 1933)

Friedman, W. J. (1993). Memory for the time of past events. Psychological Bulletin, 113, 44–66.

Frye, D. (1991). The origins of intention in infancy. In D. Frye & C. Moore (Eds.), Children's theories of mind: Mental states and social understanding (pp. 15–38). Hillsdale, NJ: Erlbaum.

Frye, D., Zelazo, P. D., & Palfai, T. (1995). Theory of mind and rule-based reasoning. Cognitive Development, 10, 483–527.

Gallagher, S. (2005). How the body shapes the mind. Oxford: Oxford University Press/Clarendon Press.

Gao, H. (2001). The physical foundation of the patterning of physical action verbs. Lund, Sweden: Lund University Press.

Gao, H. H., Zelazo, P. D., & DeBarbara, K. (2005, April). Beyond early linguistic competence: Development of children's ability to interpret adjectives flexibly. Paper presented at the 2005 Biennial Meeting of the Society for Research in Child Development, Atlanta, GA.

Giedd, J. N., Blumenthal, J., Jeffries, N. O., Castellanos, F. X., Liu, H., Zijdenbos, A., Paus, T., Evans, A. C., & Rapoport, J. L. (1999). Brain development during childhood and adolescence: A longitudinal MRI study. Nature Neuroscience, 2, 861–863.

Gogtay, N., Giedd, J. N., Lusk, L., Hayashi, K. M., Greenstein, D., Vaituzis, A. C., et al. (2004). Dynamic mapping of human cortical development during childhood through early adulthood. Proceedings of the National Academy of Sciences USA, 101(21), 8174–8179.

Gopnik, A., & Astington, J. W. (1988). Children's understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction. Child Development, 59, 26–37.

Groome, L. J., Gotlieb, S. J., Neely, C. L., & Waters, M. D. (1993). Developmental trends in fetal habituation to vibroacoustic stimulation. American Journal of Perinatology, 10, 46–49.

Haith, M. M., Hazan, C., & Goodman, G. S. (1988). Expectation and anticipation of dynamic visual events by 3.5-month-old babies. Child Development, 59, 467–479.

Harris, P. L., & Kavanaugh, R. D. (1993). Young children's understanding of pretence. Monographs of the Society for Research in Child Development (Serial No. 237).

Howe, M., & Courage, M. (1997). The emergence and early development of autobiographical memory. Psychological Review, 104, 499–523.

Hrbek, A., Karlberg, P., & Olsson, T. (1973). Development of visual and somatosensory responses in pre-term newborn infants. Electroencephalography and Clinical Neurophysiology, 34, 225–232.

Jacques, S., & Zelazo, P. D. (2001). The flexible item selection task (FIST): A measure of executive function in preschoolers. Developmental Neuropsychology, 20, 573–591.

Jacques, S., & Zelazo, P. D. (2005). Language and the development of cognitive flexibility: Implications for theory of mind. In J. W. Astington & J. A. Baird (Eds.), Why language matters for theory of mind (pp. 144–162). New York: Oxford University Press.

Jacques, S., Zelazo, P. D., Lourenco, S. F., & Sutherland, A. E. (2007). The roles of labeling and abstraction in the development of cognitive flexibility. Manuscript submitted for publication.

James, W. (1950). The principles of psychology (Vol. 1). New York: Dover. (Original work published 1890)

Kagan, J. (1981). The second year: The emergence of self-awareness. Cambridge, MA: Harvard University Press.

Kagan, J. (1998). Three seductive ideas. Cambridge, MA: Harvard University Press.

Karmiloff-Smith, A. (1992). Beyond modularity: A developmental perspective on cognitive science. Cambridge, MA: MIT Press.

Kisilevsky, B. S., Hains, S. M. J., Lee, K., Xie, X., Huang, H., Ye, H. H., Zhang, K., & Wang, Z. (2003). Effects of experience on fetal voice recognition. Psychological Science, 14, 220–224.

Kisilevsky, B. S., Muir, D. W., & Low, J. A. (1992). Maturation of human fetal responses to vibroacoustic stimulation. Child Development, 63, 1497–1508.

Klimach, V. J., & Cooke, R. W. I. (1988). Maturation of the neonatal somatosensory evoked response in preterm infants. Developmental Medicine & Child Neurology, 30, 208–214.

Lee, S. J., Ralston, H. J. P., Drey, E. A., Partridge, J. C., & Rosen, M. A. (2005). Fetal pain: A systematic multidisciplinary review of the evidence. Journal of the American Medical Association, 294, 947–954.

Lewis, M. (2003). The development of self-consciousness. In J. Roessler & N. Eilan (Eds.), Agency and self-awareness (pp. 275–295). Oxford: Oxford University Press.

Lewis, M., & Brooks-Gunn, J. (1979). Social cognition and the acquisition of self. New York: Plenum.

Lipsitt, L. P. (1986). Toward understanding the hedonic nature of infancy. In L. P. Lipsitt & J. H. Cantor (Eds.), Experimental child psychologist: Essays and experiments in honor of Charles C. Spiker (pp. 97–109). Hillsdale, NJ: Erlbaum.

Luria, A. R. (1959). The directive function of speech in development and dissolution. Part I: Development of the directive function of speech in early childhood. Word, 15, 341–352.

Luria, A. R. (1961). Speech and the regulation of behaviour. London: Pergamon Press.

Marcovitch, S., & Zelazo, P. D. (1999). The A-not-B error: Results from a logistic meta-analysis. Child Development, 70, 1297–1313.

McCormack, T., & Hoerl, C. (1999). Memory and temporal perspective: The role of temporal frameworks in memory development. Developmental Review, 19, 154–182.

McCormack, T., & Hoerl, C. (2001). The child in time: Episodic memory and the concept of the past. In C. Moore & K. Lemmon (Eds.), Self in time: Developmental issues (pp. 203–227). Mahwah, NJ: Erlbaum.

McDonough, L., Mandler, J. M., McKee, R. D., & Squire, L. R. (1995). The deferred imitation task as a nonverbal measure of declarative memory. Proceedings of the National Academy of Sciences USA, 92, 7580–7584.

Meltzoff, A. N. (1985). Immediate and deferred imitation in 14- and 24-month-old infants. Child Development, 56, 62–72.

Meltzoff, A. N. (1988). Infant imitation and memory: Nine-month-olds in immediate and deferred tests. Child Development, 59, 217–225.

Meltzoff, A. (2002). Imitation as a mechanism of social cognition: Origins of empathy, theory of mind, and the representation of action. In U. Goswami (Ed.), Handbook of childhood cognitive development (pp. 6–25). London: Blackwell.

Meltzoff, A. N., & Moore, M. K. (1994). Imitation, memory, and the representation of persons. Infant Behavior and Development, 17, 83–99.

Meyer-Lindenberg, A. (1996). The evolution of complexity in human brain development: An EEG study. Electroencephalography and Clinical Neurophysiology, 99, 405–411.

Morin, A. (2004, August). Levels of consciousness. Science & Consciousness Review, 2.

Morin, A. (2006). Levels of consciousness and self-awareness: A comparison and integration of various neurocognitive views. Consciousness and Cognition, 15, 358–371.

Moscovitch, M. M. (1989). Confabulation and the frontal systems: Strategic versus associative retrieval in neuropsychological theories of memory. In H. L. Roediger & F. I. M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 133–160). Mahwah, NJ: Erlbaum.

Müller, U., & Overton, W. F. (1998). How to grow a baby: A reevaluation of image-schema and Piagetian action approaches to representation. Human Development, 41, 71–111.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83, 435–450.

O'Donnell, S., Noseworthy, M. D., Levine, B., & Dennis, M. (2005). Cortical thickness of the frontopolar area in typically developing children and adolescents. NeuroImage, 24, 948–954.

Perner, J., & Dienes, Z. (2003). Developmental aspects of consciousness: How much of a theory of mind do you need to be consciously aware? Consciousness and Cognition, 12, 63–82.

Perner, J., & Lang, B. (1999). Development of theory of mind and executive control. Trends in Cognitive Sciences, 3, 337–344.

Perner, J., & Ruffman, T. (1995). Episodic memory and autonoetic consciousness: Developmental evidence and a theory of childhood amnesia. Journal of Experimental Child Psychology, 59, 516–548.

Perruchet, P., & Vinter, A. (2002). The self-organizing consciousness. Behavioral and Brain Sciences, 25, 297–388.

Piaget, J. (1952). The origins of intelligence in children (M. Cook, Trans.). New York: Vintage. (Original work published 1936)

Piaget, J. (1977). The grasp of consciousness (S. Wedgewood, Trans.). Cambridge, MA: Harvard University Press. (Original work published 1974)

Povinelli, D. J. (1995). The unduplicated self. In P. Rochat (Ed.), The self in infancy: Theory and research (pp. 161–192). New York: Elsevier.

Povinelli, D. J. (2001). The self: Elevated in consciousness and extended in time. In C. Moore & K. Lemmon (Eds.), The self in time: Developmental perspectives (pp. 73–94). New York: Cambridge University Press.

Povinelli, D. J., Landau, K. R., & Perilloux, H. K. (1996). Self-recognition in young children using delayed versus live feedback: Evidence of a developmental asynchrony. Child Development, 67, 1540–1554.

Reznick, J. S. (1994). In search of infant expectation. In M. Haith, J. Benson, B. Pennington, & R. Roberts (Eds.), The development of future-oriented processes (pp. 39–59). Chicago: University of Chicago Press.

Rochat, P. (2001). The infant's world. Cambridge, MA: Harvard University Press.

Rochat, P. (2003). Five levels of self-awareness as they unfold early in life. Consciousness and Cognition, 12, 717–731.

Rochat, P., & Morgan, R. (1995). Spatial determinants in the perception of self-produced leg movements in 3- to 5-month-old infants. Developmental Psychology, 31, 626–636.

Rochat, P., & Striano, T. (2000). Perceived self in infancy. Infant Behavior and Development, 23, 513–530.

Rosenthal, D. M. (1986). Two concepts of consciousness. Philosophical Studies, 49, 329–359.

Rosenthal, D. (2000). Consciousness, content, and metacognitive judgements. Consciousness and Cognition, 9, 203–214.

Rosenthal, D. M. (2005). Consciousness, interpretation, and higher-order thought. In P. Giampieri-Deutsch (Ed.), Psychoanalysis as an empirical, interdisciplinary science: Collected papers on contemporary psychoanalytic research (pp. 119–142). Vienna: Verlag der Österreichischen Akademie der Wissenschaften (Austrian Academy of Sciences Press).

Schacter, D. L. (1989). On the relation between memory and consciousness: Dissociable interactions and conscious experience. In H. L. Roediger & F. I. M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 355–389). Mahwah, NJ: Erlbaum.

Schooler, J. W. (2002). Re-representing consciousness: Dissociations between experience and meta-consciousness. Trends in Cognitive Sciences, 6, 339–344.

Sigel, I. (1993). The centrality of a distancing model for the development of representational competence. In R. R. Cocking & K. A. Renninger (Eds.), The development and meaning of psychological distance (pp. 91–107). Hillsdale, NJ: Erlbaum.

Siqueland, E. R., & Lipsitt, L. P. (1966). Conditioned head-turning in human newborns. Journal of Experimental Child Psychology, 3, 356–376.

Smith, L. B. (1989). From global similarities to kinds of similarities: The construction of dimensions in development. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 146–178). Cambridge: Cambridge University Press.

Stern, D. (1990). Diary of a baby. New York: Basic Books.

Stern, W. (1938). General psychology from the personalistic standpoint. New York: Macmillan. (Original work published 1934)

Swain, I. U., Zelazo, P. R., & Clifton, R. K. (1993). Newborn infants' memory for speech sounds retained over 24 hours. Developmental Psychology, 29, 312–323.

Torres, F., & Anderson, C. (1985). The normal EEG of the human newborn. Journal of Clinical Neurophysiology, 2, 89–103.

Trevarthen, C., & Aitken, K. J. (2001). Infant intersubjectivity: Research, theory, and clinical applications. Journal of Child Psychology & Psychiatry, 42, 3–48.

Tulving, E. (1985). Memory and consciousness. Canadian Psychology, 25, 1–12.

Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.

Vygotsky, L. S. (1986). Thought and language (A. Kozulin, Ed.). Cambridge, MA: MIT Press. (Original work published 1934)

Weiskrantz, L., Sanders, M. D., & Marshall, J. (1974). Visual capacity in the hemianopic field following a restricted occipital ablation. Brain, 97, 709–728.

Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72, 655–684.

Wheeler, M. (2000). Varieties of consciousness and memory in the developing child. In E. Tulving (Ed.), Memory, consciousness, and the brain: The Tallinn conference (pp. 188–199). London: Psychology Press.

Zelazo, P. D. (1999). Language, levels of consciousness, and the development of intentional action. In P. D. Zelazo, J. W. Astington, & D. R. Olson (Eds.), Developing theories of intention: Social understanding and self-control (pp. 95–117). Mahwah, NJ: Erlbaum.

Zelazo, P. D. (2004). The development of conscious control in childhood. Trends in Cognitive Sciences, 8, 12–17.

Zelazo, P. D., & Boseovski, J. (2001). Video reminders in a representational change task: Memory for cues but not beliefs or statements. Journal of Experimental Child Psychology, 78, 107–129.

Zelazo, P. D., & Jacques, S. (1996). Children's rule use: Representation, reflection and cognitive control. Annals of Child Development, 12, 119–176.

Zelazo, P. D., Müller, U., Frye, D., & Marcovitch, S. (2003). The development of executive function in early childhood. Monographs of the Society for Research in Child Development, 68(3), Serial No. 274.

Zelazo, P. D., & Reznick, J. S. (1991). Age-related asynchrony of knowledge and action. Child Development, 62, 719–735.

Zelazo, P. D., & Sommerville, J. (2001). Levels of consciousness of the self in time. In C. Moore & K. Lemmon (Eds.), Self in time: Developmental issues (pp. 229–252). Mahwah, NJ: Erlbaum.

Zelazo, P. D., Sommerville, J. A., & Nichols, S. (1999). Age-related changes in children's use of representations. Developmental Psychology, 35, 1059–1071.

Zelazo, P. R., & Zelazo, P. D. (1998). The emergence of consciousness. Advances in Neurology, 77, 149–165.

F. Alternative States of Consciousness

CHAPTER 16

States of Consciousness: Normal and Abnormal Variation

J. Allan Hobson

Abstract

The goal of this chapter is to give an account of the phenomenology of the variations in conscious state, and to show how three mediating brain processes (activation, input-output gating, and modulation) interact over time so as to account for those variations in a unified way.

The chapter focuses on variations in consciousness during the sleep-wake cycle across species and draws on evidence from lesion, electrophysiological, and functional neuroimaging studies. A four-dimensional model called AIM pictorializes both normal and abnormal changes in brain state and provides a unified view of the genesis of a wide variety of normal and abnormal changes in conscious experience.

Introduction

The changes in brain state that result in normal and abnormal changes in the state of the mind all share a common process: an alteration in the influence of lower centers, principally located in the brainstem, upon the thalamus and cortex located in the upper brain. This means that consciousness is state dependent and that understanding the mechanisms of brain state control contributes indirectly to a solution of the mind-brain problem.

The normal and abnormal variations in conscious state operate through three fairly well-understood physiological processes: activation (A), input-output gating (I), and modulation (M) (see Figure 16.1).
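The idea of locating conscious states as points along these three dimensions can be sketched in code. This is a hedged illustration only: the `AIMState` class is invented for this sketch, and the numeric coordinates are placeholders reflecting how waking, NREM sleep, and REM sleep are conventionally situated along the A, I, and M axes, not values given in this chapter.

```python
from dataclasses import dataclass

@dataclass
class AIMState:
    """A brain state as a point in the AIM state space (all values 0..1)."""
    activation: float    # A: overall brain activation (1 = high)
    input_gating: float  # I: weighting of external vs. internal input (1 = external)
    modulation: float    # M: aminergic vs. cholinergic balance (1 = aminergic)

# Illustrative placements of the three cardinal states:
WAKE = AIMState(activation=0.9, input_gating=0.9, modulation=0.9)
NREM = AIMState(activation=0.3, input_gating=0.4, modulation=0.5)
REM  = AIMState(activation=0.9, input_gating=0.1, modulation=0.1)
```

The design choice here is simply that a state is a coordinate, so transitions over the sleep-wake cycle become trajectories through the space: REM, for instance, pairs wake-like activation with gated input and cholinergic modulation.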

Definition and Components of Consciousness

Consciousness may be defined as our awareness of our environment, our bodies, and ourselves. Awareness of ourselves implies an awareness of awareness; that is, the conscious recognition that we are conscious beings. In other words, awareness of oneself implies meta-awareness.

To develop an experimental, scientific approach to the study of consciousness, it is convenient to subdivide the mental elements that constitute consciousness. We may discern at least the ten distinct capacities of mind defined in Table 16.1. These are the faculties of the mind that have been investigated by scientific psychologists since their formulation by William James in 1890. From an examination of this table, it can be appreciated that consciousness is componential. That is to say, consciousness is made up of the many faculties of mind, which are seamlessly integrated in our conscious experience. It should be noted that all of these functions are also mediated unconsciously or implicitly.

Only human beings fulfill all of the demands of the definition of consciousness given above and the components listed in Table 16.1. And humans are only fully conscious when they are awake. It is evident that higher mammals have many of the components of consciousness and may thus be considered to be partially conscious. Consciousness is thus graded in both the presence and intensity of its components.

In Edelman's terms (1992), animals possess primary consciousness, which comprises sensory awareness, attention, perception, memory (or learning), emotion, and action. This point is of more than theoretical interest because so much of what we know about the brain physiology upon which consciousness depends comes from experimental work in animals. In making inferences about how our own conscious experience is mediated by the brain, the attribution of primary consciousness to animals is not only naturalistic but also strategic.

What differentiates humans from their fellow mammals and gives humans what Edelman calls secondary consciousness depends upon language and the associated enrichment of cognition that allow humans to develop and to use verbal and numeric abstractions. These mental capacities contribute to our sense of self as agents and as creative beings. They also determine the awareness of awareness that we assume our animal collaborators do not possess.

Because the most uniquely human cognitive faculties are likely to be functions of our massive cerebral cortex, it is unlikely that the study of animal brains will ever tell us what we would like to know about these aspects of consciousness. Nonetheless, animals can and do tell us a great deal about how other components of consciousness change with changes in brain physiology. The reader who wishes to learn more about the brain basis of consciousness may wish to consult Hobson (1998).

It is obvious that when we go to sleep we lose sensation and the ability to act upon the world. In varying degrees, all the components of consciousness listed in Table 16.1 are changed as the brain changes state. According to the conscious state paradigm, consciousness changes state in a repetitive and stereotyped way over the sleep-wake cycle. These changes are so dramatic that we can expect to make strong

Table 16.1. Definition of components of consciousness

Attention     Selection of input data
Perception    Representation of input data
Memory        Retrieval of stored representations
Orientation   Representation of time, place, and person
Thought       Reflection upon representations
Narrative     Linguistic symbolization of representations
Emotion       Feelings about representations
Instinct      Innate propensities to act
Intention     Representations of goals
Volition      Decisions to act

Figure 16.1. The AIM model (axes: Activation, Input, Mode).



states of consciousness: normal and abnormal variation 437

inferences about the major physiological underpinnings of consciousness.

Two conclusions stem from this recognition: The first is that consciousness is graded within and across individuals and species. The second is that consciousness is altered more radically by diurnal changes in brain state than it has been by millions of years of evolution. We take advantage of these two facts by studying normal sleep in humans and in those subhuman species with primary consciousness.

The Sleep-Waking Cycle

When humans go to sleep they rapidly become less conscious. The initial loss of awareness of the external world that may occur when we are reading in bed is associated with the slowing of the EEG that is called Stage I (see Fig. 16.2). Frank sleep onset is defined by the appearance of a characteristic EEG wave, the sleep spindle, which reflects independent oscillation of the thalamocortical system.

Consciousness is altered in a regular way at sleep onset. Although awareness of the outside world is lost, subjects may continue to have visual imagery and associated reflective consciousness. Sleep-onset dreams are short-lived, and their content departs progressively from the contents of previous waking consciousness. They are associated with Stage I EEG, rapidly decreasing muscle tone, and slow rolling eye movements. As the brain activation level falls further, consciousness is further altered and may be obliterated as the EEG spindles of Stage II NREM sleep block the thalamocortical transmission of both external and internal signals within the brain. When the spindles of Stage II are joined by high-voltage slow waves in over half the record, the sleep is called NREM Stage III; it is called NREM Stage IV when the whole record comes to be dominated by the slow waves.

Figure 16.2. The sleep-wake cycle.

Arousal from NREM Stage IV is difficult, often requiring strong and repeated stimulation. On arousal, subjects evince confusion and disorientation that may take minutes to subside. The tendency to return to sleep is strong. This process, which is called sleep inertia, is enhanced in recovery sleep following sleep deprivation (Dinges et al., 1997).
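Purely as an illustration, the staging criteria described above (the sleep spindle marks Stage II; slow waves in over half the record mark Stage III; a record dominated by slow waves marks Stage IV) can be sketched as a toy classifier. The function name, the two-feature encoding, and the 0.9 cutoff for "dominated" are assumptions made for this sketch, not part of the chapter:

```python
# Toy sketch of the NREM staging rules described in the text.
# Features: whether sleep spindles are present in the EEG epoch, and the
# fraction of the record occupied by high-voltage slow waves.
# The 0.9 threshold for "dominated by slow waves" is an assumed value.

def nrem_stage(spindles_present: bool, slow_wave_fraction: float) -> str:
    """Classify an EEG epoch into NREM Stage I-IV from two features."""
    if slow_wave_fraction >= 0.9:
        return "Stage IV"   # whole record dominated by slow waves
    if spindles_present and slow_wave_fraction > 0.5:
        return "Stage III"  # spindles joined by slow waves in over half the record
    if spindles_present:
        return "Stage II"   # frank sleep onset: spindles appear
    return "Stage I"        # slowed EEG, no spindles yet

print(nrem_stage(spindles_present=True, slow_wave_fraction=0.2))   # Stage II
print(nrem_stage(spindles_present=True, slow_wave_fraction=0.95))  # Stage IV
```

The point of the sketch is only that staging is rule-based: each stage is defined by the presence of discrete EEG graphoelements, not by subjective report.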

As the activation level falls, producing the sequence of sleep Stages I to IV, muscle tone continues to abate passively and the rolling eye movements cease. In Stage IV, the brain is maximally deactivated, and responsiveness to external stimuli is at its lowest point. Consciousness, if it is present at all, is limited to low-level, non-progressive thought. It is important to note the following points about these facts. The first is that, because consciousness rides on the crest of the brain activation process, even slight dips in activation level lead to lapses in waking vigilance. The second is that even in the depths of Stage IV NREM sleep, when consciousness appears to be largely obliterated, the brain remains highly active and is still capable of processing its own information. From PET and single-neurone studies, it can safely be concluded that the brain remains about 80% active in the depths of sleep.

These conclusions not only emphasize the graded and state-dependent nature of consciousness. They also indicate how small a fraction of brain activation is devoted to consciousness and that most brain activity is not associated with consciousness. From this it follows that consciousness, being evanescent, is a very poor judge both of its own causation and of information processing by the brain. It is evident that consciousness requires a very specific set of neurophysiological conditions for its occurrence.




REM Sleep

In 1953, Aserinsky and Kleitman reported that the sleep EEG was periodically activated to near-waking levels and that rapid eye movements (REMs) could then be recorded. When aroused from this REM sleep state, subjects frequently reported hallucinoid dreaming (Dement & Kleitman, 1957). It was later discovered by Jouvet and Michel (1959) that the EMG of the cat was actively inhibited as the brain was sleep-activated and that the same inhibition of motor output occurs in humans during REM sleep (Hodes & Dement, 1964).

The overnight tendency is for the periods of Stage I–IV brain deactivation to become shorter and less deep while the REM periods become longer and more intense. As the brain is activated more and more, the differentiation in consciousness is correspondingly less marked, with reports from early-morning Stage II coming more and more to resemble those of Stage I. Dreaming, it can thus reasonably be concluded, is our conscious experience of brain activation in sleep. Because brain activation is most intense in REM sleep, dreaming is most highly correlated with that brain state.

Waking and dreaming consciousness contrast along many of the dimensions shown in Table 16.2. It can be seen that, although dreaming constitutes a remarkable perceptual and emotional simulacrum of waking, it has equally remarkable cognitive deficiencies. The internally generated visual percepts of dreaming are so rich and vivid that they regularly lead to the delusion that we are awake. When they are associated with strong emotions (principally joy-elation, fear-anxiety, and anger), they can even be surreal.

“Why Does the Eye See a Thing More Clearly in Dreaming Than When We Are Awake?”

As Leonardo da Vinci pointed out, dream consciousness may be even more intense than that of normal waking. Such phenomenology suggests that perception and

emotion centers of the brain are activated (or even hyperactivated) in REM sleep, and indeed, we have found that this is the case.

At the same time that the perceptual and emotional components of consciousness are enhanced in dreams, such cognitive functions as memory, orientation, and insight are impaired. Not only is it difficult upon awakening to remember one’s dreams, but previous scenes may be lost even as the dream unfolds (M. Fosse et al., 2002). It has recently been shown that even well-remembered dreams do not faithfully reproduce waking experience (M. Fosse et al., 2002). Perhaps related to the memory defect is the microscopic disorientation called dream bizarreness, which results in extreme inconstancy of the unities of time, place, person, and action (R. Fosse et al., 2001). It is these unities that constitute the anchors of waking consciousness.

Reports of thinking are rare on arousal from REM sleep, and the thinking that is reported, although logical within the fanciful assumptions of the dream, is almost wholly lacking in insight as to the true state of the mind (R. Fosse et al., 2001). Thus, in dreams, we typically assume we are awake when we are, in fact, asleep. The converse almost never occurs, weakening the thesis of such skeptical philosophers as Malcolm (1956), who hold that we never know certainly what state we are in and that reports of dreaming are fabricated upon awakening.

The Neurophysiology of Sleep with Special Reference to Consciousness

The deactivation of the brain at sleep onset is seen as the characteristic EEG change and is experienced as an impairment of consciousness. It is related to decreases in the activity of the neurones that constitute the brainstem reticular formation. This finding is in concordance with the classical experiments of Moruzzi and Magoun (1949), who showed that arousal and EEG activation were a function of the electrical impulse traffic in the brainstem core.

Since 1949, the reticular activating system has been shown to be anything but


Table 16.2. Contrasts in the phenomenology of waking and dreaming consciousness

Function | Nature of Difference | Causal Hypothesis
Sensory input | Blocked | Presynaptic inhibition
Perception (external) | Diminished | Blockade of sensory input
Perception (internal) | Enhanced | Disinhibition of networks storing sensory representations
Attention | Lost | Decreased aminergic modulation causes a decrease in signal-to-noise ratio
Memory (recent) | Diminished | Because of aminergic demodulation, activated representations are not restored in memory
Memory (remote) | Enhanced | Disinhibition of networks storing mnemonic representations increases access to consciousness
Orientation | Unstable | Internally inconsistent orienting signals are generated by the cholinergic system
Thought | Reasoning ad hoc; logical rigor weak; processing hyperassociative | Loss of attention, memory, and volition leads to failure of sequencing and rule inconstancy; analogy replaces analysis
Insight | Self-reflection lost (failure to recognize state as dreaming) | Failure of attention, logic, and memory weakens second- (and third-) order representations
Language (internal) | Confabulatory | Aminergic demodulation frees narrative synthesis from logical restraints
Emotion | Episodically strong | Cholinergic hyperstimulation of amygdala and related temporal lobe structures triggers emotional storms, which are unmodulated by aminergic restraint
Instinct | Episodically strong | Cholinergic hyperstimulation of hypothalamus and limbic forebrain triggers fixed action motor programs, which are experienced fictively but not enacted
Volition | Weak | Top-down motor control and frontal executive power cannot compete with disinhibited subcortical network activation
Output | Blocked | Postsynaptic inhibition

non-specific (Hobson & Brazier, 1980). Instead, it consists of highly specific interneurones that project mainly locally but also reach upward to the thalamus and downward to the spinal cord. By means of these connections, reticular formation neurones regulate muscle tone, eye movements, and other sensorimotor functions necessary to waking consciousness.

The reticular formation also contains chemically specific neuronal systems whose axons project widely throughout the brain, where they secrete the so-called neuromodulators: dopamine, norepinephrine, and serotonin (on the aminergic side) and acetylcholine (on the cholinergic side). The state of the brain and consciousness is thus determined not only by its activation level but also by its mix of neuromodulators.

Single-cell recording studies in cats have revealed that in REM sleep, when global brain activation levels are as high as in waking, the firing of two aminergic groups is shut off (Hobson, McCarley, & Wyzinski, 1975; McCarley & Hobson, 1975). Thus the activated brain of REM sleep is aminergically demodulated with respect to norepinephrine and serotonin. Because norepinephrine is known to be necessary for attention (Foote, Bloom, & Aston-Jones,


1983) and serotonin is necessary for memory (Martin et al., 1997), we can begin to understand the cognitive deficiencies of dreaming consciousness in physiological terms.

What about the enhancement of internal perception and emotion that characterizes dream consciousness? Could it be related to the persistence of the secretion of dopamine and the increase in output of the cholinergic neurones of the brainstem? It turns out that the cholinergic neurones of the reticular formation are indeed hyperexcitable in REM; in fact, they fire in bursts that are tightly linked in a directionally specific way to the eye movements that give REM sleep its name. The result is that such forebrain structures as the amygdala (in the limbic, emotion-mediating brain) and the posterolateral cortex (in the multimodal sensory brain) are bombarded with cholinergically mediated internal activation waves during REM.

In the transition from waking to REM, consciousness shifts from exteroceptive to interoceptive perception and from moderated to unmoderated emotion. To explain this shift, cholinergic hypermodulation together with persistent dopaminergic modulation is a candidate mechanism. The mind simultaneously shifts from oriented to disoriented and from mnemonic to amnesic cognition. To explain that shift, aminergic demodulation is the best current candidate mechanism.

Input-Output Gating

If the brain is activated in sleep, why don’t we wake up? One reason is the aminergic demodulation. Another powerful reason is that in REM sleep sensory input and motor output are actively blocked. This closing of the input and output gates is an active inhibitory process in the spinal and motor neurones that convey movement commands to the muscles. Sensorimotor reticular formation neurones inhibit the afferent sensory fibers coming from the periphery.

The net result is that in dreams we are not only perceptually and emotionally hyperconscious but also cognitively deficient and

off-line to sensory inputs and motor outputs. That is to say, we are anesthetized and paralyzed in addition to experiencing hallucinated emotion and being disoriented and amnesic. This is the activation-synthesis theory of dreaming (Hobson & McCarley, 1977). What other evidence can be brought to test these hypotheses?

A Four-Dimensional Model of Conscious State

Three factors – activation level (A), input-output gating (I), and neuromodulation ratio (M) – determine the normal changes in the state of the brain that give rise to the changes in the state of consciousness that differentiate waking, sleeping, and dreaming. Because these three variables can be measured in animals, it is appropriate and heuristically valuable to model them. In so doing, we replace the traditional two-dimensional model with the four-dimensional model in Figure 16.1.

In the AIM model, time is the fourth dimension because the instantaneous values of A, I, and M are points that move in the three-dimensional state space. They form an elliptical trajectory that represents the sleep-wake sequence as a cyclical function, rather than as the stairway of the traditional two-dimensional model, in which activation is plotted against time (look again at Figures 16.1 and 16.2).

To understand the AIM model, it is helpful to grasp the fact that the waking domain is in the back upper right corner of the state space. It is there, and only there, that the activation (A) level is high, the input-output gates (I) are open, and the modulatory mix (M), measured as the aminergic/cholinergic ratio, is also high. Because all three measures change from moment to moment, the AIM points form a cloud in the waking domain of the state space.

When sleep supervenes, all three AIM variables fall. The net result is that the NREM (N) sleep domain lies at the center of the state space. With the advent of REM, the activation level rises again to waking levels, but the input-output gates are actively


closed and aminergic neurones are shut off. Factors I and M therefore fall to their lowest possible levels. The REM sleep domain (R) is thus in the right anterior lower corner of the state space. The AIM model clearly differentiates REM sleep from waking. It also affords a valuable picture of how and why the conscious states of waking and dreaming differ in the way that they do.

As shown by the dashed line forming an elliptical trajectory through the state space, the sleep-wake cycle is represented as a recurrent cycle. Actually the sequential cycles of sleep move to the right (as the activation level increases overnight) and downward (as the brain comes to occupy the REM domain for longer and longer periods of time).
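The geometry just described can be made concrete with a small numerical sketch. The canonical (A, I, M) coordinates below are assumed values chosen to match the text (waking: all three variables high; NREM: the center of the space; REM: activation high, I and M at their lowest levels); the function simply labels an instantaneous AIM point by its nearest canonical domain:

```python
# Illustrative sketch of the AIM state space. The canonical (A, I, M)
# coordinates, normalized to [0, 1], are assumed values consistent with
# the text: waking = high A, I, M; NREM = the center of the space;
# REM = high A with I and M minimal.
from math import dist

CANONICAL = {
    "wake": (0.9, 0.9, 0.9),  # activated, gates open, aminergic modulation high
    "nrem": (0.5, 0.5, 0.5),  # all three variables fall toward the center
    "rem":  (0.9, 0.1, 0.1),  # activated, gates closed, aminergically demodulated
}

def classify_aim(a: float, i: float, m: float) -> str:
    """Label an instantaneous AIM point by its nearest canonical domain."""
    return min(CANONICAL, key=lambda state: dist((a, i, m), CANONICAL[state]))

# A lucid dream lies between the REM and waking domains:
print(classify_aim(0.9, 0.2, 0.2))    # rem
print(classify_aim(0.85, 0.8, 0.85))  # wake
```

A cloud of successive points from the same state would scatter around each canonical coordinate, and the sleep-wake cycle would trace the elliptical trajectory through them.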

Lucid dreaming is a normal variation in conscious state that serves to illustrate and emphasize the value of the AIM model. When subjects learn to recognize that they are dreaming while they are dreaming, they obviously have elements of both REM and waking consciousness. They can continue to hallucinate, but they are no longer deluded about the provenance of the imagery.

Lucid dreamers typically report that, although they may learn to watch and consciously influence the course of their dreams, and even to awaken voluntarily to enhance recall, lucidity is difficult to maintain. Often they are either pulled back down into non-lucid dreaming or wake up involuntarily. The lucid dreaming domain lies between REM and wake in the middle of the state space, near the right side wall. Subjects normally cross the REM-wake transition zone rapidly, suggesting that lucid dreaming is a forbidden zone of the state space. Such unwelcome processes as sleep paralysis and hypnopompic hallucinations occur when subjects wake up but one or another REM process persists.

Brain Imaging and Lesion Studies in Humans

Over the past decade, two parallel lines of scientific inquiry have contributed striking insights into the brain basis of conscious experience via the conscious state paradigm.

Brain Imaging

Taking advantage of PET technology, three independent groups have imaged the human brain in normal waking and sleep (Braun et al., 1997; Maquet, 2000; Nofzinger et al., 1997). At sleep onset, the blood flow to all regions of the brain declines. When REM sleep supervenes, most brain regions resume their wake-state perfusion levels (from which we infer a restored activation level comparable to waking). But several brain regions are selectively hyperactivated in REM. They include the pontine reticular formation (which previous animal studies have shown to regulate REM sleep), the amygdala and the deep basal forebrain (which are thought to mediate emotion), the parietal operculum (which is known to be involved in visuospatial integration), and the paralimbic cortices (which integrate emotion with other modalities of conscious experience).

Spontaneous Brain Damage

Patients who have suffered brain damage due to stroke report a complete cessation of dreaming when their lesion impairs either the parietal operculum or the deep frontal white matter (Solms, 1997). This finding suggests that those structures mediate connections that are essential to dream consciousness. When damage is restricted to the visual brain, subjects continue to dream vividly, but they lack visual imagery.

Intentional Lobotomy

The clinical histories of patients with mental illness who had undergone frontal lobotomy in the 1950s revealed an effect on dreaming. This surgical procedure was designed to cut the fibers connecting the frontal lobes to other parts of the limbic lobe, on the assumption that the emotion thought to be driving the patient’s psychosis was mediated by these fibers. Some patients did indeed benefit from the surgery, but many reported a loss of dreaming, again


suggesting that fronto-limbic connections were as essential to that normal hallucinatory process as they were to psychosis.

Other Abnormal Conditions

When traumatic brain damage or stroke affects the brainstem, the resulting injury to neurones mediating activation, input-output gating, and modulation can render subjects comatose for long periods of time. Such subjects may be unable to wake or to sleep normally, in which case they are said to be in a chronic vegetative state. They have been permanently moved to the left half of the AIM state space. As they move further and further to the left, they may lose the capacity to activate their thalamocortical system even to the NREM sleep level. A flat EEG indicates a complete absence of activation and intrinsic oscillation.

Locked-In Syndrome

Patients with amyotrophic lateral sclerosis (popularly known as Lou Gehrig’s disease) remain conscious during waking but are unable to signal out because of motor neuronal death. Recent research suggests that they can be taught to signal out and say “yes” or “no” by raising or lowering their cortical DC potentials (Hinterberger et al., 2004). It is not known whether these subjects have normal sleep cycles, but the assumptions of the AIM model predict that they should.

Temporal Lobe Epilepsy and “Dreamy States”

When neuronal excitability is locally altered (as in temporal lobe epilepsy), patients sometimes experience the intrusion of dream-like states into waking consciousness. This phenomenon serves to illustrate both the value and the limitations of the AIM model.

If the abnormal discharge of the epileptic focus in the temporal lobe is strong enough, it can come to dominate the rest of the brain and cause it to enter an altered state of waking consciousness akin to dreaming. This shift, which is caused by an increase in internal stimulus strength, causes a change in the I

dimension of AIM in the direction of REM. Such a formulation is compatible with the PET finding of selective temporal lobe activation in normal REM sleep. It is reasonable to propose that the kinship of temporal lobe epilepsy “dreamy” states and normal dreaming is due to a shared selective activation of limbic structures.

However, this local excitability change cannot easily be modeled by AIM, because the activation measure is global whereas, as PET studies indicate, the activation of REM (and of TLE) is regionally selective: some brain areas (like the limbic lobe) are turned on and others (like the dorsolateral prefrontal cortex) are turned off.

The only way to deal with this reality is to add brain region as a fifth dimension to the AIM model. Because it is impossible to represent brain regions within the state space of AIM, the easiest way to represent and visualize this modification is to see the brain as a regionally diverse set of AIM models. Thus the value of AIM may be locally altered, with profound effects upon consciousness.
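One way to picture this "regionally diverse set of AIM models" is to give each region its own (A, I, M) point. The region names and numbers below are illustrative assumptions, chosen so that the limbic lobe is selectively activated and the dorsolateral prefrontal cortex selectively deactivated during REM, as the PET findings above suggest:

```python
# Illustrative sketch of a regionally diverse set of AIM models: each
# brain region carries its own (A, I, M) point. Region names and numbers
# are assumptions chosen to show REM's selective activation pattern.
from typing import Dict, Tuple

AIM = Tuple[float, float, float]  # (activation, input-output gating, modulation)

def rem_state() -> Dict[str, AIM]:
    """Per-region AIM values during REM sleep (illustrative numbers)."""
    return {
        "limbic_lobe": (0.95, 0.1, 0.1),              # selectively turned on
        "dorsolateral_prefrontal": (0.20, 0.1, 0.1),  # selectively turned off
        "pontine_reticular_formation": (0.90, 0.1, 0.1),
    }

def regional_activation_spread(state: Dict[str, AIM]) -> float:
    """Spread of regional A values: a nonzero spread means one global
    activation number cannot capture the state."""
    activations = [a for (a, _, _) in state.values()]
    return max(activations) - min(activations)

print(round(regional_activation_spread(rem_state()), 2))  # 0.75
```

The large spread in A across regions is exactly the feature that a single global activation value, and hence the four-dimensional AIM model, cannot express.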

Conclusions

By studying the way that consciousness is normally altered when we fall asleep and when we dream, it is possible to obtain insights about how the brain mediates consciousness. So stereotyped and so robust are the corresponding changes in brain and conscious state as to support the following conclusions:

1. Consciousness is componential. It comprises many diverse mental functions that, in waking, operate in a remarkably unified fashion to mediate our experience of the world, our bodies, and ourselves.

2. Consciousness is graded. Within and across species, animals are continually more or less conscious, depending upon the componential complexity and the state of their brains.

3. Consciousness is state dependent. During normal sleep, consciousness undergoes both global and selective componential


differentiation as the brain regions mediating the components of consciousness are globally or selectively activated and deactivated.

4. Conscious state is a function of brain state. Experimental studies of sleep have identified three factors that determine brain state: activation level (A), input-output gating (I), and modulation (M). With time as a fourth dimension, the resulting AIM model represents the sleep cycle as an ellipse and more clearly differentiates waking and REM as the substrates of the conscious states of waking and dreaming.

5. Recent brain imaging and brain lesion studies in humans indicate that activation (A) is not only global but also regional, and that selective activations and inactivations of specific brain subregions contribute to differences in conscious experience. A fifth dimension may therefore have to be added to the AIM model.

6. Armed with the AIM model, it is possible to obtain a unified view of the genesis of a wide variety of normal and abnormal changes in conscious experience.

Acknowledgements

The author gratefully acknowledges the following sponsors of his research: the National Institutes of Health; the National Science Foundation; the John D. and Catherine T. MacArthur Foundation; and the Mind Science Foundation. Technical assistance was provided by Katerina di Perri and Nicholas Tranguillo.

References

Aserinsky, E., & Kleitman, N. (1953). Regularly occurring periods of eye motility and concomitant phenomena during sleep. Science, 118, 273–274.

Braun, A. R., Balkin, T. J., Wesenten, N. J., Carson, R. E., Varga, M., Baldwin, P., Selbie, S., Belenky, G., & Herscovitch, P. (1997). Regional cerebral blood flow throughout the sleep-wake cycle: An H2(15)O PET study. Brain, 120(7), 1173–1197.

Dement, W. C., & Kleitman, N. (1957). The relation of eye movements during sleep to dream activity: An objective method for the study of dreaming. Journal of Experimental Psychology, 53(3), 339–346.

Dinges, D. F., Pack, F., Williams, K., Gillen, K. A., Powell, J. W., Ott, G. E., Aptowicz, C., & Pack, A. I. (1997). Cumulative sleepiness, mood disturbance, and psychomotor vigilance performance decrements during a week of sleep restricted to 4–5 hours per night. Sleep, 20(4), 267–277.

Edelman, G. M. (1992). Bright air, brilliant fire: On the matter of the mind. New York: Basic Books.

Foote, S. L., Bloom, F. E., & Aston-Jones, G. (1983). Nucleus locus coeruleus: New evidence of anatomical and physiological specificity. Physiological Review, 63, 844–914.

Fosse, M. J., Fosse, R., Hobson, J. A., & Stickgold, R. (2002). Dreaming and episodic memory: A functional dissociation? Journal of Cognitive Neuroscience, 15(1), 1–9.

Fosse, R., Stickgold, R., & Hobson, J. A. (2001). Brain-mind states: Reciprocal variation in thoughts and hallucinations. Psychological Science, 12(1), 30–36.

Hinterberger, T., Neumann, N., Pham, M., Kubler, A., Grether, A., Hofmayer, N., Wilhelm, B., Flor, H., & Birbaumer, N. (2004). A multimodal brain-based feedback and communication system. Experimental Brain Research, 154(4), 521–526.

Hobson, J. A. (1998). Consciousness. New York: W. H. Freeman.

Hobson, J. A., & Brazier, M. A. B. (Eds.). (1980). The reticular formation revisited: Specifying function for a nonspecific system. New York: Raven Press.

Hobson, J. A., & McCarley, R. W. (1977). The brain as a dream state generator: An activation-synthesis hypothesis of the dream process. American Journal of Psychiatry, 134(12), 1335–1348.

Hobson, J. A., McCarley, R. W., & Wyzinski, P. W. (1975). Sleep cycle oscillation: Reciprocal discharge by two brain stem neuronal groups. Science, 189, 55–58.

Hodes, R., & Dement, W. C. (1964). Depression of electrically induced reflexes (“H-reflexes”) in man during low voltage EEG sleep. Electroencephalography and Clinical Neurophysiology, 17, 617–629.


Jouvet, M. (1962). Recherches sur les structures nerveuses et les mecanismes responsables des differentes phases du sommeil physiologique. Archives Italiennes de Biologie, 100, 125–206.

Jouvet, M. (1969). Biogenic amines and the states of sleep. Science, 163(862), 32–41.

Jouvet, M., & Michel, F. (1959). Correlations electromyographiques du sommeil chez le chat decortique et mesencephalique chronique. Comptes Rendus des Seances de la Societe de Biologie et de Ses Filiales, 153, 422–425.

Malcolm, N. (1956). Dreaming and skepticism. Philosophical Review, 65, 14–37.

Maquet, P. (2000). Functional neuroimaging of sleep by positron emission tomography. Journal of Sleep Research, 9, 207–231.

Martin, K. C., Casadio, A., Zhu, H., Yaping, E., Rose, J. C., Chen, M., Bailey, C. H., & Kandel, E. R. (1997). Synapse-specific, long-term facilitation of Aplysia sensory to motor synapses: A function for local protein synthesis in memory storage. Cell, 91(7), 927–938.

McCarley, R. W., & Hobson, J. A. (1975). Neuronal excitability modulation over the sleep cycle: A structural and mathematical model. Science, 189, 58–60.

Moruzzi, G., & Magoun, H. W. (1949). Brain stem reticular formation and activation of the EEG. Electroencephalography and Clinical Neurophysiology, 1, 455–473.

Nofzinger, E. A., Mintun, M. A., Wiseman, M., Kupfer, D. J., & Moore, R. Y. (1997). Forebrain activation in REM sleep: An FDG PET study. Brain Research, 770(1–2), 192–201.

Solms, M. (1997). The neuropsychology of dreams. Hillsdale, NJ: Erlbaum.


P1: KAE0521857430c17 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 12 , 2007 23 :47

Chapter 17

Consciousness in Hypnosis

John F. Kihlstrom

Abstract

In hypnosis, subjects respond to suggestions for imaginative experiences that can involve alterations in conscious perception, memory, and action. However, these phenomena occur most profoundly in those subjects who are highly hypnotizable. The chapter reviews a number of these phenomena, including posthypnotic amnesia; hypnotic analgesia; hypnotic deafness, blindness, and agnosia; and emotional numbing, with an eye toward uncovering dissociations between explicit and implicit memory, perception, and emotion. These dissociative phenomena of hypnosis bear a phenotypic similarity to the “hysterical” symptoms characteristic of the dissociative and conversion disorders. The experience of involuntariness in hypnotic response is considered in light of the concept of automatic processing. Hypnosis may be described as an altered state of consciousness based on the convergence of four variables: induction procedure, subjective experience, overt behavior, and psychophysiological indices – including neural

correlates of hypnotic suggestion revealed bybrain imaging.

Consciousness in Hypnosis

Hypnosis is a process in which one person (commonly designated the subject) responds to suggestions given by another person (designated the hypnotist) for imaginative experiences involving alterations in perception, memory, and the voluntary control of action. Hypnotized subjects can be oblivious to pain; they hear voices that aren’t there and fail to see objects that are clearly in their field of vision; they are unable to remember the things that happened to them while they were hypnotized; and they carry out suggestions after hypnosis has been terminated, without being aware of what they are doing or why. In the classic case, these experiences are associated with a degree of subjective conviction bordering on delusion and an experience of involuntariness bordering on compulsion.




446 the cambridge handbook of consciousness

The Importance of Individual Differences

The phenomena of hypnosis can be quite dramatic, but they do not occur in everyone. Individual differences in hypnotizability are measured by standardized psychological tests, such as the Harvard Group Scale of Hypnotic Susceptibility, Form A (HGSHS:A) or the Stanford Hypnotic Susceptibility Scale, Form C (SHSS:C). These psychometric instruments are essentially work samples of hypnotic performance, consisting of a standardized induction of hypnosis accompanied by a set of 12 representative hypnotic suggestions. For example, on both HGSHS:A and SHSS:C, subjects are asked to hold out their left arm and hand, and then it is suggested that there is a heavy object in the hand, growing heavier and heavier, and pushing the hand and arm down. The subject’s response to each suggestion is scored according to objective behavioral criteria (for example, if the hand and arm lower at least 6 inches over a specified interval of time), yielding a single score representing his or her hypnotizability, or responsiveness to hypnotic suggestions. Hypnotizability, so measured, yields a quasi-normal distribution of scores in which most people are at least moderately responsive to hypnotic suggestions, relatively few people are refractory to hypnosis, and relatively few fall within the highest level of responsiveness (Hilgard, 1965).
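The work-sample logic just described can be sketched in a few lines of code. This is a hypothetical illustration only: each suggestion is scored pass/fail against its behavioral criterion and the passes are totaled, but the function names and classification cut-offs below are inventions for the example, not the published scoring keys or norms of the Harvard or Stanford scales.

```python
# Hypothetical sketch of work-sample scoring: each of the 12 suggestions
# is passed or failed against an objective behavioral criterion, and the
# total number passed is the hypnotizability score.

def score_scale(item_passes: list[bool]) -> int:
    """Total score = number of suggestions passed (0-12)."""
    if len(item_passes) != 12:
        raise ValueError("expected responses to 12 suggestions")
    return sum(item_passes)

def classify(score: int) -> str:
    """Illustrative bands only; real studies use scale-specific norms."""
    if score <= 4:
        return "low"
    if score <= 8:
        return "medium"
    return "high"

responses = [True, True, False, True, True, True,
             False, True, True, False, True, True]  # passed 9 of 12
total = score_scale(responses)
print(total, classify(total))  # → 9 high
```

On such a scheme, the quasi-normal distribution described above would show most totals in the middle band, with relatively few at either extreme.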

Although most people can experience hypnosis to at least some degree, the most dramatic phenomena of hypnosis – the ones that really count as reflecting alterations in consciousness – are generally observed in those “hypnotic virtuosos” who comprise the upper 10 to 15% of the distribution of hypnotizability. Accordingly, a great deal of hypnosis research involves a priori selection of highly hypnotizable subjects, to the exclusion of those of low and moderate hypnotizability. An alternative is a mixed design in which subjects stratified for hypnotizability are all exposed to the same experimental manipulations, and the responses of hypnotizable subjects are compared to those who are insusceptible to hypnosis. In any case, measurement of hypnotizability is crucial to hypnosis research: There is no point in studying hypnosis in individuals who cannot experience it.

Some clinical practitioners believe that virtually everyone can be hypnotized, if only the hypnotist takes the right approach, but there is little evidence favoring this point of view. Similarly, some researchers believe that hypnotizability can be enhanced by developing positive attitudes, motivations, and expectancies concerning hypnosis (Gorassini & Spanos, 1987), but there is also evidence that such interventions are heavily laced with compliance (Bates & Kraft, 1991). As with any other skilled performance, hypnotic response is probably a matter of both aptitude and attitude: Negative attitudes, motivations, and expectancies can interfere with performance, but positive ones are not by themselves sufficient to create hypnotic virtuosity.

Hypnotizability is not substantially correlated with most other individual differences in ability or personality, such as intelligence or adjustment (Hilgard, 1965). However, in the early 1960s, Ronald Shor (Shor, Orne, & O’Connell, 1962), Arvid As (As, 1962), and others found that hypnotizability was correlated with subjects’ tendency to have hypnosis-like experiences outside of formal hypnotic settings, and an extensive interview study by Josephine Hilgard (1970) showed that hypnotizable subjects displayed a high level of imaginative involvement in such domains as reading and drama. In 1974, Tellegen and Atkinson developed a scale of absorption to measure the disposition to have subjective experiences characterized by the full engagement of attention (narrowed or expanded) and blurred boundaries between self and object (Tellegen & Atkinson, 1974). Episodes of absorption and related phenomena such as “flow” (Csikszentmihalyi, 1990; Csikszentmihalyi & Csikszentmihalyi, 1988) are properly regarded as altered states of consciousness in their own right, but they are not the same as hypnosis and so are not considered further in this chapter.




Conventional personality inventories, such as the Minnesota Multiphasic Personality Inventory and California Psychological Inventory, do not contain items related to absorption, which may explain their failure to correlate with hypnotizability (Hilgard, 1965). However, absorption is not wholly unrelated to other individual differences in personality. Recent multivariate research has revealed five major dimensions – the “Big Five” – which provide a convenient summary of personality structure: neuroticism (emotional stability), extraversion, agreeableness, conscientiousness, and openness to experience (John, 1990; Wiggins & Trapnell, 1997). Absorption and hypnotizability are correlated with those aspects of openness that relate to richness of fantasy life, aesthetic sensitivity, and awareness of inner feelings, but not those that relate to intellectance or sociopolitical liberalism (Glisky & Kihlstrom, 1993; Glisky, Tataryn, Tobias, & Kihlstrom, 1991).

Absorption is the most reliable correlate of hypnotizability; by contrast, vividness of mental imagery is essentially uncorrelated with hypnosis (Glisky, Tataryn, & Kihlstrom, 1995). However, the statistical relations between hypnotizability and either absorption or openness are simply too weak to permit confident prediction of an individual’s actual response to hypnotic suggestion (Roche & McConkey, 1990). So far as the measurement of hypnotizability is concerned, there is no substitute for performance-based measures such as the Stanford and Harvard scales.
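The weakness of these correlations can be made concrete with a line of arithmetic: a predictor correlated r with hypnotizability accounts for only r² of its variance. The r values below are assumed round figures of the general magnitude reported for absorption, not exact published estimates.

```python
# Illustrative arithmetic: proportion of variance in hypnotizability
# accounted for by a correlated predictor is r squared. The r values
# are assumed round figures, not published estimates.

def variance_explained(r: float) -> float:
    return r ** 2

for r in (0.2, 0.3, 0.4):
    print(f"r = {r:.1f}  ->  variance explained = {variance_explained(r):.0%}")
```

Even at the upper end of this range, more than four-fifths of the variance is left unexplained, which is why performance-based measurement remains indispensable.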

The Controversy over State

Consciousness has two principal aspects: monitoring ourselves and our environment, so that objects and events are accurately represented in phenomenal awareness, and controlling ourselves and the environment through the voluntary initiation and termination of thought and action (Kihlstrom, 1984). From this point of view, the phenomena that mark the domain of hypnosis (Hilgard, 1973a) seem to reflect alterations in consciousness. The sensory alterations exemplified by hypnotic analgesia or deafness, as well as posthypnotic amnesia, are disruptions in conscious awareness: The subject seems to be unaware of percepts and memories that ought to be accessible to phenomenal awareness. Similarly, posthypnotic suggestion, as well as the experience of involuntariness that frequently accompanies suggested hypnotic experiences, reflects a loss of control over cognition and behavior.

Despite these considerations, the status of hypnosis as an altered state of consciousness has been controversial (e.g., Gauld, 1992; Hilgard, 1971; Kallio & Revonsuo, 2003; Kirsch & Lynn, 1995; Shor, 1979a).1 For example, psychoanalytically inclined theorists classified hypnosis as an instance of adaptive regression, or regression in the service of the ego (Fromm, 1979; Gill & Brenman, 1959). Orne believed that the essence of hypnosis was to be found in “trance logic” (Orne, 1959), whereas Hilgard argued that the phenomena of hypnosis were essentially dissociative in nature (Hilgard, 1973b, 1977). By contrast, Sarbin and Coe described hypnosis as a form of role-enactment (Sarbin & Coe, 1972); Barber asserted that the phenomena of hypnosis could be produced by anyone who held appropriate attitudes, motivations, and expectancies (Barber, 1969).

More recently, both Woody and Bowers (Woody & Bowers, 1994; Woody & Sadler, 1998) and Kihlstrom (Kihlstrom, 1984, 1992a, 1998) embraced some version of Hilgard’s neodissociation theory of divided consciousness. By contrast, the “sociocognitive” approach offered by Spanos (1986a, 1991) emphasized the motivated subject’s attempt to display behavior regarded as characteristic of a hypnotized person and the features of the social context that shaped these displays. Kirsch and Lynn (Kirsch, 2001a,b; Kirsch & Lynn, 1998a,b) offered a “social cognitive” theory of hypnosis that attributed hypnotic phenomena to the automatic effect of subjects’ response expectancies. Following Kuhn (1962), the “state” and “nonstate” views of hypnosis have sometimes been construed as competing paradigms (e.g., Spanos & Chaves, 1970, 1991).

Consciousness and Social Influence

Part of the problem is the multifaceted nature of hypnosis itself. Hypnosis entails changes in conscious perception, memory, and behavior, to be sure, but these changes also occur following specific suggestions made by the hypnotist to the subject. As White (1941) noted at the dawn of the modern era of hypnosis research, hypnosis is a state of altered consciousness that takes place in a particular motivational context – the motivation being to behave like a hypnotized subject. Orne (1959), who was White’s protégé as both an undergraduate and a graduate student at Harvard, famously tried to distinguish between artifact and essence of hypnosis, but a careful reading of his work makes it clear that the demand characteristics that surround hypnosis are as important as any “trance logic” that arises in hypnosis.

Similarly, at the dawn of what might be called the “golden age” of hypnosis research, Sutcliffe published a pair of seminal papers that contrasted a credulous view of hypnosis, which holds that the mental states instigated by suggestion are identical to those that would be produced by the actual stimulus state of affairs implied in the suggestions, with a skeptical view that holds that the hypnotic subject is acting as if the world were as suggested (Sutcliffe, 1960, 1961). This is, of course, a version of the familiar state-nonstate dichotomy, but Sutcliffe also offered a third view: that hypnosis involves a quasi-delusional alteration in self-awareness – an altered state of consciousness that is constructed out of the interaction between the hypnotist’s suggestions and the subject’s interpretation of those suggestions.

Thus, hypnosis is simultaneously a state of (sometimes) profound cognitive change, involving basic mechanisms of perception, memory, and thought, and a social interaction, in which hypnotist and subject come together for a specific purpose within a wider sociocultural context. A truly adequate, comprehensive theory of hypnosis will seek understanding in both cognitive and interpersonal terms. We do not yet have such a theory, but even if we did, individual investigators would naturally emphasize one aspect, whether altered consciousness or social context, over the other in their work. The interindividual competition that is part and parcel of science as a social enterprise often leads investigators to write as if alterations in consciousness and social influence were mutually exclusive processes – which they simply are not.

Given the null-hypothesis statistical tests that remain part and parcel of the experimental method, and a propensity for making strong rather than weak inferences from experimental data, investigators often present evidence for one process as evidence against the other. But if there is one reason why hypnosis has fascinated successive generations of investigators, since the very dawn of psychology as a science, it is that hypnosis exemplifies the marvelous complexity of human experience, thought, and action. In hypnosis and elsewhere, comprehensive understanding will require a creative synthesis in the spirit of discovery, rather than the spirit of proof – a creative synthesis of both-and, as opposed to a stance of either-or.

Defining an Altered State

Part of the problem, as well, is the difficulty of defining precisely what we mean by an altered state of consciousness (Ludwig, 1966). Some theorists have argued that every altered state should be associated with a unique physiological signature, much as dreaming is associated with the absence of alpha activity in the EEG and the occurrence of rapid eye movements (REM). The lack of a physiological indicator for hypnosis, then, is taken as evidence that hypnosis is not a special state of consciousness after all. But of course, this puts the cart before the horse. Physiological indices are validated against self-reports: Aserinsky and Kleitman (1953) had to wake their subjects up during periods of REM and ask them if they were dreaming. As such, physiological correlates have no privileged status over introspective self-reports: Aserinsky and Kleitman were in no position to contradict subjects who said “no.” It is nice when our altered states have distinct physiological correlates, but our present knowledge of mind-body relations is simply not sufficient to make such correlates a necessary part of the definition. After all, cognitive neuroscience has made very little progress in the search for the neural correlates of ordinary waking consciousness (Metzinger, 2000). How far in the future, then, must the neural correlates of altered states of consciousness, such as hypnosis, lie?

In the final analysis, it may be best to treat hypnosis and other altered states of consciousness as natural concepts, represented by a prototype or one or more exemplars, each consisting of features that are only probabilistically associated with category membership, with no clear boundaries between one altered state and another, or between altered and normal consciousness (Kihlstrom, 1984). And because we cannot have direct knowledge of other minds, altered states of consciousness must also remain hypothetical constructs, inferred from a network of relations among variables that are directly observable (Campbell & Fiske, 1959; Garner, Hake, & Eriksen, 1956; Stoyva & Kamiya, 1968), much in the manner of a psychiatric diagnosis. From this point of view, the diagnosis of an altered state of consciousness can be made with confidence to the extent that there is convergence among four kinds of variables:

1. Induction Procedure: Operationally, a special state of consciousness can be defined, in part, by the means employed to induce it – or, alternatively, as the output resulting from a particular input. Barber (1969) employed such an input-output definition as the sole index of hypnosis, largely ignoring individual differences in hypnotizability. At the very least, hypnosis would seem to require both a hypnotic induction and a hypnotizable individual to receive it. But in the case of very highly hypnotizable subjects, even the induction procedure may be unnecessary.

2. Subjective Experience: Introspective self-reports of changes in subjective experience would seem to be central to any altered state of consciousness. As noted earlier, the domain of hypnosis is defined by changes in perception, memory, and the voluntary control of behavior – analgesia, amnesia, the experience of involuntariness, and the like. If the hypnotist gives a suggestion – for example, that there is an object in the subject’s outstretched hand, getting heavier and heavier – and the subject experiences nothing of the sort, it is hard to say that he or she has been hypnotized.

3. Overt Behavior: Of course, a reliance on self-reports has always made psychologists nervous, so another residue of radical behaviorism (the first was the reliance on operational definitions) is a focus on overt behavior. If a subject hallucinates an object in his outstretched hand, and feels it grow heavier and heavier, eventually his arm ought to drop down to his side. As noted earlier, individual differences in hypnotizability are measured in terms of the subject’s publicly observable, overt, behavioral response to suggestions. But in this instance, the overt behavior is, to borrow a phrase from the Book of Common Prayer, an outward and visible sign of an inward and spiritual grace: It is a consequence of the subject’s altered subjective experience. Behavioral response is of no interest in the absence of corresponding subjective experience. For this reason, requests for “honesty reports” (Bowers, 1967; Spanos & Barber, 1968) or other appropriate postexperimental interviews (Orne, 1971; Sheehan & McConkey, 1982) can help clarify subjects’ overt behavior and serve as correctives for simple behavioral compliance.

4. Psychophysiological Indices: Because both self-reports and overt behaviors are under voluntary control, and thus subject to distortion by social-influence processes, hypnosis researchers have been interested in psychophysiological indices of response – including, of course, various brain imaging techniques. Over the years, a number of such indices have been offered, including skin conductance and alpha activity, but these have usually proved to be artifacts of relaxation and not intrinsic to hypnosis. In retrospect, it was probably a mistake to expect that there would be any physiological correlates of hypnosis in general, following an induction procedure but in the absence of any specific suggestions (Maquet et al., 1999), because subjects can have a wide variety of experiences while they are hypnotized. Progress on this issue is more likely to occur when investigators focus on the physiological correlates of specific hypnotic suggestions – as in brain imaging work that shows specific changes in brain activity corresponding to hypnotic visual hallucinations (Kosslyn, Thompson, Costantini-Ferrando, Alpert, & Spiegel, 2000) or analgesia (Rainville, Hofbauer, Bushnell, Duncan, & Price, 2002).

Hypnosis and Hysteria

At least since the late 19th century, interest in hypnosis has had its roots in the medical and psychiatric phenomenon known as hysteria (for historical overviews and detailed references, see Kihlstrom, 1994a; Veith, 1965). This term originated some 4,000 years ago in ancient Egyptian (and later Greek) medicine to refer to a variety of diseases thought to be caused by the migration of the uterus to various parts of the body. In the 17th century, the English physician Thomas Sydenham reformulated the diagnosis so that hysteria referred to physical symptoms produced by non-organic factors. In the 19th century, the concept of hysteria was refined still further by Briquet, a French neurologist, to include patients with multiple, chronic physical complaints with no obvious organic basis (Briquet, 1859). Sometime later, Charcot noticed that the symptoms of hysteria mimicked those of certain neurological illnesses, especially those affecting tactile sensitivity, “special senses” such as vision and audition, and motor function. Charcot held that these symptoms, in turn, were the products of “functional” lesions in the nervous system produced by emotional arousal and suggestion.

Charcot’s interest in hysteria passed to his protégé Pierre Janet, who held that the fundamental difficulty in hysteria was a restriction in awareness – such that, for example, hysterically deaf patients were not aware of their ability to hear and hysterically paralyzed patients were not aware of their ability to move (Janet, 1907). Like Charcot, Janet was particularly impressed by the apparently paradoxical behavior of hysterical patients, as exemplified by ostensibly blind individuals who nevertheless displayed visually guided behavior. Janet argued that these behaviors were mediated by mental structures called psychological automatisms. In his view, these complex responses to environmental events were normally accessible to conscious awareness and control, but had been “split off” from the normal stream of conscious mental activity by traumatic stress – a situation that Janet called désagrégation or, in English translation, “dissociation.”

Although the hegemony of Freudian psychoanalysis in psychiatry during the first half of the 20th century led to a decline of interest in the classical syndromes of hysteria, the syndrome as such was listed in the early (1952 and 1968) editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM) published by the American Psychiatric Association. Beginning in 1980, more recent versions of the DSM dropped the category “hysteria” in favor of separate listings of dissociative disorders – including psychogenic amnesia and multiple personality disorder – and conversion disorder, listed under the broader rubric of the somatoform disorders (Kihlstrom, 1992b, 1994a). As the official psychiatric nosology is currently constituted, only the functional disorders of memory (Kihlstrom & Schacter, 2000; Schacter & Kihlstrom, 1989) are explicitly labeled as dissociative in nature. However, it is clear that the conversion disorders also involve disruptions in conscious awareness and control (Kihlstrom, 1992b, 1994a, 2001a; Kihlstrom & Schacter, 2000; Kihlstrom, Tataryn, & Hoyt, 1993; Schacter & Kihlstrom, 1989). Renewed interest in the syndromes of hysteria, reconstrued in terms of dissociations affecting conscious awareness, was foreshadowed by Hilgard’s “neodissociative” theory of divided consciousness, which re-established the link between hypnosis and hysteria (Hilgard, 1973b, 1977; see also Kihlstrom, 1979, 1992a; Kihlstrom & McGlynn, 1991).

Viewed from a theoretical perspective centered on consciousness, the dissociative disorders include a number of different syndromes, all involving disruptions in the monitoring and/or controlling functions of consciousness that are not attributable to brain insult, injury, or disease (Kihlstrom, 1994a, 2001a). These syndromes are reversible, in the sense that it is possible for the patient to recover the lost functions. But even during the symptomatic phase of the illness, the patient will show evidence of intact functioning in the affected system, outside awareness. Thus, patients with psychogenic (dissociative) amnesia, fugue, and multiple personality disorder may show impaired explicit memory but spared implicit memory (Kihlstrom, 2001a; Schacter & Kihlstrom, 1989). In the same way, patients with conversion disorders affecting vision and hearing may show impaired explicit perception but spared implicit perception (Kihlstrom, 1992b; Kihlstrom, Barnhardt, & Tataryn, 1992). In light of these considerations, a more accurate taxonomy of dissociative disorders (Kihlstrom, 1994a) would include three subcategories of syndromes:

1. those affecting memory and identity (e.g., functional amnesia, fugue, and multiple personality disorder);

2. those affecting sensation and perception (e.g., functional blindness and deafness, analgesia, and tactile anesthesia);

3. those affecting voluntary action (e.g., functional weakness or paralysis of the limbs, aphonia, and difficulty swallowing).

Dissociative Phenomena in Hypnosis

As intriguing and historically important as the syndromes of hysteria and dissociation are, it is also true that they are very rare and for that reason (among others) have rarely been subject to controlled experimental investigation. However, beginning with Charcot’s observation that hysterical patients are highly suggestible, a number of theorists have been impressed by the phenotypic similarities between the symptoms of hysteria and the phenomena of hypnosis. Accordingly, it has been suggested that hypnosis might serve as a laboratory model for hysteria (Kihlstrom, 1979; Kihlstrom & McGlynn, 1991; see also Oakley, 1999). In this way, the study of alterations in consciousness in hypnosis might help us understand not only hypnosis but also hysteria and the dissociative and conversion disorders. In this regard, it is interesting to note that hypnotically suggested limb paralysis seems to share neural correlates, as well as surface features, with conversion hysteria (Halligan, Athwal, Oakley, & Frackowiak, 2000; Halligan, Oakley, Athwal, & Frackowiak, 2000; Terao & Collinson, 2000).

Implicit Memory in Posthypnotic Amnesia

Perhaps the most salient alteration in consciousness observed in hypnosis is the one that gave hypnosis its name: posthypnotic amnesia. Upon termination of hypnosis, some subjects find themselves unable to remember the events and experiences that transpired while they were hypnotized – an amnesia that is roughly analogous to that experienced after awakening from sleep. Posthypnotic amnesia does not occur in the absence of direct or implied suggestions (Hilgard & Cooper, 1965), and the forgotten memories are not restored when hypnosis is reinduced (Kihlstrom, Brenneman, Pistole, & Shor, 1985). Posthypnotic amnesia is so named because the subject’s memory is tested after hypnosis, but hypnotic amnesia, in which both the suggestion and the test occur while the subject is hypnotized, has the same properties. Although posthypnotic amnesia typically covers events and experiences that transpired during hypnosis, it is also possible to suggest amnesia for events that occurred while the subject was not hypnotized (Barnier, 1997; Bryant, Barnier, Mallard, & Tibbits, 1999). Both features further distinguish posthypnotic amnesia from state-dependent memory (Eich, 1988).

In contrast to the amnesic syndrome associated with hippocampal damage, posthypnotic amnesia is temporary: On administration of a prearranged cue, the amnesia is reversed, and the formerly amnesic subject is now able to remember the previously forgotten events (Kihlstrom & Evans, 1976; Nace, Orne, & Hammer, 1974) – although there is some evidence that a small residual amnesia may persist even after the reversibility cue has been given (Kihlstrom & Evans, 1977). Reversibility marks posthypnotic amnesia as a disruption of memory retrieval, as opposed to encoding or storage, somewhat like the temporary retrograde amnesias observed in individuals who have suffered concussive blows to the head (Kihlstrom, 1985; Kihlstrom & Evans, 1979). The difference, of course, is that posthypnotic amnesia is a functional amnesia – an abnormal amount of forgetting that is attributable to psychological factors, rather than to brain insult, injury, or disease (Kihlstrom & Schacter, 2000). In fact, as noted earlier, posthypnotic amnesia has long been considered to be a laboratory model of the functional amnesias associated with hysteria and dissociation (Barnier, 2002; Kihlstrom, 1979; Kihlstrom & McGlynn, 1991).

Probably the most interesting psychological research concerning posthypnotic amnesia concerns dissociations between explicit and implicit memory (Schacter, 1987). Following Schacter (1987), we can identify explicit memory with conscious recollection, as exemplified by performance on traditional tests of recall and recognition. By contrast, implicit memory refers to the influence of some past event on current experience, thought, and action in the absence of (or independent of) conscious recollection. Implicit memory, as exemplified by various sorts of priming effects observed in amnesic patients, is for all intents and purposes unconscious memory.

Early evidence that posthypnotic amnesia impaired explicit memory but spared implicit memory came from a pair of experiments by Kihlstrom (1980), which were in turn inspired by an earlier investigation by Williamsen and his colleagues (see also Barber & Calverley, 1966; Williamsen, Johnson, & Eriksen, 1965). Kihlstrom found that hypnotizable subjects, given an amnesia suggestion, were unable to recall the items in a word list that they had memorized during hypnosis. However, they remained able to use these same items as responses on free-association and category instance-generation tasks. Kihlstrom originally interpreted this as reflecting a dissociation between episodic and semantic memory – as did Tulving (1983), who cited the experiment as one of four convincing demonstrations of the episodic-semantic distinction. However, Kihlstrom also noted a priming effect on the production of list items as free associations and category instances, compared to control items that had not been learned; furthermore, the level of priming observed was the same as that shown by insusceptible subjects who were not amnesic for the word list.2
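The logic of this comparison can be sketched as follows, with entirely made-up numbers: priming is indexed as production of studied items minus production of unstudied control items, and the dissociation consists in recall differing sharply between amnesic and non-amnesic subjects while priming does not.

```python
# Sketch of the explicit/implicit dissociation logic in studies of this
# kind. All proportions below are invented for illustration; they are
# not data from Kihlstrom (1980).

def priming_score(studied_rate: float, control_rate: float) -> float:
    """Priming = production of studied items minus unstudied baseline."""
    return round(studied_rate - control_rate, 2)

groups = {
    "amnesic":    {"recall": 0.05, "studied": 0.45, "control": 0.25},
    "nonamnesic": {"recall": 0.70, "studied": 0.45, "control": 0.25},
}

for name, g in groups.items():
    print(name, "recall:", g["recall"],
          "priming:", priming_score(g["studied"], g["control"]))
# Recall differs between groups while priming is equivalent:
# explicit memory impaired, implicit memory spared.
```

Any pattern of this shape, whatever the actual numbers, is what is meant by a dissociation between explicit and implicit memory.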

Spared priming during posthypnotic amnesia was subsequently confirmed by Spanos and his associates (Bertrand, Spanos, & Radtke, 1990; Spanos, Radtke, & Dubreuil, 1982), although they preferred to interpret the results in terms of the demands conveyed by test instructions rather than dissociations between explicit and implicit memory. Later, Dorfman and Kihlstrom (1994) bolstered the case for spared priming by correcting a methodological oversight in the earlier studies: The comparison of priming with free recall had confounded explicit and implicit memory with the cue environment of the memory test. The dissociation between explicit and implicit memory was confirmed when a free-association test of priming was compared to a cued-recall test of explicit memory. Similarly, Barnier and her colleagues extended the dissociation to explicit and implicit memory for material learned outside as well as within hypnosis (Barnier, Bryant, & Briscoe, 2001).

Whereas most studies of implicit memory in the amnesic syndrome employ tests of repetition priming, such as stem and fragment completion, the studies just described employed tests of semantic priming, which cannot be mediated by a perceptual representation of the stimulus materials. However, David and his colleagues (David, Brown, Pojoga, & David, 2000) found that posthypnotic amnesia spared repetition priming on a stem-completion task. Similar results were obtained by Barnier et al. (2001). In an especially important twist, David et al. employed Jacoby's process dissociation paradigm (Jacoby, 1991) to confirm that the priming spared in posthypnotic amnesia is a reflection of involuntary unconscious memory, rather than either involuntary or voluntary conscious memory.3 That is to say, the spared priming is a genuine reflection of implicit, or unconscious, memory.
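The logic of the process-dissociation procedure can be sketched in its standard textbook form (the notation below is the conventional one, not taken from the studies cited here): a controlled, conscious component R and an automatic, unconscious component A are estimated by comparing an inclusion test, on which subjects may respond with studied items, to an exclusion test, on which they must avoid them.

```latex
% Inclusion: a studied item is produced either by conscious
% recollection or, failing that, by automatic influence.
P(\text{inclusion}) = R + A(1 - R)

% Exclusion: a studied item slips through only when it comes
% to mind automatically and is not consciously recollected.
P(\text{exclusion}) = A(1 - R)

% Solving the pair of equations yields the two estimates:
R = P(\text{inclusion}) - P(\text{exclusion}), \qquad
A = \frac{P(\text{exclusion})}{1 - R}
```

On this logic, the finding described above corresponds to a preserved automatic estimate A alongside a sharply reduced recollection estimate R during posthypnotic amnesia.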

With the benefit of hindsight, we can trace studies of implicit memory in posthypnotic amnesia at least as far as the classic work of Hull (Hull, 1933; Kihlstrom, 2004a), who demonstrated that posthypnotic amnesia impaired recall but had no effect on practice effects, savings in relearning, or retroactive interference (see further discussion below). Hull concluded merely that the forgetting observed in posthypnotic amnesia was “by no means complete” (p. 138) – much as Gregg (1979, 1982) later interpreted the evidence as reflecting the distinction between optional and obligatory aspects of memory performance. But we can now interpret the same evidence as illustrating a strong dissociation between explicit and implicit memory.

In addition to priming, the dissociation between explicit and implicit memory is revealed by the phenomenon of source amnesia, in which the subject retains knowledge acquired through some learning experience while forgetting the learning experience itself (Schacter, Harbluk, & McLachlan, 1984; Shimamura & Squire, 1987). Interestingly, source amnesia was first identified in the context of hypnosis (Cooper, 1966; Evans, 1979a,b, 1988; Evans & Thorne, 1966). Evans and Thorne (1966) found that some amnesic subjects retained world-knowledge that had been taught to them during hypnosis (e.g., the color an amethyst turns when exposed to heat, or the difference between the antennae of moths and butterflies), although they did not remember the circumstances in which they acquired this information. In a later study, Evans (1979a) showed that source amnesia did not occur in insusceptible subjects who simulated hypnosis and posthypnotic amnesia. Although the methodology of Evans’ study has been criticized (Coe, 1978; Spanos, Gwynn, Della Malva, & Bertrand, 1988), most of these criticisms pertain to the real-simulating comparison and do not undermine the phenomenon itself. Along with the notion of demand characteristics (Kihlstrom, 2002a; Orne, 1962, 1973), source amnesia is one of the most salient examples of a concept developed in hypnosis research that has become part of the common parlance of psychological theory.4

Source amnesia might be interpreted as a form of implicit learning (Berry & Dienes, 1993; Reber, 1967, 1993; Seger, 1994). In line with the traditional definition of learning, as a relatively permanent change in behavior that occurs as a result of experience, we may define implicit learning as the acquisition of new knowledge in the absence of conscious awareness of the learning experience, of conscious awareness of what has been learned, or of both. Although evidence for implicit learning can be construed as evidence for implicit memory as well (Schacter, 1987), we may distinguish between the two phenomena with respect to the sort of knowledge affected. In implicit memory, the memories in question are episodic in nature, representing more or less discrete episodes in the life of the learner. Memories are acquired in implicit learning as well, of course, but in this case we are concerned with new semantic and procedural knowledge acquired by the subject. When implicit and explicit learning are dissociated, subjects have no conscious access to the knowledge – in which case implicit learning counts as a failure of metacognition (Flavell, 1979; Metcalfe & Shimamura, 1994; Nelson, 1992, 1996; Nelson & Narens, 1990; Reder, 1996; Yzerbyt, Lories, & Dardenne, 1998). Because the subjects in Evans’ experiments were aware of what they had learned, though they were amnesic for the learning experience, source amnesia is better construed as an example of implicit memory.

Preserved priming on free-association and category-generation tasks, in the face of impaired recall, is a form of dissociation between explicit and implicit memory. Preserved learning, in the face of amnesia for the learning experience, is also a form of dissociation between explicit and implicit memory. But the case of posthypnotic amnesia is different, in at least three respects, from other amnesias in which these dissociations are observed. First, in contrast to the typical explicit-implicit dissociation, the items in question have been deeply processed at the time of encoding. In the priming studies, for example, the critical targets were not just presented for a single trial, but rather were deliberately memorized over the course of several study-test cycles to a strict criterion of learning (Dorfman & Kihlstrom, 1994; Kihlstrom, 1980). Second, the priming that is preserved is semantic priming, which relies on the formation during encoding, and preservation at retrieval, of a semantic link between cue and target. This priming reflects deep, semantic processing of a sort that cannot be mediated by a perceptual representation system. Third, the impairment in explicit memory is reversible: Posthypnotic amnesia is the only case I know of in which implicit memories can be restored to explicit recollection.

Taken together, then, these priming results reflect the unconscious influence of semantic representations formed as a result of extensive attentional activity at the time of encoding. The priming itself may be an automatic influence, but again it is not the sort that is produced by automatic processes mediated by a perceptual representation system.

Implicit Perception in Hypnotic Analgesia

In addition to their effects on memory, hypnotic suggestions can have very dramatic effects on the experience of pain (Hilgard & Hilgard, 1975; Montgomery, DuHamel, & Redd, 2000). Although hypnotic analgesia was supplanted by more reliable chemical analgesia almost as soon as its efficacy was documented in the mid-19th century, modern psychophysical studies confirm that hypnotizable subjects given suggestions for analgesia can experience considerable relief from laboratory pain (Faymonville et al., 2000; Hilgard, 1969; Knox, Morgan, & Hilgard, 1974). In fact, a comparative study found that, among hypnotizable subjects, hypnotic analgesia was superior not just to placebo but also to morphine, diazepam, aspirin, acupuncture, and biofeedback (Stern, Brown, Ulett, & Sletten, 1977). Although hypnosis can serve as the sole analgesic agent in surgery, it is probably used more appropriately as an adjunct to chemical analgesics, where it has been shown to be both effective and cost-effective in reducing actual clinical pain (Lang, Benotsch et al., 2000; Lang, Joyce, Spiegel, Hamilton, & Lee, 1996).5

Hypnotic analgesia is not mediated by relaxation, and the fact that it is not reversed by narcotic antagonists would seem to rule out a role for endogenous opiates (Barber & Mayer, 1977; Goldstein & Hilgard, 1975; Moret et al., 1991; Spiegel & Albert, 1983). There is a placebo component to all active analgesic agents, and hypnosis is no exception; however, hypnotizable subjects receive benefits from hypnotic suggestion that outweigh what they or their insusceptible counterparts achieve from plausible placebos (McGlashan, Evans, & Orne, 1969; Stern et al., 1977). It has also been argued that hypnotized subjects employ such techniques as self-distraction, stress inoculation, cognitive reinterpretation, and tension management to reduce pain (Nolan & Spanos, 1987; Spanos, 1986b). Although there is no doubt that cognitive strategies can reduce pain, their success, unlike the success of hypnotic suggestions, is not correlated with hypnotizability and thus is unlikely to be responsible for the effects observed in hypnotizable subjects (Hargadon, Bowers, & Woody, 1995; Miller & Bowers, 1986, 1993).

Rather, Hilgard suggested that hypnotic analgesia entails a division of consciousness that prevents the perception of pain from being represented in conscious awareness (Hilgard, 1973b, 1977). In other words, verbal reports of pain and suffering reflect the conscious perception of pain, whereas physiological responses reflect pain processed outside of conscious awareness. Hilgard’s “hidden observer” is both a metaphor for the subconscious perception of pain and a label for a method by which this subconscious pain can be accessed (Hilgard, Morgan, & Macdonald, 1975; Knox et al., 1974). Although it has been suggested that hidden-observer reports are artifacts of experimental demands (Spanos, 1983; Spanos, Gwynn, & Stam, 1983; Spanos & Hewitt, 1980), Hilgard showed that both the overt and covert pain reports of hypnotized subjects differed from those given by subjects who were simulating hypnosis (Hilgard, Hilgard, Macdonald, Morgan, & Johnson, 1978; Hilgard, Macdonald, Morgan, & Johnson, 1978; see also Laurence, Perry, & Kihlstrom, 1983).

The division of consciousness in hypnotic analgesia, as proposed by Hilgard, would help explain one of the paradoxes of hypnotic analgesia, which is that it alters subjects’ self-reports of pain but has little or no effect on reflexive, physiological responses to the pain stimulus (e.g., Hilgard & Morgan, 1975; Hilgard et al., 1974). One interpretation of this difference is that hypnotized subjects consciously feel the pain after all. However, we know on independent grounds that physiological measures are relatively unsatisfactory indices of the subjective experience of pain (Hilgard, 1969). From the perspective of neodissociation theory, the diminished self-ratings are accurate reflections of the subjects’ conscious experience of pain, whereas the physiological measures show that the pain stimulus has been registered and processed outside of awareness – a registration that can be tapped by the hidden-observer method.

The paradox of hypnotic analgesia can also be viewed through an extension of the explicit-implicit distinction from learning and memory to perception (Kihlstrom, 1996; Kihlstrom et al., 1992). Explicit perception refers to the conscious perception of a stimulus event, whereas implicit perception refers to the effect of such an event on the subject’s ongoing experience, thought, and action in the absence of, or independent of, conscious awareness. Just as explicit and implicit memory can be dissociated in the amnesic syndrome and in posthypnotic amnesia, so explicit and implicit perception can be dissociated in “subliminal” perception (Marcel, 1983) or prosopagnosia (Bauer, 1984). In the case of hypnotic analgesia, explicit perception of the pain stimulus is reflected in subjects’ self-reports of pain, whereas implicit perception is reflected in their physiological responses to the pain stimulus.

Implicit Perception in Hypnotic Deafness

Dissociations between explicit and implicit perception can also be observed in two other classes of hypnotic phenomena. In hypnotic anesthesia, the subject experiences a marked reduction in sensory acuity: Examples include hypnotic deafness, blindness, and tactile anesthesia. In hypnotic negative hallucinations, the subject fails to perceive a particular object (or class of objects) in the environment, but otherwise retains normal levels of sensory function (hypnotized subjects can experience positive hallucinations as well, perceiving objects that are not actually present in their sensory fields). Although the hypnotic anesthesias mimic sensory disorders, the content-specificity of the negative hallucinations marks them as more perceptual in nature.

Careful psychophysical studies, employing both magnitude-estimation (Crawford, Macdonald, & Hilgard, 1979) and signal-detection (Graham & Schwarz, 1973) paradigms, have documented the loss of auditory acuity in hypnotic deafness.
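In signal-detection terms (the standard formulation, not notation drawn from the studies cited above), a genuine loss of acuity appears as a drop in sensitivity d′, computed from hit and false-alarm rates, rather than merely a shift in the response criterion c:

```latex
% Sensitivity: the separation of the signal and noise
% distributions in standard-deviation units, where z is
% the inverse of the standard normal CDF.
d' = z(\text{hit rate}) - z(\text{false-alarm rate})

% Response criterion: the subject's bias toward reporting
% "no signal," independent of sensitivity.
c = -\tfrac{1}{2}\bigl[z(\text{hit rate}) + z(\text{false-alarm rate})\bigr]
```

The value of this framework for hypnotic deafness is that a reduced d′ cannot be produced simply by a conservative shift in reporting; it indicates a change in the detectability of the stimulus itself.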


Nevertheless, as is the case in posthypnotic amnesia and hypnotic analgesia, subjects experiencing these phenomena show through their behavior that stimuli in the targeted modality continue to be processed, if outside of awareness. For example, Hilgard’s hidden observer has also been observed in hypnotic deafness (Crawford et al., 1979). Hypnotically deaf subjects continue to manifest speech dysfluencies when subjected to delayed auditory feedback (Scheibe, Gray, & Keim, 1968; Sutcliffe, 1961) and, in the case of unilateral deafness, show substantial numbers of intrusions from material presented to their deaf ear (Spanos, Jones, & Malfara, 1982). Nor does hypnotic deafness abolish the “beats” produced by dissonant tones (Pattie, 1937) or cardiovascular responses to an auditory conditioned stimulus (Sabourin, Brisson, & Deschamb, 1980).

Spanos and Jones (Spanos et al., 1982) preferred to interpret their findings as revealing that hypnotically deaf subjects heard perfectly well, but Sutcliffe (1960, 1961) offered a more subtle interpretation. In his view, the persisting effects of delayed auditory feedback certainly contradicted the “credulous” view that hypnotic deafness was identical to the actual stimulus state of affairs that might arise from damage to the auditory nerve or lesions in the auditory projection area – or, for that matter, the simple absence of an auditory stimulus (Erickson, 1938a,b). But instead of drawing the “skeptical” conclusion that hypnotized subjects were engaged in mere role-playing activity, Sutcliffe suggested that they were deluded about their experiences – that is, that they believed that they heard nothing, when in fact they did. Sutcliffe’s emphasis on delusion can be viewed as an anticipation of Hilgard’s (1977) neodissociation theory of divided consciousness, where the subjects’ delusional beliefs reflect their actual phenomenal experience, and the evidence of preserved hearing reflects something like implicit perception.

Only one study has used priming to examine implicit perception in hypnotic deafness. Nash and his colleagues found that hypnotic deafness reduced the likelihood that subjects would select, from a visually presented array, words that were phonetically (but not semantically) similar to words that had been spoken to them (Nash, Lynn, Stanley, & Carlson, 1987). Because the hypnotic subjects selected fewer such words compared to baseline control subjects, this counts as an instance of negative phonological priming, and thus of implicit perception as well.

Implicit Perception in Hypnotic Blindness

Similar paradoxes are observed in the visual domain. Inspired by an earlier experimental case study of hypnotic blindness (Brady, 1966; Brady & Lind, 1961; see also Bryant & McConkey, 1989d; Grosz & Zimmerman, 1965), Sackeim and his colleagues (Sackeim, Nordlie, & Gur, 1979) asked a hypnotically blind subject to solve a puzzle in which the correct response was indicated by the illumination of a lamp. Performance was significantly below chance. Bryant and McConkey (1989a,b) conducted a similar experiment, with a larger group of subjects, generally finding above-chance performance. The difference in outcomes may reflect a number of factors, including the subjects’ motivation for the experiment and individual differences in cognitive style (Bryant & McConkey, 1990a,b,c), but either outcome shows that the visual stimulus was processed by the hypnotically blind subjects.

Dissociations between explicit and implicit perception are also suggested by a series of studies by Leibowitz and his colleagues, who found that ablation of the background did not affect perception of the Ponzo illusion (Miller, Hennessy, & Leibowitz, 1973) and that suggestions for tubular (tunnel) vision had no effect on the size-distance relation (Leibowitz, Lundy, & Guez, 1980) or on illusory feelings of egomotion (roll vection) induced by viewing a rotating object (Leibowitz, Post, Rodemer, Wadlington, & Lundy, 1980). These experiments are particularly interesting because they make use of a class of perceptual phenomena known as perceptual couplings, which are apparently inviolable links between one perceptual organization and another (Epstein, 1982; Hochberg, 1974; Hochberg & Peterson, 1987; Peterson & Hochberg, 1983). If an observer sees two lines converging in the distance, he or she must see two identical horizontal bars arranged vertically along these lines as differing in length. In the Miller et al. study, ablation of the converging lines is a failure of explicit perception, but the persistence of the perceptually coupled Ponzo illusion indicates that they have been perceived implicitly.

Perceptual couplings also seem to be involved in the finding of Blum and his colleagues that hypnotic ablation of surrounding stimuli did not alter either the magnitude of the Titchener-Ebbinghaus illusion (Blum, Nash, Jansen, & Barbour, 1981) or the perception of slant in a target line (Jansen, Blum, & Loomis, 1982). They are also implicated in the observation that hypnotic anesthesia of the forearm does not affect perceptual adaptation of the pointing response to displacing prisms (Spanos, Dubreuil, Saad, & Gorassini, 1983; Spanos, Gorassini, & Petrusic, 1981; Spanos & Saad, 1984). The subjects may not feel their arms moving during the pointing trials, but the fact that adaptation occurs indicates that the kinesthetic information has been processed anyway.6

Although the evidence from perceptual couplings is consistent with the notion of spared implicit perception, only two studies have used priming methodologies to seek evidence of unconscious vision in hypnotic blindness. Bryant and McConkey (1989a) showed subjects pairs of words consisting of a homophone and a disambiguating context word (e.g., window-pane), half under conditions of ordinary vision and half during hypnotically suggested blindness. On a later memory test, the subjects generally failed to recall words they had been shown while blind. On a subsequent test, however, when the words were presented auditorily, they tended to spell them in line with their earlier visual presentation (e.g., pane rather than pain). A subsequent study found a similar priming effect on word-fragment completion (Bryant & McConkey, 1994). In both cases, priming was diminished somewhat by hypnotic blindness compared to trials where the subjects saw the primes clearly, but any evidence of priming counts as evidence of implicit perception – and the magnitude of priming in both studies was substantial by any standards.

Color, Meaning, and the Stroop Effect

In addition to total (binocular or uniocular) or tubular blindness, hypnotic subjects can also be given suggestions for color blindness. Although some early research indicated that hypnotic colorblindness affected performance on the Ishihara test and other laboratory-based tests of color perception (Erickson, 1939), the claim has long been controversial (Grether, 1940; Harriman, 1942a,b), and the most rigorous study of this type found no effects (Cunningham & Blum, 1982). Certainly, hypnotically colorblind subjects do not show patterns of test performance that mimic those of the congenitally colorblind. Nor do hypnotic suggestions for colorblindness abolish Stroop interference effects (Harvey & Sipprelle, 1978; Mallard & Bryant, 2001). All of these results are consistent with the hypothesis that color is processed implicitly in hypnotically induced colorblindness, even if it is not represented in the subjects’ phenomenal awareness.

However, hypnotic suggestions of a different sort may indeed abolish Stroop interference. Instead of suggesting that subjects were colorblind, Raz and his colleagues suggested that the color words were “meaningless symbols . . . like characters of a foreign language that you do not know . . . gibberish” (Raz, Shapiro, Fan, & Posner, 2002, p. 1157). The focus on meaning, rather than color, makes this suggestion more akin to the hypnotic agnosia (or, perhaps, alexia) studied by Spanos and his colleagues in relation to hypnotic amnesia (Spanos, Radtke et al., 1982). In contrast to the effects of suggested colorblindness, suggested agnosia completely abolished the Stroop interference effect. Subsequent research, employing a drug to induce cycloplegia and thus eliminate accommodation effects, ruled out peripheral mechanisms, such as visual blurring or looking away from the stimulus (Raz et al., 2003). However, preliminary fMRI research suggests that the reduced Stroop interference reflects a nonspecific dampening of visual information processing (Raz, Fan, Shapiro, & Posner, 2002) – a generalized effect on visual information processing, rather than an effect mediated at linguistic or semantic levels. This generalized effect on visual information processing may explain why Stroop interference did not persist as an implicit expression of semantic processing, despite the conscious experience of agnosia.

Implicit Emotion

Hypnotic suggestions can alter conscious emotion as well as perception and memory. In fact, the suggested alteration of emotion has been a technique for psychotherapy at least since the time of Janet (Ellenberger, 1970), and has played a role in hypnotic studies of psychodynamic processes (Blum, 1961, 1967, 1979; Reyher, 1967). Aside from its inclusion in an advanced scale of hypnotic susceptibility (Hilgard, 1965), the phenomenon and its underlying mechanisms have not been subject to much empirical study. However, more recent studies leave little doubt that hypnotic suggestions can alter subjects’ conscious feeling states, just as they can alter their conscious percepts and memories (Bryant & McConkey, 1989c; Weiss, Blum, & Gleberman, 1987).

As with perception and memory, however, special interest attaches to the question of whether the “blocked” emotional responses can nevertheless influence the person’s ongoing experience, thought, and action outside of conscious awareness. Until recently, the idea of unconscious emotion has generally been seen as a holdover from an earlier, more psychodynamically oriented period in the history of psychology. However, in an era when dissociations between explicit and implicit perception and memory are widely accepted as evidence of unconscious cognitive processing, there seems little reason to reject out of hand the prospect of dissociations between explicit and implicit emotion (Kihlstrom, Mulvaney, Tobias, & Tobis, 2000).7 Kihlstrom et al. have proposed that, in the absence of self-reported emotion, behavioral and physiological indices of emotional response, such as facial expressions and heart rate changes, might serve as evidence of implicit, unconscious emotional responding. In fact, a study by Bryant and Kourch found that hypnotic suggestions for emotional numbing diminished self-reported emotional responses, but had no effect on facial expressions of emotion (Bryant & Kourch, 2001). Although this finding is suggestive of a dissociation between explicit and implicit expressions of emotion, two other studies found that emotional numbing diminished both subjective reports and facial expressions (Bryant & Mallard, 2002; Weiss et al., 1987). With respect to the dissociation between explicit and implicit emotion, then, the effects of hypnotically induced emotional numbing are currently uncertain.

Anomalies of Dissociation in Hypnosis

Most of the classic phenomena of hypnosis – amnesia, analgesia, and the like – appear to be dissociative in two related but different senses. In the first place, hypnotized subjects lack awareness of percepts and memories that would ordinarily be accessible to consciousness. This disruption in conscious awareness is the hallmark of the dissociative disorders encountered clinically, including “functional” amnesia and “hysterical” deafness. In the second place, these percepts and memories continue to influence the subject’s ongoing experience, thought, and action outside awareness – creating dissociations between explicit and implicit memory, or explicit and implicit perception, similar to those that have now become quite familiar in the laboratory or neurological clinic. As Hilgard (1977) noted, it is as if consciousness has been divided, with one stream of mental life (e.g., a failure of conscious recollection) proceeding in phenomenal awareness while another stream (e.g., the implicit expression of memory encoding, storage, and retrieval) proceeds outside of awareness.


Co-Consciousness and Trance Logic

Sometimes, however, the suggested and actual states of affairs are both represented in conscious awareness, leading to a set of inconsistencies and paradoxes that Orne, in a classic paper, labeled “trance logic” (Orne, 1959). Orne defined trance logic as the “apparently simultaneous perception and response to both hallucinations and reality without any apparent attempts to satisfy a need for logical consistency” (p. 295) – or, as he often put it in informal conversation, “the peaceful coexistence of illusion and reality.” For example, in the double hallucination, it is suggested that the subject will see and interact with a confederate sitting in a chair that is actually empty. When the subject’s attention is drawn to the real confederate, who has been quietly sitting outside his or her field of vision, Orne reported that hypnotized subjects typically maintained both the perception of the real confederate and the hallucination, exhibiting confusion as to which was the real confederate. Similarly, many subjects reported that they could see through the hallucinated confederate to the back of the armchair. Thus, the subjects were simultaneously aware of two mutually contradictory states of affairs, apparently without feeling the need to resolve the contradictions inherent in the experience.

Orne’s initial report of trance logic was somewhat impressionistic in nature, but later investigators have attempted to study the phenomenon more quantitatively – with somewhat mixed results (Hilgard, 1972; R. F. Q. Johnson, 1972; Johnson, Maher, & Barber, 1972; McConkey, Bryant, Bibb, Kihlstrom, & Tataryn, 1990; McConkey & Sheehan, 1980; Obstoj & Sheehan, 1977; Sheehan, Obstoj, & McConkey, 1976). On the other hand, everyone who has ever worked with hypnotized subjects has seen the phenomenon. Although Orne (1959) held the view that trance logic was a defining characteristic of hypnosis, this does not seem to be the case – not least because similar inconsistencies and anomalies of response can occur in ordinary imagination as well as in hypnosis (McConkey, Bryant, Bibb, & Kihlstrom, 1991). Spanos (Spanos, DeGroot, & Gwynn, 1987) suggested that the occurrence of trance logic was an artifact of incomplete response to the suggestion on the part of the subject, but this proposal seems to be based on the assumption that a “complete” image or hallucination would be tantamount to “the real thing” – the actual perceptual state of affairs produced by an adequate environmental stimulus. On the other hand, it may well be that the hallucination is quite complete, in the sense of being subjectively compelling to the person who experiences it – but the accompanying division of consciousness might be incomplete. In this case, trance logic reflects a kind of co-consciousness in which two different and mutually contradictory streams of mental activity – one perceptual, one imaginary – are represented simultaneously in phenomenal awareness.

Making the Unconscious Conscious

In the case of posthypnotic amnesia and hypnotic analgesia, as well as the hypnotic anesthesias and negative hallucinations, it seems that hypnotized subjects are able to become unaware of percepts and memories that would ordinarily be represented in phenomenal awareness. In contrast, it has sometimes been suggested that hypnosis also has the opposite capacity – to enable subjects to become aware of percepts and memories that would not ordinarily be accessible to conscious introspection. For example, in hypnotic hypermnesia, subjects receive suggestions that they will be able to remember events that they have forgotten. In hypnotic age regression, it is suggested that they will relive a previous period in their lives – an experience that is often accompanied by the apparent recovery of long-forgotten childhood memories.

Hypermnesia suggestions are sometimes employed in forensic situations, with forgetful witnesses and victims, or in therapeutic situations, to help patients remember traumatic personal experiences. Although field studies have sometimes claimed that hypnosis can powerfully enhance memory, these reports are mostly anecdotal in nature and generally fail to seek independent corroboration of the memories produced during hypnosis. Moreover, they have not been supported by studies run under laboratory conditions. A report by the Committee on Techniques for the Enhancement of Human Performance, a unit of the U.S. National Research Council, concluded that gains in recall produced by hypnotic suggestion were rarely dramatic and were matched by gains observed when subjects were not hypnotized (Kihlstrom & Eich, 1994; Nogrady, McConkey, & Perry, 1985). In fact, there is some evidence from the laboratory that hypnotic suggestion can interfere with normal hypermnesic processes (Register & Kihlstrom, 1987). To make things worse, any increases obtained in valid recollection can be met or exceeded by increases in false recollections (Dywan & Bowers, 1983). Moreover, hypnotized subjects (especially those who are highly hypnotizable) may be vulnerable to distortions in memory produced by leading questions and other subtle, suggestive influences (Sheehan, 1988).

Similar conclusions apply to hypnotic age regression (Nash, 1987). Although age-regressed subjects may experience themselves as children and may behave in a childlike manner, there is no evidence that they actually undergo either abolition of characteristically adult modes of mental functioning or reinstatement of childlike modes of mental functioning. Nor do age-regressed subjects experience the revivification of forgotten memories of childhood. Hypnotic age regression can be a subjectively compelling experience for subjects, but it is first and foremost an imaginative experience. As with hypnotic hypermnesia, any memories recovered during hypnotic age regression cannot be accepted at face value in the absence of independent corroboration.

Some clinical practitioners have objected to these conclusions, on the ground that laboratory studies of memory generally lack ecological validity (Brown, Scheflin, & Hammond, 1998). In fact, one diary-based study did find some evidence that hypnosis enhanced the recovery of valid memory of actual personal experiences (Hofling, Heyl, & Wright, 1971). This study has not been replicated, however, and another study, also employing lifelike stimulus materials (a gangland assassination staged before an audience of law enforcement officers), found no advantage for hypnosis whatsoever (Timm, 1981). Perhaps not surprisingly, many legal jurisdictions severely limit the introduction of memories recovered through hypnosis, out of a concern that such memories may be unreliable and tainted by suggestion and inappropriately high levels of confidence. An abundance of caution seems to be appropriate in this instance, but in the present context it seems that hypnotic suggestion is better at making percepts and memories inaccessible to consciousness than it is at making unconscious percepts and memories accessible to phenomenal awareness.

Automaticity in Hypnosis

Even before the discovery of implicit memory and the rediscovery of “subliminal” perception, psychology’s renewed interest in unconscious mental life was signalled by the general acceptance of a distinction between automatic and controlled mental processes. As a first approximation, automatic processes are executed unconsciously, in a reflex-like fashion, whereas controlled processes are executed consciously and deliberately (Kihlstrom, 1987, 1994b). A popular example of automaticity is the Stroop color-word effect, in which subjects have difficulty naming the colors in which words are printed when the words themselves name a different color (MacLeod, 1991, 1992; Stroop, 1935). Despite the subjects’ conscious intention to name the ink colors and to ignore the words, they automatically process the words anyway, and this processing activity interferes with the naming task.

According to traditional formulations (LaBerge & Samuels, 1974; Posner & Snyder, 1975; Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977, 1984), automatic processes share five properties in common:

1. Inevitable Evocation: Automatic processes are necessarily engaged by the appearance of specific cues in the stimulus environment, independent of the person’s conscious intentions.

2. Incorrigible Execution: Once invoked, automatic processes proceed unalterably to their conclusion and cannot be modified by conscious activity.

3. Effortlessness: The execution of an automatic process consumes little or no attentional resources and therefore does not interfere with other ongoing mental processes.

4. Speed: Automatic processes are executed rapidly, on the order of seconds or even fractions of a second, too quickly to be vulnerable to conscious control.

5. Unavailability: Perhaps because they consume no attentional resources, perhaps because they are fast, or perhaps because they are represented as procedural rather than declarative knowledge (Anderson, 1992), automatic processes are unconscious in the strict sense of being unavailable to conscious introspection in principle, and they can be known only by inference from performance data.

The Experience of Involuntariness in Hypnosis

As indicated at the outset of this chapter, there is much about hypnosis that appears to be automatic. Indeed, the experience of involuntariness, sometimes called the classic suggestion effect (Weitzenhoffer, 1974), is part and parcel of the experience of hypnosis. Hypnotic subjects don’t simply imagine heavy objects in their hands and allow their arms to lower accordingly. They outstretch their hands voluntarily, as an act of ordinary compliance with the hypnotist’s instruction or request to do so; but when the hypnotist starts giving the suggestion, they feel the heaviness in their hands and their arms drop, as involuntary happenings rather than as voluntary doings (Sarbin & Coe, 1972). Not all responses to hypnotic suggestion are experienced as completely involuntary, but the experience is strongest among those who are most highly hypnotizable (Bowers, 1982; Bowers, Laurence, & Hart, 1988).

Automaticity lies at the heart of the “social cognitive” theory of hypnosis proposed by Kirsch and Lynn (Kirsch, 2000; Kirsch & Lynn, 1997, 1998b), which asserts that hypnotic behaviors are generated automatically by subjects’ expectancies that they will occur, much in the manner of a self-fulfilling prophecy (Rosenthal & Rubin, 1978; Snyder & Swann, 1978). This view, in turn, is rooted in James’s (1890) theory of ideomotor action (see also Arnold, 1946), which held that motor behavior was generated automatically by the person’s idea of it. Conscious control over behavior, then, is accomplished by exerting conscious control over one’s cognitive and other mental states; but once a subject attends to a particular idea, the resulting behavior occurs naturally.

Kirsch and Lynn’s social cognitive, ideomotor theory of hypnosis is distinct from Spanos’s “sociocognitive” approach (Spanos, 1986b), which holds either that subjects fabricate reports of involuntariness to convince the hypnotist that they are, in fact, deeply hypnotized (Spanos, Cobb, & Gorassini, 1985) or that certain features of the hypnotic context lead subjects to misattribute their responses to the hypnotist’s suggestions, instead of to their own voluntary actions (Spanos, 1986a). Spanos’s latter view, that the hypnotic experience of involuntariness is illusory, was also embraced by Wegner (2002; but see Kihlstrom, 2004b). Working from a neuropsychological perspective, Woody and Bowers have suggested that the experience of involuntariness is a genuine reflection of the effects of hypnosis on frontal-lobe structures involved in executive functioning (Woody & Bowers, 1994; Woody & Sadler, 1998).

On the other hand, it is possible that the hypnotic experience of involuntariness is illusory after all, though not for the reasons suggested by Spanos and Wegner. After all, as Shor noted, “A hypnotized subject is not a will-less automaton. The hypnotist does not crawl inside a subject’s body and take control of his brain and muscles” (Shor, 1979b, p. 127). From the framework of Hilgard’s neodissociation theory of divided consciousness (Hilgard, 1977; see also Kihlstrom, 1992a), the experience of involuntariness reflects an amnesia-like barrier that impairs subjects’ conscious awareness of their own role in producing hypnotic responses. In this view, the hypnotic subject actively imagines a heavy object in his outstretched hand, and actively lowers his hand and arm as if it were heavy, but is not aware of doing so. Thus, the subject’s behavior is technically voluntary in nature, but is experienced as involuntary, as occurring automatically, because the subject is unaware of his or her own role as the agent of the behavior. In other words, the apparent disruption of conscious control actually occurs by virtue of a disruption of conscious awareness, a proposal that (perhaps) gains credence from the dissociations between explicit and implicit memory and perception discussed earlier.

Automaticity in Posthypnotic Suggestion

Perhaps the most dramatic demonstration of apparent automaticity in hypnosis is posthypnotic suggestion, in which the subject responds after the termination of hypnosis to a suggestion administered while he or she was still hypnotized. On the group-administered HGSHS:A, for example, it is suggested that when the subjects hear two taps, they will reach down and touch their left ankles, but forget that they were instructed to do so. After the termination of hypnosis, many highly hypnotizable subjects will respond quickly to such a prearranged cue, without knowing why they are doing so, or confabulating a reason, such as that they feel an itch. They may even be unaware that they are doing anything unusual at all.

Any suggested experience that can occur during hypnosis can also occur posthypnotically, provided that the subject is sufficiently hypnotizable. For this reason, posthypnotic suggestion has always been problematic for some views of hypnosis as an altered state of consciousness, because the phenomenon occurs after the hypnotic state has been ostensibly terminated. So far as we can tell, subjects do not re-enter hypnosis while they are responding to the posthypnotic suggestion. At least, they are not particularly responsive to other hypnotic suggestions during this time (Reyher & Smyth, 1971). We cannot say that hypnosis caused the behavior to occur, because the subjects are not hypnotized when they make their response. Nevertheless, some alteration of consciousness has occurred, because at the very least they are not aware of what they are doing or why (Sheehan & Orne, 1968).

In the present context, posthypnotic suggestion is of interest because it seems to occur automatically in response to the prearranged cue (Erickson & Erickson, 1941). Certainly posthypnotic suggestion differs from ordinary behavioral compliance. Damaser (Damaser, 1964; see also Orne, 1969) gave one group of subjects a posthypnotic suggestion to mail the experimenter one postcard per day; a control group received an ordinary social request to perform the same behavior, and a third group received both the posthypnotic suggestion and the social request. Surprisingly, the subjects who received the social request mailed more postcards than did those who received only the posthypnotic suggestion (see also Barnier & McConkey, 1999b). Apparently, those who agreed to the social request felt that they were under some obligation to carry it out, but those who received the posthypnotic suggestion carried it out only so long as they felt the urge to do so. This urge can be powerful: Subjects who fail to respond to a posthypnotic suggestion on an initial test appear to show a persisting tendency to perform the suggested behavior at a later time (Nace & Orne, 1970). Posthypnotic behavior can persist for long periods of time (Edwards, 1963), even after the posthypnotic suggestion has been formally canceled (Bowers, 1975).

Nevertheless, close examination shows that posthypnotic behavior does not meet the technical definition of automaticity, as it has evolved within cognitive psychology (Barnier, 1999). In the first place, posthypnotic suggestion fails the test of inevitable evocation. Except under special circumstances (Orne, Sheehan, & Evans, 1968), response to a posthypnotic suggestion declines markedly outside the experimental context in which the suggestion is originally given (Barnier & McConkey, 1998; Fisher, 1954; Spanos, Menary, Brett, Cross, & Ahmed, 1987). Moreover, like all other aspects of hypnosis, posthypnotic behavior depends intimately on both the subject’s interpretation of the hypnotist’s suggestion and the context in which the cue appears (Barnier & McConkey, 1999a, 2001). It is in no sense reflexive in nature. Nor is posthypnotic suggestion effortless. Subjects respond to simple posthypnotic suggestions more frequently than to complex ones (Barnier & McConkey, 1999c), suggesting that the activity makes demands on the subject’s information-processing capacity. Responding to a posthypnotic suggestion interferes with responding to a waking instruction, even when the response requirements of the two tasks do not conflict (Hoyt & Kihlstrom, 1986). Thus, responding to a posthypnotic suggestion seems to consume more information-processing capacity than would be expected of a truly automatic process.

Posthypnotic suggestion does not appear to be an instance of automaticity, but it does appear to be an instance of prospective memory (Einstein & McDaniel, 1990), in which subjects must remember to perform a specified activity at some time in the future. Awareness of the posthypnotic suggestion does not seem to interfere with posthypnotic behavior (Barnier & McConkey, 1999c; Edwards, 1956; Gandolfo, 1971). But when accompanied by posthypnotic amnesia, posthypnotic behavior takes on some of the qualities of implicit memory. Even though subjects may forget the suggestion, the fact that they carry out the suggestion on cue shows clearly that the prospective memory has been encoded and influences subsequent behavior in the absence of conscious recollection.

Hypnosis in Mind and Body

Researchers have long been interested in biological correlates of hypnosis. In the 19th century, Braid likened hypnosis to sleep, whereas Pavlov considered it to be a state of cortical inhibition (Gauld, 1992). In the mid-20th-century revival of interest in consciousness, some theorists speculated that hypnosis entailed an increase in high-voltage, low-frequency alpha activity in the EEG, though this proved to be an artifact of relaxation and eye closure (Dumas, 1977; Evans, 1979b). The discovery of hemispheric specialization, with the left hemisphere geared to analytic and the right hemisphere to nonanalytic tasks, coupled with the notion that the right hemisphere is “silent” or “unconscious,” led to the speculation that hypnotic response is somehow mediated by right-hemisphere activity (Bakan, 1969). Studies employing both behavioral and electrophysiological paradigms (e.g., MacLeod-Morgan & Lack, 1982; Sackeim, 1982) have been interpreted as indicating increased activation of the right hemisphere among highly hypnotizable individuals, but positive results have proved difficult to replicate (e.g., Graffin, Ray, & Lundy, 1995; Otto-Salaj, Nadon, Hoyt, Register, & Kihlstrom, 1992), and interpretation of these findings remains controversial.

It should be understood that hypnosis is mediated by verbal suggestions, which must be interpreted by the subject in the course of responding. Thus, the role of the left hemisphere should not be minimized (Jasiukaitis, Nouriani, Hugdahl, & Spiegel, 1995; Rainville, Hofbauer, Paus, Bushnell, & Price, 1999). One interesting proposal is that hypnotizable individuals show greater flexibility in deploying the left and right hemispheres in a task-appropriate manner, especially when they are actually hypnotized (Crawford, 2001; Crawford & Gruzelier, 1992). Because involuntariness is so central to the experience of hypnosis, it has also been suggested that the frontal lobes (which organize intentional action) may play a special role in hypnosis, and especially in the experience of involuntariness (Woody & Bowers, 1994; Woody & Sadler, 1998). Along these lines, Farvolden and Woody have found that highly hypnotizable individuals perform relatively poorly on neuropsychological tasks that assess frontal-lobe functioning (Farvolden & Woody, 2004).

“Neutral” Hypnosis

Although most work on the neural correlates of hypnosis has employed psychophysiological measures such as the EEG and event-related potentials, it seems likely that a better understanding of the neural substrates of hypnosis may come from the application of brain imaging technologies (Barnier & McConkey, 2003; Killeen & Nash, 2003; Ray & Tucker, 2003; Woody & McConkey, 2003; Woody & Szechtman, 2003). One approach has been to scan subjects after they have received a hypnotic induction but before they have received any specific suggestions, on the assumption that such a procedure will reveal the neural correlates (if indeed any exist) of hypnosis as a generalized altered state of consciousness. For example, one PET study found that the induction of hypnosis generated widespread activation of occipital, parietal, precentral, premotor, and ventrolateral prefrontal cortex in the left hemisphere, and the occipital and anterior cingulate cortex of the right hemisphere; in other words, pretty much the entire brain (Maquet et al., 1999). At the same time, another PET study found that the induction of hypnosis was accompanied by increased activation of occipital cortex and decreases in the right inferior parietal lobule, left precuneus, and posterior cingulate (Rainville, Hofbauer et al., 1999). As is so often the case in brain imaging experiments, the difference in results may be due to differences in control conditions. Whereas Rainville et al. asked their hypnotized subjects simply to relax (Rainville, Hofbauer et al., 1999), Maquet et al. asked their subjects to review a pleasant life experience (Maquet et al., 1999).

Although the concept of “neutral” hypnosis has had its proponents (Kihlstrom & Edmonston, 1971), in subjective terms the state, such as it is, differs little from eyes-closed relaxation (Edmonston, 1977, 1981) and bears little resemblance to the dissociative and hallucinatory experiences associated with specific hypnotic suggestions. Moreover, it is unlikely that imaging subjects who are merely in neutral hypnosis, and not responding to particular hypnotic suggestions, will tell us much about the neural correlates of hypnosis, because the experiences of hypnotic subjects are so varied, depending on the suggestion to which they are responding. A more fruitful tack will likely involve imaging subjects while they are responding to particular hypnotic suggestions. Just as the neural correlates of NREM sleep differ from those of REM sleep (Hobson, Pace-Schott, & Stickgold, 2000), so the neural correlates of neutral hypnosis will differ from those of specific, suggested hypnotic phenomena.

Hypnotic Analgesia

Perhaps because of the added interest value that comes with clinical application, most brain imaging studies of hypnotic suggestions have focused on analgesia. A pioneering study using the 133Xe technique found bilateral increases in the activation of the orbitofrontal region, as well as in somatosensory cortex, during analgesia compared to a resting baseline and a control condition in which subjects attended to the pain (Crawford, Gur, Skolnick, Gur, & Benson, 1993). The investigators suggested that these changes reflected the increased mental effort needed to actively inhibit the processing of somatosensory information. A more recent PET study implicated quite different regions, particularly the anterior cingulate cortex (ACC). However, this later study also employed quite a different procedure, modulating pain perception through a pleasant autobiographical reverie instead of a specific suggestion for analgesia (Faymonville et al., 2000).

Because the specific wording of suggestions is so important in hypnosis, perhaps the most interesting brain imaging studies of analgesia compared suggestions targeting sensory pain, which relates to the location and physical intensity of the pain stimulus, with suggestions targeting suffering, or the meaning of the pain (Melzack, 1975). Standard hypnotic suggestions for analgesia affect both sensory pain and suffering (Hilgard & Hilgard, 1975), but these two dimensions can also be dissociated by altering the specific wording of the suggestion (Rainville, Carrier, Hofbauer, Bushnell, & Duncan, 1999). Using hypnotic suggestions, Rainville and his colleagues have found that suggestions that alter the unpleasantness of a pain stimulus, without altering its intensity, are associated with changes in ACC but not in somatosensory cortex (Rainville, Duncan, Price, Carrier, & Bushnell, 1997; Rainville et al., 2002).

Hallucinations and Imagery

Brain imaging studies also bear on the relation between hypnotic hallucinations and normal imagery. On the surface, at least, imagery would seem to be a cognitive skill relevant to hypnosis, and some theorists sometimes write as if hypnosis were only a special case of a larger domain of mental imagery (for reviews, see Bowers, 1992; Glisky et al., 1995; Kunzendorf, Spanos, & Wallace, 1996; Sheehan, 1982). On the contrary, Szechtman and his colleagues found that hypnotized subjects experiencing suggested auditory hallucinations showed activation of the right ACC; this area was also activated during normal hearing, but not during auditory imagery (Szechtman, Woody, Bowers, & Nahmias, 1998). Interestingly, a parallel study found that schizophrenic patients also showed right ACC activation during their auditory hallucinations (Cleghorn et al., 1992). Szechtman and his colleagues suggested that activation of this region might cause internally generated thoughts and images to be confused with those arising from the external stimulus environment (Woody & Szechtman, 2000a,b). Another interpretation, based on the role of the ACC in emotion, is that the activity in this region reflects affective arousal to experiences, whether perceptual or hallucinatory, that surprise the subject; mental images, being deliberately constructed by the subject, would not have this surprise value.

In another study, Kosslyn and his colleagues studied the modulation of color perception through hypnotic suggestion (Kosslyn et al., 2000). After PET imaging identified a region (in the fusiform area) that was differentially activated by the presentation of chromatic and gray-scale stimuli, these investigators gave suggestions to highly hypnotizable subjects that they would perceive the colored stimulus in gray scale, and the gray-scale stimulus as colored. The result was that the fusiform region was activated in line with subjects’ perceptions, actual and hallucinated color or actual and hallucinated gray scale, independent of the stimulus. In contrast to nonhypnotic color imagery, which appears to activate only the right fusiform region (Howard et al., 1998), hypnotically hallucinated color activated both the left and right hemispheres. Taken together with the Szechtman et al. (1998) study, these results suggest that hypnotic hallucinations are in at least some sense distinct from mental images.

Brain States and States of Consciousness

The controversy over the very nature of hypnosis has often led investigators to seek evidence of neural and other biological changes to demonstrate that hypnosis is “real” or, alternatively, to debunk the phenomenon as illusion and fakery. For example, the lack of reliable physiological correlates of hypnotic response has been interpreted by Sarbin as supporting his role-enactment interpretation of hypnosis (Sarbin, 1973; Sarbin & Slagle, 1979). On the other hand, Kosslyn and his colleagues argued that the activity of the fusiform color area in response to suggestions for altered color vision “support[s] the claim that hypnosis is a psychological state with distinct neural correlates and is not just the result of adopting a role” (Kosslyn et al., 2000, p. 1279).

Neither position is quite correct. Physiological correlates are nice when they exist, and they may enable otherwise skeptical observers to accept the phenomena of hypnosis as real. But such correlates are neither necessary nor sufficient to define an altered state of consciousness. In the final analysis, consciousness is a psychological construct, not a biological one, and can only be defined at a psychological level of analysis. The phenomena of hypnosis (amnesia, analgesia, positive and negative hallucinations, and the like) obviously represent alterations in conscious perception and memory. The neural correlates of these phenomena are a matter of considerable interest, but they are another matter entirely.

At the same time, the phenomena of hypnosis seem to offer a unique vantage point from which consciousness and its neural correlates can be studied, because they remind us that consciousness is not just a matter of attention and alertness. Mental states are also a matter of aboutness: They have intentionality, in that they refer to objects that exist and events that occur in the world outside the mind. Hypnotized subjects are conscious, in the sense of being alert and attentive, but when certain suggestions are in effect they are not conscious of some things: of some event in the past or some object in their current environment. The fact that percepts and memories can be explicit or implicit means that mental states themselves can be conscious or unconscious.

The phenomena of hypnosis remind us that there is a difference between being aware of something explicitly and being unaware of something that nonetheless implicitly influences our ongoing experience, thought, and action. Almost uniquely, hypnosis allows us to create, and reverse, dissociations between the explicit and the implicit, between the conscious and the unconscious, at will in the laboratory. The difference between implicit and explicit percepts and memories, then, is the difference that makes for consciousness. And the neural correlates of that difference are the neural correlates of consciousness.

Acknowledgements

The point of view represented in this paper is based on research supported by Grant #MH-35856 from the National Institute of Mental Health.

Notes

1. This was true even before hypnosis received its name (Braid, 1843; Gravitz & Gerton, 1984; Kihlstrom, 1992c); even before that, the status of hypnosis as an altered organismal state was controversial. In the 18th century, Mesmer thought his “crises” were induced by animal magnetism, but the Franklin Commission chalked them up to mere imagination (Kihlstrom, 2002b). In the 19th century, Charcot thought that hypnosis was closely related to hysteria and to neurological disease, whereas Liebeault and Bernheim attributed its effects to simple suggestion. Perhaps because he was writing in the heyday of functional behaviorism, Hull (1933) did not confront the “state-nonstate” issue: For him, hypnosis was an intrinsically interesting phenomenon that psychology ought to be able to explain (Kihlstrom, 2004a).

2. Lacking the explicit-implicit distinction subsequently introduced by Schacter (see also Graf & Schacter, 1985; Schacter, 1987; Schacter & Graf, 1986), Kihlstrom noted simply that the priming represented “a residual effect of the original learning episode on a subsequent task involving retrieval from ‘semantic’ memory” (p. 246), that it “took place outside of phenomenal awareness,” and that it was “similar to one which occurs in patients diagnosed with the amnesic syndrome” (p. 246). A similar interpretation appeared in 1985 (Kihlstrom, 1985), in a paper that had been written in 1984, and the relevance of the explicit-implicit distinction was made explicit (sorry) in 1987 (Kihlstrom, 1987).

3. Interestingly, David et al. obtained a similar pattern of results for directed forgetting in the normal waking state. Posthypnotic amnesia and directed forgetting are both examples of retrieval inhibition (Anderson & Green, 2001; Anderson et al., 2004; Geiselman, Bjork, & Fishman, 1983; Levy & Anderson, 2002), but the two paradigms generally differ greatly in other respects (Kihlstrom, 1983): for example, in the role of incidental or intentional learning, the amount of study devoted to the items, the temporal location of the cue to forget, the retention interval involved, and the means by which memory is measured, as well as the degree to which the to-be-forgotten items are actually inaccessible, whether the forgetting is reversible, and the extent of interference between to-be-forgotten and to-be-remembered items.

4. Source amnesia is a failure of source monitoring (Johnson, Hashtroudi, & Lindsay, 1993), a process that in turn is closely related to reality monitoring (Johnson & Raye, 1981). It probably lies at the heart of the experience of déjà vu (Brown, 2003). As noted by Evans and Thorne (1966), their work had been anticipated by Banister and Zangwill (1941a,b), who used hypnotic suggestion to produce visual and olfactory “paramnesias” in which subjects recognize a previously studied item but confabulate the context in which it has been studied.

5. A thorough discussion of experimental and clinical research on hypnotic analgesia is beyond the scope of this chapter. Interested readers may wish to consult Kihlstrom (2000, 2001b).

6. Note, however, that Wallace and his colleagues have found that hypnotic anesthesia actually abolishes prism adaptation, so this finding remains in some dispute (Wallace, 1980; Wallace & Fisher, 1982, 1984a,b; Wallace & Garrett, 1973, 1975).

7. McClelland and his colleagues have made a distinction between explicit (conscious) and implicit (unconscious) motivation as well (McClelland, Koestner, & Weinberger, 1989), but to date there have been no studies of hypnosis along these lines.

References

Anderson, J. R. (1992). Automaticity and the ACT* theory. American Journal of Psychology, 105(2), 165–180.

Anderson, M. C., & Green, C. (2001). Suppressing unwanted memories by executive control. Nature, 410, 366–369.

Anderson, M. C., Ochsner, K. N., Kuhl, B., Cooper, J., Robertson, E., Gabrieli, S. W., Glover, G. H., & Gabrieli, J. D. E. (2004). Neural systems underlying the suppression of unwanted memories. Science, 303, 232–235.

Arnold, M. B. (1946). On the mechanism of suggestion and hypnosis. Journal of Abnormal and Social Psychology, 41, 107–128.

As, A. (1962). Non-hypnotic experiences related to hypnotizability in male and female college students. Scandinavian Journal of Psychology, 3, 112–121.

Aserinsky, E., & Kleitman, N. (1953). Regularly occurring periods of eye motility, and concomitant phenomena, during sleep. Science, 118, 273–274.

Bakan, P. (1969). Hypnotizability, laterality of eye movements and functional brain asymmetry. Perceptual and Motor Skills, 28, 927–932.

Banister, H., & Zangwill, O. L. (1941a). Experimentally induced olfactory paramnesia. British Journal of Psychology, 32, 155–175.

Banister, H., & Zangwill, O. L. (1941b). Experimentally induced visual paramnesias. British Journal of Psychology, 32, 30–51.

Barber, J., & Mayer, D. (1977). Evaluation of efficacy and neural mechanism of a hypnotic analgesia procedure in experimental and clinical dental pain. Pain, 4, 41–48.

Barber, T. X. (1969). Hypnosis: A scientific approach. New York: Van Nostrand Reinhold.

Barber, T. X., & Calverley, D. S. (1966). Toward a theory of “hypnotic” behavior: Experimental analyses of suggested amnesia. Journal of Abnormal Psychology, 71, 95–107.

Barnier, A. J. (1997). Autobiographical amnesia: An investigation of hypnotically created personal forgetting. Proposal to Australian Research Council.

Barnier, A. J. (1999). Posthypnotic suggestion: Attention, awareness, and automaticity. Sleep & Hypnosis, 1, 57–63.

Barnier, A. J. (2002). Posthypnotic amnesia for autobiographical episodes: A laboratory model of functional amnesia? Psychological Science, 13(3), 232–237.

Barnier, A. J., Bryant, R. A., & Briscoe, S. (2001). Posthypnotic amnesia for material learned before or during hypnosis: Explicit and implicit memory effects. International Journal of Clinical and Experimental Hypnosis, 49(4), 286–304.

Barnier, A. J., & McConkey, K. M. (1998). Posthypnotic responding: Knowing when to stop helps to keep it going. International Journal of Clinical and Experimental Hypnosis, 46, 204–219.


P1: KAE0521857430c17 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 12 , 2007 23 :47

468 the cambridge handbook of consciousness

Barnier, A. J., & McConkey, K. M. (1999a). Hypnotic and posthypnotic suggestion: Finding meaning in the message of the hypnotist. International Journal of Clinical & Experimental Hypnosis, 47, 192–208.

Barnier, A. J., & McConkey, K. M. (1999b). Posthypnotic responding away from the hypnotic setting. Psychological Science, 9, 256–262.

Barnier, A. J., & McConkey, K. M. (1999c). Posthypnotic suggestion, response complexity, and amnesia. Australian Journal of Psychology, 51(1), 1–5.

Barnier, A. J., & McConkey, K. M. (2001). Posthypnotic responding: The relevance of suggestion and test congruence. International Journal of Clinical and Experimental Hypnosis, 49, 207–219.

Barnier, A. J., & McConkey, K. M. (2003). Hypnosis, human nature, and complexity: Integrating neuroscience approaches into hypnosis research. International Journal of Clinical and Experimental Hypnosis, 51(3), 282–308.

Bates, B. L., & Kraft, P. M. (1991). The nature of hypnotic performance following administration of the Carleton Skills Training Program. International Journal of Clinical & Experimental Hypnosis, 39, 227–242.

Bauer, R. M. (1984). Autonomic recognition of names and faces in prosopagnosia: A neuropsychological application of the guilty knowledge test. Neuropsychologia, 22, 457–469.

Berry, D. C., & Dienes, Z. (1993). Implicit learning: Theoretical and empirical issues. Hove, UK: Erlbaum.

Bertrand, L. D., Spanos, N. P., & Radtke, H. L. (1990). Contextual effects on priming during hypnotic amnesia. Journal of Research in Personality, 24, 271–290.

Blum, G. S. (1961). A model of the mind: Explored by hypnotically controlled experiments and examined for its psychodynamic implications. New York: Wiley.

Blum, G. S. (1967). Hypnosis in psychodynamic research. In J. E. Gordon (Ed.), Handbook of clinical and experimental hypnosis (pp. 83–109). New York: Macmillan.

Blum, G. S. (1979). Hypnotic programming techniques in psychological experiments. In E. Fromm & R. E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 457–481). New York: Aldine.

Blum, G. S., Nash, J. K., Jansen, R. D., & Barbour, J. S. (1981). Posthypnotic attenuation of a visual illusion as reflected in perceptual reports and cortical event-related potentials. Academic Psychology Bulletin, 3, 251–271.

Bowers, K. S. (1967). The effect of demands for honesty upon reports of visual and auditory hallucinations. International Journal of Clinical and Experimental Hypnosis, 15, 31–36.

Bowers, K. S. (1975). The psychology of subtle control: An attributional analysis of behavioural persistence. Canadian Journal of Behavioral Science, 7, 78–95.

Bowers, K. S. (1992). Imagination and dissociation in hypnotic responding. International Journal of Clinical & Experimental Hypnosis, 40, 253–275.

Bowers, P. (1982). The classic suggestion effect: Relationships with scales of hypnotizability, effortless experiencing, and imagery vividness. International Journal of Clinical and Experimental Hypnosis, 30, 270–279.

Bowers, P., Laurence, J. R., & Hart, D. (1988). The experience of hypnotic suggestions. International Journal of Clinical and Experimental Hypnosis, 36, 336–349.

Brady, J. P. (1966). Hysteria versus malingering: A response to Grosz & Zimmerman. Behavior Research & Therapy, 4, 321–322.

Brady, J. P., & Lind, D. I. (1961). Experimental analysis of hysterical blindness. Archives of General Psychiatry, 4, 331–339.

Braid, J. (1843). Neurypnology: or the rationale of nervous sleep considered in relation to animal magnetism. London: Churchill.

Briquet, P. (1859). Traité clinique et thérapeutique de l'hystérie. Paris: Baillière et Fils.

Brown, A. S. (2003). A review of the déjà vu experience. Psychological Bulletin, 129, 394–413.

Brown, D., Scheflin, A. W., & Hammond, D. C. (1998). Memory, trauma treatment, and the law. New York: W. W. Norton.

Bryant, R. A., Barnier, A. J., Mallard, D., & Tibbits, R. (1999). Posthypnotic amnesia for material learned before hypnosis. International Journal of Clinical & Experimental Hypnosis, 47, 46–64.

Bryant, R. A., & Kourch, M. (2001). Hypnotically induced emotional numbing. International Journal of Clinical and Experimental Hypnosis, 49(3), 220–230.

Bryant, R. A., & Mallard, D. (2002). Hypnotically induced emotional numbing: A real-simulating analysis. Journal of Abnormal Psychology, 111, 203–207.

Bryant, R. A., & McConkey, K. M. (1989a). Hypnotic blindness, awareness, and attribution. Journal of Abnormal Psychology, 98, 443–447.

Bryant, R. A., & McConkey, K. M. (1989b). Hypnotic blindness: A behavioral and experiential analysis. Journal of Abnormal Psychology, 98, 71–77.

Bryant, R. A., & McConkey, K. M. (1989c). Hypnotic emotions and physical sensations: A real-simulating analysis. International Journal of Clinical and Experimental Hypnosis, 37, 305–319.

Bryant, R. A., & McConkey, K. M. (1989d). Visual conversion disorder: A case analysis of the influence of visual information. Journal of Abnormal Psychology, 98, 326–329.

Bryant, R. A., & McConkey, K. M. (1990a). Hypnotic blindness and the relevance of attention. Australian Journal of Psychology, 42, 287–296.

Bryant, R. A., & McConkey, K. M. (1990b). Hypnotic blindness and the relevance of cognitive style. Journal of Personality & Social Psychology, 59, 756–761.

Bryant, R. A., & McConkey, K. M. (1990c). Hypnotic blindness: Testing the influence of motivation instructions. Australian Journal of Clinical & Experimental Hypnosis, 18, 91–96.

Bryant, R. A., & McConkey, K. M. (1994). Hypnotic blindness and the priming effect of visual material. Contemporary Hypnosis, 12, 157–164.

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 82–105.

Cleghorn, J. M., Franco, S., Szechtman, H., Brown, G. M., Nahmias, C., & Garnett, E. S. (1992). Toward a brain map of auditory hallucinations. American Journal of Psychiatry, 149, 1062–1069.

Coe, W. C. (1978). Credibility of post-hypnotic amnesia – a contextualist's view. International Journal of Clinical and Experimental Hypnosis, 26, 218–245.

Cooper, L. M. (1966). Spontaneous and suggested posthypnotic source amnesia. International Journal of Clinical & Experimental Hypnosis, 2, 180–193.

Crawford, H. J. (2001). Neuropsychophysiology of hypnosis: Towards an understanding of how hypnotic interventions work. In G. D. Burrows, R. O. Stanley, & P. B. Bloom (Eds.), Advances in clinical hypnosis (pp. 61–84). New York: Wiley.

Crawford, H. J., & Gruzelier, J. H. (1992). A midstream view of the neuropsychophysiology of hypnosis: Recent research and future directions. In E. Fromm & M. R. Nash (Eds.), Contemporary hypnosis research (pp. 227–266). New York: Guilford.

Crawford, H. J., Gur, R. C., Skolnick, B., Gur, R. E., & Benson, D. M. (1993). Effects of hypnosis on regional cerebral blood flow during ischemic pain with and without suggested hypnotic analgesia. International Journal of Psychophysiology, 15, 181–195.

Crawford, H. J., Macdonald, H., & Hilgard, E. R. (1979). Hypnotic deafness: A psychophysical study of responses to tone intensity as modified by hypnosis. American Journal of Psychology, 92, 193–214.

Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper & Row.

Csikszentmihalyi, M., & Csikszentmihalyi, E. S. (Eds.). (1988). Optimal experience: Psychological studies of flow in consciousness. New York: Cambridge University Press.

Cunningham, P. V., & Blum, G. S. (1982). Further evidence that hypnotically induced color blindness does not mimic congenital defects. Journal of Abnormal Psychology, 91, 139–143.

Damaser, E. (1964). An experimental study of long-term post-hypnotic suggestion. Unpublished doctoral dissertation, Harvard University, Cambridge, MA.

David, D., Brown, R., Pojoga, C., & David, A. (2000). The impact of posthypnotic amnesia and directed forgetting on implicit and explicit memory: New insights from a modified process dissociation procedure. International Journal of Clinical and Experimental Hypnosis, 48(3), 267–289.

Dorfman, J., & Kihlstrom, J. F. (1994, November). Semantic priming in posthypnotic amnesia. Paper presented at the Psychonomic Society, St. Louis, MO.

Dumas, R. A. (1977). EEG alpha-hypnotizability correlations: A review. Psychophysiology, 14, 431–438.

Dywan, J., & Bowers, K. (1983). The use of hypnosis to enhance recall. Science, 222, 184–185.

Edmonston, W. E. (1977). Neutral hypnosis as relaxation. American Journal of Clinical Hypnosis, 20, 69–75.


Edmonston, W. E. (1981). Hypnosis and relaxation: Modern verification of an old equation. New York: Wiley.

Edwards, G. (1956). Post-hypnotic amnesia and post-hypnotic effect. British Journal of Psychiatry, 11, 316–325.

Edwards, G. (1963). Duration of post-hypnotic effect. British Journal of Psychiatry, 109, 259–266.

Eich, E. (1988). Theoretical issues in state dependent memory. In H. L. Roediger & F. I. M. Craik (Eds.), Varieties of memory and consciousness: Essays in honor of Endel Tulving (pp. 331–354). Hillsdale, NJ: Erlbaum.

Einstein, G. O., & McDaniel, M. A. (1990). Normal aging and prospective memory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 16, 717–726.

Ellenberger, H. F. (1970). The discovery of the unconscious: The history and evolution of dynamic psychiatry. New York: Basic Books.

Epstein, W. (1982). Percept-percept couplings. Perception, 11, 75–83.

Erickson, M. H. (1938a). A study of clinical and experimental findings on hypnotic deafness: I. Clinical experimentation and findings. Journal of General Psychology, 19, 127–150.

Erickson, M. H. (1938b). A study of clinical and experimental findings on hypnotic deafness: II. Experimental findings with a conditioned response technique. Journal of General Psychology, 19, 151–167.

Erickson, M. H. (1939). The induction of color blindness by a technique of hypnotic suggestion. Journal of General Psychology, 20, 61–89.

Erickson, M. H., & Erickson, E. M. (1941). Concerning the nature and character of posthypnotic suggestion. Journal of General Psychology, 24, 95–133.

Evans, F. J. (1979a). Contextual forgetting: Posthypnotic source amnesia. Journal of Abnormal Psychology, 88, 556–563.

Evans, F. J. (1979b). Hypnosis and sleep: Techniques for exploring cognitive activity during sleep. In E. Fromm & R. E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 139–183). New York: Aldine.

Evans, F. J. (1988). Posthypnotic amnesia: Dissociation of content and context. In H. M. Pettinati (Ed.), Hypnosis and memory (pp. 157–192). New York: Guilford.

Evans, F. J., & Thorne, W. A. F. (1966). Two types of posthypnotic amnesia: Recall amnesia and source amnesia. International Journal of Clinical and Experimental Hypnosis, 14(2), 162–179.

Farvolden, P., & Woody, E. Z. (2004). Hypnosis, memory, and frontal executive functioning. International Journal of Clinical & Experimental Hypnosis, 52, 3–26.

Faymonville, M. E., Laureys, S., Degueldre, C., Del Fiore, G., Luxen, A., Franck, G., Lamy, M., & Maquet, P. (2000). Neural mechanisms of antinociceptive effects of hypnosis. Anesthesiology, 92(5), 1257–1267.

Fisher, S. (1954). The role of expectancy in the performance of posthypnotic behavior. Journal of Abnormal & Social Psychology, 49, 503–507.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911.

Fromm, E. (1979). The nature of hypnosis and other altered states of consciousness: An ego psychological theory. In E. Fromm & R. E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 81–103). New York: Aldine.

Gandolfo, R. L. (1971). Role of expectancy, amnesia, and hypnotic induction in the performance of posthypnotic behavior. Journal of Abnormal Psychology, 77, 324–328.

Garner, W. R., Hake, H. W., & Eriksen, C. W. (1956). Operationism and the concept of perception. Psychological Review, 63, 149–159.

Gauld, A. (1992). A history of hypnotism. New York: Cambridge University Press.

Geiselman, R. E., Bjork, R. A., & Fishman, D. L. (1983). Disrupted retrieval in directed forgetting: A link with posthypnotic amnesia. Journal of Experimental Psychology: General, 112, 58–72.

Gill, M. M., & Brenman, M. (1959). Hypnosis and related states: Psychoanalytic studies (Vol. 2). New York: International Universities Press.

Glisky, M. L., & Kihlstrom, J. F. (1993). Hypnotizability and facets of openness. International Journal of Clinical & Experimental Hypnosis, 41(2), 112–123.

Glisky, M. L., Tataryn, D. J., & Kihlstrom, J. F. (1995). Hypnotizability and mental imagery. International Journal of Clinical & Experimental Hypnosis, 43(1), 34–54.

Glisky, M. L., Tataryn, D. J., Tobias, B. A., & Kihlstrom, J. F. (1991). Absorption, openness to experience, and hypnotizability. Journal of Personality & Social Psychology, 60(2), 263–272.


Goldstein, A. P., & Hilgard, E. R. (1975). Lack of influence of the morphine antagonist naloxone on hypnotic analgesia. Proceedings of the National Academy of Sciences USA, 72, 2041–2043.

Gorassini, D. R., & Spanos, N. P. (1987). A social cognitive skills approach to the successful modification of hypnotic susceptibility. Journal of Personality and Social Psychology, 50, 1004–1012.

Graf, P., & Schacter, D. L. (1985). Implicit and explicit memory for new associations in normal and amnesic subjects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 501–518.

Graffin, N. F., Ray, W. J., & Lundy, R. (1995). EEG concomitants of hypnosis and hypnotic susceptibility. Journal of Abnormal Psychology, 104, 123–131.

Graham, K. R., & Schwarz, L. M. (1973, August). Suggested deafness and auditory signal detectability. Paper presented at the annual meeting of the American Psychological Association, Montreal.

Gravitz, M. A., & Gerton, M. I. (1984). Origins of the term hypnotism prior to Braid. American Journal of Clinical Hypnosis, 27, 107–110.

Gregg, V. H. (1979). Posthypnotic amnesia and general memory theory. Bulletin of the British Society for Experimental and Clinical Hypnosis, 1979(2), 11–14.

Gregg, V. H. (1982). Posthypnotic amnesia for recently learned material: A comment on the paper by J. F. Kihlstrom (1980). Bulletin of the British Society of Experimental & Clinical Hypnosis, 5, 27–30.

Grether, W. F. (1940). A comment on "The induction of color blindness by a technique of hypnotic suggestion". Journal of General Psychology, 23, 207–210.

Grosz, H. J., & Zimmerman, J. (1965). Experimental analysis of hysterical blindness: A follow-up report and new experimental data. Archives of General Psychiatry, 13, 255–260.

Halligan, P. W., Athwal, B. S., Oakley, D. A., & Frackowiak, R. S. J. (2000, March 18). Imaging hypnotic paralysis: Implications for conversion hysteria. Lancet, 355, 986–987.

Halligan, P. W., Oakley, D. A., Athwal, B. S., & Frackowiak, R. S. J. (2000). Imaging hypnotic paralysis – Reply. Lancet, 356(9224), 163.

Hargadon, R., Bowers, K. S., & Woody, E. Z. (1995). Does counterpain imagery mediate hypnotic analgesia? Journal of Abnormal Psychology, 104(3), 508–516.

Harriman, P. L. (1942a). Hypnotic induction of color vision anomalies: I. The use of the Ishihara and the Jensen tests to verify the acceptance of suggested color blindness. Journal of General Psychology, 26, 289–298.

Harriman, P. L. (1942b). Hypnotic induction of color vision anomalies: II. Results on two other tests of color blindness. Journal of General Psychology, 27, 81–92.

Harvey, M. A., & Sipprelle, C. N. (1978). Color blindness, perceptual interference, and hypnosis. American Journal of Clinical Hypnosis, 20, 189–193.

Hilgard, E. R. (1965). Hypnotic susceptibility. New York: Harcourt, Brace, & World.

Hilgard, E. R. (1969). Pain as a puzzle for psychology and physiology. American Psychologist, 24, 103–113.

Hilgard, E. R. (1971, September). Is hypnosis a state, trait, neither? Paper presented at the American Psychological Association, Washington, DC.

Hilgard, E. R. (1972). A critique of Johnson, Maher, and Barber's "Artifact in the 'essence of hypnosis': An evaluation of trance logic," with a recomputation of their findings. Journal of Abnormal Psychology, 79, 221–233.

Hilgard, E. R. (1973a). The domain of hypnosis, with some comments on alternative paradigms. American Psychologist, 28, 972–982.

Hilgard, E. R. (1973b). A neodissociation interpretation of pain reduction in hypnosis. Psychological Review, 80, 396–411.

Hilgard, E. R. (1977). Divided consciousness: Multiple controls in human thought and action. New York: Wiley-Interscience.

Hilgard, E. R., & Cooper, L. M. (1965). Spontaneous and suggested posthypnotic amnesia. International Journal of Clinical & Experimental Hypnosis, 13, 261–273.

Hilgard, E. R., & Hilgard, J. R. (1975). Hypnosis in the relief of pain. Los Altos, CA: Kaufman.

Hilgard, E. R., Hilgard, J. R., Macdonald, H., Morgan, A. H., & Johnson, L. S. (1978). Covert pain in hypnotic analgesia: Its reality as tested by the real-simulator paradigm. Journal of Abnormal Psychology, 87, 655–663.

Hilgard, E. R., Macdonald, H., Morgan, A. H., & Johnson, L. S. (1978). The reality of hypnotic analgesia: A comparison of highly hypnotizables with simulators. Journal of Abnormal Psychology, 87, 239–246.

Hilgard, E. R., & Morgan, A. H. (1975). Heart rate and blood pressure in the study of laboratory pain in man under normal conditions and as influenced by hypnosis. Acta Neurobiologica Experimentalis, 35, 741–759.

Hilgard, E. R., Morgan, A. H., Lange, A. F., Lenox, J. R., Macdonald, H., Marshall, G. D., & Sachs, L. B. (1974). Heart rate changes in pain and hypnosis. Psychophysiology, 11, 692–702.

Hilgard, E. R., Morgan, A. H., & Macdonald, H. (1975). Pain and dissociation in the cold pressor test: A study of hypnotic analgesia with "hidden reports" through automatic key pressing and automatic talking. Journal of Abnormal Psychology, 84, 280–289.

Hilgard, J. R. (1970). Personality and hypnosis: A study in imaginative involvement. Chicago: University of Chicago Press.

Hobson, J. A., Pace-Schott, E., & Stickgold, R. (2000). Dreaming and the brain: Towards a cognitive neuroscience of conscious states. Behavioral & Brain Sciences, 23(6).

Hochberg, J. (1974). Higher-order stimuli and interresponse coupling in the perception of the visual world. In R. B. MacLeod & H. L. Pick (Eds.), Perception: Essays in honor of James J. Gibson (pp. 17–39). Ithaca, NY: Cornell University Press.

Hochberg, J., & Peterson, M. A. (1987). Piecemeal organization and cognitive components in object perception: Perceptually coupled responses to moving objects. Journal of Experimental Psychology: General, 116, 370–380.

Hofling, C. K., Heyl, B., & Wright, D. (1971). The ratio of total recoverable memories to conscious memories in normal subjects. Comprehensive Psychiatry, 12, 371–379.

Howard, R. J., Ffytche, D. H., Barnes, J., McKeefry, D., Ha, Y., Woodruff, P. W., Bullmore, E. T., Simmons, A., Williams, S. C. R., David, A. S., & Brammer, M. (1998). The functional anatomy of imagining and perceiving colour. Neuroreport, 9, 1019–1023.

Hoyt, I. P., & Kihlstrom, J. F. (1986, August). Posthypnotic suggestion and waking instruction. Paper presented at the 94th annual meeting of the American Psychological Association, Washington, DC.

Hull, C. L. (1933). Hypnosis and suggestibility: An experimental approach. New York: Appleton.

Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory & Language, 30, 513–541.

James, W. (1890). Principles of psychology. New York: Holt.

Janet, P. (1907). The major symptoms of hysteria. New York: Macmillan.

Jansen, R. D., Blum, G. S., & Loomis, J. M. (1982). Attentional alterations of slant specific interference between line segments in eccentric vision. Perception, 11, 535–540.

Jasiukaitis, P., Nouriani, B., Hugdahl, K., & Spiegel, D. (1995). Relateralizing hypnosis; or, have we been barking up the wrong hemisphere? International Journal of Clinical & Experimental Hypnosis, 45, 158–177.

John, O. P. (1990). The "big five" factor taxonomy: Dimensions of personality in the natural language and in questionnaires. In L. A. Pervin (Ed.), Handbook of personality: Theory and research (pp. 66–100). New York: Guilford.

Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114(1), 3–28.

Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological Review, 88, 67–85.

Johnson, R. F. Q. (1972). Trance logic revisited: A reply to Hilgard's critique. Journal of Abnormal Psychology, 79, 234–238.

Johnson, R. F. Q., Maher, B. A., & Barber, T. X. (1972). Artifact in the "essence of hypnosis": An evaluation of trance logic. Journal of Abnormal Psychology, 79, 212–220.

Kallio, S., & Revonsuo, A. (2003). Hypnotic phenomena and altered states of consciousness: A multilevel framework of description and explanation. Contemporary Hypnosis, 20, 111–164.

Kihlstrom, J. F. (1979). Hypnosis and psychopathology: Retrospect and prospect. Journal of Abnormal Psychology, 88(5), 459–473.

Kihlstrom, J. F. (1980). Posthypnotic amnesia for recently learned material: Interactions with "episodic" and "semantic" memory. Cognitive Psychology, 12, 227–251.

Kihlstrom, J. F. (1983). Instructed forgetting: Hypnotic and nonhypnotic. Journal of Experimental Psychology: General, 112(1), 73–79.

Kihlstrom, J. F. (1984). Conscious, subconscious, unconscious: A cognitive perspective. In K. S. Bowers & D. Meichenbaum (Eds.), The unconscious reconsidered (pp. 149–211). New York: Wiley.


Kihlstrom, J. F. (1985). Posthypnotic amnesia and the dissociation of memory. Psychology of Learning and Motivation, 19, 131–178.

Kihlstrom, J. F. (1987). The cognitive unconscious. Science, 237(4821), 1445–1452.

Kihlstrom, J. F. (1992a). Dissociation and dissociations: A comment on consciousness and cognition. Consciousness & Cognition: An International Journal, 1(1), 47–53.

Kihlstrom, J. F. (1992b). Dissociative and conversion disorders. In D. J. Stein & J. Young (Eds.), Cognitive science and clinical disorders (pp. 247–270). San Diego: Academic Press.

Kihlstrom, J. F. (1992c). Hypnosis: A sesquicentennial essay. International Journal of Clinical & Experimental Hypnosis, 40(4), 301–314.

Kihlstrom, J. F. (1994a). One hundred years of hysteria. In S. J. Lynn & J. W. Rhue (Eds.), Dissociation: Clinical and theoretical perspectives (pp. 365–394). New York: Guilford Press.

Kihlstrom, J. F. (1994b). The rediscovery of the unconscious. In H. Morowitz & J. L. Singer (Eds.), The mind, the brain, and complex adaptive systems (pp. 123–143). Reading, MA: Addison-Wesley.

Kihlstrom, J. F. (1996). Perception without awareness of what is perceived, learning without awareness of what is learned. In M. Velmans (Ed.), The science of consciousness: Psychological, neuropsychological and clinical reviews (pp. 23–46). London: Routledge.

Kihlstrom, J. F. (1998). Dissociations and dissociation theory in hypnosis: Comment on Kirsch and Lynn (1998). Psychological Bulletin, 123(2), 186–191.

Kihlstrom, J. F. (2000, November 2). Hypnosis and pain: Time for a new look. Paper presented at the annual meeting of the American Pain Society, Atlanta, GA.

Kihlstrom, J. F. (2001a). Dissociative disorders. In P. B. Sutker & H. E. Adams (Eds.), Comprehensive handbook of psychopathology (3rd ed., pp. 259–276). New York: Plenum.

Kihlstrom, J. F. (2001b, August). Hypnosis in surgery: Efficacy, specificity, and utility. Paper presented at the annual meeting of the American Psychological Association, San Francisco.

Kihlstrom, J. F. (2002a). Demand characteristics in the laboratory and the clinic: Conversations and collaborations with subjects and patients. Retrieved from http://journals.apa.org/prevention/volume5/pre0050036c.html.

Kihlstrom, J. F. (2002b). Mesmer, the Franklin Commission, and hypnosis: A counterfactual essay. International Journal of Clinical & Experimental Hypnosis, 50, 408–419.

Kihlstrom, J. F. (2004a). Clark L. Hull, hypnotist [Review of Hypnosis and suggestibility: An experimental approach by C. L. Hull]. Contemporary Psychology, 49, 141–144.

Kihlstrom, J. F. (2004b). "An unwarrantable impertinence" [Commentary on The illusion of conscious will by D. M. Wegner]. Behavioral & Brain Sciences, 27, 666–667.

Kihlstrom, J. F., Barnhardt, T. M., & Tataryn, D. J. (1992). Implicit perception. In R. F. Bornstein & T. S. Pittman (Eds.), Perception without awareness: Cognitive, clinical, and social perspectives (pp. 17–54). New York: Guilford Press.

Kihlstrom, J. F., Brenneman, H. A., Pistole, D. D., & Shor, R. E. (1985). Hypnosis as a retrieval cue in posthypnotic amnesia. Journal of Abnormal Psychology, 94(3), 264–271.

Kihlstrom, J. F., & Edmonston, W. E. (1971). Alterations in consciousness in neutral hypnosis: Distortions in semantic space. American Journal of Clinical Hypnosis, 13(4), 243–248.

Kihlstrom, J. F., & Eich, E. (1994). Altering states of consciousness. In D. Druckman & R. A. Bjork (Eds.), Learning, remembering, and believing: Enhancing performance (pp. 207–248). Washington, DC: National Academy Press.

Kihlstrom, J. F., & Evans, F. J. (1976). Recovery of memory after posthypnotic amnesia. Journal of Abnormal Psychology, 85(6), 564–569.

Kihlstrom, J. F., & Evans, F. J. (1977). Residual effect of suggestions for posthypnotic amnesia: A reexamination. Journal of Abnormal Psychology, 86(4), 327–333.

Kihlstrom, J. F., & Evans, F. J. (1979). Memory retrieval processes in posthypnotic amnesia. In J. F. Kihlstrom & F. J. Evans (Eds.), Functional disorders of memory (pp. 179–218). Hillsdale, NJ: Erlbaum.

Kihlstrom, J. F., & McGlynn, S. M. (1991). Experimental research in clinical psychology. In M. Hersen, A. E. Kazdin, & A. S. Bellack (Eds.), The clinical psychology handbook (2nd ed., pp. 239–257). New York: Pergamon Press.

Kihlstrom, J. F., Mulvaney, S., Tobias, B. A., & Tobis, I. P. (2000). The emotional unconscious. In E. Eich, J. F. Kihlstrom, G. H. Bower, J. P. Forgas, & P. M. Niedenthal (Eds.), Cognition and emotion (pp. 30–86). New York: Oxford University Press.


Kihlstrom, J. F., & Schacter, D. L. (2000). Func-tional amnesia. In F. Boller & J. Grafman (Eds.),Handbook of neuropsychology (2nd ed., Vol. 2 ,pp. 409–427). Amsterdam: Elsevier.

Kihlstrom, J. F., Tataryn, D. J., & Hoyt, I. P.(1993). Dissociative disorders. In P. J. Sutker &H. E. Adams (Eds.), Comprehensive handbookof psychopathology (2nd ed., pp. 203–234). NewYork: Plenum Press.

Killeen, P. R., & Nash, M. R. (2003). The fourcauses of hypnosis. International Journal of Clin-ical and Experimental Hypnosis, 51(3), 195–231.

Kirsch, I. (2000). The response set theory of hyp-nosis. American Journal of Clinical Hypnosis,42 (3–4), 274–292 .

Kirsch, I. (2001a). The altered states of hypnosis.Social Research, 68(3), 795–807.

Kirsch, I. (2001b). The response set theory ofhypnosis: Expectancy and physiology. Ameri-can Journal of Clinical Hypnosis, 44(1), 69–73 .

Kirsch, I., & Lynn, S. J. (1995). Altered state ofhypnosis: Changes in the theoretical landscape.American Psychologist, 50(10), 846–858.

Kirsch, I., & Lynn, S. J. (1997). Hypnotic involun-tariness and the automaticity of everyday life.American Journal of Clinical Hypnosis, 40(1),329–348.

Kirsch, I., & Lynn, S. J. (1998a). Dissociation the-ories of hypnosis. Psychological Bulletin, 12 3(1),100–115 .

Kirsch, I., & Lynn, S. J. (1998b). Social-cognitivealternatives to dissociation theories of hypnoticinvoluntariness. Review of General Psychology,2 (1), 66–80.

Knox, V. J., Morgan, A. H., & Hilgard, E. R.(1974). Pain and suffering in ischemia: Theparadox of hypnotically suggested anesthesiaas contradicted by reports from the “hiddenobserver.”Archives of General Psychiatry, 30,840–847.

Kosslyn, S. M., Thompson, W. L., Costantini-Ferrando, M. F., Alpert, N. M., & Spiegel, D.(2000). Hypnotic visual hallucination altersbrain color processing. American Journal ofPsychiatry, 157(8), 1279–1284 .

Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.

Kunzendorf, R., Spanos, N., & Wallace, B. (Eds.). (1996). Hypnosis and imagination. New York: Baywood.

LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293–323.

Lang, E. V., Benotsch, E. G., Fick, L. J., Lutgendorf, S., Berbaum, M. L., Berbaum, K. S., Logan, H., & Spiegel, D. (2000, April 29). Adjunctive non-pharmacological analgesia for invasive medical procedures: A randomised trial. Lancet, 355, 1486–1500.

Lang, E. V., Joyce, J. S., Spiegel, D., Hamilton, D., & Lee, K. K. (1996). Self-hypnotic relaxation during interventional radiological procedures: Effects on pain perception and intravenous drug use. International Journal of Clinical & Experimental Hypnosis, 44, 106–119.

Laurence, J. R., Perry, C., & Kihlstrom, J. F. (1983). Hidden observer phenomena in hypnosis: An experimental creation? Journal of Personality and Social Psychology, 44, 163–169.

Leibowitz, H. W., Lundy, R. M., & Guez, J. R. (1980). The effect of testing distance on suggestion induced visual field narrowing. International Journal of Clinical and Experimental Hypnosis, 28, 409–420.

Leibowitz, H. W., Post, R. B., Rodemer, C. S., Wadlington, W. L., & Lundy, R. M. (1980). Roll vection analysis of suggestion induced visual field narrowing. Perception and Psychophysics, 28, 173–176.

Levy, B. L., & Anderson, M. C. (2002). Inhibitory processes and the control of memory retrieval. Trends in Cognitive Sciences, 6, 299–305.

Ludwig, A. M. (1966). Altered states of consciousness. Archives of General Psychiatry, 15, 225–234.

MacLeod, C. M. (1991). Half a century of research on the Stroop effect: An integrative review. Psychological Bulletin, 109(2), 163–203.

MacLeod, C. M. (1992). The Stroop task: The “gold standard” of attentional measures. Journal of Experimental Psychology: General, 121(1), 12–14.

MacLeod-Morgan, C., & Lack, L. (1982). Hemispheric specificity: A physiological concomitant of hypnotizability. Psychophysiology, 19, 687–690.

Mallard, D., & Bryant, R. A. (2001). Hypnotic color blindness and performance on the Stroop test. International Journal of Clinical and Experimental Hypnosis, 49, 330–338.

Maquet, P., Faymonville, M. E., Degueldre, C., Delfiore, G., Franck, G., Luxen, A., & Lamy, M. (1999). Functional neuroanatomy of hypnotic state. Biological Psychiatry, 45, 327–333.

Marcel, A. J. (1983). Conscious and unconscious perception: Experiments on visual masking and word recognition. Cognitive Psychology, 15, 197–237.

McClelland, D. C., Koestner, R., & Weinberger, J. (1989). How do self-attributed and implicit motives differ? Psychological Review, 96, 690–702.

McConkey, K. M., Bryant, R. A., Bibb, B. C., & Kihlstrom, J. F. (1991). Trance logic in hypnosis and imagination. Journal of Abnormal Psychology, 100(4), 464–472.

McConkey, K. M., Bryant, R. A., Bibb, B. C., Kihlstrom, J. F., & Tataryn, D. J. (1990). Hypnotically suggested anaesthesia and the circle-touch test: A real-simulating comparison. British Journal of Experimental & Clinical Hypnosis, 7, 153–157.

McConkey, K. M., & Sheehan, P. W. (1980). Inconsistency in hypnotic age regression and cue structure as supplied by the hypnotist. International Journal of Clinical and Experimental Hypnosis, 38, 394–408.

McGlashan, T. H., Evans, F. J., & Orne, M. T. (1969). The nature of hypnotic analgesia and placebo response to experimental pain. Psychosomatic Medicine, 31, 227–246.

Melzack, R. (1975). The McGill Pain Questionnaire: Major properties and scoring methods. Pain, 1, 277–299.

Metcalfe, J., & Shimamura, A. P. (1994). Metacognition: Knowing about knowing. Cambridge, MA: MIT Press.

Metzinger, T. (Ed.). (2000). Neural correlates of consciousness. Cambridge, MA: MIT Press.

Miller, M. E., & Bowers, K. S. (1986). Hypnotic analgesia and stress inoculation in the reduction of pain. Journal of Abnormal Psychology, 95, 6–14.

Miller, M. E., & Bowers, K. S. (1993). Hypnotic analgesia: Dissociated experience or dissociated control? Journal of Abnormal Psychology, 102, 29–38.

Miller, R. J., Hennessy, R. T., & Leibowitz, H. W. (1973). The effect of hypnotic ablation of the background on the magnitude of the Ponzo perspective illusion. International Journal of Clinical and Experimental Hypnosis, 21, 180–191.

Montgomery, G. H., DuHamel, K. N., & Redd, W. H. (2000). A meta-analysis of hypnotically induced analgesia: How effective is hypnosis? International Journal of Clinical and Experimental Hypnosis, 48(2), 138–153.

Moret, V., Forster, A., Laverriere, M.-C., Gaillard, R. C., Bourgeois, P., Haynal, A., Gemperle, M., & Buchser, E. (1991). Mechanism of analgesia induced by hypnosis and acupuncture: Is there a difference? Pain, 45, 135–140.

Nace, E. P., & Orne, M. T. (1970). Fate of an uncompleted posthypnotic suggestion. Journal of Abnormal Psychology, 75, 278–285.

Nace, E. P., Orne, M. T., & Hammer, A. G. (1974). Posthypnotic amnesia as an active psychic process. Archives of General Psychiatry, 31, 257–260.

Nash, M. (1987). What, if anything, is regressed about hypnotic age regression: A review of the empirical literature. Psychological Bulletin, 102, 42–52.

Nash, M. R., Lynn, S. J., Stanley, S., & Carlson, V. (1987). Subjectively complete hypnotic deafness and auditory priming. International Journal of Clinical and Experimental Hypnosis, 35, 32–40.

Nelson, T. O. (1992). Metacognition: Core readings. Boston: Allyn and Bacon.

Nelson, T. O. (1996). Consciousness and metacognition. American Psychologist, 51(2), 102–116.

Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and some new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 26, pp. 125–173). New York: Academic Press.

Nogrady, H., McConkey, K. M., & Perry, C. (1985). Enhancing visual memory: Trying hypnosis, trying imagination, and trying again. Journal of Abnormal Psychology, 94, 195–204.

Nolan, R. P., & Spanos, N. P. (1987). Hypnotic analgesia and stress inoculation: A critical reexamination of Miller and Bowers. Psychological Reports, 61, 95–102.

Oakley, D. A. (1999). Hypnosis and conversion hysteria: A unifying model. Cognitive Neuropsychiatry, 4, 243–265.

Obstoj, I., & Sheehan, P. W. (1977). Aptitude for trance, task generalizability, and incongruity response in hypnosis. Journal of Abnormal Psychology, 86, 543–552.

Orne, M. T. (1959). The nature of hypnosis: Artifact and essence. Journal of Abnormal and Social Psychology, 58, 277–299.

Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783.


Orne, M. T. (1969). On the nature of the posthypnotic suggestion. In L. Chertok (Ed.), Psychophysiological mechanisms of hypnosis (pp. 173–192). Berlin: Springer-Verlag.

Orne, M. T. (1971). The simulation of hypnosis: Why, how, and what it means. International Journal of Clinical and Experimental Hypnosis, 19, 183–210.

Orne, M. T. (1973). Communication by the total experimental situation: Why it is important, how it is evaluated, and its significance for the ecological validity of findings. In P. Pliner, L. Krames, & T. Alloway (Eds.), Communication and affect (pp. 157–191). New York: Academic Press.

Orne, M. T., Sheehan, P. W., & Evans, F. J. (1968). Occurrence of posthypnotic behavior outside the experimental setting. Journal of Personality & Social Psychology, 9, 189–196.

Otto-Salaj, L. L., Nadon, R., Hoyt, I. P., Register, P. A., & Kihlstrom, J. F. (1992). Laterality of hypnotic response. International Journal of Clinical & Experimental Hypnosis, 40, 12–20.

Pattie, F. A. (1937). The genuineness of hypnotically produced anesthesia of the skin. American Journal of Psychology, 49, 435–443.

Peterson, M. A., & Hochberg, J. (1983). Opposed set measurement procedure: A quantitative analysis of the role of local cues and intention in form perception. Journal of Experimental Psychology: Human Perception and Performance, 9, 183–193.

Posner, M. I., & Snyder, C. R. R. (1975). Attention and cognitive control. In R. L. Solso (Ed.), Information processing and cognition: The Loyola Symposium (pp. 55–85). New York: Wiley.

Rainville, P., Carrier, B., Hofbauer, R. K., Bushnell, M. C., & Duncan, G. H. (1999). Dissociation of sensory and affective dimensions of pain using hypnotic modulation. Pain, 82(2), 159–171.

Rainville, P., Duncan, G. H., Price, D. D., Carrier, B., & Bushnell, M. C. (1997, August 15). Pain affect encoded in human anterior cingulate but not somatosensory cortex. Science, 277, 968–971.

Rainville, P., Hofbauer, R. K., Bushnell, M. C., Duncan, G. H., & Price, D. D. (2002). Hypnosis modulates the activity in cerebral structures involved in the regulation of consciousness. Journal of Cognitive Neuroscience, 14(Suppl), 887–901.

Rainville, P., Hofbauer, R. K., Paus, T., Bushnell, M. C., & Price, D. D. (1999). Cerebral mechanisms of hypnotic induction and suggestion. Journal of Cognitive Neuroscience, 11, 110–125.

Ray, W. J., & Tucker, D. M. (2003). Evolutionary approaches to understanding the hypnotic experience. International Journal of Clinical and Experimental Hypnosis, 51(3), 256–281.

Raz, A., Fan, J., Shapiro, T., & Posner, M. I. (2002, November). fMRI of posthypnotic suggestion to modulate reading of Stroop words. Paper presented at the Society for Neuroscience, Washington, DC.

Raz, A., Landzberg, K. S., Schweizer, H. R., Zephrani, Z., Shapiro, T., Fan, J., & Posner, M. I. (2003). Posthypnotic suggestion and the modulation of Stroop interference under cycloplegia. Consciousness & Cognition, 12, 332–346.

Raz, A., Shapiro, T., Fan, J., & Posner, M. I. (2002). Hypnotic suggestion and the modulation of Stroop interference. Archives of General Psychiatry, 59, 1155–1161.

Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning & Verbal Behavior, 6, 855–863.

Reber, A. S. (1993). Implicit learning and tacit knowledge: An essay on the cognitive unconscious. Oxford: Oxford University Press.

Reder, L. M. (1996). Implicit memory and metacognition. Mahwah, NJ: Erlbaum.

Register, P. A., & Kihlstrom, J. F. (1987). Hypnotic effects on hypermnesia. International Journal of Clinical & Experimental Hypnosis, 35(3), 155–170.

Reyher, J. (1967). Hypnosis in research on psychopathology. In J. E. Gordon (Ed.), Handbook of clinical and experimental hypnosis (pp. 110–147). New York: Macmillan.

Reyher, J., & Smyth, L. (1971). Suggestibility during the execution of a posthypnotic suggestion. Journal of Abnormal Psychology, 78, 258–265.

Roche, S. M., & McConkey, K. M. (1990). Absorption: Nature, assessment, and correlates. Journal of Personality and Social Psychology, 59, 91–101.

Rosenthal, R., & Rubin, D. B. (1978). Interpersonal expectancy effects: The first 345 studies. Behavioral & Brain Sciences, 3, 377–415.

Sabourin, M., Brisson, M. A., & Deschamb, A. (1980). Evaluation of hypnotically suggested selective deafness by heart-rate conditioning and reaction time. Psychological Reports, 47, 995–1002.


Sackeim, H. A. (1982). Lateral asymmetry in bodily response to hypnotic suggestions. Biological Psychiatry, 17, 437–447.

Sackeim, H. A., Nordlie, J. W., & Gur, R. C. (1979). A model of hysterical and hypnotic blindness: Cognition, motivation, and awareness. Journal of Abnormal Psychology, 88, 474–489.

Sarbin, T. R. (1973). On the recently reported physiological and pharmacological reality of the hypnotic state. Psychological Record, 23, 501–511.

Sarbin, T. R., & Coe, W. C. (1972). Hypnosis: A social psychological analysis of influence communication. New York: Holt, Rinehart, & Winston.

Sarbin, T. R., & Slagle, R. W. (1979). Hypnosis and psychophysiological outcomes. In E. Fromm & R. E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 273–303). New York: Aldine.

Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 501–518.

Schacter, D. L., & Graf, P. (1986). Effects of elaborative processing on implicit and explicit memory for new associations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12.

Schacter, D. L., Harbluk, J. L., & McLachlan, D. R. (1984). Retrieval without recollection: An experimental analysis of source amnesia. Journal of Verbal Learning and Verbal Behavior, 23, 593–611.

Schacter, D. L., & Kihlstrom, J. F. (1989). Functional amnesia. In F. Boller & J. Grafman (Eds.), Handbook of neuropsychology (Vol. 3, pp. 209–231). Amsterdam: Elsevier.

Scheibe, K. E., Gray, A. L., & Keim, C. S. (1968). Hypnotically induced deafness and delayed auditory feedback: A comparison of real and simulating subjects. International Journal of Clinical & Experimental Hypnosis, 16, 158–164.

Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1), 1–66.

Seger, C. A. (1994). Criteria for implicit learning: De-emphasize conscious access, emphasize amnesia. Behavioral & Brain Sciences, 17, 421–422.

Sheehan, P. W. (1982). Imagery and hypnosis: Forging a link, at least in part. Research Communications in Psychology, Psychiatry & Behavior, 7, 357–272.

Sheehan, P. W. (1988). Memory distortion in hypnosis. International Journal of Clinical and Experimental Hypnosis, 36, 296–311.

Sheehan, P. W., & McConkey, K. M. (1982). Hypnosis and experience: The exploration of phenomena and process. Hillsdale, NJ: Erlbaum.

Sheehan, P. W., Obstoj, I., & McConkey, K. M. (1976). Trance logic and cue structure as supplied by the hypnotist. Journal of Abnormal Psychology, 85, 459–472.

Sheehan, P. W., & Orne, M. T. (1968). Some comments on the nature of posthypnotic behavior. Journal of Nervous & Mental Disease, 146, 209–220.

Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84(2), 127–190.

Shiffrin, R. M., & Schneider, W. (1984). Automatic and controlled processing revisited. Psychological Review, 91(2), 269–276.

Shimamura, A. P., & Squire, L. R. (1987). A neuropsychological study of fact memory and source amnesia. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 464–473.

Shor, R. E. (1979a). The fundamental problem in hypnosis research as viewed from historic perspectives. In E. Fromm & R. E. Shor (Eds.), Hypnosis: Developments in research and new perspectives. New York: Aldine.

Shor, R. E. (1979b). A phenomenological method for the measurement of variables important to an understanding of the nature of hypnosis. In E. Fromm & R. E. Shor (Eds.), Hypnosis: Developments in research and new perspectives (pp. 105–135). New York: Aldine.

Shor, R. E., Orne, M. T., & O’Connell, D. N. (1962). Validation and cross-validation of a scale of self-reported personal experiences which predicts hypnotizability. Journal of Psychology, 53, 55–75.

Snyder, M., & Swann, W. B. (1978). Behavioral confirmation in social interaction: From social perception to social reality. Journal of Experimental Social Psychology, 14, 148–162.

Spanos, N. P. (1983). The hidden observer as an experimental creation. Journal of Personality and Social Psychology, 44, 170–176.


Spanos, N. P. (1986a). Hypnosis, nonvolitional responding, and multiple personality: A social psychological perspective. In B. A. Maher & W. B. Maher (Eds.), Progress in experimental personality research (pp. 1–62). New York: Academic Press.

Spanos, N. P. (1986b). Hypnotic behavior: A social psychological interpretation of amnesia, analgesia, and trance logic. Behavioral and Brain Sciences, 9, 449–467.

Spanos, N. P. (1991). A sociocognitive approach to hypnosis. In S. J. Lynn & J. W. Rhue (Eds.), Theories of hypnosis: Current models and perspectives (pp. 324–361). New York: Guilford Press.

Spanos, N. P., & Barber, T. X. (1968). “Hypnotic” experiences as inferred from auditory and visual hallucinations. Journal of Experimental Research in Personality, 3, 136–150.

Spanos, N. P., & Chaves, J. F. (1970). Hypnosis research: A methodological critique of experiments generated by two alternative paradigms. American Journal of Clinical Hypnosis, 13(2), 108–127.

Spanos, N. P., & Chaves, J. F. (1991). History and historiography of hypnosis. In S. J. Lynn & J. W. Rhue (Eds.), Theories of hypnosis: Current models and perspectives (pp. 43–78). New York: Guilford Press.

Spanos, N. P., Cobb, P. C., & Gorassini, D. R. (1985). Failing to resist hypnotic test suggestions: A strategy for self-presenting as deeply hypnotized. Psychiatry, 48, 282–292.

Spanos, N. P., DeGroot, H. P., & Gwynn, M. I. (1987). Trance logic as incomplete responding. Journal of Personality and Social Psychology, 53, 911–921.

Spanos, N. P., Dubreuil, D. L., Saad, C. L., & Gorassini, D. (1983). Hypnotic elimination of prism-induced aftereffects: Perceptual effect or responses to experimental demands. Journal of Abnormal Psychology, 92, 216–222.

Spanos, N. P., Gorassini, D. R., & Petrusic, W. (1981). Hypnotically induced limb anesthesia and adaptation to displacing prisms: A failure to confirm. Journal of Abnormal Psychology, 90, 329–333.

Spanos, N. P., Gwynn, M. I., et al. (1988). Social psychological factors in the genesis of posthypnotic source amnesia. Journal of Abnormal Psychology, 97, 322–329.

Spanos, N. P., Gwynn, M. I., & Stam, H. J. (1983). Instructional demands and ratings of overt and hidden pain during hypnotic analgesia. Journal of Abnormal Psychology, 92, 479–488.

Spanos, N. P., & Hewitt, E. C. (1980). The hidden observer in hypnotic analgesia: Discovery or experimental creation. Journal of Personality & Social Psychology, 39, 1201–1214.

Spanos, N. P., Jones, B., & Malfara, A. (1982). Hypnotic deafness: Now you hear it – Now you still hear it. Journal of Abnormal Psychology, 91, 75–77.

Spanos, N. P., Menary, E., Brett, P. J., Cross, W., & Ahmed, Q. (1987). Failure of posthypnotic responding to occur outside the experimental setting. Journal of Abnormal Psychology, 96, 52–57.

Spanos, N. P., Radtke, H. L., & Dubreuil, D. L. (1982). Episodic and semantic memory in posthypnotic amnesia: A reevaluation. Journal of Personality and Social Psychology, 43, 565–573.

Spanos, N. P., & Saad, C. L. (1984). Prism adaptation in hypnotically limb-anesthetized subjects: More disconfirming data. Perceptual and Motor Skills, 59, 379–386.

Spiegel, D., & Albert, L. H. (1983). Naloxone fails to reverse hypnotic alleviation of chronic pain. Psychopharmacology, 81, 140–143.

Stern, J. A., Brown, M., Ulett, G. A., & Sletten, I. (1977). A comparison of hypnosis, acupuncture, morphine, valium, aspirin, and placebo in the management of experimentally induced pain. Annals of the New York Academy of Sciences, 296, 175–193.

Stoyva, J., & Kamiya, J. (1968). Electrophysiological studies of dreaming as the prototype of a new strategy in the study of consciousness. Psychological Review, 75, 192–205.

Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643–662.

Sutcliffe, J. P. (1960). “Credulous” and “skeptical” views of hypnotic phenomena: A review of certain evidence and methodology. International Journal of Clinical and Experimental Hypnosis, 8, 73–101.

Sutcliffe, J. P. (1961). “Credulous” and “skeptical” views of hypnotic phenomena: Experiments in esthesia, hallucination, and delusion. Journal of Abnormal & Social Psychology, 62, 189–200.

Szechtman, H., Woody, E., Bowers, K. S., & Nahmias, C. (1998). Where the imaginal appears real: A positron emission tomography study of auditory hallucination. Proceedings of the National Academy of Sciences USA, 95, 1956–1960.

Tellegen, A., & Atkinson, G. (1974). Openness to absorbing and self-altering experiences (“absorption”), a trait related to hypnotic susceptibility. Journal of Abnormal Psychology, 83, 268–277.

Terao, T., & Collinson, S. (2000). Imaging hypnotic paralysis. Lancet, 356(9224), 162–163.

Timm, H. W. (1981). The effect of forensic hypnosis techniques on eyewitness recall and recognition. Journal of Police Science, 9, 188–194.

Tulving, E. (1983). Elements of episodic memory. Oxford: Oxford University Press.

Veith, I. (1965). Hysteria: The history of a disease. Chicago: University of Chicago Press.

Wallace, B. (1980). Factors affecting proprioceptive adaptation to prismatic displacement. Perception & Psychophysics, 28, 550–554.

Wallace, B., & Fisher, L. E. (1982). Hypnotically induced limb anesthesia and adaptation to displacing prisms: Replication requires adherence to critical procedures. Journal of Abnormal Psychology, 91, 390–391.

Wallace, B., & Fisher, L. E. (1984a). Prism adaptation with hypnotically induced limb anesthesia: The critical roles of head position and prism type. Perception and Psychophysics, 36, 303–306.

Wallace, B., & Fisher, L. E. (1984b). The roles of target and eye motion in the production of the visual shift in prism adaptation. Journal of General Psychology, 110, 251–262.

Wallace, B., & Garrett, J. B. (1973). Reduced felt arm sensation effects on visual adaptation. Perception and Psychophysics, 14, 597–600.

Wallace, B., & Garrett, J. B. (1975). Perceptual adaptation with selective reductions of felt sensation. Perception, 4, 437–445.

Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.

Weiss, F., Blum, G. S., & Gleberman, L. (1987). Anatomically based measurement of facial expressions in simulated versus hypnotically induced affect. Motivation and Emotion, 11, 67–81.

Weitzenhoffer, A. M. (1974). When is an “instruction” an “instruction”? International Journal of Clinical & Experimental Hypnosis, 22, 258–269.

White, R. W. (1941). A preface to the theory of hypnotism. Journal of Abnormal & Social Psychology, 36, 477–505.

Wiggins, J. S., & Trapnell, P. D. (1997). Personality structure: The return of the big five. In R. Hogan, J. A. Johnson, & S. R. Briggs (Eds.), Handbook of personality psychology (pp. 737–765). San Diego: Academic Press.

Williamsen, J. A., Johnson, H. J., & Eriksen, C. W. (1965). Some characteristics of posthypnotic amnesia. Journal of Abnormal Psychology, 70, 123–131.

Woody, E. Z., & Bowers, K. S. (1994). A frontal assault on dissociated control. In S. J. Lynn & J. W. Rhue (Eds.), Dissociation: Clinical and theoretical perspectives (pp. 52–79). New York: Guilford Press.

Woody, E. Z., & McConkey, K. M. (2003). What we don’t know about the brain and hypnosis, but need to: A view from the Buckhorn Inn. International Journal of Clinical and Experimental Hypnosis, 51(3), 309–337.

Woody, E. Z., & Sadler, P. (1998). On reintegrating dissociated theories: Commentary on Kirsch and Lynn (1998). Psychological Bulletin, 123, 192–197.

Woody, E. Z., & Szechtman, H. (2000a). Hypnotic hallucinations and yedasentience. Contemporary Hypnosis, 17(1), 26–31.

Woody, E. Z., & Szechtman, H. (2000b). Hypnotic hallucinations: Towards a biology of epistemology. Contemporary Hypnosis, 17(1), 4–14.

Woody, E. Z., & Szechtman, H. (2003). How can brain activity and hypnosis inform each other? International Journal of Clinical and Experimental Hypnosis, 51(3), 232–255.

Yzerbyt, V., Lories, G., & Dardenne, B. (1998). Metacognition: Cognitive and social dimensions. Thousand Oaks, CA: Sage Publications.


CHAPTER 18

Can We Study Subjective Experiences Objectively? First-Person Perspective Approaches and Impaired Subjective States of Awareness in Schizophrenia

Jean-Marie Danion and Caroline Huron

Abstract

One of the main challenges of scientific research in psychiatry and clinical psychology is to take account of subjectivity, as defined by the experiential sense of existing as a subject of experience, or the first-person perspective of the world (Sass & Parnas, 2003). Such clinical symptoms as hallucinations, delusions of alien control, feelings of guilt, thoughts of worthlessness, derealization, and depersonalization are subjective experiences that have to be studied in themselves if research in clinical psychology and psychiatry is not to be excessively simplistic. First-person approaches, such as the remember/know procedure (Tulving, 1985), make it possible to study subjective experiences objectively. We show how results from studies exploring conscious awareness in schizophrenia using first- and third-person perspective approaches provide new evidence for the validity of using first-person perspective approaches.

All this I do within myself, in that huge hall of my memory. [. . .]. There also I meet myself and recall myself – what, when, or where I did a thing, and how I felt when I did it. There are all the things that I remember, either having experienced them myself or been told about them by others. Out of the same storehouse, with these past impressions, I can construct now this, now that, image of things that I either have experienced or have believed on the basis of experience – and from these I can further construct future actions, events, and hopes; and I can meditate on all these things as if they were present. [. . .]. I speak to myself in this way; and when I speak, the images of what I am speaking about are present out of the same store of memory; and if the images were absent I could say nothing at all about them. [. . .]. Here also is all, learnt of the liberal sciences and as yet unforgotten; removed as it were to some inner place, which is yet no place: nor are they the images thereof, but the things themselves.

St Augustine, Confessions, Book X

First-Person Perspective Approaches to Conscious Awareness

Since it became an object of scientific investigation, conscious awareness has been studied using the so-called third-person perspective approaches. Typically, these approaches contrast performance in tasks that rely heavily on conscious processes to that in tasks that do not rely, or rely less, on conscious processes. Demonstration of an impaired performance only in tasks that rely heavily on conscious processes is taken as evidence of a specific impairment of these conscious processes. These approaches to conscious awareness are described as third-person perspective approaches because the workings of consciousness are inferred by investigators from performance patterns in selected tasks. However, as the investigator’s interpretation is based on indirect data, alternative explanations sometimes have to be considered. Indeed, the two selected tasks may differ in terms of parameters other than the involvement of conscious processes, in which case these different parameters may account for the dissociation of performance.

First-person perspective approaches have been developed recently by cognitive scientists as a means of studying consciousness directly as a subjective experience, rather than indirectly as a function. These approaches are not aimed at explaining the individual subjective experience of a particular subject, a goal that remains beyond the realms of science. Rather, the goal is to account for populations of subjective experiences that may be experienced by numerous subjects. Thus, first-person perspective approaches are aimed at defining these populations of subjective experiences as precisely as possible and measuring them in a reproducible way (Gardiner, 2000).

First-person Perspective Approaches to Recognition Memory: The Distinction Between Autonoetic and Noetic Awareness

All of us, at least once in our lives, have recognized someone as being familiar but have not been able to remember who he or she was or been able to recollect anything about the person and our previous encounter with him or her. Similarly, we can know that we have read a book or watched a film but fail to remember anything else about it. These examples from everyday life suggest that recognition memory may be based either on feelings of familiarity accompanied by no recollection of contextual information or alternatively on the conscious recollection of details from a past event. Tulving (1985) was the first to propose a first-person perspective approach to measure these two subjective experiences (see Chapter 10). This approach hypothesizes that consciousness is not a unitary phenomenon and it has to be fragmented to be accessible to experiments. Thus, Tulving (1985) distinguishes two subjective experiences, referred to as autonoetic and noetic awareness, which are characterized by distinct phenomenological attributes. Autonoetic awareness is the kind of conscious awareness that is experienced by normal subjects who consciously recollect personal events by reliving them mentally. It makes it possible to be aware of one’s own experiences across subjective time and to have a feeling of individuality, uniqueness, and self-direction. It is intimately associated with our awareness of ourselves as persons with a past and a future. Noetic awareness, on the other hand, corresponds to the knowledge that an event occurred but without any conscious recollection. It conveys a more abstract sense of the past and future, based on feelings of familiarity (Tulving, 1985). It does not entail time travel but awareness of knowledge that one has about the world in which one lives. Unlike autonoetic awareness, noetic awareness does not enable us to re-experience personal events in a self-reflective way (Gardiner, 2000). Tulving suggests that memory systems should be redefined in accordance with the related subjective experience at retrieval. In this context, autonoetic awareness stems from an episodic system, whereas noetic awareness stems from a semantic system.

P1: KAE0521857430c18 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 12, 2007 23:51

can we study subjective experiences objectively? 483

The Remember/Know Procedure

To investigate the distinction between autonoetic and noetic awareness experimentally, Tulving (1985) developed the remember/know procedure, an experiential procedure in which the states of awareness related to recognition memory are measured. Typically, participants are asked to report their subjective state of awareness at the time they recognize each individual item. They make a remember response if recognition is accompanied by the conscious recollection of some specific feature of the item's presentation (where it was, what they thought, etc.). Thus, remember responses are associated with a qualitatively rich mental experience, including perceptual, spatial, temporal, semantic, emotional, and other details that are attributed to the past learning phase (Johnson, Hashtroudi, & Lindsay, 1993). These remember responses index autonoetic awareness. Participants make a know response if recognition is associated with feelings of familiarity in the absence of conscious recollection. Thus, know responses are associated with the simple knowledge that an item has been seen previously. They index noetic awareness.
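To make the scoring concrete, the sketch below tabulates remember/know/guess hit and false-alarm rates from a set of recognition judgments. It is a minimal illustration only; the function name, the data format, and the sample responses are our own assumptions, not materials from any study discussed here.

```python
from collections import Counter

def tabulate_awareness(responses, n_old, n_new):
    """Summarize a remember/know/guess recognition test.

    `responses` is a list of (item_status, judgment) pairs, where
    item_status is "old" (studied) or "new" (unstudied) and judgment
    is "remember", "know", "guess", or None (item not recognized).
    Returns per-judgment hit rates and false-alarm rates.
    """
    # Count judgments separately for studied and unstudied items,
    # skipping items the participant did not call "old".
    hits = Counter(j for status, j in responses if status == "old" and j)
    fas = Counter(j for status, j in responses if status == "new" and j)
    return (
        {j: hits[j] / n_old for j in ("remember", "know", "guess")},
        {j: fas[j] / n_new for j in ("remember", "know", "guess")},
    )

# Illustrative (invented) data: 4 studied items and 2 unstudied lures.
data = [
    ("old", "remember"), ("old", "know"), ("old", None), ("old", "remember"),
    ("new", None), ("new", "guess"),
]
hit_rates, fa_rates = tabulate_awareness(data, n_old=4, n_new=2)
print(hit_rates["remember"])  # 0.5
```

Dissociations of the kind reviewed below are then expressed as differences in these per-judgment rates across conditions or groups.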

Recent studies using the remember/know procedure suggest that some know responses are not in fact based on feelings of familiarity but are simply guesses (Gardiner, Java, & Richardson-Klavehn, 1996): Participants guess that they studied an item previously but do not experience familiarity (knowing) or recollect any details from the learning phase (remembering). To distinguish between knowing and guessing, a third category of responses, namely guess responses, has been added in some studies.

Following the first study by Tulving in 1985, the remember/know procedure has been used widely in numerous recognition memory studies. Findings from these studies provide evidence that the scientific study of subjective experiences is relevant. The first type of evidence comes from reports of systematic and replicable dissociations and associations between remember and know responses as a function of various experimental manipulations. A fourfold pattern of outcomes has been observed: Some variables influence remember but not know responses, some variables influence know but not remember responses, some variables influence remember and know responses in opposite ways, and, finally, some variables influence remember and know responses in a parallel way (for a review, see Gardiner & Richardson-Klavehn, 2000). These results show that remember and know responses are not only dissociable but also functionally independent. They indicate that remember responses involve strategic, intentional, and goal-directed processes, whereas know responses are based on more perceptual processes.

The second type of evidence comes from studies carried out in brain-damaged patients and from neuroimaging studies. These studies show that remember and know responses are associated with the activation of distinct neural substrates (Eldridge et al., 2000; Henson et al., 1999; Yonelinas, 2002). Broadly speaking, remember responses are associated with activations of left prefrontal and hippocampal regions, whereas know responses are associated with activations of right prefrontal and parahippocampal regions. Taken together, these findings lend much weight to the view that remember and know responses index two distinct subjective states of conscious awareness.

First-Person Perspective Approaches to Conscious Awareness in Schizophrenia

Henry Ey (1963) was the first person to postulate that schizophrenia is primarily a disorder of consciousness. He argued that an impairment of consciousness is associated with the typical impairment of the self in patients with schizophrenia. However, his view of schizophrenia as a disorder of consciousness was based on a philosophical premise, and concepts and methods to assess consciousness empirically did not exist at the time. Recently, several theoretical models of schizophrenia have reformulated the hypothesis of schizophrenia as a disorder of consciousness with reference to the conceptualization of consciousness as a function. Nancy Andreasen (1999) argues that the disruption of the fluid, coordinated sequences of thought and action that underlie consciousness in normal subjects is the fundamental deficit in schizophrenia. Frith (1992) regards schizophrenia as a disorder of consciousness, impairing the ability to think using metarepresentations, which are representations of mental states.

Several studies using third-person perspective approaches to consciousness provide consistent experimental evidence for an impairment of consciousness in schizophrenia. Patients with schizophrenia exhibit a dissociation between impaired performance in explicit tasks, such as recall and recognition tasks, in which participants are required to retrieve information from memory consciously (Clare, McKenna, Mortimer, & Baddeley, 1993; Gras-Vincendon et al., 1994), and preserved performance in implicit tasks, such as perceptual priming tasks (Gras-Vincendon et al., 1994) and procedural memory tasks (Goldberg, Saint-Cyr, & Weinberger, 1990; Michel, Danion, Grange, & Sandner, 1998), in which subjects are not required to retrieve material consciously. Performance of patients with schizophrenia is also intact in implicit learning tasks, in which the acquisition of knowledge is likewise implicit (Danion, Gokalsing, Robert, Massin-Krauss, & Bacon, 2001). Furthermore, patients with schizophrenia show a dissociation between preserved automatic subliminal priming and impaired conscious control (Dehaene et al., 2003).

Together with evidence of impaired awareness of self-generated action (Franck et al., 2001), these results converge to suggest that an impairment of conscious awareness might be a core deficit in schizophrenia. However, this assumption is drawn from an inference based on indirect evidence, and the use of first-person perspective approaches to measure conscious awareness directly in patients with schizophrenia might be a particularly relevant and informative way of finding out more about the cognitive mechanisms of this mental disease.

But if it is conceded that mental disorders may impair not only subjective experiences but also the ability of patients to assess these subjective experiences, then the question of the validity of using first-person perspective approaches seems especially critical in these patients. This question is so crucial that some psychiatrists and psychologists deny the scientific interest of these approaches in schizophrenia. It has to be said, though, that this denial seems somewhat paradoxical, as it implies that first-person perspective approaches should not be applied to the subjects for whom they are the most likely to be interesting. We argue that the only way to deal satisfactorily with the issue of the validity of using first-person perspective approaches in patients with schizophrenia is to examine the available empirical data. In the next part of the chapter, we present the results of our studies using the remember/know procedure in patients with schizophrenia.

Impairment of Autonoetic Awareness in Schizophrenia

A set of studies using the remember/know procedure to assess subjective states of awareness in patients with schizophrenia showed that autonoetic awareness is impaired.

impairment of word frequency effect in remember responses

Huron et al. (1995) used a recognition memory task including high- and low-frequency words. The results show that the level of remember responses is reduced for low-frequency words in patients with schizophrenia, whereas the number of remember responses for high-frequency words and the number of know responses for both high- and low-frequency words do not differ between groups. Therefore, patients with schizophrenia do not exhibit the word frequency effect (more remember responses for low-frequency than for high-frequency words) observed in normal subjects (see also Gardiner & Java, 1990). The word frequency effect has been accounted for in normal subjects by encoding differences in information processing that appear during the study phase: The distinctive low-frequency words undergo more strategic processing than the less distinctive high-frequency words. Therefore, the absence of a word frequency effect on remember responses in patients with schizophrenia suggests that the impairment of autonoetic awareness may be attributed to a failure of strategic processes engaged at encoding.

impairment of false memories associated with remember responses

We have also studied subjective states of awareness associated with false memories – that is, memories for events that never happened – in schizophrenia. The most widely used experimental procedure to induce false memories in normal subjects is the one initially introduced by Deese and subsequently modified by Roediger and McDermott (1995). In this procedure, subjects study lists of 15 words that are semantically related to a non-presented theme word, or critical lure. For instance, the words presented for the critical word mountain are hill, valley, climb, summit, top, molehill, peak, plain, glacier, goat, bike, climber, range, steep, ski. A subsequent recognition test includes both previously presented words and non-presented critical words, along with unrelated new items. In normal subjects, this procedure induces a robust false recognition effect for the critical lures (mountain, in this case). Moreover, when subjects are asked to report, for each item they identify as being old, whether they remember or know that the item was on the list they studied, the false recognition of critical lures is most often accompanied by an experience of remembering. It has been hypothesized that this false recollection involves strategic processes (Holmes et al., 1998; Mather et al., 1997). On the whole, studies of false memories in normal subjects provide direct evidence that memories and their associated awareness are not a literal reproduction of the past but depend instead on constructive and reconstructive processes that are sometimes prone to errors and distortions (Conway, 1997; Holmes et al., 1998; Schacter et al., 1998).
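Scoring in this paradigm contrasts correct recognition of studied words with false recognition of the non-presented critical lure, against a baseline of unrelated new items. The sketch below uses the mountain list from the text; the helper function, the unrelated items, and the hypothetical participant data are our own illustrative assumptions.

```python
# The DRM study list and its non-presented critical lure, as in the
# mountain example from the text.
critical_lure = "mountain"
study_list = ["hill", "valley", "climb", "summit", "top", "molehill",
              "peak", "plain", "glacier", "goat", "bike", "climber",
              "range", "steep", "ski"]
unrelated_new = ["spoon", "carpet", "pencil"]  # invented baseline items

def drm_rates(old_judgments):
    """Given the set of items a participant called 'old', return
    (correct recognition rate for studied words, whether the critical
    lure was falsely recognized, false-alarm rate to unrelated items)."""
    correct = sum(w in old_judgments for w in study_list) / len(study_list)
    lure_fa = critical_lure in old_judgments
    baseline_fa = sum(w in old_judgments for w in unrelated_new) / len(unrelated_new)
    return correct, lure_fa, baseline_fa

# Hypothetical participant who falsely recognizes the lure:
seen = set(study_list[:12]) | {"mountain"}
print(drm_rates(seen))  # (0.8, True, 0.0)
```

In the studies described next, each item called "old" would additionally receive a remember/know judgment, so that false recognition of the lure can be classified by the accompanying state of awareness.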

We used the Deese/Roediger-McDermott approach to investigate false recognition and related states of awareness in schizophrenia (Huron & Danion, 2002). The results show that patients with schizophrenia recognize fewer critical lures (false recognition) and fewer studied words (correct recognition) than normal subjects. This deficit is restricted to items associated with remember responses. The proportion of know responses does not differ between groups. These results confirm the selective impairment of autonoetic awareness associated with true memories and extend the findings to false memories. They are consistent with an impairment of strategic processes in schizophrenia. They also indicate that the very construction of memories and autonoetic awareness is defective in this pathology.

impairment of contents of autonoetic awareness

As well as studying the frequency of autonoetic awareness in schizophrenia, we investigated the content of autonoetic awareness (Sonntag et al., 2003). More precisely, we used a remember/know procedure together with a directed forgetting paradigm to investigate the contents of awareness at retrieval, depending on whether information had been identified as relevant or irrelevant at encoding. In this paradigm, patients with schizophrenia and comparison subjects are presented with words and instructed to learn half of them and forget the other half. The instruction "to be learned" or "to be forgotten" occurs just after each word is presented during the study phase. The recognition task is carried out on all the words presented previously, mixed with new words. Participants are instructed to identify all the words from the study list, irrespective of whether the words were to be learned or forgotten, and to report their subjective state of awareness at the time they recognize a word. This approach tells us about the strategic regulation of the content of awareness for relevant information, which is beneficial to recollect, and irrelevant information, which is beneficial to forget. The results show that both normal subjects and patients with schizophrenia recognize more to-be-learned than to-be-forgotten words, indicating that both groups exhibited a directed forgetting effect. However, whereas the effect was observed for both remember and know responses in comparison subjects, it was observed for know, but not for remember, responses in patients. This experiment provides evidence that schizophrenia impairs the relevance of the content of autonoetic awareness. It is possible that patients, unlike comparison subjects, fail to engage the strategic regulation of encoding that makes relevant information easier to retrieve than irrelevant information.

Is the Remember/Know Procedure Valid in Schizophrenia?

Evidence for the validity of using the remember/know procedure in schizophrenia is provided by checking that patients with schizophrenia properly understand and apply the task instructions, by demonstrating that some experimental variables induce the same patterns of responses in patients and in controls, and by showing the consistency of findings from first-person and third-person perspective approaches to recognition memory.

do patients with schizophrenia properly understand and apply instructions during the remember/know procedure?

When using the remember/know procedure in patients with schizophrenia, it is particularly important to check carefully that patients fully understand the instructions given for the task and apply them properly. Because a proper understanding of the distinction among remember, know, and guess responses is critical to the task, we took numerous precautions to ensure that the subjects fully understood the meanings of these responses in all our studies using this procedure in patients with schizophrenia. Instructions were presented orally and then in written form. Some examples from everyday life were described, and subjects were asked whether they would choose a remember, know, or guess response for each instance. Corrections were made by the investigator when necessary. All participants performed a practice test on 10 items, 5 of which were presented just after the items to be studied in the main test and 5 of which were new items. For each item, subjects were asked whether they recognized it as having been presented previously or not. When they recognized an item, they were asked to select a remember, know, or guess response. At the end of the practice test, they were asked to explain each response to check that they had correctly interpreted the instructions. Throughout all of this, there was no indication that patients had any difficulty understanding or remembering the instructions. The very few participants who failed to perform the practice test properly were left out of the experiment. They represent less than 5% of the overall participants in our studies and include both patients with schizophrenia and normal subjects. These findings confirm that the remember/know distinction is psychologically relevant not only in normal subjects (Gardiner, 2000) but also in patients with schizophrenia.

Another possibility is to ask participants, at the end of the main recognition task, to explain their remember responses by reporting exactly what they remembered. Like comparison subjects, patients with schizophrenia explain these responses by the recollection of highly specific details from the learning phase. However, a more precise analysis of these explanations shows that, in some experimental conditions, patients with schizophrenia report fewer associations between words from the study list than comparison subjects, whereas they recollect as many associations with personal events (Huron et al., 1995). This finding does not raise any doubts about the accuracy of the remember responses reported by patients with schizophrenia. Indeed, it is likely that these differences reflect the failure of strategic processes in schizophrenia: Associations between studied words require intentional, strategic organization of the information to be learned, whereas the spontaneous evocation of a personal event may be triggered automatically by a studied word. This interpretation is consistent with the view expressed by Yonelinas (2002) that autonoetic awareness sometimes depends on strategic processes and sometimes on more automatic processes. Thus, schizophrenia might specifically impair autonoetic awareness based on strategic processes but appears to spare autonoetic awareness involving more automatic processes.

A further way of assessing the validity of using remember, know, and guess responses is to compare patients with schizophrenia and comparison subjects in terms of the qualitative characteristics or, in other words, the perceptual, spatial, temporal, semantic, and emotional attributes of the subjective experience for each reported response. This kind of assessment was performed in a study of the picture superiority effect, in which participants were instructed not only to report a remember, know, or guess response for each recognized item but also to rate the specific qualitative characteristics of their memory on visual analog scales (Huron et al., 2003). The picture superiority effect describes the finding that normal subjects typically recognize pictures more readily than words in a subsequent recognition memory task. Moreover, this effect is mainly related to recognition accompanied by remember responses. Our results show that patients with schizophrenia exhibit a smaller picture superiority effect, selectively related to remember responses, than comparison subjects. Most importantly, they show that the qualitative characteristics of memories do not differ between patients with schizophrenia and controls. Despite the lower frequency of remember responses in patients with schizophrenia, when they do report a remember response, the qualitative characteristics of this subjective state of awareness seem to be similar to those reported by comparison subjects. Moreover, in both groups the qualitative characteristics of subjective experiences associated with remember responses are quite different from those associated with know and guess responses. These findings suggest that the remember and know responses of patients with schizophrenia index two distinct subjective experiences of awareness that are qualitatively similar to those experienced by comparison subjects.

Evidence that the memory of the source of an item and, more generally, the memory of an association is better for a consciously recollected item than for a familiar item also seems to demonstrate the validity of using remember and know responses (Conway & Dewhurst, 1995; Perfect, Mayes, Downes, & Van Eijk, 1996). Such evidence has been found in patients with schizophrenia (Danion et al., 1999). We have used a source recognition memory task to measure the relation between defective autonoetic awareness and impaired source memory in schizophrenia. During the study phase, participants are presented with a set of common objects (e.g., a candle, a toothbrush, a handkerchief, a battery, and a tire pump). They are instructed to make pairs of objects by positioning one object next to another (e.g., the subject has to put the candle next to the tire pump) or to watch the experimenter perform the action (e.g., the experimenter puts the toothbrush next to the battery). In this way, participants have to study complex events, each consisting of target information (a pair of objects) and source contextual information (who paired the two objects).

In a recognition task, participants are presented with pairs of objects. All the objects have been presented during the study phase, so that the recognition of objects has no influence on the recognition of pairs. The test pairs consist of old pairs of two old objects occurring in their previous combination and new pairs of two old objects occurring in a new combination. Accordingly, correct recognition of old pairs depends on the specific associations between objects made by participants during the study session, which make the pairs distinctive. Participants are asked to identify old pairs (recognition of pairs of objects) and to make a remember or know response for the pair. They then have to say whether they performed the action or watched it (source recognition) and to make a remember or know response for the action.
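The logic of this design, in which every test object is old and only the pairing itself distinguishes old from new pairs, can be sketched in a few lines. This is an illustrative reconstruction under our own simplifying assumptions; the function and the recombination rule are not the actual materials used by Danion et al. (1999).

```python
import random

def make_test_pairs(studied_pairs, n_recombined, rng):
    """Build old pairs (original pairings) and new pairs that
    recombine studied objects, so that every object at test is old
    and only the pairing distinguishes old from new pairs."""
    old = list(studied_pairs)
    objects = [o for pair in studied_pairs for o in pair]
    new = []
    while len(new) < n_recombined:
        a, b = rng.sample(objects, 2)  # two distinct studied objects
        pair = (a, b)
        # Reject original pairings (in either order) and duplicates.
        if pair not in old and (b, a) not in old and pair not in new:
            new.append(pair)
    return old, new

# Hypothetical study set, using objects named in the text.
studied = [("candle", "tire pump"), ("toothbrush", "battery")]
old_pairs, new_pairs = make_test_pairs(studied, 2, random.Random(0))
```

Because object recognition is uninformative here, a correct "old" judgment on a pair can only come from memory for the association itself, which is what makes the comparison with remember/know responses diagnostic.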


The performance of patients with schizophrenia is particularly impaired for observed actions, as both recognition of pairs of objects and recognition of source are impaired. This impairment is associated with a reduction in the frequency of remember, but not know, responses. Comparison subjects make few errors in source recognition when recognition of pairs of objects is accompanied by remember responses. They make significantly more errors when recognition is accompanied by know responses. Patients with schizophrenia make numerous source recognition errors. However, their performance is better for remember responses than for know responses, albeit to a lesser degree than in comparison subjects. Both groups perform better when they paired the objects themselves: Pair recognition, source recognition, and the frequency of remember responses all increase. Source recognition performance improves when recognition of pairs of objects is accompanied by remember, but not know, responses. Therefore, in patients with schizophrenia, as in comparison subjects, subjective reports of awareness and objective measures of memory (recognition of pairs of objects and source recognition) are consistent in all experimental conditions. Evidence that source recognition is higher for consciously recollected pairs of objects than for familiar pairs provides a powerful argument for the validity of using remember and know responses in schizophrenia. Moreover, these results indicate that patients with schizophrenia are less able than comparison subjects to link the separate aspects of events together into a cohesive, memorable, and distinctive whole.

do some experimental variables induce the same pattern of remember/know responses in patients with schizophrenia as in controls?

Most of the studies that have used the remember/know procedure in schizophrenia have shown that the effect of an experimental variable (e.g., word frequency, pictures) on remember responses differs between patients and controls. Evidence that an experimental variable has the same impact on remember responses in both groups would show that patients with schizophrenia and comparison subjects behave similarly during this first-person perspective task. Such evidence has been provided in a study in which the effect of the affective valence (positive, negative, or neutral) of words on subjective states of awareness was compared between patients with schizophrenia and comparison subjects. The results show lower levels of remember responses in patients. However, like comparison subjects, patients report more remember responses for emotional words than for neutral words. In contrast, the level of know responses is not influenced by emotional words. Evidence that both patients and comparison subjects consciously recollect emotional words better than neutral words suggests that the impact of the emotional valence of words on autonoetic awareness is preserved in schizophrenia.

Are the Results from First-Person and Third-Person Perspective Approaches to Recognition Memory Consistent in Schizophrenia?

The distinction between two subjective states of consciousness proposed by Tulving (1985) is similar to the distinction between two types of memory processes or systems, generally referred to as conscious recollection and familiarity,1 reported in dual recognition memory models. These models, which have been developed mainly by Atkinson and colleagues, Mandler, Jacoby, and Yonelinas (reviewed in Yonelinas, 2002), have been tested using a variety of third-person perspective methods, including recall/recognition comparisons, item/associative recognition comparisons, and the process-dissociation procedure. Yonelinas (2002) took advantage of the similarity between the remember/know and the conscious recollection/familiarity distinctions to compare findings from first-person and third-person perspective approaches to recognition memory in normal subjects and brain-lesioned patients. He showed that findings from the remember/know procedure are consistent with those from third-person perspective approaches, providing evidence to support the validity of using first-person perspective approaches in these populations. In keeping with this line of reasoning, we review the studies that used these third-person perspective approaches to recognition memory in schizophrenia. Doing so enables us to draw inferences about recollection and familiarity in schizophrenia and to compare the results from these studies with those from first-person perspective studies. Consistent findings will provide further arguments for the validity of using first-person perspective approaches to recognition memory in patients with schizophrenia.

recall/recognition comparison

The rationale underlying third-person perspective approaches to conscious recognition is similar to that underlying the above-mentioned third-person perspective approaches to conscious awareness. However, instead of using tasks selected to compare conscious and unconscious processes, these approaches use tasks selected to compare recollection and familiarity. They compare performance in task conditions assumed to require one of the two recognition memory processes more than the other. For instance, performance in a recall task is assumed to rely more on conscious recollection than on familiarity. On the other hand, whereas familiarity has little effect on recall performance, it contributes to performance in an item recognition task to a greater extent. Accordingly, recall performance is taken as an index of conscious recollection, whereas item recognition performance is taken as an index of familiarity. If a condition influences recall performance to a greater extent than recognition performance, this condition is assumed to influence conscious recollection more than familiarity.

On the whole, studies on recall in schizophrenia show defective performance: Patients with schizophrenia consistently recall fewer items than control subjects (Culver et al., 1986; Gerver, 1967; Koh & Kayton, 1974; Koh et al., 1973, 1980; McClain, 1983; Russel & Beekhuis, 1976; Sengel & Lovallo, 1973; Truscott, 1970). The robustness of this deficit has been confirmed by a recent meta-analysis (Aleman et al., 1999) including 70 studies of long-term memory in schizophrenia that reported a large effect size for recall. This deficit occurs for both verbal and non-verbal stimuli.

In studies using recognition tasks, it is sometimes reported that patients with schizophrenia perform worse than normal controls (Barch et al., 2002; Russel et al., 1975; Sullivan et al., 1997; Traupman, 1975), but sometimes there seems to be no difference (Koh et al., Exp. 2, 1973; Rushe et al., 1999). A meta-analysis by Aleman et al. (1999) indicates that recognition is less impaired than recall. Studies that assess both recall and recognition performance in the same patients with schizophrenia lead to the same conclusion. Some of these studies found impaired recall along with normal recognition in patients compared to controls (Bauman, 1971; Bauman & Muray, 1968; Beatty et al., 1993; Koh & Peterson, 1978; Nachmani & Cohen, 1969). Other studies have shown that, even if patients with schizophrenia exhibit a recognition deficit, they are nevertheless more impaired in recall tasks than in recognition memory tasks (Brebion et al., 1997; Calev, 1984; Calev et al., 1983; Chan et al., 2000; Gold et al., 1992; Goldberg et al., 1989; Paulsen et al., 1995; Russel et al., 1975). In the few studies in which recognition and recall tasks have been matched for difficulty (e.g., Calev, 1984), it has also been reported that performance is less impaired in recognition than in recall. Because conscious recollection is assumed to contribute more to recall than to recognition performance, a greater impairment in recall provides evidence for a deficit in recollection.

item/associative recognition comparison

A complementary third-person perspec-tive approach contrasts performance in a

Page 508: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland

P1: KAE0521857430c18 CUFX049/Zelazo 0 521 85743 0 printer: cupusbw February 12 , 2007 23 :51

490 the cambridge handbook of consciousness

single-item recognition task and an associa-tive recognition task. In an associative recog-nition task participants have to recollect aspecific association between a target infor-mation and contextual information. Per-formance depends mainly on a consciousrecollection process because this processrequires the binding together of the dis-tinct aspects of an event to be remem-bered. Conversely, performance in an item-recognition task relies more on familiarity.Several studies have compared the perfor-mance of patients with schizophrenia in asingle-item recognition task and an associa-tive recognition task requiring memory ofcontextual information. They have shownthat patients with schizophrenia exhibit dis-proportionate deficits in associative recog-nition tests compared to item-recognitiontests. This finding has been confirmed bya meta-analysis (Achim & Lepage, 2005)of 23 studies of recognition memory inschizophrenia that observed an impairment20% greater for associative recognition rela-tive to item recognition. In comparison tosingle-item recognition tests, patients per-form poorly in tests that require them toremember when or where an item waspresented (Rizzo et al., 1996a,b; Schwartzet al., 1991; Sullivan et al., 1997; Waterset al, 2004 ; but see Shoqueirat & Mayes,1998), which modality it was presented in(for instance, verbally or visually; Brebionet al., 1997), or how frequently it was pre-sented (Gold et al., 1992 ; Gras-Vincendonet al., 1994). 
Patients with schizophrenia have also been found to exhibit deficits in reality-monitoring tasks in which they have to discriminate (1) self-generated information from information generated by an external source (Brebion et al., 1997; Harvey, 1985; Keefe et al., 1999; Moritz et al., 2003; Vinogradov et al., 1997; Waters et al., 2004), (2) information from two external sources – a male and a female voice, and (3) information from two internal sources – i.e., words they imagine themselves saying and words they imagine the experimenter saying (Keefe et al., 1999), or imagined answers and verbalized answers (Henquet et al., 2005). These results suggest a greater deficit in associative recognition memory, which is assumed to rely primarily on recollection, than in item recognition, which may reflect both recollection and familiarity. They are consistent with an impairment of conscious recollection in schizophrenia. Direct evidence of a link between defective associative recognition memory and impaired conscious recollection has been provided by the study by Danion et al. (1999), referred to above, which used a combination of the remember/know procedure and source judgements in schizophrenia.

process-dissociation procedure

Both recall/recognition and item/associative recognition comparisons provide consistent findings about conscious recollection in schizophrenia. However, these third-person perspective approaches are not designed to estimate quantitatively the respective contributions of conscious recollection and familiarity processes. Jacoby (1991) developed the process-dissociation procedure to overcome this limitation (see Chapter 10). This procedure was devised to separate mathematically the respective contributions of consciously controlled and automatic memory processes to performance in a single memory task by combining inclusion and exclusion test conditions. The procedure initially proposed by Jacoby involves a source memory task. During the study phase, subjects are shown words that they are not instructed to learn and then asked to learn words that they hear. During the test phase, under the inclusion condition, participants are instructed to give a yes response to all previously presented words; that is, both to words they have seen and to words they have heard. Under this condition, consciously controlled and automatic memory processes act in concert to facilitate performance. Under the exclusion condition, participants are instructed to give a yes response only to words that they have heard. Under this condition, consciously controlled and automatic memory processes act in opposition: The controlled use of memory both increases the number of correct yes responses (yes responses to words heard) and decreases the number of false alarms (yes responses to words seen), whereas automatic



can we study subjective experiences objectively? 491

influences increase the number of false alarms. Estimates of recollection and familiarity are derived from equations based on performance (correct responses and false alarms) under each condition.
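Under the standard independence assumptions of the process-dissociation procedure, the inclusion condition is modeled as P(inclusion) = R + F(1 − R) and the exclusion condition as P(exclusion) = F(1 − R), so the two estimates can be solved for directly. A minimal sketch of this calculation (the numbers are illustrative, not data from any study cited here):

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Estimate recollection (R) and familiarity (F) from the probabilities
    of accepting a 'seen' word under inclusion and exclusion instructions.

    Under inclusion, controlled and automatic processes act in concert:
        P(inclusion) = R + F * (1 - R)
    Under exclusion, they act in opposition (a 'seen' word is accepted
    only when it feels familiar but is not consciously recollected):
        P(exclusion) = F * (1 - R)
    """
    r = p_inclusion - p_exclusion                               # recollection
    f = p_exclusion / (1.0 - r) if r < 1.0 else float("nan")    # familiarity
    return r, f

# Hypothetical participant: accepts 80% of 'seen' words under inclusion
# but still accepts 30% of them under exclusion.
r, f = process_dissociation(0.80, 0.30)
print(round(r, 2), round(f, 2))  # → 0.5 0.6
```

The subtraction isolates recollection because familiarity contributes equally to both conditions; familiarity is then recovered from the exclusion errors that remain once recollection has failed.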

Two studies (Kazes et al., 1999; Linscott & Knight, 2001) have used the process-dissociation procedure to assess recollection and familiarity in schizophrenia directly. In a source memory task, Linscott and Knight (2001) reported lower estimates of conscious recollection in patients with schizophrenia than in comparison subjects, but no difference between groups for the estimates of familiarity. Using a version of the process-dissociation procedure that involves a word-stem completion task, both Kazes et al. (1999) and Linscott and Knight observed reduced levels of conscious recollection in schizophrenia. This impaired conscious recollection was associated with spared familiarity in the study by Kazes et al. (1999) and with increased familiarity in Linscott and Knight's study (2001). These two studies confirm the impairment of conscious recollection and do not provide any evidence of a deficit in familiarity.

To summarize, findings from studies of schizophrenia using first- and third-person perspective approaches are concordant, showing a consistent impairment of autonoetic awareness and the underlying process of conscious recollection. In combination with empirical evidence that patients with schizophrenia properly understand and use task instructions, and that some experimental variables induce the same response patterns in patients and in comparison subjects, these findings provide substantial evidence of the validity of using the remember/know procedure in schizophrenia.

Does Schizophrenia Impair Noetic Awareness and Familiarity?

Whereas there is converging evidence that autonoetic awareness and conscious recollection are consistently impaired in schizophrenia, the results from both first- and third-person perspective studies of noetic awareness and the underlying process of familiarity are less clear. Evidence from third-person perspective approaches that recognition memory is sometimes intact in schizophrenia is consistent with preserved familiarity. However, some studies have shown lowered recognition performance, and the two studies in which the process-dissociation procedure was used to measure familiarity directly have produced discrepant findings: Familiarity is preserved in one of these studies and increased in the other. Studies using the remember/know procedure show no impairment of noetic awareness as measured by know responses. However, results from studies by Danion et al. (2003) show a decrease in familiarity when familiarity is estimated using an independence model applied to the proportions of remember and know responses (Yonelinas et al., 1998).

These discrepancies prompted us to review the results of all our remember/know studies using the framework of the independence model devised by Yonelinas et al. (1998). The underlying assumption of this model is that conscious recollection and familiarity are independent processes, whereas the typical remember/know procedure is based on a mutually exclusive relation. The results of these reviews, which are presented in Table 18.1, show that familiarity decreases in two studies (Danion et al., 2003; Huron et al., 1995), but not in all the others.
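The independence-model estimates referred to here can be computed directly from the remember/know hit and false-alarm rates, following the calculation described in the notes to Table 18.1 (Yonelinas et al., 1998). A minimal sketch with purely illustrative proportions:

```python
def independence_rk(rem_old, rem_new, know_old, know_new):
    """Estimate recollection (R) and familiarity (F) under the
    independence remember/know model (Yonelinas et al., 1998).

    R: remember hits corrected for remember false alarms.
    F: know rate conditional on the item NOT attracting a remember
       response, with the new-item (baseline) rate subtracted out.
    (Where a study also collected guess responses, know and guess
    proportions would be summed before applying these formulas.)
    """
    r = (rem_old - rem_new) / (1.0 - rem_new)   # recollection
    f_old = know_old / (1.0 - rem_old)          # familiarity, old items
    f_new = know_new / (1.0 - rem_new)          # familiarity baseline, new items
    return r, f_old - f_new

# Illustrative proportions only (not values from Table 18.1):
r, f = independence_rk(rem_old=0.40, rem_new=0.02, know_old=0.30, know_new=0.10)
print(f"R = {r:.2f}, F = {f:.2f}")  # → R = 0.39, F = 0.40
```

The key difference from the raw know rate is the conditioning step: because a know response can only be given when an item is not recollected, dividing by (1 − R) recovers familiarity as an independent process rather than as the complement of remembering.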

Overall, when results from both first-person perspective and third-person perspective approaches in schizophrenia are considered, there is no evidence of a deficit in noetic awareness as measured by know responses and little evidence of an impairment in the process of familiarity. Therefore, we can conclude that, although a deficit in familiarity may be observed under certain experimental conditions, this deficit is much less pronounced than the deficit of conscious recollection. However, this conclusion cannot be regarded as definitive because no study to date has taken the experimental variables that are known to influence know responses specifically in normal subjects and applied them to patients with schizophrenia.


Table 18.1. Mean values of recollection (R) and familiarity (F) of patients with schizophrenia and comparison subjects, condition by condition, computed from individual values

[Table 18.1: for each group (patients with schizophrenia; comparison subjects), the columns give the proportions of remember, know, and guess responses to old and new items, together with the derived estimates R1 and F2. Rows: word frequency, high- vs. low-frequency words (Huron et al., 1995); false recognition, studied items vs. critical lures (Huron et al., 2002a); directed forgetting paradigm, to-be-learned vs. to-be-forgotten items (Sonntag et al., 2003); the picture superiority effect, words vs. pictures (Huron et al., 2003); affective valence, positive, neutral, and negative items (Danion et al., 2003). Numerical values not reproduced.]

∗ Significant differences between patients with schizophrenia and comparison subjects, p < .05.

1 For each participant, recollection (R) was calculated by subtracting the proportion of remember responses for new items (Rnew) from the proportion of remember responses for old items (Rold) and then dividing by the opportunity to assign a remember response to old items (1 − Rnew).

2 For each participant, estimates of familiarity were computed separately for old and new items. For old items, Fold was estimated by the proportion of know responses to old items divided by the probability that an old item did not receive a remember response (1 − Rold). Similarly, for new items, Fnew was estimated by the proportion of know responses to new items divided by the probability that a new item did not receive a remember response (1 − Rnew). To calculate familiarity (F), Fnew was subtracted from Fold. It has to be noted that when the studies allowed both know and guess responses, the proportion of know responses was replaced by the sum of the proportions of know and guess responses.


Conclusions

Throughout this chapter, we have argued that the concepts and methods needed to study some subjective experiences objectively are already available. For instance, the first-person perspective approach proposed by Tulving to assess subjective states of conscious awareness at retrieval seems to be valid not only in normal subjects (Yonelinas, 2002) but also in patients with schizophrenia. The distinction between remember and know responses is relevant from a psychological viewpoint and is, more often than not, appropriately applied by normal subjects and by patients with schizophrenia. Moreover, as is indicated by numerous experimental and neuropsychological dissociations, remember and know responses index two distinct subjective states of awareness: autonoetic and noetic awareness. These subjective states are not a literal reproduction of the past but instead a reconstruction of the past, which takes into account the present time. Finally, this review shows a strong consistency of results from first-person and third-person perspective approaches. In the future, it will be of interest to develop integrative and multidisciplinary approaches that combine first-person and third-person perspective methods in the same studies. First-person perspective approaches combined with brain imaging (e.g., fMRI) will also be required.

The use of the remember/know procedure in patients with schizophrenia provides a better understanding of the cognitive impairments associated with the disease. It makes it possible to present a coherent and accurate picture of the various recall and recognition disturbances that have been reported in these patients. Schizophrenia seems to be characterized by an impairment of autonoetic awareness and its underlying conscious recollection process. This impairment results from a failure of strategic processing at encoding (e.g., Brebion et al., 1997; Gold et al., 1992; Koh & Peterson, 1978), but an impairment of strategic processing at retrieval cannot be ruled out. The impairment of autonoetic awareness and conscious recollection might explain some behavioral abnormalities associated with schizophrenia, notably inadequate functional outcomes in everyday life. Because autonoetic awareness is severely disrupted in schizophrenia, a past event cannot be used with great flexibility to guide and control behavior, affects, and beliefs, which in turn are likely to be inappropriate inasmuch as they can be driven only by noetic awareness or implicit memory. This probably explains why the memory impairments of patients with schizophrenia are so consistently related to inadequate functional outcome in their daily lives (Green, 1996).

Future Prospects

We conclude this chapter by looking at some of the outstanding questions that first-person perspective approaches have opened up in the research fields of psychiatry and clinical psychology. From a clinical viewpoint, patients with schizophrenia frequently experience perplexity about their own identity, which can take the form of derealization and depersonalization. These symptoms are taken to be a disturbance of the subjective sense of self. It has been argued that the sense of self is supported by autobiographical events associated with autonoetic awareness (Conway & Pleydell-Pearce, 2000). Therefore, the use of the remember/know procedure to study subjective states of awareness associated with autobiographical memories makes it possible to study the subjective sense of the self. However, until now, studies using the remember/know procedure in patients with schizophrenia have been performed under conditions that have had little to do with real life and so prevent the generalization of their conclusions to autobiographical memory. Indeed, the stimuli have been words and pictures, and the delays between the learning and test phases have been measured in minutes or hours. These stimuli are not comparable with complex and meaningful autobiographical events that have retention intervals measured in weeks, months,


and years. Using an autobiographical memory inquiry in combination with the remember/know procedure, we showed a lower frequency and consistency among patients of remember responses associated with autobiographical memories (Danion et al., 2005). This finding is consistent with the impairment of the sense of self reported by patients. There is preliminary evidence that this impairment might result from a defective construction of personal identity occurring during adolescence or early adulthood (Riutort, Cuervo, Danion, Peretti, & Salame, 2003).

First-person perspective approaches have also been developed to investigate metamemory; that is, subjects' knowledge about their own memory capabilities. They make it possible to study subjective experiences related to the knowledge that subjects possess about the functioning of their memory. These experiences can be evaluated either qualitatively, such as the phenomenon of something being on the tip of the tongue, or quantitatively, such as the Feeling of Knowing or the Judgment of Confidence. Applying these approaches to the study of schizophrenia or other mental diseases opens up a new field of research, which is still virtually unexplored (but see Bacon, Danion, Kauffmann-Muller, & Bruant, 2001; Danion et al., 2001).

From a more theoretical point of view, because first-person perspective methods focus on the subjective dimension of psychopathological manifestations, they may be the first step in a major conceptual change that would modify our understanding of these manifestations. Behrendt and Young (2004) point out that psychiatry has usually adopted a philosophical position of realism, which assumes that the world we perceive is an objective reality. This world is thought to exist independently of those who perceive it and not to be a product of their mind. In contrast, Gestalt psychologists, in keeping with the philosophical position of transcendental idealism of Kant, consider that a clear distinction has to be made between the world that we subjectively perceive and the external physical world with which we interact. The world subjectively experienced is distinct from the external physical world. It is an active construction arising from our mind that we project outside it. This subjective representation of the world has to be constrained by external physical realities in order to be adaptive. Realism and idealism lead to diametrically opposed views of psychopathological manifestations, as illustrated by hallucinations. According to the realist point of view, hallucinations are false perceptions that arise in the absence of an external object or event. According to transcendental idealism, both normal perceptions and hallucinations are subjective experiences subserved by the same internal process; they differ only in the extent to which they are constrained by sensory input from the external world. Such a position might open up new prospects for the understanding of hallucinations (Behrendt & Young, 2004), and more generally of the subjectivity impairments associated with mental disorders.

Note

1. Conscious recollection and familiarity are sometimes used to describe subjective states of awareness, as well as the cognitive processes underlying recognition memory. The use of the same terms to describe separate concepts is confusing. Indeed, in the former case, conscious recollection and familiarity refer to experimental data in the form of remember and know responses, whereas in the latter case, they refer to hypothetical constructs – processes – arising from a theoretical model and its underlying hypotheses. To avoid any confusion, we use autonoetic and noetic awareness to qualify the subjective states of awareness associated with recognition memory, and conscious recollection and familiarity to qualify the cognitive processes underlying recognition memory.

References

Achim, A. M., & Lepage, M. (2005). Episodic memory-related activation in schizophrenia: A meta-analysis. British Journal of Psychiatry, 187(12), 500–509.


Aleman, A., Hijman, R., de Haan, E. H., & Kahn, R. S. (1999). Memory impairment in schizophrenia: A meta-analysis. American Journal of Psychiatry, 156(9), 1358–1366.

Andreasen, N. C. (1999). A unitary model of schizophrenia: Bleuler's "fragmented phrene" as schizencephaly. Archives of General Psychiatry, 56(9), 781–787.

Bacon, E., Danion, J. M., Kauffmann-Muller, F., & Bruant, A. (2001). Consciousness in schizophrenia: A metacognitive approach to semantic memory. Consciousness and Cognition, 10(4), 473–484.

Barch, D. M., Csernansky, J. G., Conturo, T., & Snyder, A. Z. (2002). Working and long-term memory deficits in schizophrenia: Is there a common prefrontal mechanism? Journal of Abnormal Psychology, 111(3), 478–494.

Bauman, E. (1971). Schizophrenic short-term memory: A deficit in subjective organization. Canadian Journal of Behavioral Sciences, 3(1), 55–65.

Bauman, E., & Murray, D. J. (1968). Recognition versus recall in schizophrenia. Canadian Journal of Psychology, 22(1), 18–25.

Beatty, W. W., Jocic, Z., Monson, N., & Staton, R. D. (1993). Memory and frontal lobe dysfunction in schizophrenia and schizoaffective disorder. Journal of Nervous and Mental Diseases, 181(7), 448–453.

Behrendt, R. P., & Young, C. (2004). Hallucinations in schizophrenia, sensory impairment and brain disease: A unifying model. Behavioral & Brain Sciences, 27(6), 771–787.

Brebion, G., Amador, X., Smith, M. J., & Gorman, J. M. (1997). Mechanisms underlying memory impairment in schizophrenia. Psychological Medicine, 27(2), 383–393.

Calev, A. (1984). Recall and recognition in mildly disturbed schizophrenics: The use of matched tasks. Psychological Medicine, 14(2), 425–429.

Calev, A., Venables, P. H., & Monk, A. F. (1983). Evidence for distinct verbal memory pathologies in severely and mildly disturbed schizophrenics. Schizophrenia Bulletin, 9(2), 247–264.

Chan, A. S., Kwok, I. C., Chiu, H., Lam, L., Pang, A., & Chow, L. Y. (2000). Memory and organizational strategies in chronic and acute schizophrenic patients. Schizophrenia Research, 41(3), 431–445.

Clare, L., McKenna, P. J., Mortimer, A. M., & Baddeley, A. D. (1993). Memory in schizophrenia: What is impaired and what is preserved? Neuropsychologia, 31(11), 1225–1241.

Conway, M. A. (1997). Past and present: Recovered memories and false memories. In M. A. Conway (Ed.), Recovered memories and false memories (pp. 150–191). New York: Oxford University Press.

Conway, M. A., & Dewhurst, S. A. (1995). Remembering, familiarity, and source monitoring. The Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 48(1), 125–140.

Conway, M. A., & Pleydell-Pearce, C. W. (2000). The construction of autobiographical memories in the self-memory system. Psychological Review, 107(2), 261–288.

Culver, L. C., Kunen, S., & Zinkgraf, S. A. (1986). Patterns of recall in schizophrenics and normal subjects. Journal of Nervous and Mental Diseases, 174(10), 620–623.

Danion, J. M., Cuervo, C., Piolino, P., Huron, C., Riutort, M., Peretti, S., et al. (2005). Abnormal subjective sense of self in patients with schizophrenia. Consciousness and Cognition, 14(3), 535–547.

Danion, J. M., Gokalsing, E., Robert, P., Massin-Krauss, M., & Bacon, E. (2001). Defective relationship between subjective experience and behavior in schizophrenia. American Journal of Psychiatry, 158(12), 2064–2066.

Danion, J. M., Kazes, M., Huron, C., & Karchouni, N. (2003). Do patients with schizophrenia consciously recollect emotional events better than neutral events? American Journal of Psychiatry, 160(10), 1879–1881.

Danion, J. M., Rizzo, L., & Bruant, A. (1999). Functional mechanisms underlying impaired recognition memory and conscious awareness in patients with schizophrenia. Archives of General Psychiatry, 56(7), 639–644.

Dehaene, S., Artiges, E., Naccache, L., Martelli, C., Viard, A., Schurhoff, F., et al. (2003). Conscious and subliminal conflicts in normal subjects and patients with schizophrenia: The role of the anterior cingulate. Proceedings of the National Academy of Sciences USA, 100(23), 13722–13727.

Eldridge, L. L., Knowlton, B. J., Furmanski, C. S., Bookheimer, S. Y., & Engel, S. A. (2000). Remembering episodes: A selective role for the hippocampus during retrieval. Nature Neuroscience, 3(11), 1149–1152.

Ey, H. (1963). La conscience [Consciousness]. Paris: Presses Universitaires de France.


Franck, N., Farrer, C., Georgieff, N., Marie-Cardine, M., Dalery, J., d'Amato, T., et al. (2001). Defective recognition of one's own actions in patients with schizophrenia. American Journal of Psychiatry, 158(3), 454–459.

Frith, C. D. (1992). The cognitive neuropsychology of schizophrenia. Hove: Lawrence Erlbaum Associates.

Gardiner, J. M. (2000). On the objectivity of subjective experiences of autonoetic and noetic consciousness. In E. Tulving (Ed.), Memory, consciousness, and the brain: The Tallinn Conference (pp. 159–172). Philadelphia: Psychology Press.

Gardiner, J. M., & Java, R. I. (1990). Recollective experience in word and nonword recognition. Memory and Cognition, 18(1), 23–30.

Gardiner, J. M., Java, R. I., & Richardson-Klavehn, A. (1996). How level of processing really influences awareness in recognition memory. Canadian Journal of Experimental Psychology, 50(1), 114–122.

Gardiner, J. M., & Richardson-Klavehn, A. (2000). Remembering and knowing. In E. Tulving & F. I. M. Craik (Eds.), Handbook of memory. Oxford: Oxford University Press.

Gerver, D. (1967). Linguistic rules and the perception and recall of speech by schizophrenic patients. The British Journal of Social and Clinical Psychology, 6(3), 204–211.

Gold, J. M., Randolph, C., Carpenter, C. J., Goldberg, T. E., & Weinberger, D. R. (1992). Forms of memory failure in schizophrenia. Journal of Abnormal Psychology, 101(3), 487–494.

Goldberg, T. E., Saint-Cyr, J. A., & Weinberger, D. R. (1990). Assessment of procedural learning and problem solving in schizophrenic patients by Tower of Hanoi type tasks. Journal of Neuropsychiatry and Clinical Neurosciences, 2(2), 165–173.

Goldberg, T. E., Weinberger, D. R., Pliskin, N. H., Berman, K. F., & Podd, M. H. (1989). Recall memory deficit in schizophrenia: A possible manifestation of prefrontal dysfunction. Schizophrenia Research, 2(3), 251–257.

Gras-Vincendon, A., Danion, J. M., Grange, D., Bilik, M., Willard-Schroeder, D., Sichel, J. P., et al. (1994). Explicit memory, repetition priming and cognitive skill learning in schizophrenia. Schizophrenia Research, 13(2), 117–126.

Green, M. F. (1996). What are the functional consequences of neurocognitive deficits in schizophrenia? American Journal of Psychiatry, 153(3), 321–330.

Harvey, P. D. (1985). Reality monitoring in mania and schizophrenia: The association of thought disorder and performance. Journal of Nervous and Mental Diseases, 173(2), 67–73.

Henquet, C., Krabbendam, L., Dautzenberg, J., Jolles, J., & Merckelbach, H. (2005). Confusing thoughts and speech: Source monitoring and psychosis. Psychiatry Research, 133(1), 57–63.

Henson, R. N., Rugg, M. D., Shallice, T., Josephs, O., & Dolan, R. J. (1999). Recollection and familiarity in recognition memory: An event-related functional magnetic resonance imaging study. Journal of Neuroscience, 19(10), 3962–3972.

Holmes, J. B., Waters, H. S., & Rajaram, S. (1998). The phenomenology of false memories: Episodic content and confidence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24(4), 1026–1040.

Huron, C., & Danion, J. M. (2002). Impairment of constructive memory in schizophrenia. International Clinical Psychopharmacology, 17(3), 127–133.

Huron, C., Danion, J. M., Giacomoni, F., Grange, D., Robert, P., & Rizzo, L. (1995). Impairment of recognition memory with, but not without, conscious recollection in schizophrenia. American Journal of Psychiatry, 152(12), 1737–1742.

Huron, C., Danion, J. M., Rizzo, L., Killofer, V., & Damiens, A. (2003). Subjective qualities of memories associated with the picture superiority effect in schizophrenia. Journal of Abnormal Psychology, 112(1), 152–158.

Huron, C., Giersch, A., & Danion, J. M. (2002). Lorazepam, sedation, and conscious recollection: A dose-response study with healthy volunteers. International Clinical Psychopharmacology, 17(1), 19–26.

Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30, 513–541.

Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114(1), 3–28.

Kazes, M., Berthet, L., Danion, J. M., Amado, I., Willard, D., Robert, P., et al. (1999). Impairment of consciously controlled use of memory in schizophrenia. Neuropsychology, 13(1), 54–61.

Page 543: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 544: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 545: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 546: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 547: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 548: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 549: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 550: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 551: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 552: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 553: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 554: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 555: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 556: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 557: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 558: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 559: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 560: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 561: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 562: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 563: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 564: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 565: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 566: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 567: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 568: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 569: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 570: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 571: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 572: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 573: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 574: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 575: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 576: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 577: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 578: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 579: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 580: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 581: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 582: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 583: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 584: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 585: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 586: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 587: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 588: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 589: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 590: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 591: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 592: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 593: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 594: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 595: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 596: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 597: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 598: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 599: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 600: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 601: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 602: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 603: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 604: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 605: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 606: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 607: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 608: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 609: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 610: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 611: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 612: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 613: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 614: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 615: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 616: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 617: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 618: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 619: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 620: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 621: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 622: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 623: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 624: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 625: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 626: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 627: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 628: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 629: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 630: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 631: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 632: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 633: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 634: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 635: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 636: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 637: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 638: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 639: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 640: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 641: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 642: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 643: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 644: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 645: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 646: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 647: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 648: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 649: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 650: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 651: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 652: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 653: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 654: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 655: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 656: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 657: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 658: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 659: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 660: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 661: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 662: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 663: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 664: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 665: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 666: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 667: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 668: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 669: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 670: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 671: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 672: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 673: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 674: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 675: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 676: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 677: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 678: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 679: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 680: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 681: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 682: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 683: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 684: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 685: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 686: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 687: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 688: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 689: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 690: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 691: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 883: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 884: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 885: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 886: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 887: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 888: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 889: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 890: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 891: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 892: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 893: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 894: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 895: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 896: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 897: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 898: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 899: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 900: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 901: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 902: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 903: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 904: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 905: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 906: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 907: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 908: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 909: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 910: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 911: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 912: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 913: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 914: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 915: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 916: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 917: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 918: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 919: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 920: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 921: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 922: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 923: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 924: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 925: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 926: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 927: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 928: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 929: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 930: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 931: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 932: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 933: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 934: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 935: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 936: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 937: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 938: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 939: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 940: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 941: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 942: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 943: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 944: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 945: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 946: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 947: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 948: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 949: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 950: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 951: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 952: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 953: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 954: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 955: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 956: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 957: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 958: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 959: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 960: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 961: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 962: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 963: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 964: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 965: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 966: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 967: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 968: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 969: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 970: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 971: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 972: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 973: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 974: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 975: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 976: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 977: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 978: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 979: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 980: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 981: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 982: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 983: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 984: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 985: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 986: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 987: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 988: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 989: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 990: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 991: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 992: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 993: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 994: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 995: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 996: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 997: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 998: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland
Page 999: This page intentionally left blankperpus.univpancasila.ac.id/repository/EBUPT181231.pdf · 2018. 9. 13. · Michael C. Corballis, PhD Department of Psychology University of Auckland