26 Philosophy

Philosophical Foundations

Chapter 26

Upload: eshal-fatima

Post on 10-Jul-2016



Page 2: Searle v. Dreyfus argument

Dreyfus argues that computers will never be able to simulate intelligence.

Searle, on the other hand, allows that computers may someday be able to pass the Turing test [i.e., they can simulate human intelligence]. However, does this mean that computers are intelligent? Is simulation duplication?

Searle's argument is an attack on the Turing test, directed against strong AI.

Page 3: 26 Philosophy

Simulation v. duplicationNo one would suppose that we could produce

milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis

No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched

Page 4: Motivation

The Turing test is an inadequate criterion of intelligence [it only deals with simulation].

The Turing test is an example of behaviorism: 'states' are defined by how they make people act.

• happiness is acting happy
• love is acting in a loving manner
• intelligence is acting in an intelligent manner
• to understand the word 'knife' means to be able to use it

But behaviorism is inadequate: since love & happiness are more than simply the way in which a person acts, so too must intelligence be. To be x, a person must be in the correct 'state'.

Page 5: 26 Philosophy

Is behaviorism plausible?Dualism is the belief that there are two substances

that make up human beings: minds & bodiesThese two substances are absolutely different &

incompatibleThus, to understand the mind we need not concern

ourselves with the body the mind can be abstracted from its ‘implementation’ in

the brain [behaviorism]Does AI thus subscribe to dualism?Dualism is rejected by most philosophers

Page 6: Alternative to dualism

Biological naturalism says that consciousness, intentionality, etc. are caused/produced by the brain in the same way that bile is produced by the liver. There thus aren't two substances; rather, the so-called mental phenomena are simply a result of physical processes [realism?].

There is something essentially biological about the human mind.

Page 7: Argument

To show that behaviorism is inadequate for understanding/consciousness, Searle designed a famous thought experiment in which he is locked in a room.

Under the door are slipped various Chinese characters, which he does not understand.

In the room with him is a rule book (in English) that tells him how to manipulate the characters that come under the door and which characters to slip back under the door, and a pad of paper for making intermediate calculations.

Page 8: Argument continued

The Chinese characters slipped under the door are called 'stories' and 'questions' by the people providing them.

The characters that Searle returns to the outside world are called 'answers'.

The answers perfectly answer the questions about the stories that he was given.

To an outside observer, it appears that Searle understands Chinese!

Page 9: Argument concluded

However, it is manifest [given] that Searle doesn't understand the stories, the questions, or the answers he is giving: he doesn't understand Chinese!

Thus, since intelligence requires a 'state of understanding' (the story must mean something to you), Searle can't be said to understand Chinese even though he gives the correct answers: correct input/output, but no understanding.
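The "correct input/output, but no understanding" point can be made concrete with a toy sketch: the rule book reduced to a bare lookup table that maps symbol strings to symbol strings. The tokens below are hypothetical placeholders (not real Chinese), and the table is far simpler than any rule book that could pass a Turing test; the only point is that every step is formal symbol manipulation with no step that consults meaning.

```python
# Toy sketch of the Chinese Room: the "rule book" as a pure lookup table.
# The tokens are illustrative placeholders, not real Chinese.
RULE_BOOK = {
    ("STORY-1", "QUESTION-1"): "ANSWER-1",
    ("STORY-1", "QUESTION-2"): "ANSWER-2",
}

def room(story: str, question: str) -> str:
    """Slip back whatever symbols the rules dictate; meaning is never consulted."""
    return RULE_BOOK[(story, question)]

# To an outside observer the answer is perfectly correct,
# yet nothing inside the function "understands" anything.
print(room("STORY-1", "QUESTION-1"))  # ANSWER-1
```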

Page 10: Conclusions

Similarly, just because a computer can produce the correct answers doesn't mean that it is intelligent.

Merely manipulating meaningless symbols is inadequate for intelligence; a 'state of intelligence' (intentionality) is also needed. What does it mean when I say x is intelligent? This exposes the problems with the behaviorist definition.

Thus, a computer can pass the Turing test and still not be said to be intelligent.

Page 11: Abstracting the argument [givens]

Brains cause minds [empirical fact].

Syntax [formalism] is not sufficient for semantics [contents]: syntax & semantics are qualitatively different aspects, and no quantitative increase of the former will ever produce the latter.

Computer programs are entirely defined by their formal, or syntactical, structure [definition]: the symbols have no meaning; they have no semantic content; they are not about anything.

Minds have mental contents; specifically, they have semantic contents [empirical fact].
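The givens above form a short syllogism, and its validity can be machine-checked. A sketch in Lean follows; the predicate names (Program, Mind, Syntactic, HasSemantics) are introduced here for illustration and are not Searle's own notation, and the formalization captures only the logical skeleton, not the empirical content of the premises.

```lean
-- Illustrative formalization of the slide's givens.
variable {Entity : Type}
variable (Program Mind Syntactic HasSemantics : Entity → Prop)

-- Conclusion I below follows from three of the givens:
--   programs are entirely syntactic [definition],
--   syntax is not sufficient for semantics,
--   minds have semantic contents [empirical fact].
theorem no_program_is_mind
    (programs_syntactic : ∀ x, Program x → Syntactic x)
    (syntax_not_semantics : ∀ x, Syntactic x → ¬ HasSemantics x)
    (minds_semantic : ∀ x, Mind x → HasSemantics x) :
    ∀ x, Program x → ¬ Mind x :=
  fun x hp hm =>
    syntax_not_semantics x (programs_syntactic x hp) (minds_semantic x hm)
```

Note that "brains cause minds" does not appear in the derivation of Conclusion I; it is needed only for Conclusions II and the expected conclusion about causal powers.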

Page 12: Conclusions I

No program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.

The way that brain functions cause minds cannot be solely in virtue of running a computer program.

Page 13: Conclusions II

Anything else that caused minds would have to have causal powers at least equivalent to those of the brain.

For any artifact that we might build which had mental states equivalent to human mental states, the implementation of a computer program would not by itself be sufficient. Rather, the artifact would have to have powers equivalent to the powers of the human brain.

Page 14: The expected conclusion

The brain has the causal power to give rise to intentional [semantic] states.

Computer programs can't give rise to intentional [semantic] states, since they're only syntax.

Thus, computer programs do not have the same causal power as brains.

Thus, computer programs can't give rise to mind & consciousness.

Page 15: Objections

• Systems reply
• Russell & Norvig
• Robot reply
• Brain simulation reply
• Other minds reply

Page 16: Systems reply

Objection: Perhaps neither the man in the room, nor the rules in English, nor the scratch paper understands anything, but the system taken as a whole can be said to understand.

Answer: Put the room within a single person: make the person memorize the rules, etc. Thus, there is no system, and the person still can't be said to understand. Syntactic information-processing sub-systems can't give rise to semantic content [can't be called intelligent].

Page 17: Information processing

Further, it seems that if all we are requiring for intelligence is information processing, then everything can be seen as doing information processing.

But this leads to a contradiction: we don't want to say that the stomach or a thunderstorm is intelligent.

• the stomach takes in something [food], processes it [digests it], and puts something out [energy]
• but if our definition of intelligence is that it is 'information processing', why isn't the stomach intelligent?

Page 18: Russell & Norvig

Certain kinds of objects are incapable of conscious understanding (of Chinese).

The human, paper, and rule book are of this kind.

If each of a set of objects is incapable of conscious understanding, then any system constructed from the objects is incapable of conscious understanding.

Therefore, there is no conscious understanding in the Chinese room [as a whole].

But molecules, which make up brains, have no understanding either, yet brains do understand; so the third premise proves too much [cf. brain simulation reply].

Page 19: Robot reply

Objection: If a robot were perceiving & acting in the world, then it would be intelligent; intentionality arises from being in a world.

Answer: Put the Chinese room in the robot's head: give the robot's perceptions to the Chinese room as Chinese characters, and give the directions to the robot in terms of Chinese characters.

We are in the same spot we were in before: no intentionality, because everything is still happening formally.

Page 20: Brain simulation reply

Objection: Simulate the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives replies to them.

Answer: This simulates the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties: its ability to produce intentional states.

Page 21: Other minds reply

Objection: How do we know someone understands Chinese? Only by their behavior.

Answer: The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output, because the computational processes and their output can exist without the cognitive state.

Page 22: Minds & machines

Machines can think; we just are machines!

However, computational processes over formally defined elements are insufficient; i.e., a computer program is insufficient. Formal elements can't give rise to intentional states; they can only give rise to the next state in the computational device. Only syntax, no semantics: interpretation is in the eye of the beholder.

Page 23: Meaning of the Chinese Room

Point of the Chinese room example: adding a formal system doesn't suddenly make the man understand Chinese. The formal system doesn't endow the man with intentionality vis-à-vis Chinese. Why would we expect it to endow a computer with intentionality?

E.g., computers don't know that '4' means 4: only more & more symbols; it never grounds out in meaning.

Page 24: Conclusion

We (I) don’t understand what “consciousness” or “self-awareness” is

If it flies like a duck, swims like a duck, walks like a duck, and quacks like a duck . . . ?