
The Intentionality of Cognitive States

FRED I. DRETSKE

To know, perceive, or remember is to know, perceive, or remember something. Subtleties aside, this something may be either a thing or a fact.¹ We remember a party, see a game, and know a person; but we also remember that the party was a bore, see that the game has started, and know that Hilda is a grouch.

It may be, as some have argued, that we cannot know, remember, or perceive a thing without knowing, remembering, or perceiving some fact about that thing. According to this view, what we know, perceive, and remember is always propositional in character. To describe someone as knowing a person, thing, or event is just to describe the person as knowing some relevant facts about the item in question without disclosing, by one's manner of description, what facts it is that are known.

I do not intend to quarrel with this view. I think it mistaken, but I do not have the time to argue the point here. My objectives are more limited. I mean to discuss our propositional attitudes and, in particular, those propositional attitudes that involve the possession of knowledge. I mean, that is, to discuss those mental states whose expression calls for a factive nominal, a that-clause, as complement to the verb and, moreover, whose expression implies that the subject of that state knows what is expressed by that factive nominal. I am concerned with knowing, seeing, and remembering that your dog is lame, not with knowing, seeing, and remembering your (lame) dog. I shall call such states cognitive states. The belief that your dog is lame is not, on this characterization, a cognitive state.

1. INTENTIONAL STATES

If I know that the train is moving and you know that its wheels are turning, it does not follow that I know what you know just because the train never moves without its wheels turning. More generally, if all (and only) Fs are G, one can nonetheless know that something is F without knowing that it is G. Extensionally equivalent predicate expressions, when applied to the same object, do not (necessarily) express the same cognitive content. Furthermore, if Tom is my uncle, one cannot infer (with a possible exception to be mentioned later) that if S knows that Tom is getting married, he thereby knows that my uncle is getting married. The content of a cognitive state, and hence the cognitive state itself, depends (for its identity) on something beyond the extension or reference of the terms we use to express the content. I shall say, therefore, that a description of a cognitive state is nonextensional.

Any state of affairs having a propositional content whose expression is nonextensional I shall call an intentional state. On this characterization our cognitive states are all intentional states. The truth of the statement “S knows that a is F” does not depend, simply, on the extension or reference of the terms ‘a’ and ‘F’. This statement therefore describes an intentional state of S. I think that this use of the word ‘intentional’ is in reasonably close agreement with current philosophical usage, even if it does not capture all that Brentano intended in speaking of intentionality as the mark of the mental.

Intentional states (and, therefore, cognitive states) appear to have something like meanings (propositions) as their object (content), as that on which the mind is directed. I say things appear this way since virtually any change in meaning (in the terms used to express the content) generates a different content and, thus, a different intentional state.

A materialist confronts the task of explaining, or explaining away, this intentional feature of cognitive states. Some account must be given of how a purely physical system could occupy states having a content of this sort. Or, failing this, some explanation must be given of why we systematically delude ourselves into thinking that we occupy states of this sort. What follows is a crude blueprint, an attempt to sketch, along realistic lines, an explanation for how purely physical systems could (because even the simplest mechanical systems do) occupy intentional states of the appropriate kind. The distinctive character of our cognitive states lies, not in their intentionality (for even the humble thermometer occupies intentional states), but in their degree of intentionality.

2. BEHAVIOR AND MEANING

Central state materialists find themselves tugged in two directions: inward as the locus of our mental states; and outward as the locus for whatever meaning or content these central states might have. The result of this tension is often a curious blend of behaviorism with psychological realism. Our psychological states are genuine inner states (to be distinguished from the behavior they help to produce), but everything that makes them psychological (in contrast, say, to gastronomical or just neurological) is borrowed, so to speak, from the sort of behavior they help to determine. The flower of mentality has its roots inside, but all the blossoms are outside. It is behaviorism with a displaced reference. Some call it functionalism.


The approach to intentionality is typical. The output, or some of the output, of language-using creatures has a semantic dimension (a meaning) that neatly parallels the kind of content we (as materialists) want to attribute to the system’s internal physical states in describing its cognitive processes. Why not let the internal states “borrow” this content?² This, of course, would make our attributions of content to the internal states themselves (in our ordinary descriptions of people knowing and remembering things) a bit of a fiction. The internal states would not literally have this content. Nevertheless, this is the best that can be done with the confused ontology of ordinary language. The idea, roughly, is that if S utters the words, “The sun is shining,” and if his utterance of these words is causally explicable in terms of a central neural state, then this central state acquires the content: The sun is shining. It, so to speak, shares in the glory of meaning this. By virtue of this borrowed content, the central state acquires the status of a belief: the belief that the sun is shining. Harnessing this account of belief with a causal theory of knowledge, one then goes on to say that if this state is brought about (in the right way) by a shining sun, then it constitutes S’s knowledge that the sun is shining. The verbal output provides the “pattern” for assigning semantic properties (meaning or content) to those internal, neurological states that produced it. The intentional structure of our cognitive states is merely a reflection of the semantic properties of the output they produce.

What about creatures that do not have a language? One option is to simply deny that they (dogs, cats, birds) know or believe anything at all, at least nothing expressible in our language. My dog does not know (believe, think) that I am leaving. He just acts that way. My preparations to leave may cause him to act that way. It may even be true to say that the dog sees me getting ready to leave (and his seeing me getting ready to leave is why he is getting so excited), but he does not see that I am getting ready to leave. He has no internal state with this content because he exhibits no output with precisely this meaning (a meaning, it should be noted, that contrasts with “My master is putting on his coat and moving toward the door”).³

Another option in the case of creatures without language is to appeal to other, non-verbal, behavior as the source of cognitive content. Food is to be eaten. Predators are to be avoided. It is the appropriateness of these responses to one thing rather than another, just as it is (given the ordinary meanings of the words) the appropriateness of the utterance “The sun is shining” to one state of affairs (a shining sun) rather than another, that confers on the internal source of this behavior the derived content “This is food” or “That is a predator (or dangerous).”⁴ Roughly speaking, if the dog eats it, he must think it is food. But thinking it is food is an explanatory artifact: nothing more or less than being in a state (whatever neural state this is) that prompts the dog to salivate, chew, swallow, etc. Once again, the intentional character of the internal state, its having a content expressible as “This is food,” is only a reflection of the properties of the consequent behavior. The dog knows or believes that there is food in front of him, he occupies a physical state having this content or meaning, only because the state prompts the dog to exhibit behavior appropriate to food. If the dog has no response that is appropriate to X (e.g., to daisies qua daisies), then he is incapable of believing or knowing that anything is an X.

This behavioristically inspired approach to the analysis of intentional structure has a certain degree of plausibility. Nevertheless, it always stumbles on the circularity inherent in analyzing cognitive content in terms of something (verbal behavior, appropriateness of response) that lacks the relevant properties (meaning, appropriateness) unless the internal source of that behavior is already, and independently of its producing that output, conceived of as having a determinate content. The appropriateness of what we do depends on what we know and believe (not to mention what we desire and intend). There is nothing inappropriate about my saying, “The sun is shining” at midnight if I sincerely believe the sun is shining. At least there is nothing inappropriate about it in any sense of ‘appropriate’ that tells us something about what I know or believe. Is it inappropriate for the hen not to run from the fox? This depends. It depends, among other things, on whether the hen recognizes the fox, on whether she wants to protect her chicks, on what her purposes are. Independently of these factors, the hen’s behavior is neither appropriate nor inappropriate. To describe the hen, for example, as engaging in diversionary tactics (to protect her chicks) is already to describe her behavior in a way that presupposes an intentional structure for the internal source of that behavior. The appropriateness of response, then, insofar as this is relevant to what the organism believes and intends,⁵ is a property the response acquires only in virtue of its production by internal states having a content.

This is particularly obvious with verbal behavior. If what I say is to have a content of the sort required, then it cannot be understood as merely the sounds I make. It must be understood as the meaning, the semantic content, of these sounds. But this, I submit, is circular! Until we have a system, or community of systems, with beliefs and intentions, the output does not have the requisite semantic structure (meaning). Internal states cannot acquire their meaning from the output they produce because until the internal states have a content, they cannot produce a relevantly meaningful output. Replacing a doorbell by a device (tape recorder, etc.) that announces “Someone is at the door” whenever the door button is pushed brings one no closer to a system with internal states having content. The output of such a device may be said to mean that someone is at the door, but this is either Grice’s natural meaning (in which case the ringing bell means the same thing) or it is a meaning we assign it in virtue of the acoustic pattern’s significance in our system of communication.⁶ In the latter case the output may be said to mean something in the relevant linguistic sense, but this is a meaning it derives from its occurrence in an appropriate community of fully intentional systems (speakers of the language). Output or behavior has nothing relevant to give except where the gift is not needed.

3. THE INTENTIONAL STRUCTURE OF INFORMATION

The problem of intentionality loses some of its mystery if we think of simple communication systems. If we approach the problem in this way, it soon becomes clear that intentionality, rather than being a “mark of the mental,” is a pervasive feature of all reality, mental and physical. Even the humble thermometer occupies intentional states. What is distinctive, and hence problematic, about our cognitive states is not the fact that they have a content, not the fact that this content has intentional characteristics (for this is true as well of the thermometer), but the fact that they have a higher order intentionality.

To see why this is so, consider a simple information processing device. The fundamental idea of Communication Theory is that the amount of information transmitted between two points is a function of the degree of nomic or lawful dependence between the events occurring in these two locations. The mathematical details are not really important for our purposes. What is important is that the quantity of information arriving at R (the receiver) from S (source) depends on the set of conditional probabilities relating events at R and S. If there is only a chance correlation between what occurs at S and what occurs at R, then no information passes between them. From this extreme we pass through a continuum of possible gradations until we reach a situation in which the flow of information between S and R is optimal: a noiseless (or equivocation-free) channel between S and R. For this condition to exist, a certain set of conditional probabilities must obtain. Every conditional probability must be either 0 or 1: strict nomic dependence between the events occurring at S and R.⁷
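The point can be made concrete with a small calculation. The following sketch (mine, not part of the text; the distributions and names are invented for illustration) computes the average information transmitted from S to R, in the Shannon-Weaver sense of note 7, from a source distribution and the conditional probabilities of the channel:

    import math

    def transmitted_information(p_source, channel):
        """Average mutual information I(S;R), in bits, given a source
        distribution p_source[s] and conditional probabilities
        channel[s][r] = P(r | s)."""
        # Marginal distribution of events at the receiver R.
        p_r = {}
        for s, ps in p_source.items():
            for r, p_rs in channel[s].items():
                p_r[r] = p_r.get(r, 0.0) + ps * p_rs
        # I(S;R) = sum over s, r of P(s) P(r|s) log2[ P(r|s) / P(r) ]
        info = 0.0
        for s, ps in p_source.items():
            for r, p_rs in channel[s].items():
                if ps > 0 and p_rs > 0:
                    info += ps * p_rs * math.log2(p_rs / p_r[r])
        return info

    src = {"hot": 0.5, "cold": 0.5}

    # A noiseless (equivocation-free) channel: every conditional
    # probability is 0 or 1, so the full source entropy (here 1 bit)
    # arrives at the receiver.
    noiseless = {"hot": {"high": 1.0}, "cold": {"low": 1.0}}
    print(transmitted_information(src, noiseless))   # 1.0

    # A merely chance correlation: P(r|s) = P(r) throughout, so no
    # information passes between the two points.
    chance = {"hot": {"high": 0.5, "low": 0.5},
              "cold": {"high": 0.5, "low": 0.5}}
    print(transmitted_information(src, chance))      # 0.0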

It is important to emphasize that the conditional probabilities governing the flow of information are nomic or lawful in character. It is not enough that the type of event at the receiver should correspond, one-to-one, to the type of event occurring at the source. It is essential to the transmission of information that this correspondence have its basis in a lawful dependence, statistical or deterministic, between the events at S and R. For me to communicate, telepathically, with you it is not enough to have thoughts occurring to you that correspond exactly to what I am thinking. For genuine communication to occur, for you to receive information from me, it is essential that there be a lawful dependence between what I am thinking and what you think I am thinking.

If we conceive of information in this way, a thermometer may be said to carry information about its environment to the extent to which its state (e.g., the height of the mercury column) depends, lawfully, on the ambient temperature. And a pressure gauge carries information about, say, altitude.

With only these rudiments of Communication Theory in hand, we are in a position to appreciate an important fact about the transmission, receipt, and processing of information. Information has an intentional structure that it derives from the nomic relationships on which it depends. Since a nomic relation between properties (magnitudes) F and G is an intentional relationship, information, understood as the measure of this mutual dependency, inherits this structure. If F is lawfully related to G, and ‘G’ is extensionally equivalent to ‘H’, F is not necessarily related in a lawful way to H. If it is a natural law that things having the property F have the property G, it does not follow that there is a law relating the property F to the property H just because, as a matter of fact, everything that is G is also H (and vice versa). To use a well-known example, drunks may have liver problems and these problems may have their basis in a nomic relationship between excessive alcoholic intake and the condition of the liver. But we cannot infer from this fact that there is a nomic connection between sitting on park bench B and liver trouble just because all, and only, drunks sit (have sat, are sitting, and will sit) on park bench B. To reach this conclusion we would first have to be assured that there was some lawful regularity between sitting on the bench and being a drunk. This, though, is an assurance not provided by being told, simply, that there is an extensional equivalence between ‘is a drunk’ and ‘sits on park bench B’.
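The logical form of the point may be set out schematically (the notation, with ‘□’ marking nomic necessity, is mine rather than the paper’s):

    □(x)(Fx ⊃ Gx)        It is a law that all Fs are G.
    (x)(Gx ≡ Hx)         ‘G’ and ‘H’ are (merely) extensionally equivalent.
    ---------------------
    □(x)(Fx ⊃ Hx)        Does NOT follow.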

It is obvious that there is a vast network of lawful relationships existing between the properties and magnitudes of our physical universe. Some of these we know. Others we do not. It is this kind of nomic dependence that underlies and supports our assertion of subjunctive conditionals of the form: “The metal would not have expanded unless it was heated”; “If the capacitor had discharged, it would have moved the galvanometer needle”; and “The pressure would not increase unless we were losing altitude.” It is also obvious, except perhaps to a few philosophers, that statements describing the lawful relations between properties and magnitudes give expression to something more than an extensional relationship between these properties or magnitudes. To assert that it is a law that all Fs are G is to assert something stronger than that nothing is (was or will be) F that is not G.

It is not my purpose in this paper to analyze this feature of natural laws. Sufficient unto my purpose is the fact that laws have this feature. This, I think, is undeniable. How it is to be explained is another matter.

Since, therefore, the amount of information transmitted from one point to another depends on the system of nomic regularities that prevail between the events at these points, the information reaching the receiver about the events occurring at the source has the very same intentional character as do the underlying regularities. Even when ‘F’ and ‘G’ are extensionally equivalent, one can receive the information that something is F without receiving the information that something is G. One can receive information about property F (e.g., x is F) without receiving information about property G (e.g., x is G) even though nothing is F that is not G. This is possible because the signal one receives may have properties that depend, lawfully, on x’s being F without depending, lawfully, on x’s being G. And in such a case the signal carries information about the one property that it does not carry about the other.

Any physical system, then, whose internal states are lawfully dependent, in some statistically significant way, on the value of an external magnitude (in the way a properly connected measuring instrument is sensitive to the value of the quantity it is designed to measure) qualifies as an intentional system. It occupies states having a content that can be expressed only in nonextensional ways. A device that measures the value of F, and hence occupies a state with the content “F is increasing,” does not (necessarily) occupy a state with the content “G is increasing” even though G always increases when F increases.


What this suggests, of course, is that the intentionality of our cognitive states has its source in the intentionality of informational structures. S can know that x is F without knowing that x is G (despite the extensional equivalence of ‘F’ and ‘G’) because S can receive information to the effect that x is F without receiving information to the effect that x is G. And if we assume, as it seems plausible to assume, that S cannot know that x is G unless he receives some quantity of information to the effect that x is G, then we have a tidy explanation for the intentionality of epistemic contexts. One cannot substitute co-referential expressions, salva veritate, in the context “S knows that . . .” because knowledge requires information and statements describing the information S has received are themselves nonextensional. And this intentionality derives, in turn, from the nonextensionality of statements describing nomic dependencies: Cognitive states exhibit an intentional structure because they are, fundamentally, nomically dependent states.

I have, so far, concentrated on the predicate term in those clauses we use to give expression to what is known. Despite extensional equivalence, S’s knowing that x is F is different than S’s knowing that x is G, and this difference is to be explained by the difference between receiving information to the effect that x is F and receiving information to the effect that x is G. It may be thought, however, that the parallel between knowledge and information collapses when we examine the subject term, the ‘x’, in these expressions. Someone can know that the blonde, standing in the corner, is angry without knowing that my sister is angry despite the fact that the blonde is my sister. That is to say, the context “S knows that . . .” is opaque, not only with respect to the embedded predicate expressions, but also with respect to the embedded subject terms. How is this feature of our epistemic descriptions to be explained?

As it turns out, it can be explained quite easily. I would not have raised the point (not, at least, at this time) unless it could be. A signal can carry the information that A is F without carrying the information that it is, in fact, A which is F. S can receive information that a woman (who happens to be my sister) is angry without receiving the information that she (the angry woman) is my sister. A thermometer, immersed in water, can tell us what the temperature of the water is, but it does not thereby tell us what it is that has that temperature. When partial information of this kind is received, we can explain why S does not know that my sister is angry even though the person he knows to be angry is my sister. He does not know it because, although he received the information that she (my sister) was angry, he did not receive (or may not have received) the information that she was my sister.

4. HIGHER LEVELS OF INTENTIONALITY

It seems, then, that the intentionality associated with our cognitive states can be viewed as a manifestation of an underlying network of nomic regularities. If, therefore, the lawful dependence of one magnitude on another is part of the physicist’s picture of reality, then intentionality is also part of that picture. Hence, to the extent to which the mentality of our cognitive states resides in their intentional structure, knowledge, perception, and memory are perfectly “natural” phenomena. There is nothing unique about them.

It may seem, however, that I have gone too far. In my efforts to naturalize the mental, have I not succeeded, only, in mentalizing the natural? For if the intentional structure of our cognitive states can be understood in terms of their information-carrying status (where the latter is understood, merely, as their nomic dependence on the condition defining the content of the cognitive state), then not only will living organisms perceive, know, and remember, but such simple devices as galvanometers, television sets, and pressure gauges will also qualify for cognitive attributes. For they also receive, process, and store information of the kind now under discussion, and their outputs are variously regulated by this information. Hence, they occupy intentional states of the same kind. Of course, they do not behave in interestingly diverse ways; they do not exhibit what we think of as intelligent behavior; they do not learn. But since behavior has already been rejected as the source of intentional structure, their dull, predictable responses to information-bearing stimuli should be irrelevant. According to the present line of argument, simple information-processing devices, mechanical artifacts, should have internal states possessing a content of the same sort that living organisms have when they know or perceive that something is the case.

The danger here is real. If galvanometers occupy intentional states, the conclusion to be drawn is not that galvanometers know things, but that knowledge is not simply a matter of occupying an intentional state, a state with a content corresponding to what is known when something is known.

Let me clarify this by contrasting the intentional state of a galvanometer (a device for detecting and measuring electrical current flow) and a genuine cognitive state, the state we are in when we know, for example, that there is a current flow between points A and B. Despite the fact that galvanometers receive, process, and display (for the convenience of someone using the instrument) information about affairs external to them, they do not occupy cognitive states. The reason they do not is not because their internal states (the amount of torque, say, on the mobile armature to which a pointer is affixed) fail to have a content, a content to which we can give expression with the sentence “There is a current flow between points A and B.” The instrument’s internal states certainly have this content. If they did not, we could never learn that there was a current flow between A and B by using the instrument. We get this information from the instrument; we depend on it to deliver, transmit, something with this content. This is only to say that such instruments are designed so that their output (hence the internal states responsible for that output) depends, nomically, on the amount of electrical current flowing between points to which their probes are affixed.

No, the reason the galvanometer does not know anything is because those states of the instrument that do carry information, and hence possess a content, carry too much information, have too much content, to qualify as genuine cognitive states. A galvanometer, and every other simple information-processing device of the sort now in question, is so constituted that it cannot distinguish between pieces of information that, from a cognitive standpoint, are different. If there is, as we know there is, a law that relates current flow to voltage differences (current will flow between points A and B only if there is a voltage difference between points A and B), then the galvanometer is incapable of representing one state of affairs without representing the other. It cannot carry the information (hence have the content) that there is a current flow between A and B without carrying the information (hence having the content) that there is a voltage difference between A and B. This is a consequence of the fact that nomic dependence is a transitive relation: if the position of the instrument’s pointer depends on there being a current flow between A and B, and the latter depends on there being a voltage difference between A and B, then the pointer’s position depends on there being a voltage difference between A and B. The swing of the pointer carries both pieces of information. Both pieces of information qualify as the pointer position’s content.⁸ But there is more. The movement of the pointer also carries information about the intensity of the magnetic field created by the current flow, the amount of increased tension in the instrument’s restraining spring, and so on. All this information is embodied in the behavior of the pointer. There is, as it were, no way for the galvanometer to “know” that there is a current flow without “knowing” that there is a voltage difference, a magnetic field, an increased tension in the restraining spring, and so forth. All of these are part of the pointer’s informational content. We, the users of the instrument, may be interested only in one magnitude, but the instrument itself is absolutely undiscriminating with respect to these contents.
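The transitivity in question is just the composition of lawful dependencies, as a toy model makes plain (the functions and constants below are my inventions for illustration, not drawn from the text):

    # Each stage of a toy galvanometer is a deterministic (lawful)
    # function of the preceding one, so the dependencies compose.

    def current_from_voltage(v, resistance=10.0):
        # Ohm's law: current flow depends lawfully on the voltage
        # difference across a fixed resistance.
        return v / resistance

    def deflection_from_current(i, torque_constant=50.0):
        # Pointer deflection (in degrees) depends lawfully on current.
        return torque_constant * i

    for v in (0.0, 1.5, 3.0):
        d = deflection_from_current(current_from_voltage(v))
        # Because each link is lawful, the deflection fixes the voltage
        # just as surely as it fixes the current: the pointer cannot
        # carry the one piece of information without the other.
        print(f"voltage = {v:3.1f} V  ->  deflection = {d:5.1f} deg")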

This is why the galvanometer cannot know anything, even about those things about which it carries information. Its internal states, although they have a content of sorts, a content whose expression is nonextensional, do not have an exclusive semantic content of the sort characterizing genuine cognitive states. What is known when something is known differs, not only from extensionally equivalent pieces of information, but from nomically equivalent pieces of information. Knowledge that there is a current flow between points A and B is different than knowledge that there is a voltage difference between A and B despite the nomic inseparability of current flow from voltage differences. This is a distinction the galvanometer cannot make. It is insensitive to such cognitive differences. Hence the intentional states of a galvanometer do not qualify as cognitive states. They are intentional (as this was defined) but they are not intentional enough.

The point can perhaps be put more simply in the following way. If there is a natural law to the effect that every F is G, then no information-processing system (man included) can occupy a state having the informational content that something is F without, thereby, occupying a state (the same state) having the informational content that something is G. Every internal state that represents x as being F automatically represents x as being G. This is not a question of unsophisticated filtering techniques. No filter can make a structure depend on x’s being F without making it depend on x’s being G if, as we are assuming, x’s being F itself depends on x’s being G. Therefore, if a system is to be capable of knowing that x is F without knowing that x is G, and I take this possibility to be part of what it means to say that S knows that x is F (where ‘F’ and ‘G’, though differing in meaning, specify properties that are nomically dependent), this system must be endowed with the resources for representing nomically related states of affairs in different ways. The system must be given the wherewithal to represent x’s being F without representing x’s being G even though it cannot carry the one piece of information without carrying the other. For this to occur, the cognitive content of the system’s internal states must be a function, not only of the information they are designed (or were evolved) to carry, but of the manner in which this information is represented or coded.

How is this possible? This is not the place to develop a full account of the matter, but perhaps the following, extremely oversimplified, example will help illustrate a promising avenue of development. Think of an organism that is sensitive to hydrochloric acid (HCl). This is the only acid to which it is sensitive. Some kind of receptor system is responsible for picking up and delivering the information that the organism is in the presence of HCl. We put the organism in a solution of HCl and it reacts appropriately (i.e., the way it always reacts in the presence of HCl). Assuming that the organism’s response is controlled by some internal nervous state, what can we say about the content of this internal state? If we think merely in terms of the information carried by the organism’s internal states, any state having the content “This is HCl” will also have the content “This is an acid.” The neural state may be said to carry the information (hence have the content) that HCl is present, but the very same state that carries this information also carries the information (hence has the content) that an acid is present. Such an organism resembles our galvanometer in its inability to distinguish between cognitively different contents. These contents are different (cognitively different) because one can know that something is an acid without knowing that it is HCl, perhaps (on some accounts of these matters) even know that something is HCl without knowing it is an acid. The organism we have described does not have internal states that exhibit this kind of difference in content. Hence it does not know that it is in the presence of HCl nor does it know that it is in the presence of an acid. It does not know anything.

Compare the organism just described to one that is sensitive to a variety of different acids, exhibiting the same response to all. Let us suppose, however, that occasionally, in the presence of HCl (but no other acid) it exhibits a unique response. If we assume, once again, that there are different types of neural states responsible for different types of response, then we can begin to see the crude beginning of genuine cognition. For when we place this organism in HCl, there are two different ways it can code the information (or represent the fact) that it is in the presence of an acid. If it responds in the way it does to acids in general, then the associated neural state is one way it has of representing the acidity of its surroundings. If it responds in the unique way to HCl, then the associated neural state is another way it has of representing the acidity of its surroundings. Both internal states carry the information that an acid is present (recall: no state that carries the information that x is HCl can fail to carry the information that x is an acid), but the coding of this information is different in the two cases.

Suppose, then, we understand a structure’s cognitive content to be determined not solely by the information it carries but by the way it codes or represents this information. We now have a basis for distinguishing internal structures in a way that approximates the way we distinguish cognitive states. We have, in other words, a more satisfactory model of that higher order intentionality characteristic of cognitive content. When placed in HCl, the second organism described can occupy either one of two distinct states: one having the (cognitive) content: This is HCl; and the other having the (cognitive) content: This is an acid. It senses, so to speak, HCl (this information gets in), but this (sensory) information is capable of generating either one of two different cognitive states, either the “belief” that it is HCl or the “belief” that it is an acid. Although no structure can carry the information that x is HCl without carrying the information that x is an acid, a system that is sufficiently rich in the kind of information it can receive can nonetheless extract one of these pieces of information without extracting the other. It does so by having different structure types for encoding the incoming information: one structure type “meaning” that x is HCl, the other “meaning” that it is an acid.
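A toy rendering of the two organisms may help fix the idea (the state names and the list of acids are my inventions, and the occasional HCl-specific response is simplified here into a deterministic one):

    ACIDS = {"HCl", "H2SO4", "HNO3"}

    def organism_one(stimulus):
        # Sensitive to HCl and nothing else: a single structure type
        # does all the work, so "this is HCl" and "this is an acid"
        # are never encoded differently.
        return "RESPONSE_STATE" if stimulus == "HCl" else None

    def organism_two(stimulus):
        # Sensitive to many acids, with a further HCl-specific response;
        # two structure types now encode the same incoming information
        # in different ways: the crude beginning of distinct contents.
        if stimulus == "HCl":
            return "HCL_STATE"    # one way of representing the acidity
        if stimulus in ACIDS:
            return "ACID_STATE"   # another way of representing it
        return None

    for stimulus in ("HCl", "H2SO4", "water"):
        print(stimulus, organism_one(stimulus), organism_two(stimulus))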

On this account of things, the difference between a system that knows that something is F and a system that merely receives, processes, and has its output controlled by the information that x is F is that the former has, while the latter lacks, a representational or coding system that is sufficiently rich to distinguish between something’s being F and its being G where nothing can be F without being G. This capacity for differentially encoding the various pieces of information arriving in a given signal is a capacity a system has (or can develop) in virtue of its capacity for receiving information about x’s being G without receiving information about x’s being F (either because x is not F or because, though it is F, the signal fails to carry this more specific piece of information). If the only way an organism can receive the information that x is an acid is through the information that x is HCl, then it cannot possibly develop a way of coding information about acidity that is different from the way it encodes information about this specific kind of acid. It cannot, as it were, acquire the concept acidity. Hence its internal states will never reflect the cognitive differences between this is HCl and this is an acid. This is why our galvanometer can never distinguish between voltage and current flow; it has no way of obtaining information about voltage differences except through information about current flow, hence no way of representing voltage differences in a way that is different from the way it represents current flow.

For a system to know what it is described as knowing, for it to occupy cognitive states with an appropriate content, it must have an informational receiving and coding capacity that is at least as rich in its representational powers as the language we use to express what is known. If it does not, then the language we use to describe what it knows cuts the intentional pie into slices that are too thin for the system to handle. It does not have a representational system rich enough to reflect the kind of distinctions (between various cognitive contents) required. Many simple information-processing devices can receive, process, and transmit the information they are required to have in order to know things. In fact, we often rely on them as conduits for the information that we require in order to know things. The cognitive inadequacy of these devices lies not in their information-processing capabilities but in their failure to have (and inability to develop) a singular way of representing the individual components of information embodied in the signals they do receive. They exhibit intentionality, but they do not exhibit it at the level required for cognitive systems.

5. CONCLUSIONS AND EXCUSES

So much remains to be said that I hesitate to claim much, or anything, for what has been said. What I have tried to do is to indicate how a materialist might go about analyzing cognitive structure in a way that preserved a number of realistic intuitions about the nature of our propositional (more specifically, cognitive) attitudes. The resulting picture, assuming it could be fleshed out, yields a view of our cognitive states that (1) makes them internal states; (2) gives them a content exhibiting a significant degree of intentionality; and (3) makes the content of these states independent of the particular way the states themselves happen to be manifested in overt behavior (you do not have to eat it just because you believe it is food, even when you happen to be hungry).

Now the excuses. I have not discussed (indeed, I have carefully avoided) the notion of belief. I have restricted my attention to states whose contents were true (since cognitive states involve knowledge and knowledge implies truth). This was a convenient restriction since it allowed me to characterize a cognitive state’s content in terms of the situation (condition, state of affairs) on which it was nomically dependent (about which it carried information). But beliefs can be false; there need be no facts corresponding to the belief’s content in the way there must be for cognitive states. How, then, can we understand a belief’s content in informational terms? A belief, or the internal structure that is to qualify as a belief, need not have any informational content.

This is a problem, but not, I think, an insuperable problem. A story must be told about the way certain types of structures develop (during learning) as information-bearing structures and, hence, acquire an (information-carrying) role, a role which they sometimes fail to play. This, though, is another story.

I have talked, rather glibly, about levels or grades of intentionality. I have suggested that we can, with simple mechanical models, simulate a system with internal states having a considerable degree of intentionality. But there are still higher levels of intentionality, and our cognitive states exhibit these higher levels. To construct an adequate model of cognitive content, we need structures that can distinguish not only between nomically related situations but between analytically (or, if you do not like that word, logically) related contents. Since knowing that P can be distinguished from knowing that Q even when P entails Q (sometimes at least), the problem is to develop the above analysis of cognitive content in such a way as to reflect this higher grade of intentionality.

There are other problems, but I prefer to dwell on what I have done. What I have done, I think, is to focus the problem of intentionality in a slightly different way, a way that lends itself more readily to reductionistic efforts. The problem is not: how do we build systems that exhibit intentional characteristics? For we already have such systems in the simple mechanical appliances to be found in our kitchens and workshops. Rather, the problem is: how can such physical systems be endowed with a rich enough information-handling capacity so that they can achieve the degree of intentionality characteristic of our cognitive states? This is a problem, I concede, but not a problem of kind (building intentional systems out of extensionally describable systems). It is a matter of degree. And if I am not mistaken, this is just the kind of difference in degree that work in artificial intelligence is progressively narrowing.

Notes

1. We also use interrogative nominals (know who he is, see where he is going) and infinitive clauses (remember to buy a present) as complements to these verbs. For the purpose of this paper I shall assume that when these constructions are used to describe what someone knows, sees, or remembers, they imply something of the form: S knows that . . . For example, if S knows where she is, then S must know that she is, say, in the closet. I shall not be concerned with knowing how to do things.

2. Wilfrid Sellars’s “Empiricism and the Philosophy of Mind,” in The Foundations of Science and the Concepts of Psychology and Psychoanalysis, vol. I in Minnesota Studies in the Philosophy of Science, ed. Herbert Feigl and Michael Scriven (Minneapolis, 1956), is an early example of this type of approach.

3. I take this to be Donald Davidson’s motive for refusing to credit dogs, say, with intentional cognitive states: their output is compatible with a variety of different intentionally characterized inner states and this underdetermination is not merely epistemological. This, at least, is the way I interpreted some of his arguments in a series of lectures delivered in Madison, Wisconsin.

4. This seems to be Daniel Dennett’s approach in Content and Consciousness (London, 1969).

5. There are, of course, senses of ‘appropriate’ in which a response can be characterized as appropriate independently of the subject’s beliefs and intentions. But these senses of ‘appropriate’ tell us correspondingly little about what the subject believes or intends and are, therefore, poor candidates for analyzing the content of internal states that produce the response.

6. Dennett puts the point nicely in contrasting the way we treat human snores (mere sounds having a cause but no meaning) and vocal emissions that have a semantic interpretation. “Once one makes the decision to treat these sounds as utterances with a semantic interpretation on the other hand, one is committed to an intentionalistic interpretation of their etiology, for one has decided to view the sounds as the products of communicative intentions, as the expressions of beliefs, or as lies, as requests, questions, commands and so forth.” “Two Approaches to Mental Images,” in Brainstorms (Hanover, N.H., 1978), p. 180.

7. The type of information I am here describing is the type associated with Claude Shannon and Warren Weaver’s The Mathematical Theory of Communication (Urbana, Ill., 1949). The Mathematical Theory of Communication has had a checkered career in psychology, and most investigators now tend to disparage its usefulness for semantic or cognitive studies. I think this is a mistake. The present article should help indicate why I think this is so. I exploit this theory, or, better, the principles underlying it, to a much greater extent in Knowledge and the Flow of Information (unpublished), from which the present article is derived.

8. This is why one can use a properly constructed galvanometer to measure voltage as well as, or instead of, current flow. Indeed, one can measure any magnitude (e.g., pressure, depth, speed) whose variations can be converted, by an appropriate transducer, into electrical form. All that is needed is a suitable calibration of the scale along which the pointer moves.