
The Southern Journal of Philosophy (1986) Vol. XXIV, No. 1

FEEDBACK ABOUT FEEDBACK: REPLY TO EHRING Frederick Adams Augustana College

In a recent paper[1] Douglas Ehring takes a swipe at feedback models of goal-directed behavior. He realizes that there was a line ahead of him, but he believes that now that the dust has cleared[2] we can see that earlier objections to cybernetic models were misguided. In his estimation, they resulted from mistreating feedback as a behavioral feature of a system only, not a structural feature. Nonetheless, Ehring believes that feedback, newly improved or not, is inadequate to help us understand goal-direction. His two basic claims are that negative feedback is not necessary for goal-directed activity, nor does it provide an adequate basis for goal specification.

I think that Ehring’s worries about cybernetic models are false worries. I say this even though they are shared by control theorists as well as philosophers[3]. There are even problems for the models that Ehring does not address which I also believe can be handled, such as the analysis of how internal states of mechanical devices can represent non-existing goal-states or how to solve the demarcation problem of finding the line which divides genuinely goal-directed systems from mere equilibrium-attaining ones. Here I address only points Ehring raises, realizing that there are more in the wings and more people to make them. My replies may seem incomplete to those who have these other worries in mind, but I shall leave them for other occasions[4].

Ehring’s argument against the necessity of negative feedback is by examples. To be successful, the examples must give cases of clearly goal-directed behavior that is not feedback controlled in any way. My replies to his examples will be that they do not show this. Either the behavior is indeed not feedback controlled and consequently not literally goal-directed at all, or the behavior is feedback controlled in a way that Ehring does not consider.

The following example is representative of the first type that Ehring has in mind. Sam’s goal is to activate a device. If Sam’s arm is situated such that Sam has only to close his hand, thereby activating the device,

Frederick Adams is assistant professor of philosophy at Augustana College. He has published in journals of philosophy and psychology on topics ranging from epistemology and the philosophy of mind to information processing theories of memory.



but all feedback about his hand’s closing is absent (arm anesthetized, blindfold in place, etc.), Sam can still close his hand and activate the device. Sam will neither need nor utilize information feedback to accomplish this task. Furthermore, Sam’s doing so will be goal-directed (Sam does this intentionally), showing that negative feedback is not necessary for goal-directed activity.

There are several reasons why this type of case, though it may initially appear damaging to a cybernetic account of goal-directed behavior, does not work. First, even if there is no feedback of which Sam is consciously aware, that does not show there is no feedback involved. There are myriad control systems in the body involving muscles and joints. Something as simple as a saccadic eye movement is feedback controlled, and this is true though we are consciously unaware of the control mechanisms as such[5]. So whether there would be feedback control (at a non-conscious level) is not settled by knowing only that Sam feels no sensations of his hand’s closing, as in this example.

Second, once we remove feedback control from the causal chain leading up to the activation of the device or the closing of the fingers, it is not clear that this is a case of goal-directed activity. I admit that at first it looks clear, but I think that changes upon closer inspection.

How, for example, is Sam to know at the very outset that his hand is not already closed and the device activated? Why should Sam attempt to do anything unless he knows his goal is not realized? If he has no knowledge of the result of his fingers’ motion, or even of that motion itself, then for all Sam knows the task may have already been completed (by him) before he believes it has been. Why should Sam try to do anything at all unless he has information feedback to the effect that he has not already accomplished the task (even if somewhat inadvertently)?

The previous point seems to me conclusive, but if it were not and Sam did know that the device was not already activated initially (without feedback!), still we have to wonder how he will know if or when he has moved his hand and activated the device, if he has no information (feedback) about this at all. Suppose we asked him “have you done it now?” “Now?” Sam will not know! It is clear that he may in fact close his hand and activate the device, but if we were to observe his behavior we might see a random sequence of hand closings. As if groping in the dark for the light switch, if Sam actually closes his hand and activates the device it may well be luck that he does so on the particular occasion that he does. And he clearly will not know when to stop trying or when he has succeeded. The longer we observed this the less likely we would be to call his activating the device goal-directed, although we would say that his goal was to activate the device.

Enough randomness and luck on the particular occasion of goal-success has crept in that the action is not clearly goal-directed, though it may be goal-initiated. By that I mean that the agent has his goal satisfied, but has lost the element of control that we use to differentiate



between goal-directed behavior and luck. As we know from a similar problem in action theory, a person can intend to kill Harry and kill Harry, without killing Harry intentionally. His intention to kill Harry may even cause his killing him, but not in the right way. For he may have lost control over his action, over when or how he kills Harry. (Realizing what he is intending to do, shoot Harry, Bill’s hand begins to tremble uncontrollably. His intention causes the trembling, which causes Bill’s finger to pull the trigger. Bill shoots Harry, caused to do so by his intention, but the act is not intentional.) It is the feedback-driven concept of control that we use to rule out action produced by wayward causal chains. One must be in control of his actions to have done them intentionally. When an action falls outside of what the agent can properly control, the action is not intentional even if caused by his intention. Something similar, I suggest, holds for goal-directed behavior generally[6].

One can have a goal and the having of the goal can cause its satisfaction, without thereby ensuring that the satisfaction was under the control of the agent. I take the “directed” in goal-directed seriously. It is control that the cybernetic condition of negative feedback is all about. Sam’s activation of the device would not be under his control and not something he did intentionally (directedly) on that particular occasion.

We may want to call Sam’s hand closings, themselves, goal-directed, independently of whether Sam activates the device. But how is Sam to know that his hand is closing, on any given occasion? If he does know, then there is some sort of feedback of information. It could be an afferent monitoring that accompanies the impulses to tighten the muscles. From Sam’s perspective this may seem only like the volition to close his hand: “do it now!” But he knows when he “volits”, as they say, and can use this as a form of feedback in the control of his behavior. If he does, then this type of example would go the other way: clearly goal-directed, clearly feedback driven. But this would eliminate the counterexample as well. So it is a case bereft entirely of all feedback that Ehring needs, and that is what I think will not count as goal-directed if we look only at the particular trial or occasion and exclude broader context. (This is Ehring’s strategy with the first type of example. Next I shall come back and show how to handle cases even where we know there is no feedback within the particular trial.)

But if even afferent monitoring is missing, then Sam would never know the following: that he had not already closed his hand, that he had undertaken a volition to close his hand, that his hand had begun closing, that his hand had completed closing. Under these conditions his hand closing cannot be goal-directed and Ehring’s first example fails. Observation of his hand may show random flutter, or sporadic closings, or nothing at all over large stretches of time. It should not count as goal-directed behavior if Sam should just happen to close his hand



randomly this moment and activate the device. For the element of control is clearly missing. (Indeed, nothing prevents wayward internal causal chains leading to Sam’s hand’s closing-random twitch, muscle spasm, whatever.)

Ehring’s second type of example I will call the “purely ballistic” examples. He envisages cases where a system does have feedback of information about the outcome of its behavior, but cannot utilize that information to correct for a different outcome (no error-reduction). By his lights, this could still yield goal-directed activity.

Ehring’s choice of example here is unfortunate. He considers a purchase of a lottery ticket with the goal being to win on that ticket. Even if you realize that you have not won, there is no further correction to be made for that ticket’s ability to win. The choice is unfortunate because winning the lottery is something that happens to you, but hardly something you do (think: I’ll tie my shoes, and brush my teeth, and win a lottery this morning). Your part in winning the lottery (if you do win) ends with the buying of the ticket, and the sequence of behavior involved in that is feedback controlled. The actual winning is completely out of your hands and not uncontroversially even an action of yours, though buying the ticket which did win is clearly your action.

I think, however, that a slight change of example will bring out the problem that Ehring is after. He is clearly addressing cases where there may be only one possible course of action or one possible trial at one’s goal. Suppose I crawl into my space ship, blast off for a certain orbit, but later realize that I will not make the designated speed for that orbit. I have the information feedback, but no fuel for a correcting burn. Is my flight activity and lift-off still goal-directed? Surely it is! Even though I can only monitor but not utilize my information feedback about goal-error. The fact that I cannot change what I have already put into action does not mean that my action is not goal-directed. At most it means that I cannot re-direct it (no mid-flight corrections). But the planning that went into the initial burn was clearly goal-directed activity and the shot is just the part where the action becomes “purely ballistic”.

There have been many cases of purely ballistic[7] behavior discussed in the cybernetics literature. For instance, the strike of a mantid happens so quickly that there is simply not time for it to be feedback controlled. Or consider something as simple as throwing a dart or firing a shot. These are cases where we clearly want to say that the behavior or action is goal-directed. Yet there comes a point in the causal chain where there is no room for correction. There is either no time for feedback control (mantid) or no means for correction (dart, no strings).

These examples are very important to control theorists, since they want to know if a particular piece of behavior is feedback controlled. But they are far less useful to make the point that Ehring wants to make-that feedback is importantly unable to help in the analysis of goal-directed activity.



Goal-directed behavior does not come one piece at a time. A given piece of behavior is only goal-directed by virtue of being produced by a goal-directed system. To try to analyze all goal-directed behavior as if for any given stretch of it there must be feedback control in that stretch, since feedback is necessary for goal-directedness, is mistaken. The systems themselves from which the goal-directed behavior issues must be goal-directed systems. They must have the capacity for controlling their behavior which is made possible by information feedback. But it would be wrong to think that this means each piece of goal-directed behavior, no matter how short in duration, must itself be feedback-driven in isolation and continuously.

We all know that when things go ballistic on us we have ways of correcting for error across trials rather than within trials (first shot low, second shot aim a little higher). Or we can use an anticipatory correction based on previous experience (aim a little high the first shot). We don’t have to be able to correct the trajectory of the bullet in mid flight in order for the shot to be a goal-directed shot. What makes it goal-directed is that it is a shot taken by a goal-directed system with the ability to make feedback controlled corrections in the aiming and across trials (from one shot to the next). It is in the analysis of being a goal-directed system that the feedback control mechanisms are crucial[8]. It is importantly mistaken to think that every single piece of goal-directed behavior must meet the conditions placed on the system as a whole. Yet that is what is going on when we see examples where behavior becomes purely ballistic. It is then claimed that feedback is not necessary for goal-directed behavior, or some other bold claim is made. Nothing of the kind is shown! The system emitting the purely ballistic behavior had better be a feedback controlled goal-directed system or all bets are off on the directedness of the individual piece of behavior.
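The across-trial correction described above (first shot low, aim a little higher next time) can be sketched as a short simulation. This is only an illustration: the function names, the linear error model, and the fixed two-unit bias are invented assumptions, not anything in the text.

```python
# Across-trial feedback: each shot is purely ballistic once fired, but the
# aim for the next trial is corrected by the observed error. The fixed
# 2-unit bias stands in for an unknown systematic disturbance.
def shoot(aim):
    """One ballistic trial: nothing can be corrected once the shot is off."""
    return aim - 2.0  # hypothetical bias: shots land two units low

def across_trial_correction(target, trials=5):
    aim = target
    hits = []
    for _ in range(trials):
        hit = shoot(aim)       # within-trial: no correction possible
        error = target - hit   # feedback arrives only after the trial
        aim += error           # correction applied to the NEXT trial
        hits.append(hit)
    return hits

print(across_trial_correction(10.0))  # [8.0, 10.0, 10.0, 10.0, 10.0]
```

Each individual shot here is uncorrectable in flight, yet the sequence as a whole is error-reducing, which is the distinction being drawn between a ballistic piece of behavior and a feedback-controlled system.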

Feedback control systems come in different kinds and have a wide range of characteristics. Some systems utilize a congeries of feedforward (open loop) and feedback (closed loop) components. Some systems have more or less continuous monitoring of information about their output and some only sample data at various phases. Some systems can eliminate error entirely and others can only achieve a steady-state error which is satisfactorily described as equilibrium[9]. But no matter what the system’s characteristics, there is always some point in the forward loop, between effector and output, at which the system’s reaction is purely ballistic. Every system is describable in terms of its reaction-time phase, the time it takes between a stimulus variation and an output response to correct. At any point shorter than the one reaction-time period, the system is operating purely ballistically. Therefore, not only is this not new to feedback control systems, the forward loop component is constitutive of them. So depending on how we describe a system’s output we can always find such a point. If we call any time



phase shorter than one reaction-time a trial, then all feedback correction would be done across trials not within. But this would not affect the basic concept of feedback control systems nor the ability to use control theory to model goal-directed systems.
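The reaction-time point can be made concrete with a minimal simulation: a controller that samples feedback only once per reaction-time period behaves purely ballistically between samples, yet still converges on its set point across periods. The parameter values and the proportional-correction rule are illustrative assumptions, not drawn from the control-theory literature cited here.

```python
# Sampled feedback control: the error is read only once per reaction-time
# period; between samples the output runs open-loop ("purely ballistic").
def run(setpoint, steps=30, reaction_time=5, gain=0.1):
    state = 0.0
    command = 0.0
    trace = []
    for t in range(steps):
        if t % reaction_time == 0:     # feedback is sampled here only
            error = setpoint - state
            command = gain * error     # correction held for the next period
        state += command               # ballistic drift between samples
        trace.append(state)
    return trace

trace = run(10.0)
# Within each 5-step period the increments are constant (no correction);
# across periods the remaining error is halved, leaving a small residual
# (steady-state) error rather than exact goal-attainment.
```

Calling each sub-reaction-time phase a “trial” makes the point in the text: all of the correction happens across trials, none within them, and the loop is still a feedback control loop.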

The first important feature of goal-directed systems is that they have some internal structure (goal-state representation) that nails down what the goal is, within a certain range of specificity. The specificity will vary with different types of systems and may be an effective way to classify goal-directed systems. The second important feature of goal-directedness is not that systems have continuous control over each segment of each movement, but that they have the plasticity to turn the behavior on, turn it off, and compensate to correct for goal-error. This will be a whole-system characteristic, not necessarily a characteristic of each bit of behavior taken separately. Ehring’s second type of example, therefore, falls short of showing that some form of feedback control is not necessary for goal-directed behavior. For even purely ballistic goal-directed behavior must emanate from a goal-directed system. The latter must have feedback control mechanisms in order to produce the segment of ballistic behavior which is directed (derivatively). No one has ever claimed that a purely ballistic system could be goal-directed, and yet that is what it would take to show that no form of feedback was a necessary condition of goal-directed behavior.[10] The third important feature is that the correction for error be causally dependent upon the comparison of the goal-state representation with the feedback of information about system output.

Ehring’s final challenge is that of goal-specification. He notes that there are two ways to approach the problem: one is that goal-states are determined by internal representations, and another is that goals are those end states toward which a system tends (no internal representations). He thinks that neither approach will work to say how a specific state of affairs becomes a system’s goal-state and briefly tells us why not.

Ehring’s argument against the second approach is basically that it is too behavioristic. He thinks it is circular to say that the goal-state of a system is the end state it tends to reach when it is optimally operating. (We must add the bit about optimality or else malfunction would falsify goal-attributions.) Ehring believes that we land in circularity if we specify optimality in terms of the condition of the system when it tends to reach its goal. Right! But who would define optimality that way? Even behaviorists would not define optimality solely in terms of output (response variables). They would include input (stimulus variables) and background conditions, among other variables, to specify optimality. Ehring gives us sufficient conditions for specifying goal-states in a circular manner on the behavioristic criterion, but no argument for the necessity of the circularity.



However, I do not wish to defend this approach. For other reasons than those Ehring gives, I think the approach is doomed to failure. It seems to me that the general lesson to be learned about teleology, philosophy of mind included, in this century is that behaviorism founders when it comes to intentionality. Goal-directed behavior is thoroughly intentional and it is this reason more than any other that I choose the route of appealing to internal representations to specify goal-states. The problem of goal-specification in a nutshell is the traditional problem of intentionality. Goal-directed systems often have non-existing states of affairs as their goals. They also selectively discriminate some properties of their goals from others even though those properties may be linked in various ways (co-extensive, nomically co-extensive, nomically entailed, and so on). Not all goal-directed systems can selectively discriminate goal-states through the range of types of correlations between properties. Some can do better than others and some, in doing so, warrant our attribution of being mentally goal-directed systems[11]. The point is that representations are the tools by which nature equips systems to differentiate instantiations of various properties and, therefore, it is in virtue of the concept of representation that we tackle the intentionality of goal-directed systems. Given the current dominance of computational models in psychology, and elsewhere, I was surprised at Ehring’s quick dismissal of the idea of putting the concept of representation to work for us.

Ehring’s dismissal of the use of representation is quick. He complains that in an earlier paper of mine[12] I “seem to suggest that reading off content is a simple matter.” In fact, I said no such thing. Indeed, I gave no hint of how one “reads off content” nor even how content gets determined in the internal representation of a goal-directed system. I said nothing about this for a very good reason. I was unsure how it went. In the paper that Ehring cites I was primarily occupied with the task of analyzing teleological functions of the form “the function of x in system S is y.” I suggested that something has a function in so far as it subserves a goal of some goal-directed system and does so in a controlled way. But I surely had nothing very elaborate to say about the analysis of goal-states themselves nor even of goal-directed systems, generally.

Since then, however, I have put together a cybernetic model of goal-directed systems and have an analysis to offer of the problem of goal specification. It is too long to give here[13], but the concept I use is that of calibration. Goal-directed artifacts (heating systems, guided missiles) can be calibrated such that they are capable of detecting the presence or absence of a goal-state of affairs (65 degrees, target angle zero). Employing the concept of information, we can show that the process of calibration involves making the system selectively sensitive to the information that a particular type of state is or is not actual, in accordance with the analysis of information in control theory generally[14]. This calibrated structure becomes a goal-state detector, if



you will. The structure is engaged to determine the system’s goal-state when the system’s feedback error-corrections causally depend upon comparing incoming information with this structure to generate error and make corrections in output.

In systems capable of learning, calibration merely amounts to learning and concept formation. In mental systems, for example, one’s goal-state representations consist of concepts (ideas) of the types of states that are desired to be actual. These states will be the product of having picked up information about such kinds of states (or their components). Having learned about them and being able to detect their presence or absence, a mental system may set about to instantiate a state of the type it desires. Therefore, the concept of information content plays an essential role in the analysis of the calibration of the goal-state representing structures. The specific content of the goal-state representation is a function of the type of information the particular goal-directed system is able to discriminate during calibration. The finer-grained the discriminations, the more highly intentional the system. But all goal-directed systems must be able to make some discriminations. For example, they will be able to discriminate the goal-property (being food) from accidental co-extensions with the goal-property (having a mass of 6 grams) and perhaps universal co-extensions (being transducer-detectable stuff).
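The calibration idea can be sketched for the thermostat case mentioned above (65 degrees): an internal goal-state representation, a detector for the represented state, and a correction that causally depends on comparing feedback against the representation. The class and method names are invented for illustration; this is a minimal sketch, not the author's model.

```python
# Minimal sketch of a calibrated goal-state representation: the set point
# is an internal structure, and corrections causally depend on comparing
# incoming feedback against it (the thermostat case from the text).
class CalibratedController:
    def __init__(self, goal_state, tolerance=0.5):
        self.goal_state = goal_state      # goal-state representation
        self.tolerance = tolerance        # range of specificity

    def detects_goal(self, reading):
        """Goal-state detector: is the represented state actual?"""
        return abs(reading - self.goal_state) <= self.tolerance

    def correction(self, reading):
        """Error correction depends on comparison with the representation."""
        if self.detects_goal(reading):
            return 0.0                    # no error, no correction
        return self.goal_state - reading  # signed correction toward the goal

thermostat = CalibratedController(goal_state=65.0)
print(thermostat.detects_goal(64.8))   # True: within tolerance
print(thermostat.correction(60.0))     # 5.0: drive output toward the set point
```

On this sketch the goal-state is fixed by the internal structure set during calibration, not by whatever end state the output happens to reach, which is the point pressed against the behavioristic alternative below.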

The important point, for our purposes, is that this way of analyzing the source of goal-specificity escapes the difficulty Ehring raises. He is interested in what happens during system malfunction. We would not be able to tell, from its behavior, what the goal-state of the system was. Is it the actual end state it has reached? Or is it some other state masked by malfunction? Of course it is hard to tell; it’s hard to know anything[15]. But whether or not we can tell what the goal-state is, the content of the goal-state representation is determined during calibration (or learning). That is how the internal states get their content. Then it is only a matter of which one is controlling the behavior at a given time. What the goal-state is has nothing to do with the system output alone, malfunction, or what we are able to glean from watching the system output. What makes one representation rather than another the goal-state representation is that it is the one in the driver’s seat. It is the one that the process of goal-state comparison and error correction depends upon. All of this is strictly a matter of control system characteristics, internal structures, and selective sensitivity to informational content. Therefore, Ehring’s objections, which are still aimed at a more behavioristic treatment of goal-specification, are a bit off target when aimed at a control-theoretic model[16].

NOTES

[1] Douglas Ehring, “Negative Feedback and Goals,” Nature and System 6 (1984): 217-220.



[2] The early classic paper by Rosenblueth, Wiener, and Bigelow was far too behavioristic in detail and was easily crushed at the hands of Taylor, Scheffler, Braithwaite, and others. It could not handle even so simple a matter as the problem of a missing goal-object. For the model required the presence of an actual goal-object from which the goal-directed system could receive feedback. This is easily remedied by having the system simply detect the presence or absence of a goal-state (a sort of detector mechanism), but the earlier formulation did not have this provision. For an excellent historical critique and full bibliography see Andrew Woodfield, Teleology, Cambridge: Cambridge University Press, 1976.

[3] See Woodfield, chapter 11, and Frederick M. Toates, Control Theory in Biology and Experimental Psychology, London: Hutchinson, 1975, chapter 5.

[4] I have developed these ideas further in my “Solving the Demarcation Problem: Goal-Attaining vs. Equilibrium-Attaining,” forthcoming. See also my Goal-Directed Systems, Ph.D. Dissertation, University of Wisconsin-Madison, 1982.

[5] See Toates, chapter 5, and William T. Powers, Behavior: The Control of Perception, London: Wildwood House, 1974.

[6] See Donald Davidson, Essays on Actions and Events, New York: Oxford University Press, 1980, for discussion of the problem of wayward causal chains in action theory, and my “A Goal-State Theory of Function Attribution,” Canadian Journal of Philosophy 9 (1979): 493-518, for the assimilation of that problem to goals and teleological functions.

[7] See Toates, p. 160, and D.J. McFarland, Feedback Mechanisms in Animal Behaviour, London: Academic Press, 1971, chapter 2.

[8] See Adams (1982) for a more complete defense of a cybernetic model of goal-direction.

[9] See Toates for a mathematical description of the various types of feedback controllers and their systems-characteristics.

[10] In none of the examples that I have found where control theorists cite pieces of behavior that are purely ballistic has it ever been argued that the systems themselves from which this behavior comes need not be feedback controlled systems. Indeed, the strength of the cybernetic approach is that it would be luck on a cosmic scale if a system with goals and no feedback control should ever realize its goals. Everyone realizes that isolated pieces of purely ballistic behavior here and there would never add up to goal-direction or control of the system as a whole. You simply could not build a purely ballistic system that had any margin of control or direction over its destiny.

[11] Fred Dretske mounts a strong case for this type of view in his “Machines and the Mental,” Presidential Address to the 83rd Meeting of the Western Division of the American Philosophical Association, in Proceedings and Addresses of the APA, Vol. 59, No. 1 (1985): pp. 23-33, and in his Knowledge and the Flow of Information, Cambridge: The MIT Press, 1981.

[12] See Adams, 1979.

[13] See Adams, 1982.

[14] Dretske develops the foundation for this type of addition of propositional content into the mathematical analysis of information theory in Dretske, 1981. I utilize this addition in my information-theoretic analysis of goal-directed systems in Adams, 1982.

[15] See my “The Function of Epistemic Justification,” forthcoming in Canadian Journal of Philosophy.

[16] I am grateful to Steven O. Kimbrough for reading an earlier version of this paper and helping to sift out falsehoods. I would like to thank the Department of Decision Sciences at the Wharton School of the University of Pennsylvania for the hospitality of having a philosopher in residence during the summer of 1985, while this paper was written. Partial support for this project came from the Atlantic Richfield Co.
