basing and arbitrary actions

Some hold that there is an important disanalogy between justified belief and justified action. When speaking of justified belief, we distinguish between doxastic and propositional justification, where the former obtains when there is propositional justification for the content of the belief, and the belief is properly based on the propositional justification in question. We might distinguish similarly between two kinds of justification in the arena of action, between the action-type being justified and the action-token being justified because it is done for the reasons that justify the action-type.

Here’s the purported disanalogy, however. Arbitrary actions can be justified in the way that is analogous to doxastic justification. Suppose you are choosing what to eat for breakfast, and you’ve got no reason to prefer Wheat Chex to Rice Chex, but arbitrarily choose to eat Wheat Chex. Your action is fully justified, but it need not be based on any reasons for preferring one kind of Chex cereal to the other, since by hypothesis, there are none.

I think this attempt to distinguish the role of reasons and basing in the arena of action from their role in the arena of belief fails. There are reasons for eating Wheat Chex: you like it, it will fill you up, you’ll enjoy the experience, and it won’t cause cancer (we hope). If your eating Wheat Chex is rational, you will be eating the cereal for these reasons. What is not true is that there are reasons to prefer eating Wheat Chex to eating Rice Chex, so your action need not be based on such reasons. In slogan form: your action must be based on reasons, even if it need not be based on contrastive reasons.

So the difference between action and belief is not that basing is required in the latter but not in the former. It is, rather, that one’s reasons can leave open multiple options in the case of action but perhaps not in the case of belief. That’s the lesson of the Buridan’s Ass paradox: reasons for A-ing need not be reasons that would undergird contrastive explanations (though perhaps I should be pushed a bit on how this could possibly remove the paradoxicality here).

On the issue of optionality for belief, some qualifications are necessary as well. For one thing, one’s evidence can justify believing p even though it doesn’t discriminate p from rival hypotheses one has never heard of or considered. But there is a question whether optionality is eliminated in the case of belief even when we restrict the options to those you are aware of. You’re at the zoo, you see a zebra, and believe on that basis that the animal in question is a zebra. But your evidence doesn’t reveal that it’s a zebra rather than an elaborate illusion constructed by a mad scientist. So you have no reasons to prefer the zebra hypothesis to the illusion hypothesis, except those relying on background assumptions about prior experiences of this sort (where your reasons in those cases weren’t contrastive reasons). My inclination, however, is to think that the lack of contrastive reasons here doesn’t prevent your belief from being justified.

More generally, if we are impressed with the Quine/Duhem point about optionality in theory revision in light of anomalous experimental results, we should be wary of the requirement of basing on contrastive reasons for rational or justified belief. We should avoid describing the options as “choices”, given the voluntarism implicit in that term, but the Q/D point transposed into the context of belief revision implies that there is optionality in belief revision. Even so, the revisions must be based on the evidence in question in order for the new beliefs to be doxastically justified. Perhaps what is not needed is that the beliefs be based on contrastive reasons, reasons that single out that particular belief as the single justified response to the evidence in question.

So the difference between action and belief comes down to this: there is no Buridan’s Ass paradox in the realm of belief, as there is in the realm of action. The optionality available in the realm of belief may not become as stark as that involved in equally attractive actions, but that point doesn’t show that a different account of reasons and basing is appropriate. For the paradoxical situation is just the limit of more common cases in which optionality is present and compossible with justification and proper basing.


basing and arbitrary actions — 7 Comments

  1. Does belief really not suffer from the Buridan’s ass problem? Consider scientists facing anomalous evidence, with two theories possibly accounting for the evidence. In Kuhnian language, they have to adopt some paradigm in order to proceed with their research. Is that a forced choice of belief?

    Another example. We are foreign correspondents in Iraq, wanting to get from point A to point B. As we are getting into our car, a local says, “if you go now, you’ll get hijacked.” Another says, “No, the road is clear.” We have no reason to believe one over the other, and no independent information. Don’t we have to choose which belief we’ll operate on?

    One might object by saying that we can adopt working hypotheses rather than real beliefs in such cases. But I am not sure this notion can be cashed out non-circularly, i.e. as something better than “what you adopt when you have to have some cognitive attitude to act on but don’t want to call it a belief.”

  2. Hi, Heath, welcome to the blog!

    I’m not averse to the idea that there is a BA problem for belief, but I think you’ll have to say more about circularity problem to make this point stick. My reaction is to think that it’s fairly easy to act as if a certain claim is true without actually believing it. Do you think acting-as-if requires some cognitive attitude to undergird it? If so, why? And if such an attitude is necessary, why think that the attitude has to be one of belief?

    I think the crucial step here, though, is the first one. If a cognitive attitude is required, it can be justified or not, and then the Buridan’s Ass problem will arise for that attitude, even if not for belief itself.

  3. Jon,

    Thanks for the welcome.

    Regarding the circularity objection, you are certainly right that more needs to be said, and I am not now in a position to say it. I think I recall a Mind article by Michael Bratman on this topic but I can’t remember what specifically he had to say about the matter.

    But if a BA-style problem is a problem for any cognitive attitude, then I think it is fairly easy to show that this problem arises for some kind of cognitive attitude, whether you call it a belief or not.

    Take the scientists example. They have anomalous evidence which can be accounted for by either T1 or T2, and no particular reason to choose between them. In order to pursue their research they have to adopt some theory. Assume they choose to pursue their research via theory T1. They must, then, have some cognitive attitude toward T1 which they lack toward T2: they assume it, or have it as a hypothesis, or believe it provisionally, or cognitively commit to it, or what have you.

About the foreign correspondents example, it’s a little more complicated. Stipulate (contrary to fact!) that we want our story exactly as much as we want not to get hijacked. Also, we have exactly as much reason to believe (1) that if we go we will get hijacked and won’t get our story as we have to believe (2) that, if we go, we won’t get hijacked and will get our story. But we must either go or stay; suppose we decide to stay. Then, I think, ipso facto we have some attitude toward (1) that we lack toward (2): we assume it, or take it as a working hypothesis, etc.

This kind of attitude comes across as a little thin: it seems to be defined simply as whatever you are interpretable as having when you have to act in such a case. It lacks some features of belief; for instance, if the next correspondent chose to go, we wouldn’t warn him off (but we would if we believed (1)). Part of its thinness is due to the fact that the attitude in this case is invoked to explain only a single action; we could remedy this with a more elaborate thought experiment. On the other hand, the scientists’ attitude toward T1 is “thicker” in that it probably explains lots of actions: the conduct of experiments, design of instruments, interpretation of results, etc.

  4. I am thinking my last post didn’t make the actual argument clear enough…

I’m appealing to Davidson’s principle of action explanation. To explain an action, you need a pro [or con] attitude, and a cognitive attitude connecting the pro attitude and the action. (I think there are exceptions to this schema but they don’t matter here.)

The correspondents stay; how do we explain their staying? The relevant con attitude is against being hijacked; the principle of action explanation says that we must postulate some cognitive attitude relating staying and avoiding hijacking, which is what (1) does in my example. Objection: we do not have to postulate any such cognitive attitude, because the correspondents might just as easily have gone. So there is no explanation of their action. Reply: Ok, when it’s just one action, the pressure against saying “there is no explanation” is light; that’s what I meant by a “thin” attitude. But…

The scientists choose to continue researching along the path laid out by T1. This involves choices about the experiments they conduct, the instruments they design, the interpretations they put on their data, the papers they write, and so on; in other words, lots of actions. The relevant pro attitude to explain all these actions is the desire to continue scientific work, or to find out the truth, or whatever. We have to postulate some cognitive attitude toward T1 to provide the other half of the explanation. Objection: we do not have to postulate any such attitude, because the scientists might just as easily have begun researching along the path laid out by T2. So there is no explanation of all these actions. Reply: when you are talking about actions occupying possibly years of a person’s professional life, it stretches credulity to say that there is no explanation for all these actions. So the objection isn’t plausible this time; the attitude is “thicker.”

  5. Heath, I like the way you push for the need for a cognitive attitude here. I remember in my last book thinking about this when writing about van Fraassen’s idea that you only need to act as if the theory is true, rather than actually accept the theory. I was worried about what to make of the “act as if” idea so that it involved no cognitive attitude at all (and didn’t implicitly quantify over some possible range of attitudes). But I don’t remember what I wrote or what I decided!

I think, if your argument works, that the BA paradox can’t be handled simply by positing a choice in the ass for one haystack over the other. Presumably, you’ll have the same pressure there to posit a pro attitude and a cognitive attitude to explain the behavior, and the behavior is not one simple action but a complex array of behaviors over a significant period of time. Moreover, the pattern of behavior might recur over time: let the ass feed, then starve, then be put in the same predicament again, etc. I suggested that I perhaps ought to be pushed on the idea that the optionality of action is all that is needed to solve the BA paradox, and I take your arguments to take up the challenge.

But let’s distinguish the idea that there’s no explanation for the scientists’ future behavior from the claim that there is an explanation, just not a Davidsonian one. The van Fraassen challenge, I take it, is to ask why the explanation can’t advert simply to the fact that the scientist is acting as if T1 is preferable, and that’s the only explanation needed. You note that there are counterexamples anyway to the Davidsonian claim; why not treat this as another one?

  6. For starters, I think we could invent a name for a propositional attitude toward p, defined as “the attitude toward p you have when you are acting as if p were true”. Call this attitude aschuming p. Then it becomes possible to smoothly explain the scientists’ actions in Davidsonian fashion, as the product of their desire to continue researching and their aschumption of p. The question is whether these aschumptions have any reality; whether they accurately describe the mental life of the scientists.

    Here the argument just appeals to explanatory adequacy. The aschumptions do a lot of theoretical work; they figure in a lot of explanations, and the explanations are of a familiar form. It is true they are theoretical posits but I think most or all propositional attitudes are theoretical posits.

If you try to explain why a scientist is constructing some complicated and expensive apparatus, you cannot simply say, “He is acting as if the gravity-wave theory is true.” In fact, he has engaged in an intricate process of instrumental reasoning to get from the gravity-wave theory to the building of his apparatus, and this process of reasoning probably depends on the theory at several points. How can we represent this reasoning? You have to first point out that the scientist wants, say, to detect gravity waves. Then you have to say, uncontroversially, that (1) if the gravity wave theory is true, we’ll be able to detect gravity waves with this kind of apparatus. That is not enough; for it is equally uncontroversially true that (2) if some alternative super-string theory were true, we could detect the super-strings with some different apparatus, thus showing that there were no gravity waves to be detected. Why is the scientist not pursuing that approach? There is a strong pull to say that he performs a modus ponens on (1), for which he needs a cognitive attitude of some sort to the effect that the gravity wave theory is true.

    Now, the van Fraassen option, perhaps, is to say, No, he is just acting as if the gravity wave theory is true, and there is no particular reason for this action. But vF will have to say this many times, for the scientist will appeal to his theory many times to construct his apparatus. Is it just blind luck that the scientist acts as if the theory is true every time? Surely not. If there is no reason for the action each time, why are the actions so consistent? It is much better to say that the scientist has a stable attitude toward the truth of the gravity wave theory which he acts on repeatedly.

VF might reply, again, that his “acting as if” claim is intended to cover many actions; the scientist, he says, is repeatedly and persistently acting as if the theory is true. No kidding! But why? Can vF explain this repeated, persistent tendency? I do not think so. But if you posit a stable cognitive attitude toward the theory, then you can explain it; the attitude explains the disposition.

    Well, that’s enough said, I think. (And my apologies to anyone who knows anything about gravity waves!)

  7. Heath, I think your appeal to assumptions is the right way to interpret what the scientist is doing, so the only question is whether an assumption is itself a mental state or attitude. I think it’s not correct, though, to attribute to van Fraassen the view that in acting as if p is true, there is no reason for the action. There are all the reasons you cite for engaging in scientific activity. What there aren’t are contrastive reasons for pursuing the p-direction of research rather than the q-direction of research (where q is the competing theory).

    The interesting question, to my mind, is whether assuming p is a cognitive state or attitude. I’m still inclined to doubt it, and to explain away the tendency to think it is on the basis of the fact that sentences that talk about assumptions take propositions as complements to the operator in question, just as mental state operators do. But consider the operator “S is unaware that” or “S is ignorant of the fact that”. Neither of these operators, when complemented by a proposition, express a mental state or attitude.

    So, what you assume will explain in part what you do, but it’s not clear to me that this is a mental state. Sometimes it is, but the question is whether it always is. Not sure how much hangs on this, though…
