Hawthorne and Stanley’s Knowledge Principle for Good Reasons

I’ve been reading Hawthorne and Stanley’s new piece “Knowledge and Action” (downloadable here at Jason’s website), and will post a couple of things about this really fine paper.

So here’s one issue. Restrict what we are talking about to propositional reasons, either for belief or for action. Think, then, about transmission principles for rational belief. What must be true about the reasons in question for transmission to be possible? The usual answer is that an originating condition can’t transmit something it doesn’t possess already, so if a proposition is one’s reason for believing something, that proposition must itself be rational to believe. With H&S, I’ll assume that we should say similar things about rational action and rational belief on this score, and they give a stronger requirement here: in order for p to be one’s reason for believing q or doing A, one must know that p. I know Peter Unger defends such a principle about rational belief, but he’s a rare exception (and motivated toward strong requirements in service of a skepticism of amazingly wide scope). The more ordinary viewpoint is weaker, insisting only that what gets transmitted must already be present, and asking no more than this.
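
Stated schematically, with the labels “weak” and “strong” as my own shorthand rather than H&S’s, the two requirements come to this:

    Weak transmission: p is one’s reason for believing q (or for doing A) only if it is rational for one to believe p.
    Strong (H&S): p is one’s reason for believing q (or for doing A) only if one knows that p.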

Here I’ll voice a worry I have about the stronger requirement, one concerning rational plans of relatively complex sorts.
Your long-term goal is to achieve G. You can’t achieve G just by trying to do so directly, so you think about what small steps you can take that collectively will get you to G. Call the small steps S1, S2, and S3, with the assumption that these are sequential steps (such as “finish high school,” “go to college,” “get a job,” etc.). Associated with this plan, then, are conditionals: if I try at all, S1 will be accomplished; if S1 is accomplished and I try some more, S2 will obtain; etc. One’s reasoning here is explicit, I will suppose: one resolves to try, and by modus ponens (MP) concludes that S1 is in the bag. Same with the next level of trying, and so S2 is no problem. Etc. The result is a rational plan aimed at G, with lots of individual beliefs, both conditional and unconditional. As described, this is a paradigm case of rational deliberation. Even if there are ways to fill out the example so that the beliefs turn out to be irrational, there is no hint of such in the story as told, so the proper view to take is that the schematic character of the example is fully compatible with being filled out so that the beliefs are fully rational.
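
To make the inferential structure explicit, here is a minimal schematic rendering of the reasoning, writing T1, T2, … for the tryings (labels of mine, not anything in the example itself):

    (1) T1                   I resolve to try
    (2) T1 → S1              planning conditional
    (3) S1                   from (1) and (2) by MP
    (4) (S1 ∧ T2) → S2       planning conditional
    (5) T2                   I resolve to try some more
    (6) S2                   from (3) and (5) by conjunction, then (4) by MP
    … and so on, out to the belief that G will be achieved.

Each unconditional belief further down the chain is thus held on the basis of a prior conclusion together with a planning conditional.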

But if propositional reasons have to be known to be true in order to be good reasons for belief, we’ll have problems here. Ask me what’s going to happen over the next 10 years. I tell you I’m going to take step S1, then S2, then S3, thereby securing goal G. Now, which of these beliefs are rational and which aren’t? One doesn’t have to be much of a skeptic to deny that every belief short of the last one about achieving G counts as knowledge. In fact, the farther into the future the features of the plan lie, the more inclined we are to be skeptical here. So maybe beliefs about the initial steps count as knowledge, but not much beyond those, and the farther into the future the plan extends, the lower the percentage of its beliefs that will count as knowledge. But long-term plans can be rational nonetheless, and we certainly don’t want a theory of rational planning that lets the plan be rational when a large percentage of the beliefs involved in it aren’t rational.

What we have here is an analogue of the preface paradox: the plan is rational at least partly in virtue of the rationality of the beliefs about the parts of the plan, even though it is also rational to believe that not everything will go according to plan and that the plan will therefore have to be adjusted. If so, however, that looks like a reason to doubt that transmission principles always require something epistemically stronger of the transmitter of rationality than what is transmitted, even if H&S are correct that, in a great many cases, something stronger than mere rationality is required. For, as described, the formation of the plan involves inferences to intermediate conclusions that are themselves believed on the basis of prior inferential steps, depending ultimately on one’s intention to adopt the entire plan in order to achieve goal G.
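
Schematically, the preface-style combination is this (the quantified form is my gloss, with n steps in the plan):

    For each i ≤ n: it is rational to believe Si;
    yet it is also rational to believe ¬(S1 ∧ S2 ∧ … ∧ Sn) — that not every step will go exactly according to plan.

Each member of the set is rationally believed, even though the believer rationally takes the set not to be true in its entirety.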

One might try to give a rational reconstruction of what people actually do when they adopt plans, where the reconstruction involves only conditional attitudes of various sorts in place of the categorical beliefs that I’ve put in the example (i.e., so that you don’t really believe that the last steps in the plan will occur, but only that if all goes well in the prior steps, then the last steps will occur). Such rational reconstruction does give us one way to save the knowledge requirement, just as it gives us a way to try to escape the preface paradox: we might say that you don’t really believe each of the claims in the book, but only hold a number of particular beliefs of the following sort: if the really obvious parts of the book are true, then so is a given less obvious part (and you refuse to draw deductions from this claim!). But I think we should resist saying that plans of the sort described here are only honorifically rational, in virtue of some close relationship to a rational reconstruction in which the plan would really deserve the name. So I don’t think rational reconstruction is a plausible way out.
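
For comparison, the reconstructed preface believer’s state would look something like this (o is my label for the conjunction of the really obvious parts of the book, and the ci are its less obvious claims):

    For each less obvious claim ci: believe (o → ci) rather than ci outright — and refuse to detach ci by MP.

The planning analogue replaces the categorical beliefs S1, S2, … with conditionals of the form “if all goes well up to step n, then Sn will occur.”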

One might also play the piggy-back game, requiring now that beliefs are rational only if known. I won’t argue against this claim here, but it would be an interesting result if the H&S requirement implied this apparently stronger connection between rationality and knowledge. But maybe there are other escape routes?


Comments


  1. Jon,

    Can’t H&S say that long-term plans are rational because of what we know about likelihoods, rather than in terms of known categorical propositions?

  2. Sorry,

    To complete that thought…

    I don’t see why H&S have to say that plans based on probabilistic propositions are rational in only an ‘honorific’ sense. I’m not sure I see the step that leads from:
    Such rational reconstruction does give us one way to save the knowledge requirement, just as it gives us a way to try to escape the preface paradox
    to
    So I don’t think rational reconstruction is a plausible way out.

    There is, I think, a very obvious objection to the position that H&S are developing: it’s hard to understand why the mere fact that p isn’t known is sufficient grounds for saying that p oughtn’t figure in practical reasoning, since we know that p might fail to be knowledge for reasons wholly unconnected to the accuracy of the belief and the strength of the grounds for holding it. If I believe that I’m repaying a debt to you, on our brief jaunt to the land of fake bills, by giving you a genuine $10 at lunch, I can’t see what’s defective about relying on the premise “I’ll hereby repay Jon by handing him this bill” in my reasoning.

  3. Hi Clayton, I thought it was 10K you owed me, but oh well… 🙂

    On the probabilistic point, the story doesn’t contain any beliefs like that, so that proposal would count as a rational reconstruction as well. I’m assuming it’s a bad idea to try to solve the preface paradox by saying that if the beliefs had been slightly different, everything would be OK. I’m assuming, that is, that a solution to the preface paradox must begin by granting the data: the beliefs are knowingly inconsistent and yet rational.

    I like your $10 example, though I think H&S will appeal to the excusability factor to explain it away. That’s what I’m going to post about next: the claimed connection between anti-luminosity and the need for excusability conditions for correct norms. I don’t think the conclusion follows here, but that’s another post.

  4. Yes, well, I’m afraid we’ll have to repay that debt $10 at a time.

    I look forward to the post on excusability. I’ve been trying to formulate precise arguments to block their appeal to excusability to deal with Gettier and Gettierish cases.

  5. But if propositional reasons have to be known to be true in order to be good reasons for belief, we’ll have problems here. Ask me what’s going to happen over the next 10 years. I tell you I’m going to take step S1, then S2, then S3, thereby securing goal G. Now, which of these beliefs are rational and which aren’t? One doesn’t have to be much of a skeptic to deny that every belief short of the last one about achieving G counts as knowledge.

    Compare the case in which a hunch that a restaurant is on street S leads one mistakenly to take street S. The criticism does seem apt that the person did not know the restaurant was on S. As I read your case, the same criticism is not appropriate for these conditionals, since it seems like a pretty sure thing that, for each Sn, if you take the steps to Sn you will arrive at Sn. You say,

    if I try at all, S1 will be accomplished; if S1 is accomplished and I try some more, S2 will obtain; etc. One’s reasoning here is explicit, I will suppose: one resolves to try, and by modus ponens (MP) concludes that S1 is in the bag. Same with the next level of trying, and so S2 is no problem. Etc.

    So, these conditionals do look like things you know. But make it more explicit that you do not know them. Suppose I have a hunch that if I try to achieve S1 by doing D1, I will achieve it, and suppose that’s true for every Sn. It is clear here that I do not know the conditionals are true. Now, if I set about achieving G on this plan, my chances of getting to G are close to nil. I am open to the criticism, I think, that I acted on conditionals that I did not know were true.

  6. Mike, it’s not the conditionals that I think aren’t known. It’s the consequent that is inferred from the conditional and the other information available.

  7. . . . it’s not the conditionals that I think aren’t known. It’s the consequent that is inferred from the conditional and the other information available.

    Help me out here; here’s an example.

    C. If I study for the test, I will pass the test.

    Let S1 = I will pass the test.

    So here I am studying for the exam. You want to say that I (might well) know C, but I don’t know S1? That’s hard to see. After all, if you ask me whether I know I am studying for the exam, I will of course say yes. So I am in this situation. I know I am studying, and I know that C, but I don’t know that S1. I don’t think you’d need a controversial closure principle to be worried about that conclusion. It sure seems like I know S1. What am I missing here?
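
    Schematically, the worry (with K for knowledge — my notation):

        K(I am studying)
        K(if I study, I will pass)    — i.e., K(C)
        so, K(I will pass)?           — i.e., K(S1)

    All that’s needed to get from the premises to the conclusion is a very modest closure step over two known premises.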

  8. hi jon
    a few quick reactions
    first, it would be good to be more explicit about how you think ‘being rational’ connects to ‘being appropriate to treat p as a reason’. our topic was the latter but your puzzle is posed in terms of the former

    suppose i am playing chess and form a long-term plan. i play x thinking my opponent will respond with y, which i will respond to with z, which my opponent will respond to with w…
    suppose further that i don’t in fact know that my opponent would respond to z with w. then is it really appropriate for me to use the proposition ‘if i play x the sequence xyzw will ensue’ as a reason for playing x? we say ‘no’, and this still sounds good to me. quite compatibly with that, we can say that i have good reason to think that my opponent would respond to z with w — the reasons may include propositions that are known that make this epistemically likely (even if it is not itself known). that a proposition cannot properly function as a reason does not entail that one is not justified/rational in believing it

  9. Hi John, the connection between rationality and reasons is complicated, which is why I restricted the discussion to a case where we were dealing with transmission principles. In such cases, for the belief to be rational, it needs to be based on the reasons for it, and even if somehow the transmitted rationality isn’t sufficient for the belief to be rational, it is at least necessary.

    I agree about your case: if you use such unknown conditionals in a plan, your plan won’t be rational. You would be just guessing at the true conditionals, and so whatever is required of beliefs in order for a plan to be rational isn’t found in your example. On the latter point, about having good reasons for believing the conditionals, it’s an interesting question whether a refinement of that view can provide an adequate theory here. I think the answer is “yes”, and that critics of the view haven’t been sufficiently attentive to the different kinds of rationality it might appeal to. But I’m not trying to push that view here; in fact, the planning example raises some worries for it as well.

  10. Pingback: Certain Doubts » Neta on Stanley & Hawthorne
