I’ve been reading Hawthorne and Stanley’s new paper “Knowledge and Action” (downloadable here at Jason’s website), and will post a couple of things about this really fine piece.
So here’s one issue. Restrict what we are talking about to propositional reasons, either for belief or for action. Think, then, about transmission principles for rational belief. What must be true about the reasons in question for transmission to be possible? The usual answer is that an originating condition can’t transmit something it doesn’t possess already, so if a proposition is one’s reason for believing something, that proposition must itself be rational to believe. With H&S, I’ll assume that we should say similar things about rational action and rational belief on this score, and H&S give a stronger requirement here: in order for p to be one’s reason for believing q or doing A, one must know that p. I know Peter Unger defends such a principle about rational belief, but he’s a rare exception (and motivated toward strong requirements in service of a skepticism of amazingly wide scope). The more ordinary viewpoint is weaker, insisting only that what gets transmitted must already be present, and asking no more than this.
Here I’ll voice a worry I have about the stronger requirement, one concerning rational plans of relatively complex sorts.
Your long-term goal is to achieve G. You can’t achieve G just by trying to do so directly, so you think about what small steps you can take that collectively will get you to G. Call the small steps S1, S2, and S3, with the assumption that these are sequential steps (such as “finish high school”, “go to college”, “get a job,” etc.). Associated with this plan, then, are conditionals: if I try at all, S1 will be accomplished; if S1 is accomplished and I try some more, S2 will obtain; etc. One’s reasoning here is explicit, I will suppose: one resolves to try, and by modus ponens (MP) concludes that S1 is in the bag. Same with the next level of trying, and so S2 is no problem. Etc. The result is a rational plan aimed at G, with lots of individual beliefs, both conditional and unconditional. As described, this is a paradigm case of rational deliberation. Even if there are ways to fill out the example so that the beliefs turn out to be irrational, there is no hint of such in the story as told, so the proper view is that the schematic character of the example is fully compatible with its being filled out so that the beliefs are fully rational.
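To make the inferential structure fully explicit, here is the chain as I am imagining it, with Ti for “I try at stage i” (this is my schematic gloss, not H&S’s formulation):

(1) T1. [one resolves to try]
(2) If T1, then S1. [planning conditional]
(3) So, S1. [from (1) and (2) by MP]
(4) T2. [one resolves to try some more]
(5) If S1 and T2, then S2. [planning conditional]
(6) So, S2. [from (3), (4), and (5) by MP]

And so on, out to the belief that G will be achieved.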
But if propositional reasons have to be known to be true in order to be good reasons for belief, we’ll have problems here. Ask me what’s going to happen over the next 10 years. I tell you I’m going to take step S1, then S2, then S3, thereby securing goal G. Now, which of these beliefs are rational and which aren’t? One doesn’t have to be much of a skeptic to deny that every belief short of the last one about achieving G counts as knowledge. In fact, the farther into the future a feature of the plan lies, the more inclined we are to be skeptical about it. So maybe beliefs about the initial steps count as knowledge, but not much beyond those, and the farther into the future the plan extends, the lower the percentage of beliefs that will count as knowledge. But long-term plans can be rational nonetheless, and we certainly don’t want a theory of rational planning that lets a plan count as rational when a large percentage of the beliefs involved in it aren’t rational.
What we have here is an analogue of the preface paradox: the plan is rational at least partly in virtue of the rationality of the beliefs about the parts of the plan, even though it is also rational to believe that not everything will go according to plan and that the plan will therefore have to be adjusted. If so, however, that looks like a reason to doubt that transmission principles always require something epistemically stronger of the transmitter of rationality than what is transmitted itself, even if H&S are correct that in a great many cases something stronger than mere rationality is required. For, as described, the formation of the plan involves inferences to intermediate conclusions that are themselves believed on the basis of prior inferential steps, depending ultimately on one’s intention to adopt the entire plan in order to achieve goal G.
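Put schematically (again, my gloss), the preface-style structure is this: for each step Si in the plan, it is rational to believe that Si will come off; and yet it is also rational to believe not-(S1 and S2 and … and Sn), that is, that not every part of the plan will go as planned. Both attitudes look rational, just as in the preface case it is rational to believe each claim in one’s book while also rationally believing that not all of them are true.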
One might try to give a rational reconstruction of what people actually do when they adopt plans, where the reconstruction involves only conditional attitudes of various sorts in place of the categorical beliefs I’ve put in the example (so that you don’t really believe that the last steps in the plan will occur; you believe only that if all goes well in the prior steps, then the last steps will occur). Such a reconstruction does give us one way to save the knowledge requirement, just as it gives us a way to try to escape the preface paradox: we might say that you don’t really believe each of the claims in the book, but only hold a number of particular conditional beliefs of the form “if the really obvious parts of the book are true, then so is this less obvious part” (while refusing to deduce anything further from them). But I think we should resist saying that plans of the sort in question are only honorifically rational, rational in virtue of some close relationship to a rational reconstruction in which the plan would really deserve the name. So I don’t think rational reconstruction is a plausible way out.
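To be explicit about the view being resisted (this is my gloss on it): instead of believing S2 and S3 outright, one believes only the conditionals “if S1, then S2” and “if S1 and S2, then S3,” and so on, and one refuses to perform the detaching deductions. The plan would then count as rational only honorifically, in virtue of its relation to this conditionalized surrogate.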
One might also play the piggy-back game, requiring now that beliefs are rational only if known. I won’t argue against this claim here, but it would be an interesting result if the H&S requirement implied this apparently stronger connection between rationality and knowledge. But maybe there are other escape routes?