Rationality and Excusability

Suppose we are motivated to develop a theory of rationality because of our awareness of our own fallibility: truth isn’t transparent to us, and so what we should track as doxastic beings are signs or marks of truth, and a theory of rationality is just an account of what these signs or marks of truth involve. When the signs point one way, we should believe the claim in question; when they don’t, we shouldn’t. And we inquire into such matters because truth isn’t transparent.

Suppose, though, that one is a fallibilist about marks of truth as well as truth itself.
That is, suppose that, no matter what condition C a theory of rationality says is strictly correlated with rational belief, condition C is no more transparent to us than is truth itself. (By this, I don’t mean that transparency comes in degrees; I’m only giving a premise that entails, given what we already know about the transparency of truth itself, that condition C is not transparent.) Then the same motivation for developing a theory of rationality in the first place kicks in again to motivate the development of another normative theory, a theory that allows that a belief can fail to be rational and yet be normatively acceptable in some weaker sense. We might say that the weaker notion is that of excusable belief, but nothing much turns on our choice of vocabulary.

As is obvious, however, regress looms. The epistemological heart can only rest once a transparent property of belief is found, and for full fallibilists, there is no such property. This fact raises the interesting question of where the mistake occurs. I’ll leave that issue to another post, though comments on the issue are always welcome.


Comments

  1. Jon,

    I’m comparing these two claims:

    “Suppose, though, that one is a fallibilist about marks of truth as well as truth itself.
    That is, suppose that, no matter what condition C a theory of rationality says is strictly correlated with rational belief, condition C is no more transparent to us than is truth itself.”

    “The epistemological heart can only rest once a transparent property of belief is found, and for full fallibilists, there is no such property.”

    Don’t internalists such as BonJour and Fumerton disagree here? They’re fallibilists (though maybe not *full* fallibilists?). But they think that some introspective beliefs about conscious mental states can be fully rational, and that’s because we cannot help but be consciously aware of their existence and certain of their qualities. (Fumerton talks in terms of truth-makers, BonJour in terms of constitutive awareness, but the details don’t matter so much here.) In at least some cases, truth is as transparent as condition C, which in turn fully manifests itself to us in conscious experience.

  2. Hi John, yes, you are right about conscious mental states: some foundationalists think these states are transparent and that it is rational to believe that they obtain when they do because of this fact. But nobody should think that this point can be extended to condition C itself, the condition that defines, in fully general terms, what it is for a belief to be rational. To endorse that, one would have to be an infallibilist. So, short of endorsing infallibilism, there is a problem to be addressed here.

  3. Hey Jon,

    It’s an interesting problem. I remember arguing in the dissertation something to the effect that this sort of problem caused trouble for a view on which the conditions for justification had to be transparent in the way I think you have in mind. It seems that for most conditions, we could mistakenly believe the conditions to obtain. If someone non-culpably and mistakenly judged that C obtained, and knew that C strongly correlated with rationality, we would be inclined, on the one hand, to regard the subject as rationally believing p. But that judgment would contradict our theory, which states that C’s obtaining is strongly correlated with rationally/justifiably believing p. It was hard to imagine a restriction on C that would rule this out.

    I take it that the intuition that such a subject would be rational or justified in their belief, even if the conditions were not what they took them to be, indicates that judgments about rationality such as this are really ways of making sense of what an agent’s responses to what the reasons seemed to be tell us about the agent. But then one lesson to draw from this is simply that we cannot come up with something like a decision procedure for belief that takes as inputs descriptions of things that are transparent to the subject and yields as outputs claims about what the subject ought to believe. To think otherwise would seem to commit you to the problematic assumption that nothing beyond the considerations that tell us what to make of the epistemic agent has any bearing on the claims about what they ought to believe. One of the reasons I don’t like that assumption is that it fails to do justice to a striking feature of doxastic deliberation, which is that from the subject’s point of view it is a risky affair in which ending up as one ought to seems to involve an element of luck.

  4. Jon,

    I see. A suspicion in that ballpark motivated my parenthetical question about the difference between fallibilism and “full fallibilism.”

    I guess there are a number of ways to go. First, we could deny that “a theory of rationality is just an account of what these signs or marks of truth involve.” Maybe it should track the marks of some other property or quantity X … although the same general sort of problem might then arise about X-marks.

    Second, we could go externalist about rationality (or about excusability, or at some other point down the line), and conclude that a theory of rationality (or excusability, etc.) will not satisfy our epistemological hearts.

    Third, we could give up full fallibilism. One way to do that would be to go anti-realist about truth (at least at some level), so that truth gets defined in terms of rationality (at least at that level).

  5. Hi Jon,

    Let’s distinguish two views that we might call “infallibilist” about your condition C:

    (1) The view that there exists a particular justification J that we can have for believing that C (the condition that makes us rational in believing something) obtains, which is such that it is impossible for us to have J when C does not obtain.

    (2) The view that we cannot have justified false beliefs as to whether C obtains.

    I think (2) is clearly false — the fallibilism that results from denying (2) is true. But why should we reject (1)? Some internalists (following Chisholm) may think that C itself can sometimes serve as our justification for believing that C obtains. But, even if this is true, it doesn’t follow that we cannot sometimes have other, fallible justifications for our belief as to whether C obtains, or that we cannot often have false beliefs as to whether C obtains. So I’m not sure I quite see the problem for the Chisholmian internalist here.

  6. Jon,

    Happy Thanksgiving!

    Apropos your comment to John: you say that nobody should think that condition C is itself transparent. This, however, is what I think BonJour and Fumerton do. Look, for example, at what Fumerton says about the justification of probability beliefs. Whether this view is plausible turns on whether one can make good on the claim that marks of truth are transparent. (Shameless plug: I’m presenting a paper at the Pacific APA on this issue for Fumerton’s epistemology.) One way of attempting that is what Ram suggests above, although I’ve got serious doubts about the use of modalities to explicate the notion of transparency. By the way, I’m interested in seeing what you think the mistake is, because I think that arguments against reflective transparency of marks of truth land in skepticism.

  7. Nice comments, guys. Been gone for the holiday, so I’m a little slow responding.

    John, I think the real danger here is that the transparency motivation, once disarmed, will lead a value-driven approach to the truth norm view: the view that the only epistemic norm governing belief is the norm to believe p iff p. That would be an unhappy result, and value-driven approaches are led to it because the notion of rationality or justification found in ordinary language just won’t be significant if the motivation that leads to such talk is one that can’t be satisfied.

    Ram, I don’t know how to refine (1) to free it from counterexamples, but I expect you’ve worked on this more than I, so tell me if you know. The problem is that any belief in a necessary truth turns out to be infallible on this account.

    Also, I’m not sure it’s OK to think of C as that which makes a belief justified when it is justified. Suppose the epistemic principle for Chisholm is: if you’re appeared to F-ly without grounds for doubt, then it is reasonable to believe that something is F. Suppose the antecedent is true. Chisholm at times seems to suggest (though I think he’s never explicit about this) that if you’re appeared to F-ly and consider whether you are, it will be obvious to you that you’re appeared to F-ly. So if what “makes the belief justified” is being appeared to F-ly, there may be a Chisholmian line of the sort you suggest. But being appeared to F-ly isn’t the right candidate for condition C. That condition will have to include both being appeared to in the right way and lacking grounds for doubt, and to my knowledge Chisholm never suggested anything like transparency for the latter conjunct. That’s a good thing, since a transparency claim about the latter is obviously implausible.

    Ted, yes I realize there’s an attempt to think this way that terminates in full transparency. It would be interesting to see if Rich or Larry sustain the motivation all the way through. At least in the 1985 book, Larry gives away the store when it comes to the doxastic presumption: he can’t defend the transparency claim, and ends up endorsing a skepticism so broad and global that all other skepticisms are entailed by it. I can’t remember the particulars of Rich’s view right now, but I doubt it will turn out any better for any view that resolves the motivation at a point involving full transparency.

    Clayton, I like what you say here. I think the key to resolving the problem is to think harder about two things. One is honoring Alston’s lessons about level confusions in epistemology, and the other is figuring out exactly what is involved in the idea that rationality is perspectival. On the latter point, your remarks about agency are significant, since you can take two agents with the same collection of transparencies, so to speak, whose perspectives nevertheless differ because of other factors. So it’s a bad theory that applies the same function to the collection of transparencies to yield an answer about rationality. You can’t just fix the evidence and expect conclusions about rationality to fall out.

  8. Hey Jon,

    You write: “Ram, I don’t know how to refine (1) to free it from counterexamples, but I expect you’ve worked on this more than I, so tell me if you know. The problem is that any belief in a necessary truth turns out to be infallible on this account.” I agree that we can use the word “infallible” in such a way that (1) implies that any belief in a necessary truth is infallible. But why worry about how we use the word “infallible”? I’m not sure why this is an objection to (1), or what other objections there could be to (1), at least construed as a claim about what is a possible justifier, not as a claim about what is required for justification.

    Also, you say that our lacking grounds for doubt about something’s being F cannot be transparent to the subject. I agree that this is true in SOME cases. But isn’t it false for many cases? Right now, I lack grounds for doubting that George Bush is president. Isn’t this lack of grounds transparent to me? Why would you suppose that it is not transparent to me?

  9. Hi Ram, when I talk about transparency, it is implicitly quantified universally. So, some truth is transparent to me, but not all. Some of my evidence is transparent to me, but not all; and sometimes lacking grounds for doubt is transparent and sometimes not. At least, this is so on an ordinary notion of transparency. If we mean one that rules out the possibility of justified false beliefs, then maybe none are transparent.

    On the infallibility point, being infallible needs to be a subtype of being justified if it is going to do the work needed to solve this problem, and it is a datum that it is possible to believe a necessary truth unjustifiedly. So if one adopts an account of infallibility that makes all beliefs in necessary truths infallible, that notion won’t be useful for the problem in question.

  10. Hi Jon,

    But couldn’t the Chisholmian internalist say that a necessary condition of some condition C’s even being a justifier for you to believe something is that C is transparent to you (in the sense of 1)? So, in those cases (if any) in which you are appeared to F-ly and have no defeaters for believing that something is F, either that combination of facts is transparent to you (in the sense of 1), or else it does not justify you in believing that something is F?

  11. Ram, yes, I didn’t mean to argue against that idea. I find it rather implausible, but that’s not an argument! I’m pretty confident it will lead to radical skepticism, at least if the conditions in question are the type Chisholm had in mind.
