Deductive Cogency and Probabilistic Coherence

A fairly standard approach to the preface paradox is basically Lockean. On the Lockean story, belief is degree of belief past a certain threshold. We then explain away the inconsistency involved in the paradox by an underlying probabilistic coherence. The preface claim exceeds the threshold of belief, is inconsistent with the beliefs about the contents of the book, but probabilistically coheres because it accurately measures the risk of error in the book (and since the book is large enough, the risk of error is high enough to exceed the threshold).
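The Lockean diagnosis can be checked with a toy calculation; the specific numbers below (a 0.95 threshold, 0.99 credence per claim, a 500-claim book) are illustrative assumptions of mine, not anything in the post:

```python
# A toy check of the Lockean diagnosis; every number here is an
# illustrative assumption (threshold, per-claim credence, book length).
threshold = 0.95      # Lockean threshold: belief = credence above this
p_claim = 0.99        # credence in each individual claim in the book
n_claims = 500        # number of probabilistically independent claims

p_all_true = p_claim ** n_claims   # credence that the book is error-free
p_preface = 1 - p_all_true         # credence in "at least one claim is false"

assert p_claim > threshold    # each individual claim is believed
assert p_preface > threshold  # the preface claim is believed as well
# Jointly inconsistent beliefs, yet perfectly coherent credences:
# p_all_true and p_preface sum to exactly 1.
print(round(p_preface, 3))
```

With these numbers the preface credence comes out around 0.993, so both each claim and the preface claim clear the threshold, just as the post describes.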

In other cases, though, we explain away an underlying probabilistic incoherence among degrees of belief by cogency at the coarse-grained level. Below the fold is a case of this sort.

You are an expert witness in a court case. You are required to provide a sworn affidavit regarding what you may be called on to testify about in court. The matter is fairly complex, etc., so your sworn testimony is quite extensive. But you are an expert on the subject, so you prepare the content. You then must swear in writing that what you have told is the truth, which you correctly interpret to mean that you have left out no significant facts that pertain to the case and that nothing you have written is mistaken. You sign, and the document is notarized.

Note, though, that the legal document can precisely mirror a standard preface case, so if in the standard preface case, we can explain away inconsistency at the level of belief by probabilistic coherence at the level of credence, here we can explain away probabilistic incoherence at the level of credence by appeal to cogency at the level of belief. For, since the two cases mirror each other, if probabilistic coherence exists in the preface case because the book is large enough, probabilistic incoherence will exist when you are asked to swear veracity regarding the content of your written testimony.

What to learn from this isn’t completely clear, except for one point. I think the mere fact that one can find an underlying probabilistic coherence even though there is inconsistency at the level of coarse-grained belief, doesn’t by itself give us a solution to the paradox. For what to explain away and what to explain it away with isn’t uniform, as the above example shows.


Comments


  1. If I’m getting it, the probabilistic incoherence in the expert testimony case comes from taking your oath to say that your whole affidavit is right, when in fact the conjunction must have lower probability than the individual claims. *Would* a wise expert sign to “I swear that the conjunction of all claims made in my submission is highly probable”? Not on reflection. So perhaps in context the content of what you have sworn to is that the whole submission is such that each claim in it, taken in the semantic – not evidential – context of the others, is probable enough that you can list it among your secure beliefs.
    These issues interest me because of their connections with my inquiry into contradiction-management. (See my post of last week.) So thanks, Jon.

  2. Hi Jon,

    I think what I’m about to say might echo something that Ralph Wedgwood said on this blog recently.

    Maybe I’m missing something, but I am not sure what would be logically inconsistent about not swearing your whole affidavit to be right, or even about swearing it to be wrong. Consider the following list of propositions…

    1. The suspect was on Main Street at Midnight.
    2. The suspect usually wears a wide-brimmed hat.
    3. Saturday night was cloudless, and visibility was good.
    4. 1, 2, and 3 are all statements in this very affidavit.
    5. One of the statements in this very affidavit is false.

    Am I missing an obvious derivation of an inconsistency in 1 thru 5 above? Of course 1 thru 5 cannot all be true, but that’s not because they’re inconsistent. Rather, it’s because of a fact that’s not mentioned in any of 1 thru 5, namely, that 1 thru 3 are the only statements in this affidavit. So far as I can see, deriving an inconsistency would require adding some statement like the following… 1 thru 3 are the only statements in this affidavit. But what does that mean? It means that the only statements in this affidavit are that the suspect was on Main Street at Midnight, and that the suspect usually wears a wide-brimmed hat, and that Saturday night was cloudless, and visibility was good. Now, if the affidavit is that short, then of course you’re not justified in believing 5. But if the affidavit is long enough, how would you, in your finitude, be justified in believing an unsurveyably long statement like that?

  3. Jon, I see that the affidavit case mirrors the standard preface case, supposing that each of the object-level claims in the affidavit is well-supported enough to be rationally believed, but not absolutely certain. If there are enough independent claims in the affidavit, then it will be highly probable that at least one of them is false. So if the expert is required to swear that all of the claims are true, she’s required to swear to something that’s probably false.

    I don’t see, though, why this undermines the Lockean diagnosis of what’s going on in Preface cases. The supposed paradox gets its grip from the fact that a set of apparently rational beliefs is one that the agent can easily recognize cannot all be true. If one holds a Lockean theory of belief, this is to be expected. In the affidavit case (at least if the affidavit is parallel to the book in a preface case), it will not be rational for the expert to believe the last statement she’s required to sign (or so I’d maintain, since she should see that it’s unlikely to be true).

    Ram (and Ralph, a while back) emphasize that standard Preface cases don’t involve formal inconsistency, since the Preface statement refers to the object-level claims in the book, but doesn’t explicitly say what those claims are. On this line, there’s an important difference between the agent realizing that not all the beliefs in the relevant set can be true, and the agent’s beliefs being formally inconsistent. Given normal human cognitive limitations, it is impossible for a Preface statement to reprise the content of an ordinary book and still be humanly entertainable. Both Rs suggest that our inability to entertain sufficiently long Preface statements is a consequence of the fact that our minds are finite. (Ralph wrote: “This inability to survey such large totalities of propositions seems one of the fundamental limitations of any finite mind.”)

    This I don’t see. There’s a lot of room between human and infinite, and one wouldn’t have to have anything like an infinite mind to be able to entertain book-length conjunctions, or to remember what every claim in a book was. And while the difference between human minds and infinite minds may seem like a deep one, the gulf between human minds and minds with better memories and capacities to entertain longer conjunctions is not nearly so wide. (I’m also not sure why the difference between an agent’s beliefs being such that she can see that they can’t all be true, and an agent’s beliefs being formally inconsistent, should be so important. But that’s a matter for another time.)

  4. Ram, I won’t repeat what David wrote, but it is what I was thinking too. The source of the Preface Paradox isn’t the lack of surveyability or some limitation central to our minds. The source is our fallibility, and that remains even when we imagine finite extensions of our surveyability capacities.

    David, I agree that nothing here undermines the Lockean answer to the preface. It just puts some pressure on the view to say why the explanatory direction in preface is the correct response. Of course, if you are right that one can’t rationally believe the summary statement involved in signing the affidavit, then there’s not much left to explain. But if that’s correct, and the expert is rational, defense lawyers are going to have fun with him/her! To be an expert witness, you have to be able to tell a story and stick to it. If you don’t believe the story, they need to get someone else. And if you would say that you believe your testimony but shouldn’t, you’re of little use as well. Of course, these are practical considerations about the legal system, but my gut response to the case is that it is clear that we have a rational expert here who, when he tells the story required, is willing to sign off on it at the end. And the story I’d want to tell about how the legal system works (when it works effectively) is that it presupposes that such beliefs can be rational. Having experts asserting preface remarks about what they’ve testified to would be a very bad thing for the legal system. And I think our gut-level balking attaches, in such contexts, to experts who express uncertainty about their story or refuse to endorse it rather than to those who stick to their guns.

    Were it not for this fact, Pollock’s approach to preface would be completely bizarre, wouldn’t it? There need to be some cases where his kind of response makes some kind of sense, and that’s what I think the affidavit case provides. Not that I think that we can generalize the affidavit case to endorse John’s solution to preface, but that we shouldn’t claim that probabilistic incoherence indefeasibly implies irrationality. OK, that last claim is stronger than anything here shows yet, so maybe all I should say is that the affidavit case provides a prima facie defeater for the claim that probabilistically incoherent credences imply irrationality of belief.

  5. Jon,

    I don’t see the problem.

    1. If the beliefs of the “Lockean expert” that are stated in the affidavit are conjunctively inconsistent, then she shouldn’t sign a statement that effectively affirms that she believes the conjunction, because she won’t believe the conjunction.

    2. If the beliefs of the “Lockean expert” that are stated in the affidavit are conjunctively consistent, but she doesn’t believe their conjunction (because the conjunction falls below threshold), then she shouldn’t sign a statement that effectively affirms that she believes the conjunction, because she doesn’t.

    3. However, to my knowledge, the kinds of claims to which an expert usually attests in an affidavit are claims for which the expert has very very high confidence, and she is also very confident in their conjunction. So as a practical matter I don’t see that the Lockean thesis will cause any problems in the sort of case you describe.

    Notice, too, that the Lockean thesis does not force the situation in the way your question seems to suggest. That is, the thesis *does not imply* anything like this: “whenever the belief-threshold is t (for t not unreasonably high, e.g., not 1) and the agent believes n claims (for appropriately large n), then the agent *cannot believe* the conjunction.” If it implied something like that, then the issue you raise might indeed be a problem. Rather, what the Lockean thesis does imply is this: “if the agent believes n claims that are conjunctively inconsistent, then her belief-threshold t cannot be (have been) too high relative to n — i.e., t must be less than or equal to (n-1)/n.” Or, equivalently: “if the agent’s threshold for belief is some given number t, then the agent *may* believe n conjunctively inconsistent claims *only if* n is large enough (relative to t), large enough in the sense that n must be greater than or equal to 1/(1-t).” Equivalently, the Lockean thesis denies that a coherent Lockean agent can “both believe n (or fewer) conjunctively inconsistent claims and at the same time have a belief threshold t greater than (n-1)/n.” It remains perfectly coherent for the Lockean agent to believe any number of conjunctively consistent claims and also believe their conjunction, regardless of the value of the threshold t. Nothing about the Lockean thesis blocks this from happening, or makes it even difficult for this to happen. Of course, if in a particular case the agent does not believe the conjunction, then she shouldn’t sign. But what should any expert who gives testimony do? The only difference for the Lockean agent is that she is “permitted to believe” conjunctively inconsistent claims, provided the number of claims is large enough relative to the threshold — but when such cases arise she will not believe the conjunctions (and will not sign).
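The threshold arithmetic in this comment can be checked with a short sketch; the function name and the example thresholds below are my own illustrative choices, not anything from the comment:

```python
from fractions import Fraction
from math import ceil

def min_inconsistent_set_size(t):
    """Smallest n for which a Lockean agent with belief threshold t can
    coherently believe n conjunctively inconsistent claims: the smallest
    n with t <= (n-1)/n, equivalently n >= 1/(1-t)."""
    t = Fraction(t).limit_denominator()  # avoid float rounding at the boundary
    return ceil(1 / (1 - t))

# With threshold 0.9, at least 10 claims are needed: if exactly one of 10
# claims is false, each claim "it isn't this one that's false" has
# probability 9/10 -- above threshold, yet the 10 are jointly inconsistent.
print(min_inconsistent_set_size(0.9))   # 10
print(min_inconsistent_set_size(0.95))  # 20
```

The exact-fraction step matters only because a float computation of 1/(1 - 0.9) can land a hair above 10 and make the ceiling come out wrong.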

  6. Hi Jim, that’s a nice suggestion. I think you are exactly right here that if there are epistemic permissions that are not also epistemic obligations, then there is no problem in the affidavit example for the Lockean. I’m also attracted to this optionalist view of rationality, as opposed to the restrictivist view of most epistemologists (or at least what I think is the most common view). The challenge for the optionalist is then to explain why epistemic rationality is more restrictive than practical rationality without undermining the distinction between permissibility and obligatoriness of the epistemic sort for belief. Do you have any ideas on that? I’m working on some, but it’s all still pretty murky to me.

  7. Jon,

    I hadn’t been thinking of it explicitly in terms of epistemic options (or permissions) vs obligations. But that’s a really nice way to think about it. I guess any way of trying to increase epistemic options will be in tension with trying to maintain some sort of epistemic standards and norms. One might have thought that there could be no plausible principled way to relax the standard obligation of logical consistency among beliefs. Then the Lockean idea comes along, and we see how to permit more latitude while still maintaining a kind of principled standard for coherent belief — i.e. probabilistic coherence, or rather, as I would prefer to have it, coherent (qualitative) comparative confidence (which may be modeled probabilistically).

    This idea about options vs obligations sounds like quite an interesting direction to pursue. Can you tell me about your ideas along these lines? Is this connected with the theory of rationality you are trying to work out?

  8. Hi Jon and David,

    I’d like to say something in reply to two points that David makes, and that Jon endorses. First:

    “There’s a lot of room between human and infinite, and one wouldn’t have to have anything like an infinite mind to be able to entertain book-length conjunctions, or to remember what every claim in a book was. And while the difference between human minds and infinite minds may seem like a deep one, the gulf between human minds and minds with better memories and capacities to entertain longer conjunctions is not nearly so wide.”

    I agree with what David says in this quoted passage, but I’m not sure how it affects the point that I (and perhaps Ralph) were making. My point was this: you are justified in believing a preface proposition (e.g., “at least one of my current beliefs is false”) JUST IN CASE the list of your current beliefs is unsurveyably long FOR YOU. If you’re superhuman but still finite, a belief list that is unsurveyable by humans will still be surveyable by you — and then you won’t be justified in believing a preface proposition if that is the list of your beliefs. (Although you will still be justified in believing a preface proposition if the list of your beliefs is much longer than that.)

    Second, David writes:
    “(I’m also not sure why the difference between an agent’s beliefs being such that she can see that they can’t all be true, and an agent’s beliefs being formally inconsistent, should be so important. But that’s a matter for another time.)”

    Here’s why it matters.
    According to multi-premise closure, if I’m justified in believing that p and I’m justified in believing that q, then I’m justified in believing that p and q.
    Furthermore, I suppose: one cannot be justified in believing a proposition which is such that one can know by reflection alone that it is false.
    The two principles that I’ve just stated jointly entail that one cannot be justified in believing each of two propositions the conjunction of which one can know by reflection alone to be false. But the two principles do not entail the stronger thesis that one cannot be justified in believing each of two propositions the conjunction of which one can know to be false. If you are a fan of the two principles, and you think that formal inconsistency — unlike metaphysical inconsistency — can be identified by reflection alone, then you will care about the difference that David mentions.

  9. Jim, maybe sometime late summer I’ll be able to post a draft of what I’ve been working on about optionalism. Still a ways to go on it, though.

  10. Hi Ram–

    I see the difference between human agents, who can’t survey book-length lists of beliefs, and finite super-human ones who can. But the suggested application of this difference seems odd to me. When one of us writes a book of very well-supported claims, it’s OK for him to believe each claim in the book, as well as a preface statement. But if a super-human agent writes an exactly similar book (we might imagine that all the same claims are backed by exactly similar evidence), and she accepts a preface statement for just the same sort of reason (she realizes that the claims in her book are based on non-entailing evidence, and there are enough of them that it’s very likely that at least one of them is false), then she’s believing irrationally.

    Of course, even the human agent knows that at least one of the beliefs in a certain identifiable set of her beliefs is false–she just can’t entertain all the members of that set at once. So why should the super-human’s powers of simultaneous entertaining preclude her from drawing the same conclusions her human counterpart draws? I guess part of what seems odd to me about the suggested position is that the difference between the two agents–that she can hold the whole book in her mind at once, while we can’t–doesn’t seem to have any implications at all for either agent’s rational estimate of the likely truth of the propositions he or she considers.

    Now I’m no fan of multi-premise closure. But I wonder what those who like the principle think the super-human agent should believe? To me, it seems wrong either to insist that she stop believing some of the (extremely well-supported) claims in her book, until there are so few left that their conjunction is likely to be true, or that she would be justified in believing the (highly improbable) conjunction of all of the original book’s claims.

  11. Hi David,

    Thank you for this reply! You raise the following challenge to my view:

    “Now I’m no fan of multi-premise closure. But I wonder what those who like the principle think the super-human agent should believe? To me, it seems wrong either to insist that she stop believing some of the (extremely well-supported) claims in her book, until there are so few left that their conjunction is likely to be true, or that she would be justified in believing the (highly improbable) conjunction of all of the original book’s claims.”

    I go with the first of the two options that you offer: the super-human agent should only believe a set of claims the conjunction of which is likely to be true. Now, why do I say that? Doesn’t the super-human agent, by hypothesis, share all of our evidence for the propositions that we mere humans believe? And since that evidence renders it rational for us mere humans to believe each one of those propositions, shouldn’t it render it rational for the super-human agent to believe each one of those propositions as well?

    Here’s what I say in my defense. Even though the super-human agent, by hypothesis, has all the evidence that we mere humans have — the evidence that makes it rational for us mere humans to believe a large set of propositions, the conjunction of which cannot be true — still, the super-human agent, unlike us mere humans, is capable of recognizing certain particular bits of our shared evidence as defeaters of other particular bits of our shared evidence. Now, I don’t just mean that the super-human agent can recognize that THERE ARE bits of our shared evidence that defeat other bits of our shared evidence. We mere humans can recognize that fact, but recognizing the fact that THERE IS a defeater for your evidence does not by itself defeat your evidence.

    So the super-human agent is capable of recognizing defeaters that we are incapable of recognizing as such (even if we know that they must be out there somewhere in our evidence set). That’s why the very same evidence that makes it rational for us to believe various propositions does not also make it rational for her to believe all those same propositions. Or so it seems to me.
