Pascal and Pragmatic Encroachment

Not all were convinced by the earlier posts about symmetrical pragmatic encroachment and closure concerns, so I thought I'd add a bit about the context, together with a really unrealistic example.

My thinking about this occurred in the context of thinking about Pascal's wager argument. Not the actual argument but a modification of it. So change the case so that there is slightly more evidence for God's existence than against it. And change the costs of disbelief to 0. (Pascal thought the costs were infinitely negative, but suppose you are just a conditional immortalist, so that the cost of disbelief is the same whether God exists or not.) And let the benefits/costs of belief if God doesn't exist be roughly the same (he claimed that believers would "lose some vices" and so would benefit slightly; but let's disagree and claim it's a wash).
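To make the modified payoff structure concrete, here is a minimal sketch in Python. The finite numbers are purely illustrative assumptions (nothing in the post fixes them); only their signs and the single infinite entry matter.

```python
import math

# Illustrative payoff matrix for the modified wager described above.
# Rows: act (believe / disbelieve); columns: state (God exists / no God).
payoffs = {
    ("believe", "God exists"): math.inf,  # infinite benefit of belief
    ("believe", "no God"): 0.0,           # "a wash", per the modification
    ("disbelieve", "God exists"): 0.0,    # conditional immortalism: no cost
    ("disbelieve", "no God"): 0.0,
}

def expected_utility(act: str, p_god: float) -> float:
    """Expected utility of an act given probability p_god that God exists."""
    return (p_god * payoffs[(act, "God exists")]
            + (1 - p_god) * payoffs[(act, "no God")])

# Even the slightest tilt of the evidence makes belief infinitely better:
p = 0.51
print(expected_utility("believe", p))     # inf
print(expected_utility("disbelieve", p))  # 0.0
```

The point of the sketch: with the disbelief row zeroed out and the "no God" column a wash, any positive probability for God's existence drives the expected utility of belief to infinity.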

So, now the benefits of belief are supposed to be infinite, with no risks of error at all. And if one imagines a position where the extent to which the benefits lower the threshold needed for knowledge is measured by how expansive the benefits are, then even the slightest tilt of the evidence in favor of the claim could be enough to put one in a position to know. So, given the right theology (and abstracting from the plethora of other problems with the wager argument), it would be really quite easy to be in a position to know, on the basis of the evidence, that God exists.

(Depending on how the symmetry is characterized, maybe even the slightest epistemic probability away from certainty of falsehood would also put one in a position to know, but I imagined the position being one where knowledge requires some epistemic underpinning favoring a claim over its denial.)
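The position described in the last two paragraphs can be modeled with a hypothetical threshold function. The particular decreasing schedule below is my own invented stand-in (the post commits only to the shape: thresholds fall as benefits grow, bottoming out just above evidential parity, per the parenthetical above).

```python
def knowledge_threshold(base: float, benefit: float) -> float:
    """
    Hypothetical model of the position discussed: the evidential threshold
    needed for knowledge drops as the benefits of belief grow, but never
    below 0.5, since knowledge still requires some epistemic underpinning
    favoring the claim over its denial. `base` is the ordinary threshold
    absent practical stakes.
    """
    if benefit == float("inf"):
        return 0.5  # any tilt of the evidence past 0.5 suffices
    # One arbitrary decreasing schedule; only its general shape matters.
    return 0.5 + (base - 0.5) / (1 + benefit)

# Ordinary case, no special benefits: the full base threshold applies.
print(knowledge_threshold(0.95, 0.0))
# Infinite benefits: the slightest evidential tilt can suffice for knowledge.
print(knowledge_threshold(0.95, float("inf")))
```

On this model, with infinite benefits the threshold collapses to bare evidential favoring, which is exactly why the modified wager would make it "really quite easy" to be in a position to know.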

Then consider inference cases where the inferrer knows everything relevant: both the underlying epistemic measure and the way in which pragmatic factors undermine knowledge of the premises. The inferrer isn't deterred, since the motive concerns figuring out the underlying epistemic measure to which the believer proportions strength of belief. Now imagine that the degree of belief is above the threshold for belief, that the risks attaching to the premise prevent knowledge of it, but that the benefits of believing the conclusion don't prevent knowledge of the conclusion. Then we get an unpalatable result: one would be in a position to know the conclusion of an inference from a premise one isn't in a position to know. So some other device, other than pragmatic encroachment and shifting thresholds, is needed to explain why one isn't in a position to know the conclusion of the inference.
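The closure worry in the paragraph above can be put in miniature. All of the numbers here are invented for illustration; the only structural assumptions are that risks raise the premise's threshold and benefits lower the conclusion's.

```python
# Hypothetical illustration of the closure worry: same credence in premise
# and conclusion, but thresholds pulled in opposite directions by stakes.
credence_premise = 0.96
credence_conclusion = 0.96   # the conclusion is inferred from the premise

threshold_premise = 0.99     # raised by the risks attaching to the premise
threshold_conclusion = 0.51  # lowered by the great benefits of believing it

knows_premise = credence_premise >= threshold_premise
knows_conclusion = credence_conclusion >= threshold_conclusion

# On the shifting-threshold picture alone, one would be in a position to
# know the conclusion while failing to know the premise it rests on.
print(knows_premise, knows_conclusion)  # False True
```

This is the shape of the unpalatable result: nothing in the threshold machinery itself blocks knowing the conclusion, which is why some further device is needed.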

The need for an additional codicil ought to count as additional theoretical complexity, detracting from the overall simplicity and beauty of the fundamental idea. Not an objection, but in some sense a theoretical cost. And, to repeat a point made in the earlier posts, this is not meant to count as a cost for any actual pragmatic encroachment position; it's about a theoretically well-motivated, symmetrical approach to the idea that comes to some grief.


Comments


  1. Quick question, John. When you say that “this is not meant to count as a cost for any actual pragmatic encroachment position”, what about the position you consider do you take to be different from the actual positions some pragmatic encroachers favor? Is it the symmetry countenanced by the position you consider? Or is it something else?

    I take the position you're considering to be this one: "a position where the extent to which the benefits lower the threshold needed for knowledge is measured by how expansive the benefits are". Are you thinking that this differs from actual versions on offer mainly in its emphasis on benefits? Or in some other way?

  2. Jeremy, yes, I don’t know of a view that has this aspect built into it. Especially if the lower threshold can go all the way to anything short of a zero chance of being true! But I doubt any in your crowd would want claims with just a smidgen better chance of being true than false to be able to count as knowledge (except for Hawthorne’s idea that sometimes knowledge is granted when all that seems to be involved is true belief).

  3. I see. But it seems to me that symmetry is built into all the main PEers' views, in the sense that all the PEers I know will allow knowledge that p to be lost either if the costs of acting as if p become too great when p is false, or if the benefits of not acting as if p become too great when p is false. Suppose that I have fallible but knowledge-level evidence that the bank is open Saturday, there's a moderate line today, and I don't much need the check to clear by Monday. Then I can wait until Saturday and know the bank is open Saturday. In the standard high-stakes version of the case, the costs of waiting until Saturday if the bank isn't open Saturday are very high, so I shouldn't wait until Saturday and so don't know the bank is open Saturday. But here's another high-stakes version: if I deposit the check by Monday, the bank gives me a one-million dollar bonus (and I know this). Here there is no cost to depositing it tomorrow if I'm wrong — it's not like my life is over or I lose my house. I just proceed as normal. There's just added benefit if I deposit it by Monday. No PEer is going to want to say you should wait until Saturday, and so no PEer is going to say you know in this case.

    I do see a motivation for allowing that kind of symmetry, but all the PEers already allow for it.

    There is, however, a second difference between the view you describe and the view that most PEers are going to be comfortable with. It sounds like on the PE view you’re discussing, the threshold for knowledge that p goes up when the costs of p being TRUE go up. That, I think, is not anything that any PEer will want to be committed to. Of course, if that’s the PE view under discussion, symmetry would require that the threshold for knowledge goes down when the benefits of p being true go up. But because no PEer is going to advocate the negative version of the view, I’m not sure who this ends up being a problem for.
