Pragmatic Encroachment with Symmetry

Pragmatic Encroachers always speak of contrasting cases with very little at stake and cases with lots at stake. Really, though, the cases with lots at stake are cases where the risks of being wrong are heightened. So truly symmetrical pragmatic encroachment would look at a continuum with nothing at stake as the centerpoint, with risks of error extending in one direction and benefits of accuracy extending in the other.

I've been talking to Alex Pruss about this, having fun with cases, and noticing the absurdity of the view. Think, for example, of a view on which there is an underlying measure of epistemic support, with the threshold on this measure needed for rational belief (or knowledge) varying with the stakes. Then risk of error can raise the bar, and benefits of accuracy can lower it.
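To fix ideas, here is one toy way of writing the symmetric picture down (my own gloss, with made-up weights, not anything a PE'er is committed to): let the threshold for believing p in a case c be

    t(c) = t0 + a * risk(c) - b * benefit(c)

where t0 is the threshold when nothing is at stake, risk(c) measures the costs of erring about p in c, benefit(c) measures the payoffs of accuracy about p, and a and b are positive weights. Standard, asymmetric PE in effect sets b = 0, so that only risk can move the bar; symmetry requires b > 0 as well.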

Suppose the threshold is .9 on an underlying scale from 0 to 1. Then consider this case. You have information which gives a .9 measure to the claim that you have cancer that will kill you in excruciating pain within a year. So it is reasonable to believe you have cancer, and if the world cooperates, you can know that you have cancer.

Now change the case but keep the informational base the same. Unknown to you, however, GlaxoSmithKline has just found a new drug that extends life expectancy by several years and diminishes the pain to something on the order of listening to a bad epistemology paper! And, out of their expansive humanitarian hearts, they are providing it free to all epistemologists.

Now, you are prone to refusing to believe in the face of adequate evidence; after all, you are both a philosopher and an epistemologist. So even though in the initial case you have sufficient reason to believe you will suffer the nasty death, you withhold, and irrationally so. But once the GSK item is added to the case, a symmetrical version of PE should say that your withholding is now rational, and that if you were to believe the nasty claim, your belief would not be rational.

Crazy, of course. But the interesting question is why the PE'ers aren't symmetrical PE'ers. One answer appeals to ordinary language: PE theories are adopted in order to account for what seems, or is, true to say. That fits better with contextualism than with the PE versions of invariantism, however, since the former is automatically at the level of semantic ascent. For object-level theories, there's always the appeal to intuitions about cases, but if you really want to put knowledge to work, symmetry should be attractive, shouldn't it?

(OK, I admit none of this is even close to compelling. I intend only to provoke a bit…)


Comments

  1. I know at least two PEers who are symmetrical PEers. But those PEers would also say that you have to have evidence for the GSK fact for it to make any difference. That aside, what's the act or belief that p is no longer warranted enough to justify after the GSK fact is added? It's certainly warranted enough to justify taking the drug. After all, you SHOULD take the drug (at least, once you have info about the GSK fact).

  2. I assume the belief in question is that I have cancer.

    But you don’t need any belief to justify taking the drug, if there are no costs to the drug. If the credence in my having the cancer is 0.4, I’m still justified in taking the drug.

  3. Hi Alex. That’s what I was assuming p was. What I’m not sure about is why the PEer is committed to saying that you are now rational in withholding on p. For the PEer to be committed to saying that you are not rational to believe that p or that you are not justified in believing that p or don’t know that p, there has to be some act or belief that p isn’t warranted enough to justify. I don’t see what that act is. p is warranted enough to justify you in taking the drug. As you say, you don’t need p as a reason, and that’s fine. But I’m not seeing yet the argument that the PEer must say you’re rational in withholding on p once the new drug is available.

  4. Jeremy:

    The idea was that the symmetric PEer will increase the threshold (for belief or knowledge) when the cost of being wrong goes up, as well as when the benefit of being wrong goes down. So initially, when you think the cancer has really nasty consequences, there is a big benefit to being wrong about having cancer, and hence it's easy to be over the threshold. We suppose that initially you're just barely over the threshold. So you're justified. Then you learn the cancer's consequences aren't so bad. So now the benefit of being wrong about having cancer has gone down by a lot, and so the threshold has risen, and you're no longer above it.
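    (In toy numbers, purely illustrative and not anything the view is committed to: suppose the stake-free bar is .9 and your support is right at .9, just enough to clear it. Learning the GSK fact shrinks the benefit of being wrong, so the bar rises, say to .95:

        before GSK: support .90 >= threshold .90, so belief is rational
        after GSK:  support .90 < threshold .95, so withholding is rational

    The .95 is arbitrary; all the symmetric view needs is that the bar rises above .9.)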

  5. I see. But the PEer doesn’t say that it’s harder to know propositions that are less desirable or that it’s harder to know propositions that are worse if true. The PEer says it’s harder to know propositions when there are actions whose rationality depends on your epistemic position with respect to those propositions. It’s just as easy to know that you’re broke as that you’re a millionaire, ceteris paribus. It can become harder to know either of those depending on what’s at stake in action.

  6. Pingback: Pascal and Pragmatic Encroachment | Certain Doubts
