Pragmatic encroachers always contrast cases with very little at stake against cases with a lot at stake. But the high-stakes cases are really cases where the risks of being wrong are heightened. So a truly symmetrical pragmatic encroachment would look at a continuum with nothing at stake as the centerpoint, with risks of error extending in one direction and benefits of accuracy extending in the other.
I've been talking to Alex Pruss about this, having fun with cases and noticing the absurdity of the view. Think, for example, of a view on which there is an underlying measure of epistemic support, with the threshold on this measure needed for rational belief (or knowledge) varying with the stakes. Then risk of error can raise the bar, and benefits of accuracy can lower it.
Suppose the threshold is .9 on an underlying scale from 0 to 1. Then consider this case. You have information which gives a .9 measure to the claim that you have cancer that will kill you in excruciating pain within a year. So it is reasonable to believe you have cancer, and if the world cooperates, you can know that you have cancer.
Now change the case but keep the informational base the same. Unknown to you, however, GlaxoSmithKline has just found a new drug that extends life expectancy by several years and reduces the pain to something on the order of listening to a bad epistemology paper! And, out of their expansive humanitarian hearts, they are providing it free to all epistemologists.
Now, you are prone to refusing to believe in the face of adequate evidence; after all, you are both a philosopher and an epistemologist. So even though, in the initial case, you have sufficient reason to believe you will suffer the nasty death, you withhold, and irrationally so. But once the GSK item is added to the case, a symmetrical version of PE should say that your withholding is now rational, and that if you were to believe the nasty claim, your belief would not be rational.
Crazy, of course. But the interesting question is why the PE'ers aren't symmetrical PE'ers. One answer appeals to ordinary language: one adopts PE theories in order to account for what seems, or is, true to say. That fits better with contextualism than with the PE versions of invariantism, however, since the former is automatically at the level of semantic ascent. For object-level theories, there's always the appeal to intuitions about cases, but if you really want to put knowledge to work, symmetry should be attractive, shouldn't it?
(OK, I admit none of this is even close to compelling. I intend only to provoke a bit…)