PE and Closure

From talking to Alex Pruss, again, a worry about pragmatic encroachment combined with closure. Suppose you know that p entails q, where p is something with serious risks concerning error but q isn’t. Suppose your underlying evidence puts you barely above the threshold for rational belief when nothing is at stake. You believe p and you believe q because you’ve deduced it from p. But p is at risk, in the sense required to raise the threshold needed for rational belief. In the case as imagined, your underlying evidential basis is the same for both beliefs. But the PE’er shouldn’t be happy saying you don’t know or rationally believe p, but you do know or rationally believe q.

Don’t say the adequate closure principles don’t license the belief that q. Those principles offer sufficient conditions only (e.g., “knowledge is closed under known deduction”), and whatever one says about belief, the evidential support for q can be the same as that for p. If you choose to appeal to some condition on proper basing, let the person note the entailment, and then base the belief directly on the evidence for p rather than on the entailment: on any adequate account of basing, one can realize that q follows from p, deduce it from p, and yet know it on the basis of something other than this entailment. That’s a claim that needs an argument, but I won’t go into it here.

Think of it this way. You are a strict purist about evidential support. You always honor an invariant threshold of epistemic support. The PE’ers claim that your behavior sometimes yields rational belief and sometimes not. But whatever they want to say, they should not say that your belief that p is not rational and your belief that q is. The question is how to secure this result.
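The structure of the worry can be put schematically. (This is just a sketch; the support function s, the two thresholds, and the rational-belief operator RB are labels introduced here for illustration, not notation from the post or the PE literature.)

```latex
% Let s(e, x) be the purely evidential support that evidence e gives
% proposition x, and let t_low < t_high be the belief thresholds for
% low-stakes and high-stakes propositions. The case assumes:
\[
  s(e, p) \;=\; s(e, q) \;=\; r,
  \qquad
  t_{\mathrm{low}} \;\le\; r \;<\; t_{\mathrm{high}}
\]
% With p high-stakes and q low-stakes, a threshold-style PE view seems
% to deliver the asymmetric verdict:
\[
  s(e, p) < t_{\mathrm{high}} \;\Rightarrow\; \neg\,\mathrm{RB}(p),
  \qquad
  s(e, q) \ge t_{\mathrm{low}} \;\Rightarrow\; \mathrm{RB}(q)
\]
% even though p entails q and both beliefs rest on the same evidence e
% -- precisely the verdict the post says the PE'er should not accept.
```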



PE and Closure — 10 Comments

  1. Hi Jon,

    I think Robert Howell has a paper that argues for a similar point, if you’re interested (called something like, “A Puzzle for Pragmatism”). It was early on, so some of the developments on pragmatic encroachment aren’t taken into account, but if I recall he gives some example in which I know that my sister is in LA, from which it follows that she’s not dead in Denver. I don’t care much about the former, but I do care a lot about the latter. So I may not know the latter even while knowing the former: so, closure failure.

    With respect to that specific example, I think in my specific choice situation I’m rational to act as if my sister is not dead in Denver, I can treat it as a reason for belief and action, it’s warranted enough to justify me in anything, etc. So I don’t take the specific example as a counterexample. But there is the general point. There might be cases where I’m not rational to act as if p (the consequence of the putatively known q). Once such a case is fleshed out, what’s the problem with just denying that q is known?

    Demon says: “Take the bet on (P or Q). I’ll kill you if you’re wrong, but you’ll get a nickel if you’re right.”

    Now, suppose you have testimonial evidence for P. The bet isn’t on P. It’s on (P or Q). Any pragmatic encroacher argument that has the result you don’t know (P or Q) will also have the result that you don’t know P. You can’t just act as if P is true, you can’t treat P as a reason for action and belief, and P isn’t warranted enough to justify you in taking the bet.

  2. In the Howell case, q is solely derived from p, and p is trivial while q is important.

    In the case Jon and I are interested in, q is solely derived from p, and p is important while q is trivial. So it goes the other way around.

    If one likes the “acting as if” stuff, one will add: Since not much is at stake with q, I should act as if q, as it passes the evidential threshold. Shouldn’t I say I know q? But I derived it solely from p, which I don’t know.

  3. Ah. Thanks, Alex. I got the p and q mixed up. (Partly because I incorrectly assumed the problem was supposed to be that the PEer is committed to denying closure.) Sorry about that.

    I’m not sure that any PEer is committed insofar as they are PEers to saying either you know that q or that you don’t know that q in this case. (Maybe the PEer is also committed to saying you do know that q, if the PEer also endorses some principle like, “S should act as if q only if S knows that q”, but that principle isn’t really important for deriving PE. Plus, it’s false.) If, generally, q can be derived from p but based on the evidence for p, then there’s no problem on that score saying you know that q in this case but not p. So, any of a host of options of things to say are available to the PEer. My preferred: neither p nor q is known or justified because neither p nor q is warranted enough to justify you in doing the things that p isn’t warranted enough to justify you in doing.

  4. From the post, I’m not quite sure if you were trying to show that PE leads to violations of closure or to violations of some other principle in the neighborhood (e.g., the principle that if you know q based on a deduction from p, you know p, or a similar principle about justified belief).

    In the second paragraph, you imagine a belief in q being based on evidence which makes q just as likely as it makes p. Say this evidence is a belief r. On the principles we endorse in our book, since r isn’t warranted enough to justify believing p, this will mean that you are not justified in believing r. And on the plausible assumption that a belief that isn’t justified itself can’t justify further beliefs, the belief r can’t justify believing q. So if the belief r were the sole basis for the belief that q, the belief in q wouldn’t be justified, given our principles. So, neither the belief in p nor the one in q would be justified.

    The relevant principle from our book is:

    If you are justified in believing that p, then p is warranted enough to justify you in PHI-ing (where PHI-ing could be anything cognitive or practical).

  5. Matt, it wasn’t meant as a counterexample to anything in your book. The idea, though, is that whatever principles of closure or transmission one wants to adopt, if p entails q, one can find a way to get to q from p so that whatever underlying epistemic status p has gets carried over to q. So if the threshold idea of pragmatic encroachment is adopted regarding justification, then don’t think of closure or transmission in terms of justification but move instead to the underlying epistemic status on which the threshold is imposed.

    The lesson may be that PE’ers will have to reject the principle that if p isn’t justified it can’t justify anything, but that principle shouldn’t be sacrosanct for symmetrical PE’ers of the sort described. When pragmatic encroachment enters the picture, with an underlying purely epistemic measure of the sort assumed here, it looks to me like the principle should be replaced by something about the underlying measure. In particular, the new principle should rule out p supporting q at a higher epistemic level than p itself has. And the new principle, whatever it is, should allow p to support q at whatever level p itself has, when p entails q. And then, by appeal to symmetry, retaining the original principle would be at least ad hoc.

  6. Here’s a concrete example along these lines, though with entailment replaced by high probability, to make sure we’re on the same page.

    Consider a rare cancer that kills those who have it, and does so very painfully. Your evidence supports the claim that you don’t have this cancer to some degree a little below the threshold given how much is at stake. Moreover, you will visit Houston next year if and only if, and if so because, you have this cancer. For you have a cancer researcher friend in Houston who you know wants to observe as many as possible who have this rare cancer, and so you’ve promised him that if you do have it, you’ll visit him. Otherwise, you have on balance no reason to go to Houston. Not much is at stake in whether you go to Houston.

    So, your evidence supports the claim that you don’t have the cancer (p) to a degree that’s below the threshold given what’s at stake in that belief. Your evidence supports the claim that you’re not visiting Houston next year (q) to a degree that’s above the threshold given what’s at stake in that belief.

    It would be weird to believe you’re not going to Houston but not believe you’re cancer-free. So I assume the PEer will say: you should not believe that you lack the cancer, and you likewise should not believe that you’re not going to Houston. The latter you shouldn’t believe, even though its evidence is above the threshold given what’s at stake in going to Houston as such, because this evidence comes from something that is below the threshold. Is that the right reading of PE?

    And then when next week you find out, with very very high probability, that you have a very rare genetic condition that has the property that if you have this cancer, it will not kill or hurt or inconvenience you (except for your having to keep your promise to go to Houston), then it becomes rational to believe both that you don’t have the cancer and that you won’t go to Houston. Right?

  7. Hi Alex. I’m assuming that my evidence supports the proposition that I’m cancer-free to at least the same degree that my total evidence supports the proposition that I’m not going to Houston. Without this assumption, it wouldn’t be weird at all to fail to believe that I’m cancer-free while believing that I’m not going to Houston. There’d be a natural explanation: it’s better supported for me that I’m not going to Houston than that I’m cancer-free.

    Given that assumption, it follows at least from Matt’s and my view that if you are justified in believing that you’re not going to Houston, then you’re justified in believing that you’re cancer-free. That’s because if you fail to be justified in believing that you’re cancer free, then there must be some PHI that the proposition that you’re cancer free isn’t warranted enough to justify you in. But if THAT proposition isn’t warranted enough to justify you in PHI-ing, then any proposition with equal or lesser warrant (such as the proposition that you’re not going to Houston) also isn’t warranted enough to justify you in PHI-ing. At least, that’s the view.

  8. Alex, that sounds right. On a PE which takes rational belief to imply usability as a reason, you would get these results nicely. If you know *p iff q*, then if you had a rational belief in q, you’d have q available as a reason. Putting q and *p iff q* together you’d have a rational belief in p. But you don’t have a rational belief in p (since you can’t use p as a reason), and so you don’t have a rational belief in q either.

    Just to be clear, there are two distinct issues suggested by your last comment. One is diachronic and concerns rational belief coming and going depending on the stakes. The PE theorist needs to have something to say about why that seems wrong. Another issue is synchronic and concerns the possibility of having a rational belief in one proposition but lacking a rational belief in another relevantly related proposition. The latter is the basis of a “this is justified but that isn’t?!” sort of objection. There are several ways to try to develop an objection like this. One is to consider cases like yours where one knows a connecting biconditional (I think these aren’t so problematic for reasons I mentioned above). Another is to consider cases like Neta’s “State and Main” cases where the relevant beliefs seem to have the same sort of basis but one doesn’t know any biconditional relating them (these are hard for PE, and we offer some possible replies in our ch. 7). A third is to try to raise worries about violations of closure-like principles (“knowledge-in knowledge-out” principles). And a fourth is to raise worries about violations of converses of closure principles (“knowledge-out knowledge-in” principles).

    One thing to ask is whether, as we consider the various cases, intuitions about the usability of propositions as reasons “sway together” with intuitions about whether the subject has knowledge (or justified or rational belief).

  9. Pingback: Pascal and Pragmatic Encroachment | Certain Doubts
