Fantl & McGrath’s EPC Principle

There’s a paper by Matt and Jeremy that is very impressive and very disturbing to me (Phil Review, 2002). It argues for pragmatic encroachment into epistemology, which I view much as I view the Bush policy of opening up parts of Alaska to oil exploration: pristine territory forever polluted… But the argument of the paper is ingenious and difficult to avoid. I think, though, that it doesn’t quite work, at least as stated. It relies on the following principle:

EPC: S is justified in believing that p only if:
for any states of affairs A and B, and any X with S’s evidence for p, X is rational to prefer A & p to B & p if and only if X is rational to prefer A to B.

I think there are counterexamples to EPC, though I’m not sure they are decisive (in the sense of showing that no adequate alteration can be found). But first a counterexample.

The idea is to find something you strongly prefer to know obtains, but does not obtain. Here’s one for me:
A=my knowing that I will live in good health till I’m 120.
B=my knowing that I won’t live in good health till I’m 120.
p=I will not live in good health till I’m 120.

I’m justified in believing p, and I’m rational to prefer A to B. So EPC allows me to infer that I’m rational to prefer A&p to B&p. But, I claim, I’m not. I don’t prefer obvious inconsistencies to realistic appraisals of the future.

This problem is relatively easy to fix: just insist that p be consistent with both A and B. But this won’t quite do the trick. Change A and B from knowing to being justified in believing. I prefer not living till 120 while being justified in believing that I won’t, to not living till 120 while being justified in thinking that I will. But I prefer A to B, and here there is no inconsistency between p and either A or B.

The heart of the counterexample is this. There are things we prefer that are hopelessly unrealistic. Being justified in believing they are unrealistic puts us in a position to prefer the combination of the less ideal state of affairs together with our realistic appraisal over the more ideal state of affairs together with our realistic appraisal. Such counterexamples are, we might say, Stoic counterexamples to EPC.
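
To see the structure at a glance, here is the instantiation in shorthand (a rough sketch only, writing “J(p)” for “I am justified in believing p” and “X > Y” for “I am rational to prefer X to Y”, and suppressing EPC’s quantifier over other subjects with my evidence):

EPC: J(p) only if, for all A and B, (A&p > B&p) iff (A > B).
This case: J(p) and (A > B), but not (A&p > B&p), since A&p is an obvious inconsistency.

So the right-to-left half of the biconditional fails, and EPC would have us deny that I’m justified in believing p.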


Comments


  1. How about this revision:

    A = I hope I quit smoking cigarettes before I die.
    B = I hope I never quit smoking cigarettes before I die.
    p = I will not quit smoking cigarettes before I die.

    It is rational to prefer A to B, but it is rational to prefer (B & p) to (A & p). Perhaps also stipulate that the evidence for p consists in the fact that only one in one billion smokers quit smoking cigarettes before they die.

  2. Scott, maybe this one works, though I can’t quite tell. Do I prefer realistic hopes to unrealistic hopes? Sometimes I do, and sometimes I don’t. Then, again, if I were a smoker, would I prefer “not quitting but hoping to” to “not quitting and hoping not to”? Maybe I would… I just can’t quite tell. And maybe being ambivalent in this way is enough to show a problem with EPC. Interesting case!

  3. Just a pointer to some readings which provide a distinct angle on the question of the relevance of pragmatic considerations to epistemic ones: there is an interesting exchange between Crispin Wright and Martin Davies on epistemic entitlement and rational trust in the Proceedings of the Aristotelian Society 2004. Wright develops some notions of epistemic entitlement which, he purports, render certain kinds of trust rational. These notions of entitlement have pragmatic parameters in addition to the purely epistemic one of ‘tracking truth’ (or whatever your favorite locution is). It’s interesting stuff, and it’s especially interesting to contrast it with purely epistemic notions of entitlement such as Burge’s notion of ‘perceptual entitlement’ in the paper of that name (PPR 2003).

  4. It seems to me that EPC is subject to the counterexamples because we are allowing A and B to include variations in one’s total evidential state. If we were to reformulate the principle to disallow such variations, then I don’t think Jon’s type of counterexample would work.

    Here’s one way to reformulate the principle (call it EPC2):

    S is justified in believing p only if:
    for any X whose total evidence is E, and any states of affairs A and B consistent with X’s total evidence being E, X is rational to prefer (A & my total evidence is E) & p to (B & my total evidence is E) & p iff X is rational to prefer (A & my total evidence is E) to (B & my total evidence is E).

    Let E be Jon’s total evidence, which includes his evidence that he will not be healthy until he is 120. A, the case that he is justified in believing he will be healthy until he is 120, is not consistent with his total evidence being E. He can’t rationally prefer that to B, in which he is justified in believing he won’t be healthy until he is 120, without supposing his total evidence in the two cases would be different.

    I think EPC2 might be plausible as more than just an ad hoc fix. I’ve found that some very strange things can happen when we talk about the rationality of preferences without making sure we keep the background assumptions of all the preferences constant. In Jon’s case, for example, it seems reasonable to prefer A to B, because we suppose then that one’s evidential state varies in the two cases being compared, and we tacitly assume that the probability of p covaries well with that evidential state. That is, it’s reasonable for Jon to prefer A at least partly because, given A, there’s a good chance he’ll be healthy until he’s 120. But it seems reasonable to prefer (B & p) to (A & p) because we like truth and we are not allowing p’s probability to vary in the cases being compared. In both cases, the probability of being healthy to the age of 120 is nil, but at least in B&p one has a justified, true belief about it.

    EPC2 simply stipulates that we aren’t going to compare the rationality of two preferences unless one’s evidential state stays constant across all four states of affairs. It ensures we’re comparing apples to apples, rather than comparing the rationality of a preference given certain information (E) with the rationality of a preference not given that information.
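
    In the same sort of shorthand (a sketch only, writing “TE=E” for “my total evidence is E” and “>” for X’s rational preference), EPC2 comes to:

    EPC2: S is justified in believing p only if, for any X whose total evidence is E and any A and B consistent with X’s evidence being E, (A & TE=E & p > B & TE=E & p) iff (A & TE=E > B & TE=E).

    The recurring TE=E conjunct is what holds the evidential state fixed across all four states of affairs being compared.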

  5. Very nice, Chase, and you’re right that EPC2 blocks my original counterexamples. It doesn’t help with the hope example, however. One can be rational to prefer realistic hopes (though I’m not sure I actually do!). But suppose I’m more like that than I actually am. I rationally prefer, let’s say, being in my epistemic condition and hoping for eternal life to being in that condition and not hoping. But suppose I also justifiably believe that there is no afterlife. My rational preference for realistic hopes then undermines EPC2 as well as the original EPC.

    Note that it is important here that the proposition that there is no afterlife can’t be part of the total evidential condition. But that’s required to avoid trivializing the principle anyway.

  6. It looks to me like we can regenerate the problem using EPC2.
    Let my total evidence be E, let A be the proposition that I am going to win the Powerball this week, let B be the negation of A, and let p be the material conditional proposition that if A then my total evidence is not E.
    A and B are each consistent with my evidence being E (it is not logically impossible that I will win Powerball this week).
    I do prefer A to B, even conditioned on my total evidence being E. If Jon is right that we don’t prefer obvious inconsistencies to realistic appraisals, then I don’t prefer (A & my total evidence is E & p) to (B & my total evidence is E & p). So EPC2 tells me that I am not justified in believing p.
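    To spell the trick out: with p = (if A then my total evidence is not E), the first of the two compared conjunctions is A & (my total evidence is E) & (if A then my total evidence is not E), which is jointly unsatisfiable; so, on Jon’s point about inconsistencies, it is not preferred, even though A itself is preferred to B.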
    Now, it is not completely obvious to me that I am justified in believing p. I do, in fact, believe it, but I do not have much of an intuitive concept of a belief’s being justified. Still, it seems to me that this is just as good a counterexample as Jon’s was.

  7. I like your Powerball case, Jamie. Could it be avoided by restricting the quantification over states of affairs A and B so that they are consistent with the conjunction of p and one’s evidence being E?

    I don’t think that restriction would be ad hoc either. Let p be a contradiction and q be something consistent. I don’t think the failure to prefer p to q amounts to preferring q to p or being indifferent as to which of them is the case. Talk of preference appears to make most sense when one is comparing states of affairs that one takes to be possible.

    As for the fact that EPC2 doesn’t avoid counterexamples about having realistic hopes, I’m not sure what to say just yet. If I were going to defend EPC2 (or a version restricted to avoid the sort of counterexample Jamie raises), I’d try to find some difference in the possibilities taken to constitute relevant alternatives to actuality when I prefer being in my epistemic position with hope to being in it without hope and when I prefer realistic hopes to unrealistic ones. If I could find such a difference, I’d know how to modify EPC2 to handle it: stipulate that such differences don’t count. I haven’t been able to come up with any such differences, though.

    Another thought would be this: One might argue that it isn’t rational to be in the following situation:

    1. You have strong evidence against the afterlife.
    2. You prefer hoping for an afterlife in your epistemic position to not hoping for one in your epistemic position.
    3. You prefer realistic hopes to unrealistic ones.

    Owing to 1, you have strong reason to believe hoping for an afterlife in your epistemic position would be an instance of unrealistic hoping. So, by 3, you ought not to prefer hoping for an afterlife in your position to not hoping for one in your position. But that would contradict 2.

    If it’s not rational to be in that situation, then I’m not sure how serious a problem the possibility of the situation poses to EPC or EPC2. Can two preferences be such that both are rational individually, but it is irrational to have both of them?

  8. Chase, I think that we lose insight into what rational preferences are like if we disallow the hope case in the way you suggest. Think of the following. I prefer not to be in pain. I also prefer the state of being in pain and having it contribute to a better life in the future to the state of being in pain and having it not so contribute. I’ve had knee surgery and it hurts. If you make me drop one of the preferences on pain of irrationality, you’ve lost insight into the structure of rational preferences. In the very epistemic position I’m in, I still prefer both not to be in pain and to be in pain in the circumstances in which it makes for a better future. I don’t somehow lose these preferences when I find myself in pain (that I’m sure of), so the only thing one could argue is that retaining all the preferences is somehow irrational. But I don’t see why one should think that, except on the basis of thinking that it’s irrational to have preferences in a circumstance in which not all of them could be satisfied. But such a requirement is too strong, I think. For one thing, pain can hit in a number of places besides my knee and I’d prefer to be in a position where I don’t have to form new preferences in order to react appropriately when threatened by that angry dog!

    In general, the point of having standing preferences is to cut response time in an organism when negative consequences threaten. So standing preferences should be overridden by other preferences in situations where they conflict with what a person has strong evidence about. Positing such a structure to rational preferences gives a better understanding of the role they play than requiring that they come and go depending on whether there is conflict between them.

  9. Jon, I’ll need to think more about circumstances overriding standing preferences. I don’t think I’d want to deny what you say about that, but it might turn out that I think we have standing preferences that are extremely fine-grained in a way that others would deny.

    At any rate, I’m not sure I follow what’s going on in the case of your preference not to be in pain, so let me try to say why I think it’s different from hope cases that are counterexamples to EPC. Let me know if I’m misunderstanding something.

    You prefer (A) not being in pain to (B) being in pain. You also prefer what I’ll call (C) “being in pain for the better” to (D) being in pain not for the better. That all sounds fine to me; (C) and (D) are subcases of (B). Your knowing that you are in pain should not override your preference of (A) to (B).

    Now, the hope case. You have very strong evidence for p, the claim that there is no afterlife. You prefer (1) having hope for an afterlife in the absence of one to (2) lacking hope for an afterlife in the absence of one. Nevertheless, owing to a standing, general preference for realistic rather than unrealistic hopes, you do not prefer (3) hoping for an afterlife to (4) not hoping for an afterlife. This case has a different structure from the pain case, since (3) and (4) are not both subcases of (1) and (2).

    Here’s an argument for why it’s probably not rational to prefer (1) to (2) but not (3) to (4). I think it is probable that you prefer (5) hoping for an afterlife in the presence of one to (6) not hoping for an afterlife in the presence of one. If you prefer (1) to (2) and also (5) to (6), then hoping for an afterlife dominates not hoping for one: whether there is an afterlife or not, you’d prefer to hope for one. That is, you’d prefer (3) to (4) after all. I say it’s *probably* not rational to prefer (1) to (2) but not (3) to (4), since it may turn out that one can rationally prefer (1) to (2) without preferring (5) to (6).
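
    Set out as a small decision table (just to make the dominance structure visible; rows are the hoping options, columns the ways things might be):

                      afterlife      no afterlife
    hope              (5)            (1)
    don’t hope        (6)            (2)

    Preferring (5) to (6) and (1) to (2) is preferring the “hope” row in each column, and that column-wise preference is what delivers the overall preference for (3) over (4).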

    I actually think that’s all compatible with having a general preference for realistic hopes over unrealistic ones. This may be where you think I’m getting things wrong about standing preferences. I tend to think of preferences as existing in relation to a set of relevant alternatives to actuality. My guess is that one prefers realistic to unrealistic hopes in relation to one set of relevant alternatives but prefers hoping for an afterlife relative to a different set.

    All of this is reminding me of some issues I try to grapple with in a paper (“Rationality for Ostriches …”) on my web site. At the end of that paper, I take some tentative first steps toward understanding desire and preference as attitudes towards sets of relevant alternatives to actuality, where relevance varies contextually. I still don’t know how successful those steps were.

  10. Chase, yes, the pain example wasn’t meant to have the same structure as the hope example. It was meant to show why a hierarchical structure to preferences allows conflict between rational preferences. On the hope example, you did the example backwards from the way I did it: I prefer (2) to (1), and (3) to (4). That difference undermines the dominance argument you give in the next paragraph.

    Let me see if changing the example makes a difference for you. I prefer being filthy rich to not being filthy rich. I also prefer being a decent human being who is a philosopher to not being one. Given what I like to do and what options are open to me, I might realize that the only path to becoming filthy rich involves giving up philosophy and decency. So given a further preference for realistic appraisals of the future, I need either to abandon the first preference or be such that the second overrides the first. I prefer the second option here to the first: after all, you can have exceptionally good evidence that integrity has to go to become filthy rich and yet be wrong. And I want to be prepared… Of course, if I find out I’m wrong, then I’ll abandon the more fine-grained preference, but that is irrelevant to the question of what my actual preferences are and whether they are rational.

    I’m not sure I quite get the relevant alternatives point, but it sounds interesting. Do you think it will help in formulating a counterexample-free alternative to EPC?

  11. To all:

    I’m sorry not to comment on the various permutations of the counterexample to EPC, nor the more general issues, but I feel the need to correct a possible misunderstanding. Matt and I do not rely on EPC in our argument for a pragmatic condition on justification. Rather, we contend that EPC is FALSE. Our argument DEPENDS on the falsity of EPC. The reason that we think EPC is false is that, if true, it commits us to a rather thoroughgoing skepticism. Suppose I am justified in believing (p) that my hard drive has been successfully backed up. (My evidence: my new backup software has successfully backed up my work every night at 1:00 am for the entirety of the month since I’ve bought it.) There’s not much riding on it; I haven’t done a lot of work since my last backup, and there’s no important event tomorrow for which I’ll need any of my work. I prefer staying in bed and going to sleep to getting up and checking. I also prefer staying in bed to getting up and checking, both on the assumption of p. That is, what I prefer in fact is also what I prefer, on the assumption of p. But, consider another subject, S’, with exactly my evidence that the computer has been safely backed up. This subject, S’ has done a lot of work that day and desperately needs the file the next day. This subject, in fact, is rational to prefer (to staying in bed) getting up and checking to make sure the file is safely backed up. However, on the assumption that the file is safely backed up, S’ is rational to prefer staying in bed. That is, S’ is rational to prefer the state of affairs in which he gets up and checks to the state of affairs in which he just goes to sleep. But, he is not rational to prefer the state of affairs in which he checks and in which the file is safely backed up to the state of affairs in which he just goes to sleep and the file is safely backed up. Therefore, the consequent of EPC is not satisfied. And, therefore, if EPC is true, then I (in my low stakes context) am not — contra the hypothesis — justified in believing that the file is safely backed up.
    More to the point — if EPC is true, we can’t be justified in believing anything if our stakes in whether we’re right about p get raised to such an extent that what it’s rational to prefer changes. But this seems like it will be the case for almost all of our mundane, supposedly justified beliefs. Perhaps even ‘Here’s a hand’ will end up unjustified if EPC is true. Therefore, EPC is too implausible.
    Notice, also, that if EPC is true, it will not be much of a “pragmatic encroachment” into epistemology. A genuine pragmatic encroachment will have the following consequence: two subjects can have the same evidential (or, more generally, purely epistemic) standing to a proposition, but one can be justified and the other not, simply because, for one, the stakes are higher. EPC does not have this consequence. As far as EPC is concerned, if two subjects have the same amount of evidence (or strength of purely epistemic position), then if one isn’t justified because the stakes are too high, the other isn’t either.

  12. Jeremy, thanks for the clarification (it shows, as has been shown many times before, the dangers of relying on memory…). Suppose, though, that we construct the principle as follows:
    S is justified in believing p only if
    S is rational to prefer A&p to B&p iff S is rational to prefer A to B.

    In the example you use, you (in the low stakes context) prefer staying in bed to getting up and checking, and you prefer (staying in bed and the thing being backed up) to (checking and it being backed up). So you pass the test for justification. The other guy doesn’t, since he prefers checking to staying in bed, but he prefers (staying in bed and the thing being backed up) to (checking and the thing being backed up). So this principle has the following virtue: if you accept it, it can explain your example.
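
    Schematically (with Bed = staying in bed, Check = getting up and checking, p = the file is safely backed up, and “X > Y” for “is rational to prefer X to Y”):

    Low-stakes you: Bed > Check, and (Bed & p) > (Check & p); the biconditional holds.
    High-stakes S’: Check > Bed, but (Bed & p) > (Check & p); the biconditional fails, so the necessary condition for justification isn’t met.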

    But maybe you don’t accept this explanation either?

  13. I absolutely accept that explanation, Jon. Successful counterexamples to that principle will end up problematic for us.

  14. Jeremy, good, then I wasn’t as far off as I was worried I was! Note that the discussions above are all aimed at being counterexamples to that principle, since they all keep fixed the subject in question.

  15. Jon,

    Thanks for pointing out my mistake in getting your counterexample backwards. You prefer (2) lacking hope for an afterlife in the absence of one to (1) hoping in the absence of one, and you also prefer (3) hoping for an afterlife to (4) not hoping for an afterlife. Once we get this straight, the dominance argument does fail.

    I’d still consider falling back on this strategy for defending EPC2: Your preference for (3) to (4) is relative to a set of alternatives that is different from the set that is relevant to your preference of (2) to (1). In preferring (3) to (4), you are comparing your life in the nearest worlds where you have hope for an afterlife to your life in the nearest worlds where you do not. Your preference of (2) to (1), though, depends on your preference for realistic hopes, which involves a comparison of your life in the nearest worlds where you have realistic hopes to your life in the nearest worlds where your hopes are not particularly realistic. The defender of EPC2 (or EPC, I suspect) could simply insist that the two preferences compared in the consequent must be relative to comparisons of the same sets of alternative possibilities.

    I’m not all that sure how well this will actually work in saving EPC (or EPC2, or either one’s subject-relativized version). I would be surprised if the problem with the original principles is just that they fail to treat preferences as relative to a background of relevant alternatives. There might be other counterexamples this strategy won’t be able to answer.

  16. Here are a few comments in response to Jon’s post. (No fair to put up posts right before the Pacific APA!) Jeremy and I do defend principle PC (modulo several restrictions): if a subject is justified in believing that p (i.e., has justification adequate for knowledge that p), then for any states of affairs A and B, the subject is rational to prefer A to B given p iff the subject is rational to prefer A to B in fact. We read “S is rational to prefer A to B given p” as equivalent to “S is rational to prefer A&p to B&p”. It may help, when considering examples, though, to stick with the intuitive talk of “given p.”

    We do not, of course, just lay this principle down as an axiom. We defend it through a series of steps, the first and most fundamental of which is this: if you know that p, and you know that if p then act A is the best thing you can do, then you know that A is the best thing you can do; and if you know that A is the best thing you can do, you are rational to do A. These two claims, taken together, give us the following principle: if you know that p, and you know that if p then A is the best thing you can do, then you are rational to do A. Call that principle FM for Fantl/McGrath. In successive steps, we move beyond FM, by replacing talk of what is best to do with preferences between acts, then moving to preferences between states of affairs, and then on to claims about justification, rather than knowledge. I won’t rehash the paper.
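
    In schematic form (with “K” for “the subject knows that”), FM says roughly:

    FM: Kp & K(p -> A is the best thing you can do) => you are rational to do A.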

    A few things should be noted here. First, all we need is FM to undermine non-skeptical evidentialism, or so we believe. We can talk about this more, if people have doubts. There was, in effect, a discussion of it on Certain Doubts last summer. Jon’s examples do not refute FM because FM is only about acts. Second, in the paper, we are sensitive to concerns about the need to restrict principles like PC and even FM. Here is a bad piece of reasoning. “I’m going to live for at least another year; if I’m going to live for at least another year, then swimming in the Amazon next year is better than not (because I’d love to do that, so long as I’d live through it); so swimming in the Amazon next year is better than not; so I should do it.” This is bad because swimming in the Amazon next year can make a difference as to whether you live for another year. (I think Hawthorne has a similar example in his book.) This suggests we need to restrict FM to acts A such that the subject has no reason to think that doing A could make a difference as to whether p. In the paper, we thought of making a difference causally. But we are open to modifying it, if necessary. Perhaps “making a difference” should be understood more broadly in terms of probability-raising or lowering. (We don’t think this will affect the anti-evidentialist argument.)

    But what about Jon’s case? It doesn’t pose a problem for FM. But it might still pose one for PC. Jon knows he won’t live to 120 (and so is justified in believing this). He is rational to prefer J(~120), i.e., being justified in believing that he won’t live to 120, to J(120), given that ~120. But is he rational to prefer J(~120) to J(120) in fact? Jon says no, so he concludes PC is false.

    We could think of the example as Jon does: “I know that I won’t live to 120. But I am rational to prefer J(120) to J(~120), because were it the case that J(120), that would indicate or at least be good evidence that I will live to 120.”

    But we can also think of the example in the way we prefer to, as follows. "I know that I won’t live to 120. Am I rational to prefer J(120) to J(~120)? Well, I won’t live to 120, and it does no good to justifiably believe I will if I won’t. It just means I have a false belief. Consequently, I should prefer J(~120) to J(120)." Or compare a similar case. Schiavo’s parents know that their daughter will not recover tomorrow. Are they rational to prefer being told by her doctors that she will? They may think to themselves: "No, we aren’t rational to prefer being told this, because she won’t recover tomorrow and so being told she will is just being told something false."

    The easiest escape for us is to restrict PC to states of affairs A and B and propositions p such that whether A or B obtains is probabilistically independent of p. But we’re also inclined to think that the causal decision-theoretic approach has a good deal going for it. Think about it this way. Suppose a superneurologist gave you a choice between J(120) and J(~120)– she can create the right mental states either way–but told you that your choice could not affect whether you’ll live to 120. We take it that you ought to choose J(~120), even though choosing J(120) would be choosing the better news. In fact, there is no such superneurologist. But don’t you in fact have the preference that would be guiding your decision were there such a superneurologist?

  17. Matt–very nice comments here! They remind me of my flawed memory–I really should check the paper itself rather than relying on memory before commenting…

    Let’s take your last proposal, though: let A and B be probabilistically independent of p, and then claim that you’re justified in believing p only if you rationally prefer A&p to B&p iff you rationally prefer A to B.

    Then stipulate strong evidence against an afterlife, and let A be hoping for an afterlife and B be not hoping for an afterlife. These states of affairs are probabilistically independent of p (the claim that there is no afterlife), and I rationally prefer A to B. But I don’t rationally prefer A&p to B&p, given my stoical tendencies. Doesn’t that seem right to you?

  18. I don’t know, Jon. This actually doesn’t seem right to me. I presume the reason that your stoical tendencies lead you not to rationally prefer A&p to B&p is that you really don’t like hoping in vain. Therefore, you know that hoping in vain is worst for you. But, if you know this, then you know that, if p is true, then it is worse for you to hope for an afterlife than not to hope for one. And, because you know p (we’re stipulating), you must also know that it’s worst for you to hope for an afterlife.

    Do you disagree so far? If so, then it seems to me that you deny closure. If you don’t disagree so far, then you will agree that you know that it’s worst for you to hope for an afterlife. And, it seems clear to me, if you know that it’s worst for you to hope for an afterlife, it’s certainly not rational for you to hope for an afterlife.

    Now, perhaps you don’t think that whether it’s rational for you to hope for an afterlife has anything to do with preferring the state of affairs in which you hope for an afterlife. But, even there, given that I know there is no afterlife, I don’t at all prefer the state of affairs in which I hope there is an afterlife (given my stoical tendencies). After all, if I know that there is no afterlife, then I know that the state of affairs in which I hope for an afterlife will be worse for me. Why would I prefer what I know to be worse for me?

  19. Jon, I do not mean to be difficult, but… I do not take myself to know that there is no afterlife. So, I do not take myself to be justified in the relevant sense. I do not think you know this either. No offense! We could stipulate that some subject S knows this, but then I doubt intuitions are going to be clear.

    But maybe the example could be modified. Are you thinking of cases in which, although one knows that p, it is not irrational to hope that not-p?

  20. Matt and Jeremy, it may be that we’re talking past each other because you both talk about knowledge and the principle is about justification. And I know you think of justification in terms of knowledge, which I don’t, and that may be a source of my confusion. But let’s see…

    Jeremy, I balked at your third sentence, the one starting with therefore. It doesn’t follow from the previous ones. But even if I do know this (let’s assume I do), the relevant necessary condition is about rational preferences, not about what I know. What I’m objecting to is reading off conclusions about rational preferences from information about epistemic conditions. So when I get to your third paragraph, I think you don’t understand how standing preferences work. Even if I’ve got convincing evidence that there is no afterlife, that doesn’t rationally compel me to give up my preference for hoping that there is one. After all, I’m a fallibilist. If my evidence changes, I won’t have to acquire a new mental state to be the kind of person I swear to you now that I am. I’m a devotee of the afterlife. I’ll be that even if I know there isn’t one. If you say this preference is irrational, I’ll say no (perhaps a bit too defensively :-)). What would be irrational is combining my stoical tendencies with such knowledge and preferring to hope for there being an afterlife given that there isn’t one. Now that’s irrational. But I’m not. I’m really not. 🙂

    OK, so Matt is right. I actually think belief in an afterlife is justified for some people, including me. So we are talking hypotheticals only. I picked the example, though, because I expect most of the readers here to take themselves to be in the position that is only hypothetical for Matt and me. So what would be a good example for Matt, or for me? The Schiavo case is a good one, I think. I take myself to be justified in believing that she’s in a permanent vegetative state. I think that rational parents in such a situation would prefer two things: that she live rather than die, but that she die given that she’s in a permanent vegetative state (and given that she wouldn’t want to receive the treatment necessary to keep her alive in such a condition). I know that’s what I would prefer. And if you tried to argue me out of the first preference, I’d say you don’t have the resources to explain adequately the precise nature of my laments upon her death.

  21. I never thought you were irrational, Jon. But I’m really not sure at what point you think the argument goes astray. I agree that sometimes, when you prefer A and p to B and p, you don’t know that B and p will be worst for you. But, in your specific counterexample, to the extent I agree that I prefer not hoping and there being no afterlife to hoping and there being no afterlife, it’s because I know that it will be worst for me if I hope and there is no afterlife. And if I that it will be worst for me if I hope and there is no afterlife, then I know that, if there is no afterlife, it will be worst for me to hope. But I also know that there is no afterlife. If I don’t know this, then it’s no counterexample at all. (At least, the sense of justification we’re using is the sense of justification according to which, when you are justified, you’re in a position to know. As long as the relevant instance of closure is ok, then, the argument will work.) Therefore, I know that it will be worst for me if I hope. I still am not sure if you are questioning this closure step or if you’re questioning the next step, to the claim that I am rational to prefer not hoping. That is, if I know that it is worst for me if I hope, it seems very strange to say that I prefer it if I hope. At least, there is a sense of rational preference according to which this is true. We’re still, of course, open to calculating the rationally preferable state of affairs in the standard way. But, if it turns out that this calculation has the result that I am rational to prefer B to A, then it will have to be the case that I don’t know that A is better than B. And all of this follows from the absurdity of saying, “I know that A will be worst for me, but I prefer A to B.”

    There might be other counterexamples, but I would have to evaluate those on a case by case basis. In this particular counterexample, I don’t agree that I should prefer hoping to not hoping in fact, because I feel like I know that hoping is worst for me, and I know this because I know that there is no afterlife, and I know that hoping in vain is out of the question (given my stoical nature).

  22. Jeremy, don’t worry, I was joking about the irrationality charge. And besides, there’s at least some chance that I *am* irrational…:-)

    Here’s the sentence that I think is false in the argument you give above:

    And if I that it will be worst for me if I hope and there is no afterlife, then I know that, if there is no afterlife, it will be worst for me to hope.

    The antecedent here is true (suitably corrected), but the consequent is false. I rationally prefer not to hope and there be no afterlife. But that doesn’t imply that if there is no afterlife, it is better not to hope for an afterlife. It implies that if I know there is no afterlife, then my inner mental life is structured in such a way (given that I’m being rational) that actions depending on these features answer to the overriding conditions and not to the hope in question.

  23. Jon, I wonder if it would help here to distinguish value from expected value. I agree that, for her parents, the value of Schiavo living is higher than the value of her dying. But for her parents the expected value of her living is rather low, I think, because the probability of her living in a non-vegetative state is extremely small.

    I know decision theorists sometimes speak of worlds as having value and not only expected value. The expected value of states of affairs or propositions, in the end, is explained in terms of the value simpliciter of worlds. I would think that one might be able to extend this notion of the value of a world to apply to states of affairs. (We will quickly get ourselves into tricky issues about intrinsic value and its relation to final value.)

    We mean our principle PC to apply to the expected value of states of affairs, not their value simpliciter. That’s what we have in mind by talk of ‘rational to prefer’.
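
    In the standard formula, just to fix ideas:

    EV(A) = the sum over worlds w of P(w | A) x V(w),

    where V assigns values to whole worlds. In the Schiavo case the worlds in which she lives and recovers get high V from her parents’ point of view but almost no probability, which is why the expected value of her living can be low even though, for them, the value of her living exceeds the value of her dying.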

    Does that help a bit?

  24. Jon, I still find myself confused, and I’m beginning to think that I’m at fault. Here’s the state of affairs that will be worst for me: no afterlife + hope. You grant that, I guess, because that’s the antecedent from above. Now I find out that there is no afterlife. So, I know that there is only one way to avoid that (worst) state of affairs, and that’s by not hoping. So, I know that, if I do hope, I will not avoid that worst state of affairs. And that’s the consequent. If there were some probabilistic relation between hoping and there being an afterlife, things might be different. But we are supposing there is not. What am I missing?

  25. Jeremy, I think if you symbolize the relationships here, you’ll see that I’m denying the use of certain first-order rules within the scope of a rational preferability operator (and maybe this is a mistake I’m making). Here are the details:
    ~RP(~A&H) to its opposite.
    RP(H) to its opposite.
    Given: ~A.
    Therefore, ~RP(H).

    So the question is how to get ~RP(H) to follow from the first and third claims here. One way is to allow &-E within the scope of the operator in the first premise when one of the claims is true (or given). That’s the rule that I’m denying here, and because I deny it, I claim there’s no inconsistency between the first and third premises and the second. Do you think this rule is a good one?
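
    Put schematically (this is just my gloss, writing “RP(X over Y)” for “rational to prefer X to Y”), the rule would let one drop a given conjunct inside the scope of the operator, negated or not:

    If p is given, then from (~)RP(p & X over p & Y) infer (~)RP(X over Y).

    With p = ~A, that takes the first premise to ~RP(H over ~H), the denial of the second.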

  26. At the risk of stepping into a horrible decision-theoretic trap, I’m going to say that I do like this rule, assuming that it’s appropriately restricted and that “Given: ~A” means that ~A is known. One problem with denying the rule is that, if you reason out of accord with the rule, it looks like you can fall victim to a Dutch book. After all, if you prefer H to ~H, then presumably you will pay a little bit if I could help you achieve H. But, once I do, ~A&H will be the case, and you will know that this is so. But you prefer ~A&~H to that, so presumably you will pay me just a little bit to make it the case that ~A&~H. But now ~H is once again the case, and you prefer H to that. This makes you a money pump.
    I know that Dutch book arguments aren’t all that convincing, but the general worry, it seems to me, is that by denying the rule, you end up saying that you are rational to prefer the state of affairs that you know will, if it obtains, result in the state of affairs that you prefer least of all.
    Perhaps there is some ambiguity between a notion of rational preference abstracted from what people in fact know to be the case, and a notion of rational preference in fact. We’re definitely talking about the latter. But I also don’t want this discussion to get too bogged down in details of PC. All that is needed for pragmatic encroachment is that what Matt labeled "FM" be true. Once the discussion turns to stronger propositions (like PC), there might be other issues involved: what is meant by "given p", whether there is ambiguity in the notion of rational preference, etc. But none of these issues touch FM, and so none of them seem to touch the core of the case for pragmatic encroachment.

  27. Jeremy, this is very nice; it at least shows what I’m committed to in order to deny the principle. Here’s what I think, though. First, you can’t help me be hopeful for an afterlife, so I can’t imagine what I’d be betting on or paying for. And being a money pump in this way just means that I shouldn’t pay you in such cases. It doesn’t show that my preferences are irrational.

    More helpful, however, is to sketch the picture of what I think happens in such cases to preferences. Here’s the key principle I think is false: Kp & K(p->A is better than B) => Rational to prefer A to B. What follows from the two knowledge assumptions, I claim, is that it is rational to prefer A&p to B&p. What learning p does to the mental life of a person is to put in place an overriding structure among their mental states (if they are rational); it doesn’t need to eliminate some of the mental states in question. Prior to learning p, my preference for B over A might override my preference for A&p over B&p. After learning p, this overriding structure changes.

    So the question is whether to prefer this picture to the one you cite (and, as you might expect, I’m unmoved by arguments that require that I think of rational preferences in terms of utilities or expected utilities–I assess these proposals by considering particular cases that I’m very confident about… cases such as this one, for instance). Here’s why I think my picture is better–it explains my reactions when various possible futures develop better than the alternative that has me dropping a preference on pain of irrationality.

  28. Jon, one last comment concerning the "key principle" you mention. Let’s just assume a restriction on A/B/p such that whether A or B obtains is suitably independent of p (probabilistically, causally, or what have you). Assuming the closure of knowledge under modus ponens, from 1 and 2:

    1. S knows p
    2. S knows that if p then A is better than B,

    one can infer:

    3. S knows that A is better than B.

    The issue is then whether from 3 we can infer:

    4. S is rational to prefer A to B.

    If so, we’re in good shape. If not, we’re not.
    That’s how the logical situation looks from here! Thanks for an interesting discussion.

  29. Matt, yes, I agree that this is the logical structure of the issue (and I’m glad you posted this comment about our emails, since it is so helpful to others following the discussion), and that the key is whether the knowledge claims in question allow one to derive (4) from (3). On one side of the issue, (4) seems such a natural claim to say follows from (3). On the other side are the examples, and the only way to read them as I do is to say that the account of rational preference is going to "carry the assumptions" on which (3) depends into the preferences in question (i.e., since A’s being better than B depends on p, the account of rational preference need only claim that what is preferable is A&p over B&p).

  30. Jon, I guess I just haven’t been convinced by the examples. I don’t, for example, have the intuition that I am rational to prefer hoping. This is because I know that hoping will have the worst consequences for me. You seem to be interpreting 3 as a claim about rational preference. But it’s not. It’s the claim that A will have better overall consequences in fact than B. Certainly one could know this without knowing whether p is in fact the case. The fact that I can also learn it by knowing p doesn’t seem all that relevant.
    It seems quite clear in the case of action. When A and B are actions, I don’t know how 3 "carries the assumption" of p with it. (I’m not even sure what this means.) We don’t normally think that closure results in knowledge that "carries the assumption" with it. We just think we get to know the consequent. All 3 says is that doing A will in fact have better overall consequences than B.
    So, I can know that doing A will in fact have better overall consequences than B. When you agree to this and deny 4, it seems like you are committed to saying something like, "I know that doing A is better than B — doing B is worse than A — but I should do B, not A." And I suspect that there are going to be cases in which it gets even worse, in which you may end up committed to saying things like, "I know that doing B is the worst thing I can do — but I should do B."
    This all may hinge on whether there is sense to be made out of 3 “carrying the assumption” of p with it. But, because 3 is not a claim about rational preference but about the in fact consequences of actions or the in fact ramifications of states of affairs being the case, I’m not sure how sense can be made of it.

  31. Jeremy, no, I’m not interpreting (3) as about rational preference. The claim about carrying an assumption is about whether (4) follows from (3), which I (have to) claim it doesn’t. (On the assumption-carrying metaphor: think of Lemmon-style proof systems that track assumptions in the left margin–then any line of a good proof expresses the claim that if the assumptions are correct then the content of the line is true. My suggestion is that rational preferences might sometimes work like that, so if the knowledge claims depend on p, then the rational preference isn’t for A over B, but for A&p over B&p.) Also note that A and B are states of affairs, so talk about doing A rather than doing B is ill-formed.

    Bottom line for me is that I think Matt has outlined the logical structure of the issue very plainly. It makes clear what any counterexample must do: it must make it plausible to deny that, in that case, (4) is true even though (3) is. Reaching the point of seeing that as the central point of contention is a nice accomplishment, since it clarifies what is at stake and the burden that any purported counterexample must carry; after all, the plausible initial reaction to the suggested inference is that it looks pretty good.
