An Epistemic Newcomb Problem

Suppose that there is a predictor – let us call him Peter – who is an astoundingly reliable predictor of what beliefs human beings will form. He is not infallible, but has in fact never made an incorrect prediction about what belief a human being will form. On tens of thousands of occasions, he has predicted what belief a human being will form, and on every single one of these occasions, his prediction is correct.

Now suppose that you know about Peter’s track record. Suppose that you also know that Peter has filled a big jar with a number of coins, and has made a prediction about what belief (if any) you will form about whether there is an odd or even number of coins in the jar, and then acted on his prediction as follows:

  • If he predicted that you would believe that there is an odd number, he made sure that there was an odd number of coins in the jar;
  • If he predicted that you would form the belief that there is an even number, he made sure that there was an even number of coins in the jar;
  • If he predicted that you would not form any belief about the number of coins in the jar, then he filled the jar randomly, not caring whether there was an odd or an even number of coins in it.

In this case, I claim, whatever belief you form about the number of coins in the jar, it will be rational for you to form that belief. If you form the belief that there is an odd number, that belief is very probably true, and so it is rational for you to form this belief; and if you form the belief that there is an even number, that belief is also very probably true, and so it is rational for you to form that belief too.

If my claim is true, then it seems to show that the correct theory of rational belief is more like Evidential Decision Theory (EDT) than like Causal Decision Theory (CDT): The relevant probability that should guide one in rationally forming one’s beliefs with respect to a given proposition p is not the unconditional probability of p given the evidence, but the conditional probability that p has on the assumption that one believes it.

This also explains the following striking fact: Even though it might be highly probable on my evidence that It’s raining but I don’t believe it’s raining, this is not a belief that it is rational for me to form, because the conditional probability of this proposition on the assumption that I believe it is extremely low.
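
To put the suggestion a little more formally (this is only a rough schematic rendering, on a natural way of filling in the details, with E standing for one’s total evidence, Bp for the state of believing p, and the numbers merely approximate): in the jar case,

\[
\Pr(\mathrm{odd} \mid B(\mathrm{odd}) \wedge E) \approx 1
\quad\text{and}\quad
\Pr(\mathrm{even} \mid B(\mathrm{even}) \wedge E) \approx 1,
\quad\text{although}\quad
\Pr(\mathrm{odd} \mid E) \approx \Pr(\mathrm{even} \mid E) \approx 0.5,
\]

while in the Moorean case, writing r for “it’s raining”,

\[
\Pr\big(r \wedge \neg Br \mid B(r \wedge \neg Br) \wedge E\big) \approx 0,
\quad\text{even if}\quad
\Pr(r \wedge \neg Br \mid E) \text{ is high.}
\]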


Comments

  1. “If you form the belief that there is an odd number, that belief is very probably true, and so it is rational for you to form this belief”

    I’m not sure how that follows. What’s rational to believe seems to me to have something to do with what is probably true *before* you form the belief, not afterwards.

    (Compare: the genie tells you that if you can make yourself believe p now on no evidence, it will become true in a few minutes’ time.)

    If you were right, what it’s rational to believe would be pretty radically separable from what your evidence supports, since it’s hard to see how the evidence supports either an odd or an even verdict (still less both) in your case.

    I wonder if this might turn out to be a case where believing that there is an even number of coins is a piece of epistemically motivated epistemic irrationality.

    Incidentally, what happens if the predictor thinks you will – inconsistently – form both beliefs? I’m guessing that in this case he’d revert to randomness, as in the no-belief case. In any case, we can be pretty sure that, even if the situation rationalizes belief in odd and belief in even, it does not rationalize belief in the conjunction. But the thought that the conjunction is not made rational puts some pressure, doesn’t it, on the thought that each conjunct is?

  2. Carrie — I should clarify: I wasn’t claiming that it “follows” that it is rational to form the belief in question. I was just reporting my (admittedly controversial) intuition about the case.

    If my intuition is right, then we must reject your claim that “what’s rational to believe seems … to have something to do with what is probably true *before* you form the belief, not afterwards.” (To be precise, though, even *before* one forms the belief, it must still be true on my view that the proposition in question has a high enough conditional probability on the assumption that one believes it.)

    I would also reject your claim that on my view, “what it’s rational to believe would be pretty radically separable from what your evidence supports, since it’s hard to see how the evidence supports either an odd or an even verdict (still less both) in your case.” On the contrary, on my view it is precisely one’s *evidence* about the predictor’s reliability (and about the setup of the situation) that makes it rational for one to believe the odd verdict, and rational for one to believe the even verdict, in this case.

    Obviously, I will have to deny that the fact that it is rational for one to believe p and also rational for one to believe q implies that it is rational for one to believe ‘p & q’. But I don’t think that there’s a problem with denying this. Multi-premise closure is surely a highly questionable principle for rational belief: after all, multi-premise inferences like conjunction-introduction typically lead to a conclusion that is *less* probable than either of the premises!
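
    For a toy illustration of that last point (with made-up numbers, and assuming the two premises are probabilistically independent):

      \[
      \Pr(p) = \Pr(q) = 0.9 \;\Longrightarrow\; \Pr(p \wedge q) = 0.9 \times 0.9 = 0.81 < 0.9,
      \quad\text{and in general}\quad
      \Pr(p \wedge q) \le \min\{\Pr(p), \Pr(q)\}.
      \]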

  3. P.S. In order to defend the view that I outlined in this post, I wouldn’t have to reject multi-premise closure about ex post attributions of rational belief — only about ex ante attributions. (The difference here is that the ex post attribution ‘She rationally believes p’ implies that she does believe p, whereas the ex ante attribution ‘It is rational for her to believe p’ does not imply that she believes p at all.) Multi-premise closure about the ex ante attributions seems to me even less well motivated than about ex post attributions.

  4. “I should clarify: I wasn’t claiming that it ‘follows’ that it is rational to form the belief in question. I was just reporting my (admittedly controversial) intuition about the case.”

    You offer three propositions that one might believe in the case you describe. I don’t think the question is whether it is rational to form the belief that p. Forming the belief that p is rational in the case you describe (i.e., it has the highest expected payoff for those who value having true or justified beliefs). But no proposition p is itself evidentially any better off than ~p for any proposition on offer prior to the formation of the belief that p (or the belief that ~p). So antecedent to forming any one of the beliefs one might form in the case you describe, there is no proposition that has evidential priority. The evidence for any p prior to our choice to believe it gives us no reason to believe it rather than ~p. But then we could not rationally choose to believe p on the evidence available prior to believing it (of course, we could rationally choose to believe it on the expected payoff for believing it). We should remain agnostic on p given the evidence available. I think that’s the idea.

  5. Ralph,

    If you are asking for intuitions, mine are that it is not rational to believe either p or not p. Until I form the belief that p, I have no special reason to believe it will be true. As Mike indicates, it is no more likely true than false. The fact that I have other evidence to show that if I believe p, p will be true, is not itself evidence that p is true, absent my belief in its truth. So I’m not really persuaded by the response to Carrie regarding evidence.

  6. I agree with Mike of course that in this case, there are no *compelling* reasons to believe p. One could equally rationally respond to one’s evidence by believing ~p instead. But I claim that in this case there are *sufficient* reasons for believing p. Antecedently to one’s believing p, one knows that on the assumption that one believes p, p is very probably true. If one responds directly and spontaneously to this fact about one’s evidence by forming the belief in p, then I claim that this belief is rational.

    Moreover, even if (like Mark) you reject my intuition about this case, it’s surely an advantage of this view that it can explain why it’s irrational to form the belief “It’s raining but I don’t believe it’s raining”, which might after all be highly probable on one’s evidence.

  7. Ralph,

    Supposing your intuitions are right contra causal decision theory, it seems you are left with other (I think, more controversial) cases that do not involve the value of having true beliefs. Suppose it were a part of your example that forming the belief that the number of items is odd is extremely highly correlated with not dying young. Assuming EDT and the value of not dying young, it would be more rational to believe that the number of items is odd than that it is even. For all the reasons that make one-boxing strange (and all of the lessons from Gibbard and Harper’s common cause cases), this seems a strange conclusion.

  8. For what it’s worth, my initial intuitions were like Ralph’s, but on reflection they are bothering me. I think for this reason: suppose I am the one Peter makes a prediction about, and we know this. Now Ralph and I each consider whether P. We have the same information. So how could it be that Ralph has no reason to believe that P, while I have?

  9. I wasn’t claiming that the right theory of rational belief just *is* EDT, nor was I claiming that the right theory of rational decision is EDT. I basically agree with Tom Kelly’s criticisms of all attempts to reduce epistemic rationality to instrumental or practical rationality. Moreover, I reject EDT as it stands, since I agree with you that two-boxing is the right choice in the original Newcomb problem; in fact, I am very sympathetic to some of the recent critics of CDT (such as Andy Egan) who argue that neither CDT nor EDT can be the correct account of rational choice. All that I was suggesting in my post was that the correct theory of rational belief *resembles* EDT in this one respect, that the relevant probabilities that should guide one’s beliefs are conditional probabilities (conditional on the assumption that one holds the belief in question).

  10. Ralph,
    I think it might be significant to distinguish the rationality of forming the belief from the rationality of holding the belief. As you say, if on presentation of the case you just find yourself believing ODD, this belief is highly probable given your evidence, so you could say it is rationally held. Furthermore, you could reflectively endorse it for the same reason. But was it rationally formed? Well, the belief ODD, if spontaneously acquired, wasn’t formed by reasoning on the evidence; and for anyone who doesn’t just find themselves believing ODD or EVEN, reasoning could only lead them to continue with no belief about the parity of the number of coins, a suspension of judgment which they would also find themselves reflectively endorsing. However, we might then go on to say that the other beliefs (believing ODD or EVEN) were irrational, because they are of the kind whose rationality requires not only that they are probable on the evidence once believed but also that they are formed by reasoning about the case (unlike, for example, perceptual beliefs).
    I wonder if you shouldn’t put this in terms of a necessary rather than a sufficient condition on rational belief, that is, say only that if Bp is rational then P(p|Bp) is high. The support for your idea from the case of Moorean paradoxical belief rests on its being a necessary rather than a sufficient condition.

  11. “All that I was suggesting in my post was that [on] the correct theory of rational belief … the relevant probabilities that should guide one’s beliefs are conditional probabilities (conditional on the assumption that one holds the belief in question).”

    But I thought that was what I said, too. Where am I going wrong? Let p be the proposition that the number of items is odd, and let Bp be the belief that p. Let p* be the proposition that the number of items is even. We conditionalize on my having each belief and (presumably) take into consideration the value of having each belief. The value V in this case is both the value of true belief and the value of not dying young. So, [Pr(p | Bp) x V(Bp)] > [Pr(p* | Bp*) x V(Bp*)]. In your original case we had [Pr(p | Bp) x V(Bp)] = [Pr(p* | Bp*) x V(Bp*)]. But in your original case the V(Bp) is just the value of true belief. Isn’t that about right, or did I miss something in your initial argument?
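
    Just to display the structure of the comparison with purely illustrative numbers (the values themselves are arbitrary): suppose

      \[
      \Pr(p \mid Bp) = \Pr(p^{*} \mid Bp^{*}) = 0.99,
      \qquad
      V(Bp) = 1 + 100 = 101 \;\text{(true belief plus not dying young)},
      \qquad
      V(Bp^{*}) = 1.
      \]

    Then 0.99 x 101 > 0.99 x 1, so on this way of reading it, believing that the number is odd comes out as the thing to believe.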

  12. I think the following is a remark along the lines of Jamie’s worry. It’s been known for quite a while that belief itself can both create and destroy evidence for a given claim. So, it is possible to have adequate evidence for p but have the addition of the belief that p change one’s epistemic situation so that it is not rational to believe p even though the totality of one’s evidence, before adding the belief, confirmed p. And it is possible to lack adequate evidence for p and yet the addition of the belief itself create evidence so that it is rational to believe p after it is added.

    The first thing to do in response to these points is to distinguish between propositional rationality and doxastic rationality. If we think about the latter, and about the rationality of holding the belief once formed, then the view looks a bit better, I think, though I expect you’ll need to insist that the belief is based at least in part on the information about the predictor. And presumably the conditional probability will involve the unspecified background knowledge as well in the usual fashion.

    Even so, the points in the first paragraph will still retain their power to show that the fundamental notion of rationality is about what the evidence shows, with the doxastic notion definable in terms of the more fundamental notion, since the takeaway from the first paragraph is that belief itself can alter the totality of evidence.

  13. OK. Here are some very brief replies to these excellent comments from Jamie, Nick, Mike, and Jonathan.

    1. Jamie. In a word, the crucial difference is the “essential indexical”. Suppose that I know that Peter has made a prediction about what belief *Ralph Wedgwood* will form about the number of coins in the jar, and has then filled the jar accordingly; but as a result of amnesia I have completely forgotten that *I* am RW. Then I certainly don’t think that it would be rational for me to form the belief that there is an odd number of coins in the jar. These essentially “first-personal” phenomena are hard to explain, but there is no doubt that they exist.

    2. Nick. You’re absolutely right, as it seems to me, that it is of the greatest importance to distinguish between forming and holding beliefs, and that it is plausible that different principles of rationality apply to these two different phenomena. However, I don’t think that this point undermines my suggestion. Consider Buridan’s ass. The capacity for reasoning alone won’t get it to decide to head towards the bale of hay on the Left (rather than the bale of hay on the Right). Something more is required — some essentially random leaning towards one side or the other. But if the ass does decide to head towards the Left, it can do so as a result of rational reasoning. The same is true of belief, or so I claim.

    3. Mike. You’re still assuming, so far as I can see, that I’m identifying rational belief with some notion of what maximizes some sort of (evidentially) expected value. But (citing Tom Kelly) I have explicitly denied that I accept any such identification. So I am somewhat puzzled by your insistence that I have to interpret rational belief in any such way.

    4. Jonathan. Actually, my view is the inverse of yours: I think that doxastic (ex post) rationality is more fundamental than propositional (ex ante) rationality. Roughly, a proposition p has propositional (ex ante) rationality for a given believer at a given time if and only if (and because) there is a possible course of reasoning that would take the believer from his overall state of mind at that time to a doxastically (ex post) rational belief in p. I’m afraid to say that I completely fail to see how the points that you make in your comments cast any doubt on this view of the matter.

  14. Ralph,

    I haven’t insisted on interpreting you in any particular way. Indeed I’m sure I’ve invited your corrections on my interpretation. In fact, I’m pretty sure I did so twice in my last post. So tell me where I’ve said something in my example that differs importantly from what you said in your initial example. I urge you to be specific about the error, so I can see the problem. I’ve tried to base my example on nothing more than what you’ve already said.

  15. Ralph,

    Thanks for your response. Two thoughts:

    1. I don’t think this looks sufficiently analogous to the well-known counterexamples to multi-premise closure for me not to feel concerned. These are cases where each conjunct has enough probabilistic support to count as rational yet the conjunction does not. In your case, p and not-p are both supposed to have plenty of support and yet their conjunction – a conjunction of just two things each of which has plenty of support – has no support at all. That doesn’t look good. Compare the preface paradox. My credence in ‘Everything in the book is true’ should be pretty low, granted; but my credence in ‘The first two things in the book are true’ had better be pretty high!

    2. From the description of the case, before you form any belief as to whether p (and assuming you also don’t have evidence that you are going to decide one way or the other), your credence in p and your credence in not-p should be the same. Differential credence here looks irrational. So both credences had better be below 0.5. Now, at what point does anything happen which differentially affects your rational credence in p and that in not-p? If the answer is (as it seems to me to be): ‘never’, how can it be rational to stop having the same credence in each of p and not-p?

  16. Hi, Ralph.

    I don’t want to sidetrack the discussion, but I’d like to broach the possibility that no one understands the set-up of your problem well enough to venture a justified answer to it: the set-up conceals an infinite regress that makes it incomprehensible, at least to any finite mind. Garnett Wilson and I argued for this claim in regard to the original Newcomb problem:

    http://ace.acadiau.ca/arts/phil/faculty_and_staff/Maitzen&Wilson_Newcomb.pdf

    Nobody else (as far as we know) accepts our argument, so you’ll be in good company if you don’t either. But we have yet to be shown what’s wrong with it.

    Best,

    Steve

  17. I’m joining late, and reading backward. This thread is an interesting way into the question of whether there is such a thing as a pure epistemic causal paradox. Some short remarks:

    Carrie:

    “how can it be rational to stop having the same credence in each of p and not-p?”

    When the agent is partially ignorant of the parameter, p, these values can come apart. So, an agent may have a belief that p of .4, say, and a belief that not-p of .5. The measure won’t be additive, but there are plenty of models that are sub-additive. Dempster-Shafer theory is one well-known example.
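
    To make the sub-additivity concrete, here is a minimal Dempster-Shafer sketch with the numbers above (chosen purely for illustration), where the mass the agent cannot commit to either answer sits on the whole frame:

      \[
      m(\{\mathrm{odd}\}) = 0.4, \quad m(\{\mathrm{even}\}) = 0.5, \quad m(\{\mathrm{odd},\mathrm{even}\}) = 0.1,
      \]
      \[
      \text{so}\quad \mathrm{Bel}(\mathrm{odd}) = 0.4, \quad \mathrm{Bel}(\mathrm{even}) = 0.5, \quad \mathrm{Bel}(\mathrm{odd}) + \mathrm{Bel}(\mathrm{even}) = 0.9 < 1.
      \]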

    I’m sympathetic to Jon Kvanvig’s remarks, particularly about the need to build in (or explicitly account for) the information the agent has about the predictor, Peter. I’m puzzled why that was swatted away so quickly. The Peter example is a strange case from which to draw conclusions about principles for rational belief, since the types of causal intervention we are able to make by belief formation do not include the one described in the example.

    Ralph, have you looked at Adam Morton’s example? And, if so, do you find it persuasive? I.e., do you find his one-coin/two-coin thought experiment to describe a pure epistemic causal paradox?

  18. Ralph, there are lots of counterexamples to what you propose in response to my comment, but I’ll just give one here. Let p be the claim that I have never considered the claim that twelve squared is 144. The totality of my evidence could yield justification for that claim, but of course not if I believe it already. And if I reason to this claim from the evidence I have for it, the addition of the belief will undermine the evidence for it.

  19. Hi Greg,

    Thanks for that. Can you tell me a bit more about what you mean by cases where “the agent is partially ignorant of the parameter, p”? Is Ralph’s case like this?

    (For clarity, I wasn’t suggesting that we should generally have the same credence in p as not-p; just that in Ralph’s case each is equally well-supported for us so I don’t see how it could be rational to assign different values.)

    Hi Carrie,

    You are right about symmetry in Ralph’s example: in this case, it would seem that the agent’s expectations for p and for not-p should be symmetric. (Hmm; so much for my reading back to front…) This is not always the case, however, which would be a common-sense point for a traditional epistemologist but a bone of contention for an orthodox Bayesian: the calculus imposes structural properties on “rational belief” that are neither reasonable nor have much to do with belief. We have known this for a long time, but there is no longer a good reason to put up with these defects, given the theory of imprecise probabilities. I mean, there are new defects for us to chart and complain about!

    But one of the things the theory of imprecise probabilities gets right, I think, is that it distinguishes between uncertainty and ignorance. And this turns out to make the framework much more supple. There are cases where the agent may have a belief that p, Bel(p), which is neither equal to 1 - Bel(not-p) nor symmetric with Bel(not-p), and the theory can represent this. For example, if your degree of ignorance about p is 0.2, it doesn’t follow that your degree of ignorance about not-p must also be 0.2: you could have better evidence for your assessment of p, and have almost no evidence at all for not-p. Imprecise probabilities allow you to represent this difference in your evidence w.r.t. p and its complement.

    Perhaps a more familiar example is the wish to distinguish between the case where there is a lot of evidence that a coin is fair, and so a lot of evidence that the frequency of heads is 0.5, and the case where we don’t know anything at all about the coin, where a Bayesian would also assign 0.5. On IP, you’d assign [0,1] to the second case to represent your maximum degree of ignorance about the coin.
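
    One way to put the contrast, using one common convention on which the width of the interval measures ignorance:

      \[
      \text{well-tested fair coin:}\quad \Pr(\mathrm{heads}) \in [0.5, 0.5], \quad \text{ignorance} = 0.5 - 0.5 = 0;
      \]
      \[
      \text{completely unknown coin:}\quad \Pr(\mathrm{heads}) \in [0, 1], \quad \text{ignorance} = 1 - 0 = 1.
      \]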

  21. Ralph: Let me try expanding my remark. You wrote that:

    “The relevant probability that should guide one in rationally forming one’s beliefs with respect to a given proposition p is not the unconditional probability of p given the evidence, but the conditional probability that p has on the assumption that one believes it.”

    I’m not sure that the additional assumption, that the agent believes p, is telling. Rather, the agent may be viewed as being in an experimental setting whereby his having a belief about a parameter, the parity of the number of coins, intervenes on the outcome of p. This cuts through quite a lot of the philosophy, but it seems to me that this is the basic mechanism underlying the agent’s assessment of p.

    Now, the event of believing p is important only because the agent also accepts that he is in a situation where so-believing has an effect on the outcome of p. His believing p is an intervention on the assignment of the parity of p. There doesn’t seem to be anything special about the conditional probability of p given that he believes p; he accepts that this probability is 1 in virtue of accepting the story about Peter. Probability isn’t doing any special work here; this is just one way to represent this piece of evidence about the uncertainty mechanism underlying assessments of p.

    Notice that this point creates some problems for the claim that each of the possible beliefs about p is rational, i.e., that it doesn’t matter whether the agent (i) believed p = odd, (ii) believed p = even, or (iii) suspended judgment about p.

    I don’t think (iii) would be a reasonable option for the agent, given the back story, and assuming that the agent accepts the part of the story about Peter. The agent accepts that his forming a belief about p will fix the value of p. So, why would he then suspend judgment about p? I thought this was the idea behind Jon’s point; and it seems to go to the heart of the matter.

  22. Loads of interesting points here. For now, I’ll just respond to Jonathan’s alleged counterexample. I just deny that it is a counterexample. In my view, it would never be rational for me to believe that I have never considered the proposition that twelve squared is 144. Of course, it might be highly probable given my evidence that I have never considered this proposition; but according to me, that needn’t make it true that it is rational for me to believe that I’ve never considered this proposition.

  23. The question isn’t whether it is rational to believe the claim, but whether there is propositional justification or rational grounds for the claim in question. That’s the notion of ex ante justification in play here, and one surely can have grounds for the claim in question. Maybe I just started squaring, and I know I’ve never thought of squaring anything bigger than a single-digit number, and I know that 12 isn’t a single-digit number. That’s excellent evidence for the claim in question.

    There are other examples as well. For one, take mathematical truths. Some are rational for me to accept, some are rational for me to be agnostic about, and some I rationally disbelieve. Sad to say, but true! In every case, there is a line of reasoning that gets me to the truth, however, and using it would make the resulting belief rational. Even so, rational false beliefs are possible in this domain.

    The point is the same as I started with. The line of reasoning you use may create or destroy evidence, and hence can’t be relied on to get a good picture of what your actual evidence supports. It’s just another case of the conditional fallacy.

  24. Ralph,

    You wrote:

    “The relevant probability that should guide one in rationally forming one’s beliefs with respect to a given proposition p is not the unconditional probability of p given the evidence, but the conditional probability that p has on the assumption that one believes it.”

    Suppose I know that I’m a pretty good tracker of truth in forming and holding my beliefs, and that I’m especially good at making judgements about medium-sized physical objects. If I believe there is a table in front of me, I am almost certain to be right. Does it follow from your idea that I should believe that there is a table in front of me? For, on the hypothesis that I believe it, there is almost certainly a table in front of me.

    Or am I reading what you say in the quoted bit incorrectly? I’m probably missing something.

  25. Or better, I should have asked my question in #24 above in this form: Does it follow that I’d be rational if I believed there is a table in front of me?

    The self-correction seems needed since the judgement that there is no table in front of me is symmetrically situated – there would likely be no table in front of me if I judged there was not one. So it seems like the verdict here would be similar to that in the original story – that I’d be rational whatever I believed.

  26. I’ll try to think some more about the points that have been made here by Steve and Gregory. But here’s a quick response to the latest comments by Jonathan and Mark.

    1. Jonathan insists on distinguishing between what one’s current evidence supports and what a possible process of reasoning (starting out from one’s current evidence) could lead one rationally to believe. He also seems to think that there is a second distinction between what a possible process of reasoning of this sort could lead one rationally to believe, and what it is rational for one to believe (since in his view, there are complex mathematical truths that it is not rational for him to believe, but which he could come rationally to believe via a possible course of reasoning).

    Now, I can certainly recognize a distinction of the first sort. I was supposing that a proposition p is supported by one’s evidence precisely to the extent that p is probable on that evidence, and, as I was claiming, there are cases in which one can rationally come to believe something even though it is not antecedently supported by one’s evidence. Jonathan and I probably disagree about whether I should analyse the notion of what my evidence supports in probabilistic terms; but I don’t think we disagree about the intelligibility of this distinction.

    I could recognize *something* like Jonathan’s second distinction as well. At least, there is a distinction between a proposition that a *short and easy* course of reasoning could lead one rationally to believe, and a proposition that some (possibly fantastically complex) course of reasoning could lead one rationally to believe. I’m not convinced that any further distinction is needed to capture any real difference in this area.

    2. Mark’s objection, I think, overlooks the point that the relevant conditional probabilities are one’s *current* probabilities, which must reflect one’s total current evidence. E.g. my total current evidence includes an experience as of looking out over Christ Church meadow towards the river, and a complete lack of any reason to suspect my current experiences of being unreliable. Given this total evidence, the conditional probability that I am directly in front of a table, on the assumption that I believe that I am directly in front of a table, is still pretty low. If, given all this evidence, I still believe that I am directly in front of a table, it is more probable that I have gone insane than that I am directly in front of a (presumably invisible) table!
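
    A purely illustrative way to put this in Bayesian terms, with made-up numbers: let T be the proposition that there is a table directly in front of me, B my believing T, and E my actual total current evidence. Suppose Pr(T | E) = 0.001, Pr(B | T & E) = 0.9, and Pr(B | not-T & E) = 0.01 (given E, I would only form the belief through some malfunction). Then

      \[
      \Pr(T \mid B \wedge E)
      = \frac{\Pr(B \mid T \wedge E)\,\Pr(T \mid E)}{\Pr(B \mid T \wedge E)\,\Pr(T \mid E) + \Pr(B \mid \neg T \wedge E)\,\Pr(\neg T \mid E)}
      = \frac{0.9 \times 0.001}{0.9 \times 0.001 + 0.01 \times 0.999}
      \approx 0.08,
      \]

    which is indeed still pretty low.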

  27. Ralph, I’m not really after two distinctions, I don’t believe. My reason for objecting to using the phrase ‘rational to believe’ as you did earlier is that it is ambiguous between propositional and doxastic rationality. The latter is the only distinction I’m after, and the cases I described were only meant as counterexamples to the proposal you suggested. The math counterexample presupposes, of course, that propositional rationality for a claim isn’t implied by having evidence that entails that claim. In short, propositional rationality isn’t closed under entailment.

    We need a notion of propositional rationality because it functions explanatorily. A person sometimes believes contrary to his/her evidence, and thus believes irrationally; a person’s withholding is sometimes irrational because he or she has evidence adequately supporting the claim or its denial. The argument for taking propositional rationality as basic is basically a lesson learned from the conditional fallacy: getting a belief into the picture when there isn’t one encounters the problem that doing so can change the totality of evidence available and thus subvert the explanatory role that propositional rationality plays.
