Many formal epistemologists think that Conditionalization is *always* the uniquely rational way to update one’s credences. But this cannot be correct. In certain troublesome cases, Conditionalization would take the thinker to rationally forbidden destinations. Conditionalization has to be restricted somehow, so that it does not apply in these troublesome cases.

There is actually quite a range of such troublesome cases. But the simplest example is that of certain *Moore-paradoxical propositions* – propositions that the thinker could express by uttering something of the form ‘*P*, and there is no time *t* at which I assign a high credence to the proposition that *P*’.

(In contemplating this proposition, the thinker has to refer to *herself* in a distinctively first-personal way. However, to bracket worries about how to accommodate indexical references to *times* in our formal framework, I have chosen a Moore-paradoxical proposition that quantifies over times in its second conjunct – rather than a proposition that contains a distinctively indexical reference to the present time.)

Now, nothing prevents the thinker from rationally having a prior system of credences that assigns arbitrarily high probability to this Moore-paradoxical proposition, conditional on a certain possible body of evidence *E*.

Indeed, *E* might just be *P* itself. It might be obvious from the nature of *P* that *P* is the kind of proposition that one is extraordinarily unlikely to have a high credence in even if *P* is true. For example, suppose that *P* is the proposition that the number of flies in the world right now is exactly 17,000,000,000,000,000. Even conditional on the truth of *P*, it is unbelievably unlikely that one will ever have a high credence in *P*. So, conditional on the supposition of *P*, one rationally assigns an extremely high conditional credence to the proposition that one could express by uttering ‘*P*, and I never have a high credence in the proposition that *P*’.

However, one’s prior credence in *P* is still non-zero. So, it could still happen that one day one learns that *P* is true. But then, if one updates one’s credences by Conditionalization, one will end up assigning an extremely high credence to the proposition that one could express by uttering the relevant instance of ‘*P* and I never have a high credence in the proposition that *P*’.
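The update just described can be put into a toy possible-worlds model. Here is a minimal sketch in Python; the four world-types are from the argument above, but the particular prior masses are my own illustrative assumptions:

```python
# Toy possible-worlds model of the update above (illustrative numbers only).
# Q is the proposition "at some time I have a high credence in P".
# Worlds are classified by the truth values of P and Q.

# Assumed prior: P is extremely unlikely, and Q is extremely unlikely
# even conditional on P (as the fly-counting example suggests).
prior = {
    ("P", "Q"):         1e-12,  # P true, and I come to believe it
    ("P", "not-Q"):     1e-9,   # P true, and I never believe it
    ("not-P", "Q"):     1e-10,  # I come to believe P, falsely
    ("not-P", "not-Q"): 1.0 - (1e-12 + 1e-9 + 1e-10),
}

def conditionalize(prior, evidence):
    """Zero out worlds outside the evidence set and renormalize."""
    total = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0)
            for w, p in prior.items()}

# Learning exactly P = conditionalizing on the set of P-worlds.
posterior = conditionalize(prior, {("P", "Q"), ("P", "not-Q")})

# Posterior credence in the Moore-paradoxical conjunction "P and never-believe-P":
print(posterior[("P", "not-Q")])  # about 0.999: the troublesome result
```

Whatever the exact numbers, as long as the prior makes Q very unlikely conditional on P, conditionalizing on P alone concentrates nearly all the posterior mass on the Moore-paradoxical conjunct.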

This, surely, is an irrational place to end up. It is *a priori* obvious that if one has a high credence in this proposition, the proposition cannot be true (given that it is also *a priori* obvious that if one has a high credence in a conjunction, one also has a high credence in each of its conjuncts).

So, in these cases, it seems to me, it is irrational to update by Conditionalization. Conditionalization must be restricted so that it does not apply to these cases.

Indeed, this point seems so obvious to me that I feel sure that someone must have thought of this point before. I would be very grateful if someone could let me know who (if anyone) has made this point before!

Interesting, but I don’t necessarily see a threat to conditionalization. Let P be the proposition about flies, and Q be the proposition that at some time I have a high credence in P. That is, let Q be the set of possible worlds where at some time I have a high credence in P…

Q is very small, maybe even much smaller than P. Some worlds in Q are worlds where I have no reason to believe P but do anyway, but most of them (by mass, not by number), if we’re being charitable about my epistemic virtues, are worlds where I’ve got some good evidence that P. Of these, some are P-worlds and some aren’t (i.e., the evidence misled me). Maybe it’s around half and half. (Or, if I’m really bullish on my epistemic skills, P is true in exactly q proportion of the Q-worlds, where q is the level of my credence that P, if we fix that number for simplicity.)

The general picture is that in the space of possible worlds, P is a tiny circle and Q is a yet tinier circle, half in and half out of P. You’ve suggested that conditionalization can get my space of epistemically possible worlds down to a set which is mostly P and mostly not Q, and have pointed out that P is one such set. I agree that this would be a repugnant result, but I don’t think it’s possible.

Could I, for example, get some evidence that shrinks my set of epistemically possible worlds to roughly P (no more and no less, since we still need not-Q to dominate Q)? No, I could not. Suppose the UN Fly-Counting Commission publishes an unequivocal report to the effect that P. Then I’ve learned more than probably-P. I’ve also learned that this is a world where the UNFCC does such-and-such, and their report reaches me in such-and-such a way, and so on, which puts me squarely in a smaller set of worlds, call it R, where these things are true. (Now, conditional on R, P dominates not-P, which is why my credence in P is high.) And R must be a proper subset of Q, because it’s a set of worlds where things happen to make my credence in P high, which you’ll recall was the definition of Q…

So, it looks like there’s no way for me to ever learn P without also learning Q collaterally (except for weird lapses, split personalities, etc.). Even if my evidence for P is an oracle or an irresistible hunch, I’m still able to locate myself in a set within Q. And I take it as a nice piece of evidence for conditionalization, actually, that it’s able to take account of these cases.
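The rebuttal can be checked in the same toy-model style: if the evidence set is carved out of Q rather than being P itself, conditionalization never delivers the repugnant result. A minimal sketch, with illustrative prior masses of my own choosing that reflect the charitable picture above (most of Q’s mass inside P):

```python
# Toy model of the rebuttal (illustrative numbers only).
# Q = "at some time I have a high credence in P". Being charitable about
# my epistemic virtues, most of Q's mass sits inside P.
prior = {
    ("P", "Q"):         9e-13,  # good evidence led me, correctly, to P
    ("P", "not-Q"):     1e-9,   # P true, but I never believe it
    ("not-P", "Q"):     1e-13,  # the evidence misled me
    ("not-P", "not-Q"): 1.0 - (9e-13 + 1e-9 + 1e-13),
}

def conditionalize(prior, evidence):
    """Zero out worlds outside the evidence set and renormalize."""
    total = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0)
            for w, p in prior.items()}

# Any realizable evidence R is a subset of Q; conditioning on Q itself is
# the weakest such case. (A genuine R, like the UNFCC report, would be a
# still-smaller subset of Q.)
posterior = conditionalize(prior, {("P", "Q"), ("not-P", "Q")})

print(posterior[("P", "not-Q")])  # 0.0: the Moore proposition is ruled out
print(posterior[("P", "Q")])      # roughly 0.9: high credence in P itself
```

Since every world in the evidence set is a Q-world, the Moore-paradoxical worlds (P and not-Q) get posterior credence exactly zero, while the credence in P itself can still be as high as the prior ratio inside Q allows.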

Or, the short version: I don’t think it’s possible to have P as one’s evidence set. Every world in P-and-not-Q is a world where P is true and I never come to believe that P. Any world where something happens to make me come to believe that P is a world in Q, and I would know I was somewhere in Q, because such a world doesn’t match any world in not-Q. I couldn’t get evidence that P without also getting evidence that Q. Conditionalizers tend to speak as if, when someone tells us X, we learn exactly X. I think what your example shows is that sometimes that approximation is too approximate.