Undercutting Defeaters for Conditionalizers

In a forthcoming paper in the British Journal for the Philosophy of Science, Jonathan Weisberg shows that enthusiasts for conditionalization cannot accommodate a certain strong kind of holism. Very interestingly, this result does not hold just for classical Bayesian conditionalization, which is well known to have a strikingly foundationalist character (since it makes the “evidence” completely indefeasible); Weisberg shows that it holds for Jeffrey conditionalization as well. He also suggests that since conditionalizers cannot accommodate this sort of holism, they will have trouble making room for undercutting defeaters (as opposed to rebutting defeaters).

In this note, I shall argue that while Weisberg is right that conditionalizers cannot accommodate this strong kind of holism, they have no problem accommodating undercutting defeaters. (This note owes a great deal to some correspondence that I have recently had with Weisberg about this issue.)

The crucial point is that whenever one updates one’s credences by conditionalization, the transition from the old credences to the new credences is rigid in the sense that certain conditional probabilities must remain constant throughout the updating process.

  1. Suppose that your old credences can be represented by the probability function p0(•), you acquire the new evidence E, and you update your credences by classical Bayesian conditionalization. Then your updated credences can be represented by the probability function p1(•) = p0(•|E). This transition from p0 to p1 is rigid with respect to E because for every proposition H, p0(H|E) = p1(H|E).
  2. Suppose that you do not learn any new “evidence” with complete certainty, but your experience somehow changes your credences across a partition {Ei} (a partition is a set of propositions such that you are certain that one and only one of those propositions is true). Specifically, suppose that your experience changes your credences in the members of {Ei} from p0(Ei) to p1(Ei). Then if you update the rest of your credences by Jeffrey conditionalization, your updated credences can be represented by the probability function p1(•) = ∑i p0(•|Ei) p1(Ei). This transition from p0 to p1 is rigid with respect to {Ei} because for every proposition H, p0(H|Ei) = p1(H|Ei) for each Ei.
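
To make the rigidity property concrete, here is a minimal Python sketch of both update rules on a toy four-world space. (The worlds, propositions, and numbers are purely illustrative assumptions of mine, not anything from Weisberg’s paper.)

    # A toy model: worlds are pairs (green?, tinted?); a credence function
    # is a dict from worlds to probabilities. All numbers are illustrative.

    def prob(p, a):
        """Unconditional probability of the set of worlds a."""
        return sum(p[w] for w in a)

    def cond(p, a, b):
        """Conditional probability p(a | b)."""
        return prob(p, a & b) / prob(p, b)

    def bayes_update(p, e):
        """Classical conditionalization: p1(.) = p0(.|E)."""
        pe = prob(p, e)
        return {w: (p[w] / pe if w in e else 0.0) for w in p}

    def jeffrey_update(p, partition, new_creds):
        """Jeffrey conditionalization: p1(.) = sum_i p0(.|Ei) p1(Ei)."""
        new = {}
        for e_i, q_i in zip(partition, new_creds):
            pe = prob(p, e_i)
            for w in e_i:
                new[w] = p[w] * q_i / pe
        return new

    worlds = [(g, t) for g in (True, False) for t in (True, False)]
    p0 = {w: 0.25 for w in worlds}                   # a uniform prior

    G = frozenset(w for w in worlds if w[0])         # "the jelly bean is green"
    notG = frozenset(w for w in worlds if not w[0])
    F = frozenset(w for w in worlds if w[1])         # "the lights are green-tinted"

    # Case 2: experience shifts your credence in G from 0.5 to 0.9.
    p1 = jeffrey_update(p0, [G, notG], [0.9, 0.1])
    assert abs(cond(p0, F, G) - cond(p1, F, G)) < 1e-9      # rigid w.r.t. G
    assert abs(cond(p0, F, notG) - cond(p1, F, notG)) < 1e-9  # and w.r.t. ¬G

    # Case 1: classical conditionalization on F is rigid with respect to F.
    p2 = bayes_update(p0, F)
    assert abs(cond(p0, G, F) - cond(p2, G, F)) < 1e-9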

From now on, I shall concentrate on Jeffrey conditionalization. Here is a quick sketch of Weisberg’s argument that Jeffrey conditionalization leaves no room for undercutting defeaters.

Suppose that your experience raises your credence in some proposition. E.g., you examine a jelly bean in dim light, and your experience raises your credence in the proposition G, “The jelly bean is green”, and lowers your credence in ¬G, “The jelly bean is not green”.

Now consider the undercutting defeater F, “The lights are green-tinted”. If F is an undercutting defeater (as opposed to a “rebutting defeater”), then in advance of your having the experience, F tells you nothing at all about the colour of the jelly bean; so according to the probability function that represents your earlier credences, the conditional probability p0(G|F) is no different from the unconditional probability p0(G).

However, after you raise your credence in G in response to the experience, F does become relevant to how much credence you should have in G. So it seems that according to the probability function that represents your later credences, p1(G|F) must be less than p1(G). As Weisberg shows, however, this cannot possibly happen if the transition from p0 to p1 is rigid with respect to {G, ¬G}.
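
Here, for what it is worth, is the short calculation behind this result. Rigidity with respect to {G, ¬G} guarantees that p1(F|G) = p0(F|G) and p1(F|¬G) = p0(F|¬G). If p0(G|F) = p0(G), then F and G are independent according to p0, so p0(F|G) = p0(F|¬G) = p0(F). By the law of total probability, p1(F) = p1(F|G)p1(G) + p1(F|¬G)p1(¬G) = p0(F)p1(G) + p0(F)p1(¬G) = p0(F). Hence p1(G|F) = p1(F|G)p1(G)/p1(F) = p1(G). In short, if F was irrelevant to G before a rigid update, it remains irrelevant to G afterwards.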

The obvious solution for a Jeffrey conditionalizer is to deny that the updating of your credences resulting from your experience is rigid with respect to a simple pair of propositions like {G, ¬G}. The immediate effect of your experience is not just to raise your credence in G and lower your credence in ¬G. Instead, the immediate effect of your experience must be more complex.

Let D be what I shall call “the generic defeater”: let us take it to be the proposition “My colour experience is not a reliable guide to whether or not the jelly bean is green”. Then the relevant partition for the impact of your experience is {G & D, ¬G & D, G & ¬D, ¬G & ¬D}. So long as your prior rational credence in D is sufficiently low, the main immediate effect of the experience will be to increase your credence in G & ¬D greatly. However, the experience will increase your credence in G & ¬D almost entirely at the expense of your credence in ¬G & ¬D; it will barely change your credences in G & D and ¬G & D at all. Or at all events, if it does reduce your credence in either of those, it will do so in a way that leaves the ratio between your credence in G & D and your credence in ¬G & D more or less unchanged.
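
With some invented numbers (say, a prior credence of 0.1 in D, with D initially independent of G), it is easy to exhibit this effect in a short Python sketch:

    # Credences over the four cells of the partition {G & D, ¬G & D,
    # G & ¬D, ¬G & ¬D}. Illustrative prior: p0(D) = 0.1, D independent of G.
    p0 = {('G', 'D'): 0.05, ('¬G', 'D'): 0.05,
          ('G', '¬D'): 0.45, ('¬G', '¬D'): 0.45}

    # The experience boosts G & ¬D almost entirely at the expense of
    # ¬G & ¬D, leaving the two D-cells (and hence their ratio) untouched.
    p1 = {('G', 'D'): 0.05, ('¬G', 'D'): 0.05,
          ('G', '¬D'): 0.81, ('¬G', '¬D'): 0.09}

    for name, p in (('p0', p0), ('p1', p1)):
        pG = sum(v for (g, _), v in p.items() if g == 'G')
        pD = sum(v for (_, d), v in p.items() if d == 'D')
        pG_given_D = p[('G', 'D')] / pD
        print(f"{name}(G) = {pG:.2f}, {name}(G|D) = {pG_given_D:.2f}")

    # Printed output:
    #   p0(G) = 0.50, p0(G|D) = 0.50   (D initially irrelevant to G)
    #   p1(G) = 0.86, p1(G|D) = 0.50   (now p1(G|D) < p1(G))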

So your experience has changed the conditional probabilities: p1(G|D) is lower than p1(G), even though according to the probability function that represented your earlier credences, p0(G|D) is not lower than p0(G). So we can now say that any proposition H that raises the probability of the generic defeater D, but did not tell against G according to the probability function that represents the earlier credences, counts as an undercutting defeater for G. I.e., if p0(G|H) is not lower than p0(G), but p1(D|H) is higher than p1(D), then H is an undercutting defeater for G.
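
This criterion can be checked numerically as well. The sketch below enriches the toy model with a concrete candidate for H (namely F, “The lights are green-tinted”), which I stipulate to be correlated with D in the prior but initially irrelevant to G; again, every number is invented purely for illustration.

    # Worlds are triples (G?, D?, F?). Illustrative prior: p0(G) = 0.5 with
    # G independent of everything else, p0(D) = 0.1, and F correlated with
    # D: p0(F|D) = 0.5 but p0(F|¬D) = 0.1.
    p0 = {}
    for g in (True, False):
        for d in (True, False):
            pF = 0.5 if d else 0.1
            for f in (True, False):
                p0[(g, d, f)] = 0.5 * (0.1 if d else 0.9) * (pF if f else 1 - pF)

    # Jeffrey update over {G & D, ¬G & D, G & ¬D, ¬G & ¬D}: the D-cells keep
    # their prior weight of 0.05 each; G & ¬D goes from 0.45 to 0.81 (factor
    # 1.8); ¬G & ¬D goes from 0.45 to 0.09 (factor 0.2). Rigidity within
    # each cell then fixes the credences involving F.
    factor = {(True, True): 1.0, (False, True): 1.0,
              (True, False): 1.8, (False, False): 0.2}
    p1 = {w: v * factor[(w[0], w[1])] for w, v in p0.items()}

    def pr(p, pred):
        """Probability of the set of worlds satisfying pred."""
        return sum(v for w, v in p.items() if pred(w))

    G, D, F = (lambda w: w[0]), (lambda w: w[1]), (lambda w: w[2])

    # p0(G|F) = p0(G): F is initially irrelevant to G ...
    assert abs(pr(p0, lambda w: G(w) and F(w)) / pr(p0, F) - pr(p0, G)) < 1e-9
    # ... but p1(D|F) > p1(D): F now raises the generic defeater ...
    print(pr(p1, lambda w: D(w) and F(w)) / pr(p1, F), pr(p1, D))  # ~0.357 > 0.1
    # ... and consequently p1(G|F) < p1(G): F undercuts G.
    print(pr(p1, lambda w: G(w) and F(w)) / pr(p1, F), pr(p1, G))  # ~0.757 < 0.86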

Evidently, this solution involves making room for the possibility of defeaters in the original partition that the experience immediately bears on. A radical holist would doubt that we can do this, because according to a truly radical holist there is a completely open-ended range of defeaters, and no simple “generic defeater” that can be expressed by any single proposition. For example, the radical holist might insist that even if you don’t acquire any evidence that your colour experience is unreliable, there could be other undercutting defeaters as well. Perhaps you get extraordinary evidence that you are not having the experience at all! Or perhaps you acquire weird evidence that you don’t even possess the concept green, or that you are irrational to the point of total insanity. Or perhaps …

I am not at all convinced that we should be moved by these suggestions from the radical holist, however. It seems more plausible to me that there is a definite range of defeating propositions, which could at least in principle have been identified in advance of your having the particular experience. So it seems perfectly possible to me that the original partition that the experience immediately bears on has already made room for the possibility of defeaters, in the way that I have sketched.

For this reason then, I think that conditionalizers can make room for undercutting defeaters, on the most plausible way of understanding them. It is true that conditionalizers cannot accommodate radical holism, but this sort of holism is not required in order to make sense of undercutting defeaters.


Comments


  1. Hi Ralph, thanks for setting out the issues so clearly here, and for raising an excellent point I hadn’t considered. It hadn’t occurred to me to use the generic defeater, D, to ensure that the probability of G can be appropriately lowered at a later date.

    I wonder if I can get around this solution by picking on the content of D. If I understand the approach correctly, D serves as a sort of proxy for all the non-generic defeaters like “the lighting is tricky”. So it is a proposition that is made highly probable by all and only such defeaters, and which makes G improbable only after G’s probability has been boosted in response to the jellybean’s appearing green. What proposition could play this role?

    One obvious candidate is D1 = “my color vision is unreliable”. I take it this is too crude: I might learn that my color vision is generally unreliable for reasons that do not affect the testimony of my color vision in this one instance. Another is D2 = “the objective chance of G is low [= p0(G)]”. The price here is that JC must now rely on objective chances and the Principal Principle to get things right, and must presume either that the world is non-deterministic or that chances can be non-trivial in a deterministic world. There is also the worry that, when you have inadmissible evidence, D2 will not have the desired effect.

    A third option is D3 = “my color vision is unreliable in this one instance.” The worry now is that D3 is just code for “I ought to reduce my credence in G if it was boosted on the basis of the jellybean’s appearance.” This feels a bit like a cheat to me, though I confess I can’t say exactly why. Once we include such propositions in the domain of the credence function, I worry that (Jeffrey) Conditionalization is no longer the workhorse of our theory it was supposed to be. Instead of letting our prior conditional credences guide what our posterior credences ought to be, we let our beliefs about what our credences ought to be dictate what they ought to be, and we rig the inputs to JC so as to get those normative, higher-order beliefs just right.

    So what worries me about the generic defeater approach is that it might end up trivializing the Conditionalization paradigm by inserting proxy propositions to get the conditional probabilities just right. As others have noted (e.g. van Fraassen in Laws and Symmetry, I think), given any two distributions p0 and p1, we can always enrich the domain of propositions so that p1 is obtainable from p0 by Conditionalization. My sense is that we might be doing the same thing here. But, as I say, it’s not clear to me that there isn’t a respectable candidate for D that voids this worry.
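
    Just to make that enrichment trick concrete, here is a toy version of the construction (my own sketch, with arbitrary numbers): split each world w into (w, E) and (w, ¬E), give the E-half the shape of p1 scaled by a small enough constant, and give the ¬E-half whatever remains of p0(w).

        # Any p1 (with support inside p0's) is reachable from p0 by
        # conditionalizing an enriched prior q on a proxy proposition E.
        def enrich(p0, p1):
            lam = min(p0[w] / p1[w] for w in p0 if p1[w] > 0)  # keep weights >= 0
            return {(w, tag): (lam * p1[w] if tag == 'E' else p0[w] - lam * p1[w])
                    for w in p0 for tag in ('E', '¬E')}

        p0 = {'a': 0.5, 'b': 0.3, 'c': 0.2}
        p1 = {'a': 0.1, 'b': 0.1, 'c': 0.8}
        q = enrich(p0, p1)
        qE = sum(v for (w, tag), v in q.items() if tag == 'E')
        posterior = {w: q[(w, 'E')] / qE for w in p0}
        assert all(abs(posterior[w] - p1[w]) < 1e-9 for w in p0)  # q(.|E) = p1
        assert all(abs(q[(w, 'E')] + q[(w, '¬E')] - p0[w]) < 1e-9
                   for w in p0)                                   # marginal = p0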
