Defeat and Probability

One might think that there is some connection between low (or inscrutable) probability and defeat. Plantinga, for example, rests an entire argument against evolutionary naturalism on such a connection, and the argument is widely thought to fail at precisely this point. Tom Crisp and I were talking about the issue at a conference this past weekend, and I was skeptical of the connection.

Here’s the concern. Let’s say defeaters are either rebutters or undercutters. For low probability to be a rebutter, you’d have to have some information e such that the probability of the target claim p was low or inscrutable on e. But, of course, we almost always have some such information for our beliefs, even when those beliefs are rational and known to be true. Tom’s idea was to focus on undercutters instead, where we have information e (he talked in terms of states of mind, but I think it won’t matter) such that the (conditional) probability that our belief-forming process concerning the target claim p is reliable is low given e. (He also has an additional clause to rule out overriders, but that isn’t my concern here.)

But, once again, I think we almost always have such information, even for fully rational beliefs and beliefs known to be true. Let p be a perceptual belief: it is raining outside. Let e be completely irrelevant, unrelated information: the moon is not made of green cheese. The conditional probability that perception is reliable, given that the moon is not made of green cheese, is really low. But that conditional probability isn’t a defeater of my belief that it is raining.

That’s where our conversation stopped on this point, but since then I’ve thought of further possible moves to try to link low conditional probabilities with defeat. I’m more interested in whether others think there is a good way to link the two, so I won’t list the various ways I thought of and how they fail. Any ideas?


Comments

  1. There is something that I find puzzling here.

    Let p be a perceptual belief: it is raining outside. Let e be completely irrelevant, unrelated information: the moon is not made of green cheese. The conditional probability that perception is reliable, given that the moon is not made of green cheese, is really low.

    If we assume that e is independent of the proposition r that perception is reliable (which is presumably what you mean by e being ‘completely irrelevant, unrelated information’), then the relevant conditional probability is equal to the prior probability of r, which may or may not be high.
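
    To spell out that step: if e is probabilistically independent of r, then Pr(r|e) = Pr(r & e)/Pr(e) = Pr(r)Pr(e)/Pr(e) = Pr(r) (assuming Pr(e) > 0), so conditioning on e leaves the probability of r exactly where the prior puts it.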

  2. Alright, I’ll take the bait…

    It seems to me that a defeater may be represented formally in terms of disconfirmation rather than low probability per se. On this approach, a defeater need not be some proposition in the light of which, e.g., the reliability of perception (r) is improbable, but rather some proposition that lowers the probability of r. That is, d is a defeater just in case:

    Pr(r|d) < Pr(r)

    In effect, then, a defeater is a disconfirmer. This construal leaves the door wide open for a quantitative twist on the concept of defeat, such that d is a defeater of r in proportion to the strength of the above inequality. Of course, there is a whole lot of material on Bayesian measures of confirmation aimed at making this qualitative notion quantitative, and all of that material would be directly applicable here. Thoughts?
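
    (One example, just to gesture at the quantitative twist: on the simple difference measure from the Bayesian confirmation literature, the degree to which d defeats r could be taken to be Pr(r) − Pr(r|d), with d counting as a defeater just in case this quantity is positive. Other standard measures, such as the log-ratio measure log[Pr(r|d)/Pr(r)], would serve the same purpose, and which of them is best is itself a contested question in that literature.)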

  3. Hi Jon,
    Michael Rea’s suggestion in World Without Design is as follows. He writes,

    ‘(GP) Let p be a proposition believed by S not on the basis of evidence, and let Z be the set of S’s beliefs and experience. Then: S has a defeater for B(p) if, but not only if, S sees that (a) there is a belief or experience e in Z such that S’s rational degree of confidence that p in light of e is not high, and (b) there is no belief or experience e* in Z such that S’s rational degree of confidence that p in light of (e&e*) is high.’ (p. 186)

    ‘B(p)’ denotes the belief that p. Pages 186-188 contain a bunch of caveats and clarifications to GP. I think that GP can be used to respond to your example because, presumably, you do have some other belief or experience e* such that your rational degree of confidence that it is raining, in light of the conjunction of e* with the proposition that the moon is not made out of green cheese, is high. (e* might be how you are appeared to.)

    I had a long discussion with Ted Poston about this here: http://prosblogion.ektopos.com/archives/2006/08/a-failed-defeat.html#more.

  4. Some replies:

    Jonah, at most, what you’ve described is a rebutting defeater. Undercutters don’t fit that model.

    Mike, yes it is like the perspiration objection.

    Andrew, there’s a technical flaw in the Rea definition: just let e*=p. Since p is among your beliefs, your rational degree of confidence that p in light of (e&p) is automatically high, so clause (b) is never satisfied and GP never yields a defeater. But even if we fix this, the example didn’t involve having any good evidence for the claim that it is raining. So clause (b) can’t rescue the account from the example.

    As an aside, the approach to defeat involved in GP has serious problems: see my paper “Two Approaches to Defeat” on this issue.

    Jake, true enough, but it doesn’t help avoid the counterexample, as far as I can tell.
