One of the lessons of Plantinga’s argument against evolutionary naturalism is that the mere fact that a claim is improbable on a certain piece of information doesn’t imply that the latter information is a defeater of any evidence in favor of the claim in question. In the context of his argument, this point plays out as follows: even if the claim that our faculties are reliable is improbable given only the assumptions of evolutionary naturalism, it doesn’t follow that those assumptions defeat any and every defense of the reliability of our cognitive faculties.

The debate about Plantinga’s argument thus turned to the interesting question of how to determine when such a conditional improbability would count as a defeater and when it wouldn’t. There’s some interesting literature on that question, but I’m more interested in the analogy on the other side: if conditional improbability doesn’t signal defeat, then conditional probability may not signal support, either.

Here’s what I mean. Consider the following detachment rule:

DR: if Pr(p) exceeds some threshold X (less than 1), then one is epistemically justified in believing p.

The idea behind DR is as follows. You learn some information, and by some complicated rule (perhaps Bayesian conditionalization, perhaps something else), that information teaches you that the probability of p is high (meets the threshold in question but is less than 1). What should you believe? Well, for one thing, it’s surely OK to believe that p is likely to be true. But what of p itself? If DR is correct, then in such a case one can detach the probability operator and believe p as well.

I expect this proposal to fail, for much the same reason that negative probabilistic relevance fails to imply that something is a defeater. That is, I expect that positive probabilistic relevance sometimes tracks evidential support and sometimes doesn’t. I’m working out details on specific cases, which I’ll post a bit later, but wondered if others have been thinking about this much.

Ok, I’ll bite. How is having reason to believe that something is likely true different from believing that something is true, with perhaps less than perfect confidence in your judgment? It seems intuitive that whenever you would still want to reject p you must have some other evidence that causes you to reject it, and thus don’t really believe the probability to be high after all.

Jon,

I’m not familiar with the literature on the Plantinga argument. But if I understand the example you cite, it goes like this when reconstructed in a Bayesian manner.

Let R be ‘our faculties are reliable’, let N be some appropriate statement of epistemic naturalism, and let D be some additional evidence or claims that defend the reliability of our faculties. Then Bayesian-wise the idea works like this:

1. Suppose P[R | N] is small (very much less than 1)

2. But D is supposed to be “additional evidence” that strongly supports R, so I read this as P[R | N&D] is large.

3. But then, from 1 and 2 it follows that P[D | N] should be quite small — i.e. naturalism on its own should “defeat” the additional evidence or claims D that defends R, defeat it in the sense that D is unlikely on N alone.

(Notice: P[D | N] should be quite small because

P[R | N&D] x P[D | N] = P[D | N&R] x P[R | N],

so for P[R | N] small, P[R | N&D] x P[D | N] is small (even if P[D | N&R] is near 1, and smaller still if P[D | N&R] is small). But P[R | N&D] is large, so P[D | N] is small.)

However, the fact that P[D | N] is small (i.e. that N “defeats” R) doesn’t undermine the strong support that N&D together gives to R (i.e. P[R | N&D] may still be quite large). Indeed, from a Bayesian point of view, the fact that P[D | N] is quite small helps contribute to making P[R | N&D] large (provided that P[D | N&R] is significantly larger than P[D | N]).
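Jim’s parenthetical derivation can be checked with a quick numeric sketch. The numbers below are my own, chosen purely for illustration; the point is just that fixing three of the quantities in the identity forces P[D | N] to be small:

```python
# Illustrative numbers (my own, not from the thread). By the identity
#   P[R|N&D] * P[D|N] = P[D|N&R] * P[R|N]
# (both sides equal P[R&D|N]), fix three values and solve for P[D|N].
p_R_given_N = 0.01    # R is very improbable on N alone
p_D_given_NR = 0.9    # D is likely given N and R together
p_R_given_ND = 0.9    # D strongly supports R: P[R|N&D] is large

p_D_given_N = p_D_given_NR * p_R_given_N / p_R_given_ND
print(p_D_given_N)    # approximately 0.01: D is indeed unlikely on N alone
```

So, with these numbers, a small P[R | N] and a large P[R | N&D] jointly push P[D | N] down to roughly 0.01, just as the derivation says.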

Am I understanding the example correctly?

“The idea behind DR is as follows. You learn some information, and by some complicated rule (perhaps Bayesian conditionalization, perhaps something else), that information teaches you that the probability of p is high (meets the threshold in question but is less than 1). What should you believe? Well, for one thing, it’s surely OK to believe that p is likely to be true. But what of p itself? If DR is correct, then in such a case one can detach the probability operator and believe p as well.”

Jon, I’m not sure I’ve got your claim right, but there are cases like this: the prior on p, Pr(p/k), is not very high, where k is background information. Suppose, for instance, that p is the proposition that a certain kind of miracle occurred (say, an S-miracle). Now suppose that the hypothesis H1 states that there are special beings that are sure to produce S-miracles. That’s of course strong evidence for p. But even if you learn that H1 is true, the posterior probability Pr(p/H1&k) is not in general going to be high. In fact it’s often undefined. This is because the priors on H1 (and similar hypotheses) are, in many cases, reasonably put at zero. There is no reason (that I know of) to believe there are such beings on just our background information. But, of course, Pr(p/H1&k) = Pr(p/k) x Pr(H1/p&k)/Pr(H1/k). But since Pr(H1/k) = 0, the posterior (viz. Pr(p/H1&k)) will not be defined. And this is true, as I say, even if you learn that H1 is true and H1 is strong evidence for p. But then, on a Bayesian account, even if you learn that the strong evidence for p is true, I’m not sure that in general you can detach.
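Mike’s zero-prior point can be sketched numerically. The values below are mine and purely illustrative; the only substantive point is what happens when Pr(H1/k) = 0:

```python
def posterior(pr_p_k, pr_H1_pk, pr_H1_k):
    # Bayes: Pr(p/H1&k) = Pr(p/k) * Pr(H1/p&k) / Pr(H1/k)
    return pr_p_k * pr_H1_pk / pr_H1_k

# With a nonzero prior on H1 the posterior is well defined...
print(posterior(0.001, 0.5, 0.005))   # roughly 0.1

# ...but with Pr(H1/k) = 0 the ratio is undefined, as Mike says:
try:
    posterior(0.001, 0.5, 0.0)
except ZeroDivisionError:
    print("posterior undefined")
```

The undefined case isn’t a computational quirk; it reflects the fact that conditionalization on a zero-probability hypothesis gives no guidance at all, which is what blocks detachment here.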

Jim, that’s close at least, though I’m having to rely on memory since the book I’d look at is in a box somewhere! There’s a nice paper by Paul Draper in a collection on Plantinga’s argument that came out a year or two ago on the defeat principle. Plantinga, of course, thinks that the story you tell can’t work because of the special character of N, but I think he needs a better response to Draper than he’s given so far. The issue has more to do with the probability of reliable faculties given separate pieces of information, and what happens when the separate pieces are combined. Plantinga holds that the negatively relevant info from evolutionary naturalism swamps the other information, but I think the argument isn’t compelling or persuasive at that point.

Mike, I’m not quite sure about your case, because (as you already know) there are problems whenever empirical hypotheses are assigned priors of zero or one. You’re on the same track as I am, though, because I have phil of religion concerns in the background here.

Peter, the answer to your question depends on what notion of justification one is thinking about. I mean to be thinking about epistemic justification, the kind that when ultimately undefeated and combined with true belief, yields knowledge. For such justification the difference appears most clearly in the epistemic paradoxes, especially lottery and preface. In lottery, for example, one will have to allow the possibility of knowing that one’s ticket is a loser on the basis of statistical information if one doesn’t distinguish justification for p from justification that p is likely.
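The lottery case can be put in numbers (the lottery size and threshold below are my own assumptions, just to make the structure visible):

```python
# Illustrative lottery (my numbers): 1000 tickets, exactly one winner.
n = 1000
X = 0.99  # an assumed DR threshold

pr_ticket_loses = 1 - 1 / n       # 0.999 for each ticket
print(pr_ticket_loses >= X)       # True: DR licenses believing, of each
                                  # ticket, that it loses

# But the conjunction "every ticket loses" is certainly false (some ticket
# wins), so detaching on each ticket yields beliefs that can't all be true.
```

This is why running DR without distinguishing justification for p from justification that p is likely seems to license knowing, of one’s own ticket, that it will lose.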

“In lottery, for example, one will have to allow the possibility of knowing that one’s ticket is a loser on the basis of statistical information if one doesn’t distinguish justification for p from justification that p is likely.”

I have always thought of epistemic justification as a fact-evidence connection. Thus you deduce the probability of a fact being true based on the consequences, either present or lacking, and if the probability is high enough you judge it to be true. In the lottery case however you have no evidence with which to confirm or deny the hypothesis that your ticket won, since past chances of winning or losing aren’t a consequence / evidence for your current ticket winning or losing. As for the preface paradox, well it doesn’t seem like a paradox to me at all. Even if each Pr(s-n | e) is high (each sentence is confirmed to a high degree by the evidence), the probability that the entire book is correct is Pr(s-1 | e1) * Pr(s-2 | e2) * … (assuming they are independent, which is probably not exactly right, but the general idea is the same) and thus may be low, exactly as the paradox says. (So where is the paradox?)
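The arithmetic behind Peter’s preface point is easy to exhibit. The book length and confirmation level below are my own illustrative choices, with the same independence simplification he flags:

```python
# Illustrative preface case (my numbers): 500 sentences, each confirmed to
# degree 0.99, treated as independent (a simplification, as noted).
n, p = 500, 0.99
pr_whole_book = p ** n
print(pr_whole_book < 0.01)   # True: each claim is likely, yet the
                              # conjunction of all of them is very unlikely
```

So each sentence clears any reasonable threshold while the whole book falls far below it, which is exactly the gap between justification for each p and justification for their conjunction.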

Basically I am saying that I understand that probability only tracks truth given an evidence-based model. I still don’t see why one would choose to throw out p without evidence, and if you had evidence of not-p (or evidence against p) clearly the probability would not be high. Perhaps you are implying that some events are simply so unlikely that we should consider them to be false even if we have evidence to the contrary (?), but it seems to me that if Pr( event | evidence ) / Pr( event ) is high then the event has been confirmed to a high degree, and thus it would be foolish to believe that the event didn’t happen, no? All of this seems to agree with the standard theory.

Talk of throwing out p without evidence sounds suspiciously like your earlier talk of rejecting p, and both sound like believing ~p. To fail to hold an opinion about p is not to reject p and not to throw it out.

In addition, talk of deducing probabilities vastly overintellectualizes what happens in belief formation. In the ordinary case, evidence is present and beliefs are formed. There is no intermediate step involving probabilities at all. So epistemic justification can’t be anything requiring deducing probabilities.

“There’s a nice paper by Paul Draper in a collection on Plantinga’s argument that came out a year or two ago on the defeat principle.”

Are you thinking of the collection edited by James Beilby, which came out in 2002, entitled “Naturalism Defeated?”?

I have been reading that, but there is no article by Paul Draper in it. So I would be interested to know if there is some further collection of articles on the evolutionary argument against naturalism which has come out, and of which I’m not aware.

Regards,

Omar Mirza

That’s the volume I was thinking of, but I must have the wrong author. Any other possibilities in it? I’ll be happier when I can find my copy…

You might be thinking of Trenton Merricks’ paper, “Conditional Probability and Defeat”, or Richard Otte’s “Conditional Probabilities in Plantinga’s Argument.”

Regards,

Omar

This post hasn’t been updated in quite a long time, but I still want to try: does anybody know of good literature on Plantinga’s EAAN, besides J. Beilby’s “Naturalism Defeated?”, Fitelson and Sober’s 1997 article, and Omar Mirza’s “User’s Guide to EAAN”? I’m going to write my diploma thesis on this topic and was looking for some more relevant information.

Best Regards,

Marco

You might contact Troy Nunley. He wrote a dissertation on EAAN, which included a complete literature survey. That was in 2004 or 2005, I believe, but I expect he still knows the literature. He’s been at UT-Pan American, but is moving to Denver Seminary.

I’ll try to contact him!

Thanks a lot,

Marco