Philosophers’ fallacies

I’ve been reading some of Branden Fitelson’s Bayesian papers recently (one in particular, on the issues between likelihoodists and Bayesians, is fantastic), and came across a review by him, Stephens, and Sober of Dembski’s book on design theories (F/S/S, for short). The review is long as reviews go, fifteen pages or so, and is an excellent source on the inadequacies of Dembski’s metaphilosophical position on when design explanations should be accepted.

The piece has a really interesting passage, however, one that contains a common mistake that philosophers tend to make. The mistake is this: Philosopher X says p is true; philosopher Y isn’t convinced that p is true, but has no direct argument to show that p is false; so, Y attacks some generalization that implies p instead.

My favorite example of this tendency among philosophers occurs in conversations with my former colleague Michael about where to go to lunch. “Where should we eat?” “I don’t know, how about Shakespeare’s?” “Why do you want to go there?” “No special reason, we just haven’t been there in a while.” “Do you always want to go to lunch at places where you haven’t been in a while?”

Here’s the F/S/S version, referring to an argument that has three premises, the relevant one concerning Dembski’s conditional independence requirement (CINDE):

(2) If CINDE is true and S is warranted in accepting H (i.e., that E is due to chance), then S should assign Pr(E | I) = Pr(E). . . .
We grant premiss (1) for the sake of argument. We’ve already explained why (3) is false. So is premiss (2); it seems to rely on something like the following principle:
(*) If S should assign Pr(E|H&I) = p and S is warranted in accepting H, then S should assign Pr(E|I) = p.
If (*) were true, (2) would be true. However, (*) is false.

First, note that premise (2), as stated, is far from obvious (and would remain so even if I took the time to explain it fully). So there’s a problem here. But F/S/S want to go further; they claim that (2) is false. Notice, though, that their argument cannot demonstrate this claim. What is going on is a display of the philosopher’s predilection noted above. Notice the sequence in the passage. First, it is claimed that (2) is false. Why? Because it “seems to rely on” a certain claim that entails it, a claim that is demonstrably false. The language of “seeming” here is instructive, I think. F/S/S don’t simply say that a defense of (2) is needed and that (*) is the best one they can think of; they make a much stronger claim. The connection between (2) and (*) is felt to be so strong that a phenomenalistic connection is claimed to be present. That’s the predilection: we don’t just ask what generalization might lie behind a claim; the claim itself leads us so quickly to the issue of the underlying generalization that we often don’t acknowledge the move, sometimes describing it instead in the language of “seeming.”

None of this helps Dembski, of course. (2) is far from obvious, (*) is a good example of the kind of principle he’ll have to cite to defend (2), and citing it won’t help; so it’s a mystery how (2) could be true. Besides, as F/S/S show, the argument has other flaws as well.
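To see how (*) can fail, here is a minimal numeric sketch (the numbers are made up purely for illustration; the only assumption is that being warranted in accepting H can fall short of assigning Pr(H|I) = 1). By the law of total probability, Pr(E|I) is a weighted average of Pr(E|H&I) and Pr(E|~H&I), so the value p assigned to Pr(E|H&I) need not carry over to Pr(E|I):

```python
# Illustrative numbers only: how (*) can fail when warranted acceptance of H
# falls short of certainty about H.
# Law of total probability:
#   Pr(E | I) = Pr(E | H & I) * Pr(H | I) + Pr(E | ~H & I) * Pr(~H | I)

p_H_given_I     = 0.9    # H is well supported given I, but not certain
p_E_given_HI    = 0.5    # the value p in the antecedent of (*)
p_E_given_notHI = 0.01   # E would be very surprising if H were false

p_E_given_I = (p_E_given_HI * p_H_given_I
               + p_E_given_notHI * (1 - p_H_given_I))

print(p_E_given_I)   # 0.451, not the 0.5 that (*) demands
```

Only if Pr(H|I) were exactly 1 would Pr(E|I) collapse to Pr(E|H&I), and that reading of warranted acceptance is exactly what comes up in the comments below.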


Philosophers’ fallacies — 10 Comments

  1. Pingback: Certain Doubts » Some Norms of Assertion

  2. The mistake you point out (criticizing an argument that includes a premise P by criticizing a more general claim that implies P) is indeed quite common. A few years ago I wrote 3 referee reports *in a row* in which I complained about the author’s use of this critical tactic.

    Presumably those using the tactic might be charitably interpreted as suggesting that one wouldn’t accept the instance if one didn’t accept the generalization. But of course one should state and defend this claim.


  3. Fritz, you’ve got me beat on that one! I see it quite a bit, too, but never three in a row, and of course not nearly so often as the mistake of thinking that a good account of what you’re thinking about contains subjunctive conditionals!

    I agree that there’s often a charitable way to interpret the arguments when philosophers do this. It’s just hyperbolic speech aimed at insisting that the disputed premise needs a better defense than an appeal to the generalization in question.

  4. I was thinking about this, and then I began to wonder whether this class of attacks is really that different from the class of attacks on induction as fallacious. Certainly induction can lead to fallacious reasoning. For instance: all the shoes I see are white; therefore I assume all the shoes near me are white.

    I bring this up since the argument at hand appears to be of the class of logic C. S. Peirce termed abduction.

    Now of course, if you require deductive reasoning or simply reject abduction or induction, this is less of a problem. And of course, Peirce would argue that the general notions arrived at by abduction ought to be tested. But is attacking reasoning of this sort always invalid?

  5. Clark, very good points here, and I agree that we ought to be careful not to uncharitably attribute deductive arguments to philosophers in every case of reasoning they employ. And it is true that this pattern is an example of attempting to find an explanatory hypothesis for why the target piece might endorse a non-obvious premise of an argument. Even so, you can’t make the practice antiseptic just by treating it as an example of abduction, especially if you are going to turn around and attack the explanatory hypothesis developed and then conclude that the premise in question is false.

  6. Jon, I fully agree that merely labeling it as abduction doesn’t resolve the issue any more than labeling it as induction does. I discussed this a bit on my blog, which ended up leading into some of Putnam’s approaches to philosophies traditionally taken to be purely deductive (see his famous “What is Mathematical Truth?”). It seems to me that Peirce in particular associated abduction with a thoroughgoing fallibilism and the need to question.

    I suppose whether abduction is appropriate (fallacious?) or not depends upon its role. Is it there to open up the discourse and force us to examine “blind spots” in our discourse? Or is it there to cut off discourse and keep us from analyzing? The same argument can be used for either aim.

    Now I’ll agree that it is most bothersome in philosophical papers when used to cut off discourse (“you are wrong because…”). However, I was just reading John Fischer’s book on semi-compatibilism, and I think he uses this kind of argument a fair bit, although he is very careful to point out that he *is* doing this and that it is at best a suggestive argument. Fairly insightfully, he ties Frankfurt arguments to a whole class of arguments, including Gettier arguments, as a kind of skepticism suggestive of a position but never entailing a position. I suspect (although I am not prepared to argue) that this whole approach is instinctively used by philosophers precisely because it tends to be right so often and is part of how we naturally do philosophy.

  7. Clark, I think we agree that the key to such moves by philosophers is simply being honest about exactly what one is doing. That’s what makes Fischer’s moves most interesting, I think, and I enjoy the originality displayed in such argumentative moves. And I suspect that even when the move goes unacknowledged as deductively invalid, the posited generalization really does underlie the questionable premise; just not often enough to be sanguine about it!

  8. Jon,

    Yes, strictly speaking, this is not a valid argument for the denial of (2). I think it’s better thought of not as an argument for the denial of (2), but as our best guess as to what might have been *causing* Dembski to think (2). And, as far as it goes, I still think it ain’t too bad for that purpose.

    To see why what we say about (*) is true (that it entails (2)), note that (2) really is:

    (2) If one assigns Pr(E | H & I) = Pr(E | H), and one rationally accepts H, then one should assign Pr(E | I) = Pr(E).

    This is such a strange thing to claim (from a Bayesian perspective) that it’s hard to see what reason one could possibly have for thinking this. All we could think of was that, perhaps, Dembski was conflating rational acceptance of H with assigning probability 1 to H. This is at least A BACKSTORY (and not an insane one) that implies (2); a sketch of how it does so appears just after the comments below. Sure, it may not be how Dembski was reasoning, but (to this day) I have not heard him explain this assumption. So, we’ll probably never know.

    It’s worth noting, in this connection, that Ellery Eells is equally nonplussed concerning this step of Dembski’s argument in his review. See page 2 of that review, where Ellery (a master of charitable reconstruction) tries hard to provide several readings on which this step comes out plausible. But, in the end, he can’t make it plausible either. I challenge the reader to do better than we or Ellery did on this score.

    When it’s taken not as an argument that (2) is false, but as an attempt to articulate an assumption that may have led Dembski to make the claim in the first place (together with an argument showing that this assumption is false), this isn’t so bad. Philosophers do this sort of thing all the time. When they do so, they are, I submit, better understood as constructing (and explaining away) a possible (but false) rationale for a belief than as arguing for the falsity of the belief itself.

  9. Branden, I agree this is exactly the charitable way to take the inference, and how I took it when I read your very nice review; in fact, when I saw (*), I immediately thought, “yes, that has to be what D was thinking.” Of course, that’s too strong, but it seemed a plausible and compelling reconstruction, since (2) is utterly perplexing.
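For concreteness, here is a minimal sketch of the backstory discussed above (a toy model with made-up numbers, purely for illustration): if “one rationally accepts H” is read as “one assigns Pr(H) = 1”, then CINDE, i.e. Pr(E | H & I) = Pr(E | H), really does deliver the conclusion of (2), namely Pr(E | I) = Pr(E).

```python
from itertools import product

# Toy joint distribution over (H, I, E), assuming the probability-1 reading of
# acceptance (Pr(H) = 1) and CINDE (E screened off from I by H).
def pr(h, i, e):
    p_h = 1.0 if h else 0.0            # acceptance of H read as certainty
    p_i = 0.4 if i else 0.6            # side information I, independent of H
    p_e_given_h = 0.3 if h else 0.05   # E depends only on whether H holds
    return p_h * p_i * (p_e_given_h if e else 1 - p_e_given_h)

def prob(event):
    return sum(pr(h, i, e)
               for h, i, e in product([True, False], repeat=3)
               if event(h, i, e))

def cond(event, given):
    return prob(lambda h, i, e: event(h, i, e) and given(h, i, e)) / prob(given)

E = lambda h, i, e: e
print(cond(E, lambda h, i, e: h and i))  # Pr(E | H & I) = 0.3
print(cond(E, lambda h, i, e: h))        # Pr(E | H)     = 0.3  (so CINDE holds)
print(cond(E, lambda h, i, e: i))        # Pr(E | I)     = 0.3
print(prob(E))                           # Pr(E)         = 0.3  ((2)'s conclusion)
```

Lower Pr(H) below 1 (say to 0.9) and Pr(E | I) no longer has to equal Pr(E | H & I) (here it drops to 0.275), which is exactly the step that fails without the probability-1 reading, as in the sketch after the post above.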
