Defeating Defeaters

I gain evidence that this used car salesman is unreliable.  He tells me he’s the most honest person I’ll ever know.  It would be ridiculous, on that basis, to think “Well, gee, I guess he’s reliable after all.”  We need independent confirmation that his testimony is reliable.  Here are two more controversial examples.  First: I gain evidence that my moral intuitions are unreliable.  Can my moral intuitions defeat that evidence?  It seems not: to defeat that undercutting evidence, I would need to appeal to something independent of the undercut intuitions.  Second: you’re smart, but we disagree about P, which gives me some reason to think that my relevant arguments and intuitions aren’t reliable.  Can I rely on those very intuitions and arguments to defeat the evidence of my unreliability?  It seems not: again, we seem to need something independent of those arguments and intuitions.

Our intuitions suggest that, to defeat undercutting evidence, we need to appeal to something independent of what’s undercut.  I think those intuitions are wrong.  Suppose I receive testimony from a reliable source that all my belief-forming methods are unreliable.  Given our “independence intuitions”, this undercutting evidence is impervious to defeat.  Since it undercuts everything, there is nothing to which I can appeal.  Yet the mere generality of the defeater should not make it invincible.

When we realize that the intuitions are wrong, we have some explaining to do.  If undercut evidence can defeat the undercutting evidence, then what’s the problem with trusting the testimony of the car salesman?  And what’s the problem with relying on the wall’s appearing red to give me evidence of my reliability in the context where the wall is lit by red lights?  If our “independence intuitions” don’t explain what’s going on in these cases, why can’t we appeal to the undercut evidence to defeat the undercutting evidence?

Feel free to comment on the above quick and dirty argument without consulting the material below the fold.  For those who want to see the argument against our independence intuitions laid out more carefully, see below.

I’ve just completed a mathematics test.  My belief in each answer is highly justified.  But then I’m told by a reliable source that my coffee was laced with a drug that makes mathematical reasoning highly unreliable for at least four hours (it took me only one hour to complete the test).  It seems that I should now lower my confidence that my answers are correct.  More generally, when our beliefs are highly justified (but not certain), and we acquire some undefeated evidence that they were formed in an unreliable way, it seems that our beliefs are less justified and, perhaps, no longer justified at all.  More formally, we can appeal to the:

Undercutting Evidence Principle (UEP): If my beliefs in domain D are highly justified (but not certain) at t1 and if, at t2, I acquire evidence that my judgments within D were formed using method M and that M is unreliable, then, unless this evidence is defeated, my beliefs in D are less justified at t2 than they were at t1.

Once I’m informed of the drug, I begin to check my answers.  I consider the first question.  I check my calculations and they seem flawless.  I then realize that there are other ways of calculating the answer to the question.  For example, instead of multiplying 4 x 4, I add 4 + 4 + 4 + 4.  I consider five additional ways of calculating the answer, and each time I confirm my original result.  I undertake a similar investigation for each of the other nine answers.  My new calculations confirm my old ones again and again.  (A sketch following the principle below makes this cross-checking concrete.)  Presumably, these confirmations, when taken together, count as evidence for the reliability of my mathematical reasoning.  Can this evidence raise my justification that my answers are correct back (at least partially) to its original level?  Or perhaps even higher than its original level?  Many think it’s intuitively clear that the answer is no.  These “confirmations” appealed solely to mathematical reasoning.  But that is the very sort of reasoning that my evidence suggests is unreliable.  How can further investigation defeat the undercutting evidence by relying on the very sort of reasoning that is undercut?  At first glance at least, that’s just plain foolish.  Hence, the following principle may seem undeniable:

Independence: A body of evidence E* can defeat undercutting evidence that M is unreliable only if E* was, at least in part, not obtained via M.
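To make the cross-checking procedure concrete, here is a minimal sketch (my illustration only; the routes and helper names are made up for the example, and nothing in the argument depends on them): one answer is recomputed by several distinct routes, and the routes all agree.

```python
# A minimal sketch of the cross-checking described above: compute 4 x 4 by
# several distinct routes and compare the results. Illustrative only; the
# routes and function names are made up for the example.

def by_multiplication(a: int, b: int) -> int:
    """Ordinary multiplication."""
    return a * b

def by_repeated_addition(a: int, b: int) -> int:
    """Recompute 4 x 4 as 4 + 4 + 4 + 4."""
    total = 0
    for _ in range(b):
        total += a
    return total

def by_squaring(a: int, b: int) -> int:
    """When the two factors are equal, recompute the product as a square."""
    assert a == b, "this route only applies to equal factors"
    return a ** 2

checks = [by_multiplication(4, 4), by_repeated_addition(4, 4), by_squaring(4, 4)]

# Every route agrees, which feels like confirmation -- yet every route relies
# on the very mathematical reasoning the drug allegedly corrupted, which is
# exactly what Independence says disqualifies these checks as defeater-defeaters.
assert len(set(checks)) == 1
print(f"All {len(checks)} routes agree: 4 x 4 = {checks[0]}")
```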

Unfortunately, things aren’t so simple.  We are faced with a contradiction when we consider two other very plausible claims.  Consider first:

Possibility of Global Undercutting (PGU): It is possible to have beliefs that are highly justified at t1 and then to acquire evidence at t2 that all of your belief-forming methods are unreliable, and so that all of your judgments were formed using unreliable methods.

PGU is a very weak claim.  It says that it is possible to have beliefs that are highly justified and then acquire evidence that all your beliefs are formed via some unreliable method.  Suppose, for example, that I receive reliable testimony that all my belief-forming methods are unreliable.  This testimony is confirmed by the world’s leading experts.  The most reliable news outlets testify that my belief-forming methods are unreliable, an unfortunate result of an insidious science experiment.  I would say that I have now acquired evidence that all my beliefs are formed by some unreliable method.  Indeed, one might think that this is fairly strong evidence that all my belief-forming faculties are unreliable.  I usually confidently believe things on far less impressive testimony.

But when you put UEP and PGU together, it follows that I should reduce confidence in all my beliefs.  Is there any way that I can regain the lost degree of justification?  Not given Independence.  Since Independence disallows appealing to anything undercut, and every way I have of forming beliefs is undercut, there is nothing to which I can appeal.  I’m stuck where I am, at least until I gain even more evidence of my unreliability, at which point my justification might decrease even further.  Hence, invincible undercutting evidence would be possible.  Yet the mere fact that undercutting evidence is completely general should not make that evidence invincible, as it would were Independence true.  In other words:

Anti-Invincibility: A body of undercutting evidence E can’t be invincible, i.e. there must be some possible circumstances in which E is defeated.  Or at the very least, the generality of the undercutting evidence shouldn’t ensure that the defeater is invincible.

Setting aside the possibility that one can acquire new belief-forming methods, UEP, Independence, PGU, and Anti-Invincibility form an inconsistent tetrad.  UEP, Independence, and PGU together guarantee that invincible defeaters are possible, and Anti-Invincibility guarantees that there aren’t any.

I take it that the two least plausible claims are Independence and Anti-Invincibility.  Which one should we give up?  Which is the least plausible?  I don’t have a super strong argument for Anti-Invincibility, but I do want to put a little pressure on Independence.  In the case described above, I have all sorts of testimonial evidence suggesting that all my belief-forming methods are unreliable.  Independence entails that, even if I were to acquire rather convincing evidence that the whole thing is an elaborate gag, this evidence still could not restore me to my original level of justification.  That seems rather counterintuitive, and it reinforces the intuition that the mere generality of undercutting evidence should not make it invincible.  I think, therefore, that we should reject Independence.


Comments


  1. You suggest that we’re faced with a choice of rejecting Independence or Anti-Invincibility. I might be happy to reject one of those, but I’m also pretty suspicious about the possibility of global undercutting. Lots of philosophers have doubted the intelligibility/possibility of questioning all one’s beliefs at once. On a broadly coherentist/Neurathian/Wittgensteinian position, it’s only by taking some beliefs (and probably belief-forming methods) for granted that we can raise doubts about others. If you’re at all sympathetic to that line of thought, then you should be skeptical that any evidence could simultaneously throw all our beliefs/belief-forming methods into doubt.

    That still leaves the question of what’s to be said about the case when a reliable source testifies that all your belief forming methods are unreliable. I’m not sure what to say about the case, but it strikes me as at the very least a tricky case, rather than as a clear example in which you get evidence that all your belief forming methods are unreliable. For one, it seems like it’s only by taking the testifier(s) to be a reliable source of evidence that you can treat the testimony as undercutting the rest of your beliefs. Perhaps this will turn on how we individuate belief-forming methods?

  2. Hi Dan,

    Good questions. It was concerns like these that led me to formulate the worry in the way that I did. I’m not just concerned with undercutting evidence that makes it irrational to believe something; I’m also concerned with undercutting evidence that makes me less justified in believing P (e.g., I should lower my credence from .99, say, to .98). If I have reliable testimony of the sort described that all of my belief-forming methods are unreliable, it is very plausible that I should have at least marginally less confidence in my beliefs than I did before I acquired that testimony. And this is so because it is plausible that reliable testimony of the sort described provides evidence against the reliability of your belief-forming methods. But if that undercutting evidence is invincible, then I can’t recoup that lost, perhaps minute, degree of justification even if I discover evidence that the whole thing is a gag. But that seems strange.
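    Purely to fix ideas, here is a toy Bayesian sketch of those credence dynamics. The numbers are stipulated, and the model flattens the distinction between undercutting and rebutting evidence by treating the testimony as ordinary evidence against the belief, so take it as an illustration rather than an analysis:

```python
# A toy Bayesian sketch (stipulated numbers, illustration only) of the credence
# dynamics described above: undercutting testimony lowers a high credence a
# little, and later evidence that it was all a gag restores most of the loss.

def update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    """Bayes' rule: returns P(H | E) given P(H), P(E | H), and P(E | ~H)."""
    numerator = p_e_if_true * prior
    return numerator / (numerator + p_e_if_false * (1 - prior))

credence = 0.99  # highly justified, but short of certainty

# The unreliability testimony is (stipulated to be) twice as likely if the
# belief is false, so the credence drops slightly: 0.99 -> ~0.98.
credence = update(credence, p_e_if_true=0.5, p_e_if_false=1.0)
print(f"after undercutting testimony: {credence:.3f}")

# Evidence that the testimony was a gag is twice as likely if the belief is
# true. If Independence made the original defeater invincible, this step would
# be disallowed -- which is the counterintuitive result being pressed here.
credence = update(credence, p_e_if_true=1.0, p_e_if_false=0.5)
print(f"after learning it was a gag:   {credence:.3f}")
```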

    I hope this clarification shows two things. First, we can grant that we can raise doubts about some beliefs only by taking others for granted, and the problem still remains, at least as long as ‘taking for granted’ doesn’t require absolute certainty. And it shouldn’t require absolute certainty: I can doubt that there is a purple elephant in the room because I take the reliability of perception for granted, without being certain that perception is reliable. Second, in saying that testimony of the described sort provides evidence of global unreliability, I am making a very weak and plausible claim. This evidence need only be strong enough to lead to a minute loss of justification; it needn’t make holding my beliefs irrational.

    I think, then, I have responded to the main points in your comment. That leaves the worries about method individuation. I do think there are some tricky issues here, but I think they only affect the precision with which the argument can be given. We have some intuitive grasp of the distinction between belief-forming methods that are “natural” and those that are “gerrymandered”. The method of visual perception seems more natural than the method of believing something on the basis of an argument on Tuesdays when you are at work with the window open. We should restrict our focus solely to “natural” methods rather than gerrymandered ones, because it is less clear to me that I have undercutting evidence when I take the unreliable method to be gerrymandered. I don’t think this should introduce any problems, but who knows.

  3. The used car salesman strikes me as a really bad analogy because the example makes us think of a dishonest used car salesman who is trying to cheat us, and obviously someone like that would lie and say he was honest. In fact, you even switch between “reliable” and “honest”.

    If you have evidence that an evil demon is systematically screwing with your mathematical reasoning so that you keep arriving at the same wrong answers by different methods, then, yeah, you shouldn’t trust the checks you are doing. But no drug could have that sort of intelligence, so I would say that in the drug case you are right to think that you are reliable after all.

  4. Hi James,

    The used car case wasn’t intended as an analogy. It was just a case that elicits the intuition in favor of Independence. I think that there is an important disanalogy between that case and some of the others, which explains why the “circular” reasoning goes wrong in the used car case but not in the others. But I’ll leave that for another day. I’m not sure I understand your concerns about the switches between honesty and reliability, but the key points all concern reliability. So mentally “re-write” the case in that way, if that makes it clearer.

    Do you really think it is impossible for a drug to make our mathematical reasoning unreliable? And, more importantly, do you think it is impossible for someone to have strong evidence that they have just ingested such a drug? In any event, the case seems to work just as well if you have evidence that a demon has targeted your mathematical reasoning in the relevant way.

  5. I’m pretty sure that James’ point was that it is impossible for a drug to cause you to make such specific errors no matter how you approach the question. A drug that makes your mathematical reasoning unreliable would lead to random errors, not to the same wrong answer no matter how you approach the problem. Because of this, finding the same answer from different approaches seems to defeat your evidence for the view that your mathematical reasoning is unreliable. To create an undercutting defeater that is compatible with such systematic error, you would likely need intelligent, deliberate deception each time. This matters for creating the problem you have in mind because it means that the generality of a defeater by itself isn’t doing the work. You need the defeater to have specific features that negate standard methods for confirming our beliefs that are internal to the area of belief formation. This means that Independence isn’t true if it is supposed to be read as a universal generalization.

    However, you don’t seem to need a universal version of Independence to create an inconsistency. If there are specific types of undercutting defeaters that can’t be overridden by evidence internal to the method, then there could still be examples of undercutting defeaters that violate Anti-Invincibility. However, in a case where you add in enough features to get around all these internal measures of confirmation, I don’t see why we should accept Anti-Invincibility. Suppose an evil demon let you know that he existed and that he was going to cause you to have erroneous beliefs quite often. In that situation, there doesn’t seem to be anything you could do to fix this problem. If you know that someone is actually directly screwing with your mind, and you can’t do anything to stop them, there’s probably nothing you can do to overcome this sort of defeating evidence. So, if you’re trying to show that you can’t create an invincible defeater just by creating a general defeater, that seems correct. But I don’t see why we should think that there can’t be invincible defeaters.

  6. Hi Matt,

    That’s very helpful. I now see the problem with the drug/mathematics case as it was described, but I don’t think it matters for the main argument of the post. What you (and perhaps James) point out is that one could rely on the coherence of the different mathematical checks to determine that it’s unlikely that I’ve been given the drug. I see the point. But relying on the coherence of mathematical checks doesn’t violate Independence, because, by engaging in coherence-based reasoning, I’m relying, in part, on a type of non-mathematical reasoning. So this case doesn’t seem to motivate Independence very well. But, since the point of the post was to object to Independence, I’m not sure why the main argument would inherit the defects of my motivation for Independence. (I’m not sure I understand the final three sentences of your first paragraph. If you still think the overall argument has a problem, could you clarify?)

    As far as Anti-Invincibility goes, I don’t share the intuitions in your cases. If I acquire evidence E1 that an evil demon is going to cause me to make mistakes quite often, and I acquire evidence that E1 is misleading, then wouldn’t E1 be at least partially defeated? As long as it is possible for some subject to have E1 and acquire evidence that E1 is misleading, E1 doesn’t seem invincible to me in the sense I intended. A similar point applies to the “screwing with your mind” case. Now, the word “know” features prominently in your two cases (the demon let you know…, and you know that someone is screwing with you). Was the fact that you know about the demon’s activity (rather than just having evidence about it) supposed to do some important work that I’m missing?

    I guess I wasn’t individuating methods as finely as you are. I would have thought that checking for coherence among various methods of solving a problem would count as an aspect of mathematical reasoning, not as a separate form of reasoning on its own. In general, I would think that methods that cover any broad range of beliefs are very complex, and that most if not all of them include self-regulating and self-correcting aspects such as checking their outputs for coherence. Since I thought of the methods as complex in this way, I thought that a defeater would have to cut off all of the routes available for self-correction from within the method before we should say that we can’t use the method itself as a basis for rebutting the defeater. A general reason to think one is not good at mathematical reasoning, such as the one you suggested, wouldn’t do the work, because it would leave enough available resources to test whether or not the defeating evidence was sound. I guess if we’re individuating methods more finely, so as not to include those self-correcting aspects, then this wouldn’t work, but I have a hard time seeing why we should think that mathematical reasoning isn’t complex in this way.

    As for the other case, I was trying to think of a simple situation where all the available internal tests were corrupted in the same way. I figured that the simplest way to do so was to give you good evidence that an outside intelligence was directly and deliberately misleading you in all of your intellectual endeavors. I was assuming that in this case your belief was correct: the demon really is messing with your mind, and the one thing he lets you know is that he is doing so. Anti-Invincibility seems to require that we can never be put into an epistemically doomed situation and still become aware of this fact. I can’t see why we should think that isn’t possible, though.

    It’s worth pointing out in this case, though, that the reason the defeater is invincible isn’t merely because of its generality; it’s because it is a sufficiently broad and specific type of defeater to cut off all avenues of approach for fixing the problem.
