Testimony and Defeat

Gary Gutting has long been bothered by cases of disagreement between epistemic peers. His view, if I recall correctly, is that your judgment that an intellectual peer disagrees with you is a defeater of your justification.

This viewpoint is clearly too strong, but it raises the interesting question of when testimony generates defeaters. The Gutting view is too strong unless the kind of defeat in question is easily overridden. Suppose you and I are discussing a matter that I take myself to have thought through fairly carefully. Then the mere fact that you disagree won’t have any effect on my justification even if I believe you are my intellectual peer (you are as careful as I try to be, you are as intelligent and thoughtful, you are no more motivated than I am by irrational factors to hold a view, etc.). In fact, even if you are my intellectual superior, mere disagreement won’t affect my level of justification. There’s no reason to overgeneralize from the fact that your views are generally more reliable than mine to the present case. At least, that’s what I’d tell myself until and unless you produce some piece of information relevant to the matter at hand. Such a remark sounds so strong, however, that it threatens to rob testimony of most of its power to undermine present opinion, and that would be a mistake. Hence the question of how intellectual competence and testimony are related, if at all, in explaining how testimony can give one a defeater.

Here’s the view I’m tempted toward; call it the “irrelevance of competence” position. On it, information about the intellectual competence of a disagree-er is never an underminer of present opinion. In cases where it looks like such information is playing a role, it’s really something else doing the defeating. For example, if I think you are smarter than I and have thought more about the matter than I, then your disagreeing with me is a defeater. But what is doing the defeating here is my view that you know more about this issue than I.

Note that in usual cases of testimonial defeat, what matters is not degree of competence. In normal cases, such as perceptual ones, the testifier has a source of information that we lack–perhaps the testifier was present to observe and we weren’t; or perhaps we were present, but we know we don’t see as well as the other person. Or perhaps, as in auto repair cases, we have some experience with the kind of thing in question, but the other person has much more experience. In all these usual cases of testimonial defeat, some minimal level of competence is required for the testimony to defeat, but the question of intellectual parity is simply not an issue.

The strong view says this is always the case. That is, whenever it appears that intellectual parity by a disagree-er is a defeater, there is a better explanation in terms of greater knowledge or experience of the particular subject matter in question, sources of information not shared, and the like. The strong view claims that the issue of intellectual competence is simply too general to function as a defeater on its own, and that once the additional information is included that will allow defeat, epistemic parity is not itself an issue at all (though some minimal level of competence still will be).

How about disagreement by one’s intellectual superiors? I’m inclined to say the same thing here (except where one’s intellectual superior is so superior as to be infallible). As long as the person who disagrees is fallible, it should take more than an assessment of relative competence to undermine present opinion. Of course, such assessment can undermine present opinion, and rationally so, but the issue of whether such information is a defeater is the question of whether such assessment has to undermine present opinion (when no rebutters are present). There is one caveat, however. When people find themselves in a situation where an intellectual superior disagrees, they sometimes come to distrust their own competence–this is a common experience of students (one with fairly significant implications with respect to the ethics of teaching, by the way, but which I won’t go into here). Once such trust is lost, of course, we have an explanation of defeat that does not rely on the intellectual superiority of the one who disagrees.

So, there it is: a position inviting counterexamples.


Comments


  1. Somewhat beyond the question addressed, but how do we account for the quantity of disagreement? Consider that there are five peers and four of them disagree with the fifth. Does this volume of disagreement change the epistemic status? Or, because they are peers, do we treat them all the same and follow the reasoning you outline?

    Clearly it appears that the volume of disagreement among “equals” does offer reasons to lose trust. But if we are equals, should we?

  2. Clark, I think sometimes the quantity of disagreement does make one lose trust, but if one doesn’t lose confidence, I don’t see any irrationality here. My view on defeaters is that if you think you have one, you do; so if you lose confidence because of the disagreement, you’ve got grounds for abandoning or weakening your view. My question, though, is whether this is required and I don’t see how.

  3. Jon,

    I think this is an objection to the ‘irrelevance of competence’ thesis, but I’m not quite sure I have a firm grip on the opposing views. I’m wondering what you would say about cases where it seems, let’s say to us, that settling the issue as to whether p is true would require nothing but some common knowledge and some reasonably straightforward inferences from readily known propositions. Under such conditions, wouldn’t disagreement with someone we know to be a reasonably competent individual (not ignorant of the general sorts of considerations that support our believing p; not logically incompetent) suggest that it is in fact harder to get oneself in a position to know than we initially thought?

    If disagreement between competents can itself constitute evidence for thinking that p is harder to know than we thought initially, wouldn’t there be cases in which the fact of disagreement between competents is an undermining defeater?

  4. Clayton, the thesis is that knowledge that someone equally competent disagrees is not by itself a defeater. Your case adds information to the effect that anyone minimally competent would see the truth. So it’s not a case of the sort needed, but it does raise an interesting question. Even in your case, I’m not sure you have a defeater for the claim in question. Instead, it looks like you have a defeater for the conjunction of what you believe together with the claim that the person who disagrees is minimally competent.

  5. Here’s an issue connected with Jon’s.

    There are many instances in which we believe P even though we know full well that there are many experts who believe the opposite while being our epistemic superiors in just about every relevant way with regard to P and the topics surrounding P.

    By ‘epistemic superior’ I mean that they are generally more informed than we are on the topics involving P, they have more raw intelligence, they have thought and investigated whether P longer and in more depth than we have, they have thought and investigated the topics surrounding P longer and in more depth than we have, they are just as or even more intellectually careful than we are, they are just as or even more intellectually honest than we are, and, crucially, we admit that they have understood and fairly and thoroughly evaluated the same evidence we have–and usually more evidence.

    For instance, for many of your philosophical opinions you can point to large groups of living philosophers who disagree with you–and you’ll *admit* that each of them is your epistemic superior. At least, you’ll admit this if you’re a philosopher of average abilities who has a bit of self-knowledge.

    It appears as though many philosophers are in this kind of situation. It also appears as though continuing to believe P in these circumstances is unreasonable. Are appearances deceiving?

  6. Hi, Bryan, good to see you here! I’m glad you posted, since it was in looking at your webpage that I thought about Gutting’s view, and posted about it.

    By the way, don’t you know that no philosopher thinks of him/herself as average??? I’m joking, but there is empirical data about this for academics in general: in the study I saw, 92% of academics think themselves more competent than their peers! (I remember an experience as a grad student at a regional conference, talking with a complete unknown in epistemology, who claimed he was every bit as good an epistemologist as Chisholm–I was so shocked I didn’t know what to say; the shock prevented me from saying anything, and that’s good, because no matter what I might have said, something bad would have occurred…)

    But let’s assume that we’re not so outrageously self-impressed. Let’s distinguish first-person cases from third-person cases here. In the first-person case, I think the appearance of unreasonability is not normally present. Where it is present is in the third-person case: when you see the unknown disagreeing with Chisholm, Carnap, and other greats in epistemology, you are suspicious. I don’t know that it appears to you that the belief is unreasonable, but it at least appears to be questionable. But here a sense of the high degree of human fallibility in matters philosophic comes into play: this is way beyond rocket science, so it’s far from reasonable to assume that your philosophical heroes are reliable guides to the truth. So the questionableness experienced doesn’t need to trickle down into a conclusion that disagreeing with the greats is irrational.

    I think a sense of human fallibility is the key to such disagreement in other cases as well. If an upstart math grad student claims to have discovered a flaw in a proof by Gauss, we may be suspicious, but we don’t conclude that the math student is irrational in holding that view.

  7. Adam Elga has a nice piece in PhilStudies on people overrating themselves, professors included. I do not know if it has come out yet.

    I agree that it is very reasonable to insist that our philosophical heroes are not reliable guides to the truth. But since I know that they are my epistemic superiors, and even they, in all their genius, are unreliable, why do I have any confidence in MY view?

    Here is what I say to myself: they are unreliable; but they are my epistemic superiors. You would think that I would then infer “So I should not have any confidence in my philosophical inclinations, as I am worse than they are”.

    So I do not see how the recognition of unreliability helps the reasonableness of my belief in P.

    Matters might be different if I never went through any of this reasoning, but unfortunately, I have!

  8. Bryan, you’re right, and I don’t have an answer I’m comfortable with. But how about this: side with Lehrer that self-trust is essential to any rationality whatsoever. So you have to trust your own capacities in a way that you don’t have to trust anyone else’s. So after you go through the reasoning in question, you should ask yourself if the superiority of the other person rationally undermines your self-trust. It won’t do it in general, since then you’d lose rationality for all your beliefs. And if the reasoning does undermine your self-trust in the particular case, then you lose rationality there. The question, though, is whether you need in every case to lose your self-trust, and here I think the answer is “no”. You know someone superior disagrees; you reflect on whether that makes you think that you are less reliable than you had thought. If you say “yes”, that implies that you’ve inaccurately assessed your competence on this issue in the past, but it is not hard to see why you could rationally resist that conclusion: you consider your self-assessment and think it was pretty thorough and accurate. So you conclude that you have the same reasons to trust your abilities in this case as you thought you had prior to learning of the disagreement.

  9. I think this is a hard issue. But I think there are several factors that make my philosophical opinions reasonable even when I realize that they are denied by my epistemic superiors.

    First, and this is related to what you said, I simply have to put some trust in my abilities. Perhaps this is similar to the thing in ethics where we say things like “I am going to save MY kids from the burning building first, because they are part of my life project, etc.” I am ethically okay in saving my kids first; so maybe I am epistemically okay for having confidence in my abilities.

    Second, I do not really have many philosophical BELIEFS. I have beliefs that are heavily conditionalized, and I have beliefs about how good certain arguments are, but I have far fewer controversial beliefs than it might at first appear. But I definitely have some philosophical beliefs that I know go against the opinions of my epistemic superiors, so the problem has not gone away entirely.

    Third, my philosophical beliefs are held with a degree of confidence much less than my ordinary beliefs. Presumably, that means I need less warrant for my holding those beliefs to be reasonable.

    Still, I do not think the problem is solved. The first point could use a lot of elaboration! There are lots of ways to resist it.

  10. Bryan, I agree there’s an issue that deserves full exploration here. One other factor beyond the good ones you mention is that the inference from our superiors disagreeing to the questionability of our beliefs is akin to inferring from a statistical generalization to a specific case, and such inferences are generally weak.

  11. Right. But…

    Say I disagree with Chalmers on some specific issue. I have talked with David on several occasions and I know full well he can kick my philosophical ass, especially when it comes to issues he has worked on for years. If I disagree with him on some consciousness claim, say, how am I reasonable? Am I supposed to think “Here, finally! I have seen the light and David has not, even though on just about every other consciousness issue he can kick my ass”?

    The problem is that I know that when it comes to the issues surrounding P (the claim we disagree on), he can kick my ass. Does this cause problems with the statistical point you made? Maybe I have been too loose with ‘issue’.

    By the way, I don’t really disagree with David on consciousness; the topic is too hard for me to have any interesting beliefs about it!

  12. I think the inference is still weak in the Chalmers case, but it supports a conclusion that you’ve got independent evidence for. The independent evidence is what’s behind your last remark that you don’t take yourself to be competent at these issues. I doubt you’d have the same reaction when it’s an area you feel perfectly competent in. Of course, the issue isn’t whether such a reaction is rational; the hard question is to explain why. I think, though, that you may be more tempted than I to think being dismissive in this way is rational?

    Aussies in the movies, Aussies in philosophy; they’re everywhere!

  13. I do not know what to think about an “area I am perfectly competent in”. Even in the areas I feel most competent in, I recognize that I have plenty of epistemic superiors in that very area.

    I think a big factor that rescues my reasonableness with regard to my belief in P (when I recognize that some of the people who disagree with me are my epistemic superiors with regard to the issues surrounding P), is that I simply do not think about the evident tension. Instead, I am thinking about P. I am disposed to admit that those superiors exist and are my superiors. But perhaps the fact that we do not think this through all the way–fully realizing that someone superior in all the ways I listed in comment 5 disagrees with me–helps save my epistemic reasonableness. This is like believing P, believing P entails Q, and yet not believing or even disbelieving Q. Such a situation is epistemically okay if I have yet to put my beliefs together in an appropriate way. Same for the “recognized epistemic superiors” problem: if I have yet to put all these points together, then my belief retains its rationality.

    But when I DO put it all together, then it is harder to get me off the hook.

    By the way, as you probably know, Richard Feldman and Thomas Kelly have works in progress on these issues. However, they focus more on epistemic peers, not superiors. I find the problem of recognized epistemic superiors harder for several reasons.

  14. I don’t mean “perfectly” by ‘perfectly’! I mean “competent in a way that leaves me without reservations about whether or not I’m competent.”

    I don’t see how not thinking about the tension helps, if there’s a problem here. Defeaters in general don’t require that we think about them or attend to them in order for them to defeat. So if there’s a problem, it doesn’t go away in this way.

    Here’s an idea. Maybe when you’ve thought through the problem of epistemic superiors, you have a reason to find out why they think as they do. Given my views on epistemic justification, that implies that your justification is not adequate for knowledge. But it doesn’t by itself imply that you lack ordinary justification for your belief. It would just make it more like your justification for lottery sentences.

  15. I would have thought that when I have fully realized that I have superiors who disagree with me, I gain a reason to think I have made a mistake in coming to believe P.

    I guess I was thinking of the following line of reasoning. I believe P. In order for me to know P I have to be able to epistemically neutralize (whatever that comes to) at least some of the Qs inconsistent with P. (Maybe not BIV possibilities, but some of the not-Ps.) Now that I have gone through the reasoning about my epistemic superiors, I have the following not-P possibility staring me in the face: my epistemic superiors–and here I insert what that means–are right in thinking not P. So that not-P possibility is relevant, contextually salient, etc., even though an hour ago it wasn’t. And now I do not know P because I cannot neutralize that possibility. Something like that.

  16. Bryan, I suppose to neutralize the disagreement by a superior or a peer, one will have to advert to some explanation about why they are mistaken. But I don’t think the explanation has to be specific in any way. For example, I look at the issue, reason about my superiors, and wonder why they think as they do. One response undermines my justification, but another, equally rational response, is to conclude that their general superiority is not displayed in the present case. I may continue to wonder why that’s the case, and may not come to a conclusion. But that won’t undermine the adequacy of this response for neutralizing the disagreement.

    This is complicated by the fact that the discipline we’re probably both thinking of is philosophy, and my view here is skeptical about positive knowledge in philosophy (as opposed to negative knowledge, such as that a certain position is subject to counterexample or a certain argument is invalid or unsound). But when I think about the parts of philosophy where I’d be willing to claim knowledge of things, I don’t find disagreement by superiors effective in undermining rationality. They disagree; I look at the counterexample again, and wonder how they could miss something so obvious.

  17. My natural sympathies are with the view that Bryan at various times in the comments here has suggested (if not fully endorsed), that a disagreement with one’s epistemic superiors about some issue throws one’s opinion on that issue into justificatory jeopardy (or, at least, prevents the belief from counting as knowledge). But here’s an argument in favor of your view, Jon. See what you think.

    Suppose the following is true: I believe that X is my epistemic superior, X has an opinion concerning subject matter S that is inconsistent with my own, and I have found no reason to believe that X’s opinion about S is mistaken. Because I take myself to be worthy of my trust concerning the opinions I form, I trust my own opinion about S, so I come to believe that X is mistaken about S. At the same time, however, my self-trust extends to my opinions that X is my epistemic superior, X’s opinion on S conflicts with my own, and there is no apparent error in the way X arrived at and sustains her opinion. Those opinions push me in the direction of believing that X is not mistaken, after all, in her own belief about S. I trust those opinions, too, so what am I to do? What I have here is an inconsistency in my own beliefs, or, at least, the makings of an inconsistency–call it a tension, if you will, in what I believe. I believe that my own opinion about S is correct, while at the same time I hold opinions that push me in the direction of admitting that X’s belief about S, though conflicting with my own opinion about S, is correct.

    Now, one response to inconsistency or tension in one’s own beliefs is to reject one of the offending beliefs. But in cases in which it’s not too terribly easy to figure out which opinion is the misleading one, the most rational response is to learn to love the inconsistency bomb, embracing the opinions that are in tension until information that would help resolve the tension is forthcoming. I take it that my disagreement with X about S is precisely a case like this. If so, then it looks as if I am justified in embracing my belief about S, even at the same time I justifiably acknowledge that X is my epistemic superior who has a conflicting opinion that I have no reason to think is mistaken. I will continue to be justified in holding and trusting those opinions of mine, despite their tension, until presented with information that leads me to resolve the tension between them.


  18. Chad, your approach is the one I was thinking of when I said that maybe the tension blocks the possibility of knowledge even if it doesn’t undermine justification. I believe epistemic justification requires something stronger than ordinary justification, since the kind of justification necessary for knowledge legitimates the experience of closure with regard to any need for further inquiry. Your resolution leaves the person rational or justified, perhaps, but in the ordinary senses of those terms, not in the sense needed for knowledge.

    My hesitation about this resolution can perhaps be put most easily by appeal to a theory of knowledge I don’t accept: proper functionalism. Here’s the line. My intellectual superiors disagree with me about a topic, but my belief is the product of a properly functioning noetic system. When I learn that they disagree, I consider the question of whether my noetic system is malfunctioning here, or whether I should conclude something negative about my superiors. Noting their disagreement may give me grounds to wonder whether my cognitive system is malfunctioning, so I reflect on the matter. There is no sign of malfunction, and there is a long list of actual cases of which I know where superiors have been wrong and inferiors right (even though this is still a minority of the total cases). So what should I conclude? In particular, if I conclude that there is something epistemically inadequate about the total situation of my superior in this case, would that itself be a display of some cognitive dysfunction on my part? I think the answer to that is “clearly not,” and finding such dysfunction on my part would be required on the theory in question to block the case from being a candidate for knowledge.

    Since I don’t accept this account of knowledge, however, I can’t view this argument as conclusive. But it does display my inclinations about the case in a simple way.

    Jon, what you say sounds sensible to me. I think it’s plausible to see your response to my argument as a description of one way of dissolving the threat of inconsistency I pointed to. Suppose I have an opinion that is in conflict with that of an epistemic superior. If I re-examine the way I came to have the belief (and/or the way I sustain that belief), and I can find nothing defective about it, then I am justified in continuing to hold the belief, despite my epistemic superior’s conflicting opinion. The reason the inconsistency threat is dissipated here is that after reflection, I now have a positive reason to think my superior’s opinion is misguided–that reason being that inspection reveals me to be functioning properly in arriving at my own opinion. So I am no longer pushed in the direction of thinking that my superior’s opinion might just be correct, as I was before I had some positive reason to think his opinion was mistaken.

    All of that sounds plausible to me. Even if that’s the right way to view the matter, though, I think there are two important circumstances in which my opinion in a case of disagreement with an epistemic superior is not justified:

    First, if I do not re-examine the way I came to / sustain my opinion when I discover that it conflicts with the opinion of an epistemic superior, then it seems clear that I am no longer justified in holding that opinion. Suppose, for example, that I am taking a physics class, and I’ve just concluded an involved calculation on a homework assignment. If I discover that my instructor has arrived at an answer that conflicts with my own, I may still be justified in sticking to my answer, but only if I re-examine the means I used to arrive at that answer and find them unobjectionable. If I simply say, “This just must be one of those instances in which my instructor made a mistake,” without double-checking my own work, then I hardly seem justified in sticking to my own result.

    Secondly, if I happen to examine the means by which my epistemic superior came to or sustains his opinion and also can find nothing faulty about those means, then my own opinion is likewise unjustified. In a case like this, inconsistency in one’s own opinions once again threatens. I would contend that the inconsistency here is more devastating than the inconsistency I pointed to in my last comment. In the situation here, I not only have no reason to think that my superior is mistaken, I also have some positive reason to think his view is correct (that reason being that I can find nothing wrong about the manner in which he formed and sustains his belief). Now that I regard the ways he formed / sustains his belief as non-deficient, perhaps the best thing to do, epistemically speaking, is to suspend my problematic belief until I can discern where the inconsistency-inducing error lies.

  20. Chad,

    I am glad you brought up the physics example, as I had not been thinking of anything other than philosophy and politics. I studied physics before philosophy. You wrote:

    “Suppose I have an opinion that is in conflict with that of an epistemic superior. If I re-examine the way I came to have the belief (and/or the way I sustain that belief), and I can find nothing defective about it, then I am justified in continuing to hold the belief, despite my epistemic superior’s conflicting opinion.”

    But this looks wrong in the physics case. Speaking from experience, when I did a homework problem and discovered that the professor had given us a different answer, I would be very unreasonable in concluding that he had made an error – even if I went back over my work and could find no mistake. By far the rational thing to do was to conclude that not only had I made an error but I was confused enough that I could not even smell out my error.

    So I think that what you wrote does not apply to cases in which (a) there is a huge discrepancy between individuals (professor and student), and perhaps (b) the belief in question is well within our power to decide the truth of. So perhaps what you wrote is true for philosophy but not much of science.

    I realize that we typically humor the undergraduate who thinks Hume, Descartes, Kant, etc. were a bunch of idiots. The overwhelming self-confidence can be a good thing in some respects. Maybe those students are justified in their belief. I doubt it, but in any case that kind of excuse would be irrational for most physics students, and would be irrational for adults.

  21. Chad and Bryan,

    It is important to control for interfering factors when assessing the issue of disagreement by superiors. What we need to do is construct cases so that nothing beyond the superiority of the disagree-er is playing a role in defeating our justification, and the physics cases don’t do that. They play off of the lack of confidence that is appropriate for students to have. The fact that the professor is a superior certainly is a part of the explanation for why one shouldn’t conclude that the professor is mistaken. But that won’t tell us whether contrary testimony by a superior, in itself, constitutes a defeater.

    To isolate examples from interfering factors, my suggestion is that we have to imagine cases where we take ourselves to be competent and we take our investigation to have been thorough, fair, and honest. Since competence comes in degrees, I don’t claim that this list is sufficient to rule out interfering factors. Maybe we need to construct the cases so that we take ourselves to be fully worthy of trust in the area in question. But however we do this, the situation of a student with a professor is not going to satisfy the need for controlling for interfering factors.

  22. Chad, a couple of worries about the sufficient conditions you cite for defeat by the word of a superior. First, we need to say something about the quality of one’s own sources of information and those of a superior. I’m assuming, in line with my last comment, that we are trying to isolate the word of the superior from other interfering factors, so I’m assuming that you fully trust yourself in the domain in question and that you accurately view your inquiry into the matter as fair, honest, thorough, and complete.

    In such a case, I’m not sure that re-examining one’s source is necessary. It may be important to draw some conclusion about the disagreement, but something as nebulous as “gosh, he’s so smart, I wonder what exactly his mistake is in this case,” might be enough for preserving justification. If that’s right, then disagreement by a superior is a defeater, taken on its own; it’s just not that hard a defeater to undermine (in some cases, at least), nor is it different in kind from disagreement by inferiors and peers (though it may be harder to undermine than these others).

    Second, even if you examine the quality of the inquiry by your superior, what you should conclude will depend on other matters as well, such as whether one thinks of intellectual blindspots as a fault in inquiry. At least, positing such a blindspot is not the same as finding something faulty in the inquiry itself, such as a weak statistical inference or a deductive mistake or a biased sample. Full coherence is not restored to a system of beliefs until and unless one finds a satisfactory explanation of the blindspot, I would think, but full coherence isn’t required for justification.

  23. This may constitute a tangential departure from what you folks are interested in on this topic, but as I read through your posts on the relevance of mere disagreement among intellectual peers/superiors for justification, it reminds me of an issue that fascinates me in what I consider social epistemology. Scientific breakthroughs are often made by brilliant young researchers who defy the conventional wisdom of their field to develop strikingly new theories (think of Kuhnian revolutions vs. “normal science”). This can pit these young researchers against most of the respected people in their field, often including the researcher’s own former teachers. Furthermore, for every brilliant scientific revolutionary, there are at least ten delusional crackpots who *think* they are brilliant revolutionaries. Given all this, what is the justificatory situation of the young researcher? We don’t need to assume that the researcher believes his new theory–scientists often don’t. But what about his belief that his theory is better than those of his intellectual peers and superiors (including former teachers and mentors)? Can such a person ever be justified in such a belief? Why shouldn’t he conclude that he is one of the delusional crackpots?

    I’m not sure how many “interfering factors” are included in this description. It probably depends upon how we flesh out the details of the scenario. But it has always struck me that such scientific radicals must be a little megalomaniacal to think they are right when the entire scientific “establishment” is wrong. And that might imply that they are unjustified (even in the ordinary sense) in believing this. Perhaps this bears on the issue touched upon briefly regarding whether the *amount* of intellectual peerage you have arrayed against you matters.

    Just a thought.

  24. Hi Wayne, these cases are very interesting. When we look at some young person filled with such hubris, we are inclined to assess the likelihood of truth to be quite low. But I’m inclined to think they are nonetheless justified, since justification is a first-person rather than third-person phenomenon.

    To get the cases to fit the topic, we have to assume that the young scientist thinks of his/her elders as intellectual superiors… and we know enough about the phenomenon of intellectual hubris to be wary here!

  25. Jon and Bryan,

    I think my view of the physics student example was colored by the fact that I, too, studied physics before going on to philosophy! So I had in mind a more advanced student not as subject to the lack of confidence and relative inexperience that play a role in what we say about the novice, non-physics-major student’s calculated answer. Suppose, then, if even an advanced undergraduate physics major won’t do, we think of a relatively advanced physics grad student, postdoc, or junior physics faculty member (henceforth, “Junior”) having a disagreement about an involved calculation with a physicist (henceforth, “Senior”) more established and respected in the field than Junior. Let’s say that the well-seasoned physicist has many of the qualities Bryan earlier noted an epistemic superior as having– he has more raw intelligence than Junior, he has thought about and investigated the area of calculation longer and in more depth than Junior has, he is just as or even more intellectually careful and honest than Junior is, and he has understood and fairly and thoroughly evaluated the same evidence Junior has.

    Let’s also assume that when it comes to calculations of the kind in question, both physicists, despite Senior’s epistemic superiority to Junior, are justified in the confidence they place in their own calculated answers– they both have all the information necessary to perform the calculation in question, they have accumulated lots of experience performing similar calculations, they are both skilled at performing such calculations, and their general breadth of experience gives them a good sense of the errors to watch out for and the variables to control for in performing such calculations. So each justifiably takes himself to be skilled, fair, thorough, and trustworthy in coming up with the answer he does. I think the conclusion I reached in my last comment still obtains here– Junior is not justified in maintaining his calculated answer in the face of disagreement with the seasoned vet until Junior re-examines his work. I take it (and this is partly in response to Jon) that Junior’s simply thinking something like, “Hmm, Senior must not be thinking straight today” is insufficient on its own to override the defeater posed by Senior’s conflicting calculation. Some sort of double-checking of his own work, or discovery of a flaw in Senior’s work (but not necessarily both the double-checking and the flaw-discovering), is needed. But once the double-checking/flaw-discovering is done, and Junior either finds no flaw in his own work or discovers some flaw in Senior’s work, I contend that Junior is justified in maintaining his answer, until such time that some flaw in his calculation (should there be one) is discovered.

    Did I correct for all the interfering factors you had in mind? I hope the better-filled-in example is more compelling than my previous (admittedly-inadequately-developed) example as an example of a disagreement where competence alone is at issue.

    Jon, I’m not sure whose intellectual blindspots you are referring to in your last comment. Are you saying (in terms of my example here) that Junior might have blindspots that prevent him from adequately assessing Senior’s work, so that his finding Senior’s work inadequate would be insufficient to override the defeater of Senior’s conflicting calculation? Or are you saying that Junior’s simply chalking up Senior’s conflicting calculation to some blindspot Senior has is sufficient to override the defeater of Senior’s conflicting calculation, even if Junior isn’t able to give an explanation of Senior’s blindspot?

    Let me take in turn each interpretation of your comment. Let’s start with the first reading. Suppose that finding Senior’s work inadequate is one of the two ways to override the defeater of Senior’s conflicting calculation (the other way being, as I mentioned above, to re-examine one’s own calculation and find it adequate). Obviously, Junior needs to be as careful as he can in assessing Senior’s work, and if he thinks himself susceptible to certain blindspots in that assessment (perhaps he knows himself to have a track record of manifesting certain biases or overlooking certain factors in his inquiry), he will want to do the best he can to correct for those blindspots (by, for instance, being more on the lookout for the factors he’s overlooked in the past). But assuming that he is as careful as the circumstances allow in his examination of Senior’s work, then his finding that work inadequate in some particular, specified way should suffice to override the defeater of Senior’s conflicting answer.

    I take it, though, that the second reading of your comment is probably the one you had in mind. If we understand “an explanation of Senior’s blindspot” to be an explanation of why Senior came to have that blindspot, you’re right to say that an explanation of Senior’s blindspot is not required to override the defeater of Senior’s conflicting calculation. But I think Junior needs at least to have an explanation of what that blindspot actually is– what aspect of the calculation in particular Senior is not paying sufficient attention to. If Junior can’t even say what the blindspot is, then when he says that Senior has a blindspot, all he means to be claiming is that Senior wasn’t being sufficiently attentive to some (unspecified) aspect or other. And that, like merely claiming that Senior must have made a mistake somewhere or other, isn’t sufficient to override the defeater of Senior’s conflicting answer.

    Sorry this post is so long. I’ll be quiet now.

  26. Chad, I like the physics examples; it’s easy to get misled by the “soft” nature of some disciplines and topics, and physics is a good antidote.

    So here’s a “soft” discipline example. Suppose David Lewis has offered a principle or argument which you think there are counterexamples to. Since you respect him in the way we are discussing, you think carefully before bringing your case to him. So you fit the description of a competent and thorough inquirer. When you give him the example, he is unpersuaded. You try and try to see why, but you can’t understand what the problem is.

    You can respond in two ways. One is like Wayne’s brash young turk above, who concludes that Lewis has a blindspot here, even though you can’t say exactly what it is. The other is to acquiesce, and conclude that you might be making a mistake and hence are not entitled to your belief.

    I think both responses can be rational, and because they can, disagreement by a superior doesn’t have to constitute a defeater (in these circumstances).

  27. I like the David Lewis example. You go to see Lewis with your argument that P. He listens very attentively, asks intelligent questions, but then says that it is not a very good argument. But he cannot stay and explain why because he has to catch a train. We can suppose the argument concerns one of the areas in which he is clearly an expert.

    It seems pretty intuitive to me that after experiencing all this you have to alter your epistemic situation vis-a-vis P in SOME way. Perhaps you can continue to justifiably and reasonably and rationally believe P. But something must change.

    Perhaps it is degree of confidence in P. If you do not lower your degree of confidence in P, then you are being unreasonable. But this holds only if the argument you presented him with is your sole reason for believing P.

    Perhaps you must do this: lower your degree of confidence in the claim that your argument for P is a good one.

    Or am I all wet?

  28. Bryan, you may be right on this one. Think about it this way, though. Suppose you had a full conversation with Lewis about a counterexample you propose. You talk till both of you are convinced no further conversation will be useful. You think the counterexample works, and he doesn’t, and you can’t see why he’s missing it or how anything he says undermines the counterexample. I’m inclined to think you’ll attribute, or could reasonably attribute, a blindspot to an incredible philosopher. You might also come to doubt your assessment, and that could be rational too. But I don’t see that you’re required to doubt your assessment (i.e., lower your degree of confidence).

  29. Hi, Jon and Bryan.

    If Bryan is “all wet,” then I think I’m in the pool with him! It sounds like a reasonable suggestion to me that in light of Lewis’s dashing-off-to-the-train-station disagreement, one must lower one’s confidence in the claim that one’s argument for P is a good one.

    Still, I think the case of “persisting disagreement” that Jon raises reveals some pretty interesting insights about the threat of defeat that disagreement poses. I’m inclined to think that when Lewis reveals his reasons for thinking the argument for P is a bad one, and when disagreement persists even after extended discussion, then, as Jon says, you don’t have to lower your confidence in your own argument, after all. The argument has “stood up to the challenge”–it has not been shown to be obviously false, and the disagreement that remains has been pushed back to a disagreement about fundamental commitments you and Lewis hold–commitments to, for instance, different (1) intuitions, (2) foundational principles, (3) inferential rules, (4) weightings of various reasons and arguments, or (5) weightings of various epistemic goals.

    Now, of course, more could be said on behalf of each of these fundamental commitments. But given that there are reasonable people on either side of the “commitment divide” that exists between you and Lewis in the example, and given that it’s unclear what the truth of the matter is about which commitments are the most appropriate ones (and even about what the criteria for “appropriate commitment” are), then disagreement at the end of the day does not look to render either party less justified in his philosophical view. Here, I think, is where the differences between a “soft” discipline like philosophy and a “hard” discipline like physics become apparent. We tend to be more optimistic that a dispute about a calculation in physics is resolvable if we reason hard enough about it. So when there is disagreement about some calculation in an area like physics, we are hesitant to reinstate a challenged belief’s justification until we can show in a reasonably definitive way how the challenger is mistaken or until we can show in a reasonably definitive way how the believer properly arrived at his belief. In philosophy, though, it’s enough to override the potential defeater of the disagreeing opinion of an epistemic peer/superior that we push the disagreement back to a disagreement about fundamental commitments that are not obviously false or otherwise inappropriate (where “inappropriate” could perhaps get a contextual treatment?).

    All of this is, I think, consistent with Bryan’s assessment of the original “disagreement with Lewis” example and with my earlier claims about the threat disagreement with epistemic superiors poses to one’s justification. In such cases of disagreement, some effort needs to be made to defend one’s own opinion in order to override the defeater of disagreement and to retain the pre-disagreement level of confidence in the belief. The important insight Jon’s case of persistent disagreement reveals is that the disagreement need not always be resolved in favor of one party or the other in order for the parties in the disagreement to go on being justified in their conflicting opinions.

  30. Chad,

    Lots of good insights in your posts!

    Another issue here regards what expertise in a soft discipline amounts to. In philosophy people on occasion become famous and well-respected “merely” because they are able to generate new ideas that seem exciting and worth taking up. This does not mean that they are any good at evaluating particular arguments, keeping track of nitty-gritty distinctions, and the like. Other people are counterexample kings and queens. I suppose that with some work one could find several interestingly different kinds of philosophical expertise that have little to do with philosophical area. And of course disagreements with philosophical experts might have very different epistemic consequences depending on the kind of disagreement and the kind of expert.

    In the Lewis case that Jon describes, when I imagine what I would actually feel like in that situation, I think I would consider myself epistemically naughty if I did not lower my degree of belief. Lewis and I might trace the disagreement to a difference of opinion regarding some really foundational issue, but that does not leave me off the hook. The fact would remain that my epistemic superior disagrees with me even after seeing my entire line of thought for my belief in P. And in this case he is probably a superior with regard to foundational issues, which are often much harder than “surfacy” issues. I need not say that Lewis is right and I am wrong; and I might think that Lewis himself should lower his degree of belief in not-P. But I would consider myself epistemically guilty if I went away from my long discussion with Lewis with the same degree of belief in P that I started with.

    But what if Lewis says 2 + 2 = 58? What will I do then? I will probably think he either has a blindspot or has gone insane. What if he explicitly endorses some really crude form of behaviorism and offers only a really crude reason for it? (Something from a really easy intro to phil book that is almost too simple even for first-year students.) Of course I would suspect that he really has some sophisticated theory and argument up his sleeve and he is not expressing himself well. But what if extended conversation (orally, or in journals) showed that he really was endorsing the crude theory with the crude argument? I think the theory is false and the argument poor; he disagrees.

    In this case, I am with Jon: I have to attribute a blind spot or other error to Lewis. And I would not lower my degree of belief one bit.

    Ah, but then Kripke, Williamson, etc all say that Lewis has got it exactly right. Now what!? Now I conclude that I am a brain in a vat and this is all a dream.

  31. Bryan and Chad, There are three positions I see here. First, there is the position that disagreement with superiors never needs to undermine rationality. Second, there is the position that it always does so (at least once one has noticed the fact and considered the reasoning that Bryan has suggested). Third, there is an intermediate position that in some cases the disagreement undermines rationality, though not always.

    I think both of you hold an intermediate position. Chad holds that disagreement (by a superior) fully explored need not undermine rationality, and Bryan holds that the more obvious the claim is on which the disagreement occurs, the less threat disagreement by a superior is to rationality.

    I guess I’m still more of a Quine/Duhem guy on this one: there’s a great deal of optionality to what changes to a belief system need to be made in the face of anomalous experiences. What we all agree on is that disagreement by a superior (and maybe a peer as well) is an anomalous experience. I’m inclined, I think, toward Bryan’s position here, that the more obvious the claim is, the less inclined a person needs to be to decrease confidence levels in the face of disagreement. So, even in the “train leaving the station” version of the Lewis case, I think I’d go for explaining away Lewis’ disagreement as Bryan suggests if the claim in question is really obvious (as it is likely to be when you have a carefully worked out counterexample to a universal principle).

    I still feel the pull of something Chad is pushing for, though. When I work through a proof, put it on the board, and then my logician colleague shows up and questions it, I experience something like a momentary suspension of belief. Maybe that’s not the right description of being startled by an anomalous experience, but that’s one thing that it might be. But then I quickly return to the state of belief when I recall the effort and understanding experienced at completing the proof. From which I conclude, “No, the proof is right; my logician colleague is just missing something here.” But maybe this is too much like Moore deducing he’s not a brain-in-a-vat after proving that he has hands…

    By the way, if you give contrastive renditions of what you have evidence for, there may be a resolution. I have good enough evidence for concluding that I have a proof rather than an invalid imposter, but maybe I don’t have good enough evidence for concluding that I have a proof rather than my really smart logician colleague being wrong when he doubts the proof. The problem for me in going this way is that my beliefs aren’t contrastive, and I want a theory of evidence that informs me what to believe…
