Tom Kelly on Evidence

Just got back from a very fun Central Meeting at the fantastic Palmer House. In the session on evidence, Tom talked about the following fact (apparently well-confirmed empirically):

You and I disagree about some claim (his example was about whether capital punishment is a deterrent), but neither of us is strongly certain of our beliefs. We are then presented with mixed evidence: studies that are balanced between those that suggest I’m right and those that suggest you’re right. The result is that our beliefs become polarized: I become more confident that I’m right and you that you are right.

Tom defended a view on this phenomenon, but rather than post what his view is, it might be fun to think about the case itself first. Armchair psychological theorizing about why this would happen is of course welcome, but the real question for epistemologists is whether such polarization is, or can be, rational (or justified).


Comments

  1. I conjecture that, psychologically, such mixed evidence might tend to strengthen our incompatible opinions because of our tendency to attend to evidence that reinforces our prejudices and to discount evidence that tells against them.

    Prima facie, I didn’t see an epistemic problem here. For if the new evidence, taken as a whole, supports my belief that P exactly to the extent that it supports your belief that ~P, then neither of us is epistemically any better off. If we grant, as seems plausible, that justification should be proportional to evidence, then there is no change in this case in the relative strength of the evidence for and against our competing beliefs. Each of us remains as (un)justified as before. And if rationality is, ceteris paribus, a function of evidence and justification, increased confidence would be irrational.

    A bit more reflection suggested a problem, though. Perhaps the fact that (presumably objective) studies are unable to support P any better than ~P should make it rational for us to have less confidence than before in our commitments, despite there being no relative change in the amount and quality of the evidence for and against. For perhaps new, strong evidence against our view gives us a reason to reconsider our commitments even in the face of new confirming evidence that is just as strong. If we assume that we each had as much evidence for our belief before as after the new evidence became available, and that the new evidence is as strong for P as for ~P, the result is puzzling: if confidence should be fixed only by the quality of our evidence, we should not change our confidence, and our justification will remain unchanged. But perhaps such results should actually reduce our confidence in our beliefs. If so, justification is more than just a matter of the amount of evidence available for a belief, and that is a problem for strong evidentialism: the view that epistemic justification is determined only by the amount and quality of one’s evidence. Either way, an increase in confidence in our respective beliefs seems irrational: if belief is justified only by the amount and quality of the available evidence, then justification remains unchanged; and if strong evidence against one’s view makes it rational to reassess, then the new evidence makes it rational to have less confidence in our prior beliefs. The strong evidentialist has a powerful reply here: once we become aware of all of the relevant evidence, the rational thing to do is to suspend judgment, since, ex hypothesi, all of the evidence is counterbalanced. Either way, strengthening our commitments seems irrational.

  2. The polarization seems irrational because evidence for P is evidence against not-P, and evidence for not-P is evidence against P. One way of justifying the polarization is by defending an asymmetry of force between evidence for P and evidence against not-P: although evidence for P is evidence against not-P, its force for P is stronger than its force against not-P. Suppose my evidence for P and your evidence for not-P support P and not-P (respectively) to the same degree; given the asymmetry, I am justified in being more confident about the truth of P (than when I have no evidence for P) even though your evidence for not-P is evidence against P; and same for your being more confident about the truth of not-P.

  3. I don’t know whether this fact is relevant to the epistemological significance of the polarization phenomenon — partly because I am uncertain about the epistemological significance of the phenomenon in the first place — but it may lead someone to think twice before trying to rationalize it: the polarization phenomenon seems to be culturally local.

    According to, e.g., Peng & Nisbett, “Culture, Dialectics, and Reasoning About Contradiction”, American Psychologist, Vol. 54 (9), September 1999, pp. 741-754 (especially experiment 5), East Asian subjects in situations similar to (though, I should note, not exactly the same as) the one under discussion tend to increase their evaluation of the plausibility of the proposition that they did not antecedently agree with. They are anti-polarizing, following what Peng & Nisbett term a “naive dialecticism”.

  4. Here’s a hint about Tom’s view. Isn’t it sometimes rational to look for ways of explaining away studies that conflict with one’s view? And perhaps it can be rational not to look for ways of explaining away studies that conflict with one’s view.

  5. Jon, the last question should probably read “And perhaps it can be rational not to look for ways of explaining away studies that DON’T conflict with one’s view.” That might be made stronger: “Perhaps it can be rational not to look for ways of explaining away studies that support one’s view.”

  6. Imagine they had ALL the facts and wholeheartedly disagreed about only one, just tipping the scale to either side: their disagreement would be total.

    From a statistical perspective this could be somewhat rational. Both proponent and opponent get a bigger slice of the facts; their preferences/distributions move closer to the center and to each other, but their uncertainty/deviation decreases at a higher rate, if I recall correctly.
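    The statistical intuition in this thread can be made concrete with a toy Bayesian sketch. This is my own illustration, not anything from Tom's session: the study strength (0.7) and the `discount` model for explaining away uncongenial studies are arbitrary assumptions. Under impartial conditionalization a symmetric pair of studies cancels exactly and nobody's credence moves; polarization appears only once each agent discounts the study that conflicts with their prior.

    ```python
    def posterior(prior, likelihood_ratios):
        """Conditionalize on a sequence of likelihood ratios P(E|P)/P(E|~P)."""
        odds = prior / (1 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)

    def mixed_update(prior, strength=0.7, discount=1.0):
        """One pro-P study (likelihood ratio s/(1-s)) and one pro-~P study
        (the reciprocal). An agent with discount < 1 raises the LR of the
        study that conflicts with their prior to the power `discount`,
        shrinking it toward 1 -- i.e. partially explaining it away."""
        pro = strength / (1 - strength)
        con = (1 - strength) / strength
        if prior > 0.5:
            lrs = [pro, con ** discount]   # discount the pro-~P study
        else:
            lrs = [pro ** discount, con]   # discount the pro-P study
        return posterior(prior, lrs)

    # Impartial agents: the mixed evidence cancels, beliefs stay put.
    print(mixed_update(0.6))                 # ≈ 0.6
    print(mixed_update(0.4))                 # ≈ 0.4

    # Agents who half-discount uncongenial studies polarize:
    print(mixed_update(0.6, discount=0.5))   # ≈ 0.70, more confident in P
    print(mixed_update(0.4, discount=0.5))   # ≈ 0.30, more confident in ~P
    ```

    The punchline matches comment 1: strengthening one's commitment on symmetric evidence isn't licensed by straight conditionalization; it requires an extra, asymmetric step like the discounting above, which is exactly what then needs an epistemic defense.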
