Truth-insensitive epistemology: radical or commonsense?

Many philosophers endorse a truth-insensitivity hypothesis: certain core, philosophically important evaluative properties of a belief are insensitive to whether it is true. For example, if two possible agents believe the same proposition for the same reason, then either both are justified or neither is. This does not change if it turns out that only one of the two agents has a true belief. Epitomizing this line of thought are thought experiments about radically deceived “brains in vats.”

Proponents claim that the truth-insensitivity hypothesis is extremely intuitive and appealing pre-theoretically — we have an “overpowering inclination” to think that it’s true (Richard Fumerton). To deny the truth-insensitivity hypothesis has been labelled “extraordinary” and “dissident” (Earl Conee). However, other philosophers claim that exactly the opposite is true: the truth-insensitivity hypothesis itself is counterintuitive and violates commonsense. The appeal of truth-insensitive epistemology, they claim, is limited to narrow circles within “the professional philosophical community” (Jonathan Sutton).

In a paper forthcoming in Philosophy and Phenomenological Research, I investigated which side of this debate is correct. Proponents of the truth-insensitivity hypothesis illustrate their view’s plausibility with pairs of thought experiments. These pairs include mundane cases and fanciful “brain-in-a-vat” scenarios. I tested both sorts of cases.

Across three experiments (N = 1262), the results were absolutely clear:

Ordinary evaluations of belief were deeply truth-sensitive. There was a consistent pattern whereby belief and evidence were judged much more favorably when the proposition in question was true rather than false. This was true across multiple narrative contexts and for evaluations elicited with a wide range of normative vocabulary (e.g. justification, evidence, rationality, reasonableness, responsibility, and what an agent should believe). It was true when people judged the pairs of cases separately (between-subjects) and when they judged the cases simultaneously (within-subjects). The basic finding appears to be very robust.

I find it fascinating that commonsense cuts so strongly against truth-insensitivity. I also find it somewhat disconcerting in light of the way introductory texts tend to treat the matter. For instance, how many epistemology students in recent decades have had their own judgments marginalized or contradicted by what “we” find intuitively compelling, and to what effect? That seems like a question researchers in the field should take seriously.

(x-posted at the x-blog)


Comments


  1. This is a highly interesting study with a surprising result. I have to quibble a bit with the #3 test question. You rightly noted that the interesting question regarding common sense is the comparative one, so experiment three seems like the most relevant experiment. But your question was, in my eyes, quite problematic: “Who has better evidence for thinking that he’s looking at a fox?” I am a truth-insensitive epistemologist, but even I think that Harvey has “better evidence” in a certain sense. After all, his evidence is truth-tracking, so it is superior. But I do not think that the fact that Harvey’s evidence is better has any bearing on justification, even though justification is determined by the evidence, and it is this that I take to be truth-insensitivity.

    • Hi Anon Grad,

      Thanks for your comment! A couple quick points in reply:

      I didn’t actually note — nor do I think — that “the” interesting question about commonsense is comparative. But I agree that it is *an* interesting question.

      As I understand things, you’re saying that you accept the following three things: (1) evidence is truth-sensitive, and (2) evidence determines justification, but (3) justification is not truth-sensitive. Could you please elaborate on that a bit? (I could probably come up with readings of “determines” on which (1)-(3) express a consistent view. But instead of speculating, I figured I’d just ask.)

      In this context, it’s worth keeping in mind that there was no meaningful difference among the many different ways of probing for belief and evidence evaluations in Experiment 1. So I suspect that phrasing the test question in Experiment 3 in terms of “justification” won’t make a difference. At the same time, Experiment 2 showed that blame-judgments can be truth-insensitive. One possibility is that people are more likely to hear a “justification”-question as (implying) a “blame”-question. To the extent that people do that, perhaps you’re on to something.

      • Hi John,

        Thanks for your reply. All I mean to say is this. Suppose that a belief is justified just in case it is proportioned properly to the evidence. Then it seems like justification will track the truth in the same way the evidence does. But my point is that truth-tracking might be a good-making property of the evidence but not of the justification. Thinking about the respective purposes of evidence and of justification might help us see why. For it seems the purpose of evidence might be to point us to the truth, whereas the purpose of justification involves standing in the right relation to the evidence regardless of the truth. Truth-tracking helps evidence achieve its purpose, but truth-tracking does not help justification achieve its purpose. So truth-tracking can make evidence good while not making justification good.

        Well, I think there are several ways one could explain the distinction, and this is just one of them, but I think it is fairly intuitive anyway (and your comment that you could probably come up with some ways suggests to me that you have some idea of what I am talking about).

        I see you ran the experiment again using justification and did not get significantly different results. First, I appreciate your willingness to do that so quickly. Second, I suppose this is not surprising, given the results of experiment 1. Cheers!

        • Hi Anon Grad,

          That’s an elegant and appealing theory. Properly contextualized and developed, it sounds like it has the makings of a publishable unit.

          I was thinking about whether any of the results shed light on whether this two-part evaluative structure (belief aiming at evidential fit, evidence aiming at truth) is reflected in our ordinary evaluative practices. (Of course, this is separate from whether it should be reflected, and I’m not prejudging that question.) In Experiment 1, one of the questions asked participants, “What does [the agent’s] evidence justify him/her in believing?” To me, this sounds like a matter of evidential fit. As it turns out, people’s judgments of evidential fit were sensitive to the truth. When P was true, the central tendency was to judge that believing P fit the evidence. By contrast, when P was false, the central tendency was to judge that believing probably P fit the evidence.

  2. Anon Grad’s really interesting suggestion struck me as worth pursuing, so I ran a quick follow-up study to test whether switching from “better evidence” to “better justified” would result in a significantly different pattern when making a comparative assessment between a normally embodied human and his “brain-in-a-vat” (“BIV”) counterpart.

    Thirty-seven people participated. The procedures were exactly the same as in the “Plain” condition from Experiment 3 in the paper linked above, except this time participants were asked, “Who is better justified in thinking that he’s looking at a fox?”

    The results were very similar to those reported in the paper. The clear central tendency was to judge that the normally embodied human was better justified. On a seven-point scale — where 0 indicates that neither agent is better justified, 3 is coded to mean that the normal agent is definitely better justified, and -3 is coded to mean that the BIV is definitely better justified — the modal and median response was 2. The mean response was 1.35 (SD = 1.29), which was significantly above the neutral midpoint (p < .001). The effect size was large-to-very-large (MD = 1.35 [0.92, 1.78], d = 1.05). (A sketch of how such figures can be computed appears at the end of this comment.)

    Only 19% of participants (7 of 37) thought that the two agents were equally justified. Approximately 76% (28 of 37) thought that the normal agent was better justified. Approximately 5% (2 of 37) thought that the BIV was better justified.
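    A minimal sketch of how statistics like these can be computed, assuming NumPy and SciPy are available (the ratings array below is made-up placeholder data on the same -3 to +3 scale, not the study’s actual responses):

    ```python
    # Hypothetical ratings on the -3..+3 comparative scale described above
    # (placeholder values only; NOT the actual responses from the follow-up study).
    import numpy as np
    from scipy import stats

    ratings = np.array([2, 2, 1, 3, 0, 2, -1, 1, 2, 0, 3, 2, 1, 2, 0, 2, 1])

    mean = ratings.mean()
    sd = ratings.std(ddof=1)  # sample standard deviation

    # One-sample t-test against the neutral midpoint (0 = "neither is better justified")
    t_stat, p_value = stats.ttest_1samp(ratings, popmean=0)

    # Cohen's d for a one-sample design: mean difference from the midpoint / sample SD
    cohens_d = (mean - 0) / sd

    # 95% confidence interval for the mean difference from the midpoint
    ci_low, ci_high = stats.t.interval(0.95, df=len(ratings) - 1,
                                       loc=mean, scale=stats.sem(ratings))

    print(f"M = {mean:.2f}, SD = {sd:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}, "
          f"d = {cohens_d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
    ```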

  3. The main claim I want to dispute is “I investigated which side of this debate is correct.” The reason is that I have little reason to believe, and some reason to doubt, that the reference class tested is the relevant reference class for the assertions.

    • Hi Trent,

      I’m super interested to learn what this other reference class is, and what your analysis of the data on it looks like. Please do share!

      In the meantime, I thought it worthwhile to draw your attention to a passage where Richard Fumerton argues against a truth-sensitive-ish theory of justification: early Goldman’s reliabilism. Fumerton’s main objection to reliabilism is that it implies that a “veridical perceiver” has better justified beliefs than “the victim of massive hallucination.” Fumerton objects,

      But we have this overpowering inclination to think that this [implication] is just wrong. We are convinced that however reasonable it is for the one to believe what he believes, it is just that reasonable for the other to have similar beliefs. Bad luck might deprive the one of knowledge, but it surely can’t deprive him of justified belief.

      These remarks occur on p. 93 of his 2006 book Epistemology. Others put forward the truth-insensitivity hypothesis and motivate it with reference to similar pairs of examples. The extent of support offered for the view usually doesn’t get beyond claiming that it’s “obvious” or “intuitive” or asking rhetorical questions about what rejecting it would imply. (Exceptions include Chisholm and Ginet, who argue for the truth-insensitivity of justification by linking it to blamelessness, which is assumed to be truth-insensitive.) This strongly reflects my own experience in the classroom as an undergraduate and graduate student. It also reflects my experience in conversations with many contemporary epistemologists.

      • But this could be true! (If he’s using ‘we’ to refer to RF.)

        In all seriousness, I’m glad that you wrote this up. I hope it helps mitigate the ways in which the NED (New Evil Demon) objection serves as a way of screening submissions. Fwiw, your findings seem to be in line with similar research on action carried out by Darley and Robinson: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=136811

        It might be nice to see some objections to truth requirements on warranted assertion, evidence, or justification that don’t just involve the appeal to NED intuitions. Spent years trying to defend factive accounts of each of these notions and there was a constant appeal to ‘our’ NED intuitions.

        • Thanks, Clayton. I suppose that the royal “we” might have made an unnoticed comeback. 🙂

          Great point about the connection to Darley & Robinson’s findings! Convergent evidence like that shouldn’t be taken lightly. In a companion paper (“Evidence of Factive Norms of Belief and Decision”), I actually cite those findings and note the similarity. (There I say that the results are “consistent with existing findings on people’s intuitions about responsibility and punishment, where objective facts matter beyond what is reflected in the agent’s evidence about the situation.”) Maybe I should put up a separate post about the remarkably consistent set of experimental findings that pertain to belief, assertion, and decision-making.

          I enthusiastically agree with you that referees really should stop wielding these alleged intuitions as cudgels. I expect that you’ve suffered this sort of abuse as much as anyone. I definitely sympathize!

          Btw, Jennifer Lackey has an argument against a truth requirement which does not merely appeal to such intuitions (though the two sorts of objection might not always be kept completely separate). This other argument is based on an assumption about the relationship between rule-breaking and blame. Something similar can be found in Igor Douven’s writings. The argument repays time spent considering it. (If you’re interested in references or my response, check out “The Test of Truth.”)

          • Hi John,

            Thanks for that. I knew about the Douven and Lackey stuff because I had to address their remarks about blame in the course of arguing for/responding to objections to a truth-requirement on warranted assertion (and justified belief, justified use of premises in practical and theoretical reasoning, evidence, etc.). Fwiw, in one of the first things I managed to sneak into print on this general set of topics, I realized that I’d have to try to be a bit slippery to get past referees, so I ended up saying (a) that it was weird to have something more demanding than a justified belief norm for practical reason and (b) that it was wrong to work with something weaker than a justified, true belief norm for practical reason. Since I thought that ‘justified belief’ was factive, there wasn’t a difference here. That was my first success at slipping my view into print, but I had to do it under a dodgy description and say nothing about the redundancy of the truth-requirement.

  4. Hi John, interesting stuff! I was just wondering about what conclusions exactly you’re drawing from these results. As I’m sure you’ll agree, there’s a distinction between the intuitiveness of a theoretical claim and the intuitiveness of the case verdicts that such a theoretical claim predicts. A good example might be the famous Knobe-effect-type results. Both Knobe’s results and anecdotal reflection clearly show that the moral status of an act affects whether we’re likely to judge it as intentional. But that doesn’t show that there’s nothing unintuitive about the theoretical claim that the moral status of an act affects whether it counts as intentional. That claim still seems, to me, unintuitive. As a result, many readers, at least, will read Knobe’s studies not as lending strong support to the claim that moral status affects intentionality, but rather as showing that we are subject to some cognitive bias whereby our moral judgments erroneously affect our judgments of intentionality.

    I wonder whether you think the same might be true here. The idea would be that there is still something unintuitive about the *theoretical* claim that the truth of a belief affects its justification, and as such we should read your results as showing that whether a belief is true will infect our judgment of whether it’s justified or supported by the evidence, even though it shouldn’t. Here is a further analogy to make this plausible: in evaluating the baserunning decisions of baseball players, fans will often be strongly (even solely) influenced by the actual outcome of the play. For example, we’ll say that a decision to attempt to steal a base was a good one when the runner makes it (say, because the catcher botches the throw), but that it was a bad decision when the runner gets caught (say, because the catcher makes a good throw), even if everything else is held constant. Likewise with all sorts of risky behavior. It’s pretty tempting to say that this is just a bias, though. We’re primed to judge a decision as wise when it works out and as imprudent when it doesn’t, even holding all else constant. But really, risky behavior can be wise even when it doesn’t work out, and foolish even when it does. We’d never want to preclude those possibilities. This seems like an epistemic analogue: when the belief “works out” (is true), we’ll issue a post hoc evaluation of the evidence that is positive, and when it’s false we’ll issue an evaluation that’s negative.

    Anyway, it seems at least a potential reading of your data, no? Perhaps relatedly: is the within-subjects effect *just* as strong as the between-subjects effect?

    • Thanks, Alex!

      Yes, I agree that proponents of truth-insensitive epistemology could still argue that the evaluation of belief and evidence should be insensitive to the truth. I definitely don’t think my findings here preclude that. Indeed, it would be great to see future discussions focus on arguments of that sort instead of just telling readers, or assuming, that the view is incredibly intuitive and compelling in itself (I’m not suggesting that you’re doing that, only that it happens frequently in the epistemology literature).

      I agree that there’s a distinction between the intuitiveness of a theoretical claim and that of its implications. Interestingly, in this case, proponents of truth-insensitive epistemology usually attempt to illustrate their view’s appeal by presenting pairs of cases like the ones I tested. That is, they use the concrete cases to illustrate how compelling the general principle is supposed to be.

      The within-subjects and between-subjects effects were all similarly large.

  5. Following on somewhat from Alex’s point, I think it’s interesting that what you seem to have found is a (significant) group difference in response between different kinds of questions. But one could have very different responses to such isolated cases, where one’s response depends on truth/falsity, but nonetheless endorse the idea that truth/falsity makes no difference when presented with comparative cases i.e. given two subjects who differ only on true/false (e.g. the New Evil Demon case).

    Of course, if participants generally had the responses you found, *and* had the supposedly intuitive response to comparative cases, they would be inconsistent. But part of the supposed job of analytic philosophy is to reveal and iron out such inconsistencies.

    • Hello Ben,

      Thanks for chiming in.

      Perhaps I’ve written something misleading in the comments here, but I actually did not find a significant difference between any of the questions used in Experiment 1. These included questions about what the evidence supports, what the agent is justified in believing, reasonable in believing, rational in believing, responsible in believing, and what the agent should believe. Basically, all of these resulted in the same basic pattern of truth-sensitivity. Neither did I find any indication that the pattern of truth-sensitivity disappeared or even diminished when people considered both cases in the same context (i.e. here’s a normally embodied human and here’s his “brain-in-a-vat” counterpart — which one, if either, has better evidence?).

      In Experiment 2, I found an interesting, large difference between judgments about what an agent should believe and quality of evidence, on the one hand, and whether the agent is blameworthy, on the other. Blame judgments were truth-insensitive, whereas the other judgments were truth-sensitive.

      Also, I’ll just quickly mention that New Evil Demon cases (and BIV cases) do *not* differ only on the true/false dimension. They differ in many other potentially important dimensions. However, I did test that type of case alongside cases which actually did differ only on whether the target proposition was true/false. I found no difference between an ordinary false-belief case and a BIV case.

  6. Any guesses as to why philosophers have reported the contrary intuitions? I find it a bit frustrating that a lot of X-Phi stuff stops with noting that philosophers and the folk diverge about X, without even speculating as to why philosophers have the intuitions they have.

    • Hello David,

      A good question! I offer two (compatible) hypotheses. On the one hand, proponents of truth-insensitive epistemology might just have idiosyncratic intuitions and then suffer from a false consensus effect, perhaps amplified by a gate-keeper or self-selection effect within the discipline. On the other hand, they might be relying on perfectly normal intuitions about blamelessness — which, according to my results, was the only truth-insensitive form of evaluation — which they either misunderstand or misdescribe using vocabulary that, as it turns out, expresses highly truth-sensitive forms of evaluation. (Check p. 23 of the paper linked above.)

      In connection with this, I’ll note that I’ve not found experimental philosophers to be reticent about offering hypotheses about why people disagree. However, I’ve had anonymous reviewers express indignation at the suggestion that philosophers would be susceptible to factors known to bias human judgment. So perhaps in some of the cases you’re familiar with, the authors shied away from this sort of thing because they worried about inciting referees, or removed hypotheses that referees complained about. The anonymous review process presents plenty of opportunities for referees to abuse authors, so I can definitely sympathize with such a decision even if, in the bigger picture, the research would be better if accompanied by such hypotheses.

  7. I’m not sure either of those explanations seems totally satisfying to me (which doesn’t mean they’re wrong!). The mistake explanation might well be correct, but it just seems to push the question back: what is it about philosophers that makes them muddle blameworthiness with the other concepts? Maybe it’s that philosophy selects for individuals who are particularly concerned with the normative evaluation of other people’s epistemic conduct, and that means they’re more likely to run together other important epistemic notions with the notion of whether or not someone is blameworthy for forming a belief?
    My gut instinct, meanwhile, says that the gatekeeper explanation is just likely to be false, unless there’s some strong correlation between not believing that things like ‘justification’ are truth-insensitive and holding other unpopular views (always possible), because students who are exposed to epistemology at a level where the issue of truth-sensitivity comes up are also likely to be exposed to externalist views on which at least many important dimensions of epistemic assessment do turn out to be truth-sensitive. But maybe that’s underestimating how off-putting people find it to have things they disagree with described as ‘intuitive’? Is there any hard data on this?

    By the way, this is totally outside my area as a philosopher, but it seems to me, having now looked at the paper a bit more closely, that there are some not-unreasonable-looking ways in which your results could fail to refute the claim that truth-insensitivity is intuitive IF truth-insensitivity is defined as ‘certain core, philosophically important evaluative properties of a belief are insensitive to whether it is true’, even if the 2nd and 3rd experiments do show that certain claims about its being obviously or intuitively true that the beliefs of brains in vats are equally justified are false. I may well be totally wrong about this though, and maybe you address one or both of the points somewhere in the paper and I just missed it.

    In the case of the first experiment, it’s notable that the most commonly given answer, amongst those who refused to say that the subject should X the belief, where X is the relevant epistemic attitude and the content is that they owned a certain watch or that there was a gunshot in the woods, was that they should instead believe that it is probable that they own the watch/there is a gunshot in the woods. Now, it seems to me that one reason this might be the case is that people actually think that ‘strictly speaking’ you’re only ever ok to X the probabilistic claim, but that it’s ok to speak loosely and give people permission to X the absolute claim when the possibility of error isn’t conversationally salient. (This is absolutely not evidence for the view in the previous sentence, but Tim Williamson attributes something like the view that, strictly speaking, we’re only allowed to believe that things are probably true to the voice of post-scientific commonsense in his book for beginners that came out recently.) This would be a ‘high standards, loose talk’ view, a bit like the view that Peter Unger used to have about ‘flat’ and ‘empty’ and ‘knowledge’ in the 70s.

    The above seems a bit desperate even to me, but I think there’s a more serious problem with the 2nd and 3rd cases. There, the subjects in the cases where the belief is false, whether or not they are envatted, are all presumably (at least, this seems a reasonable way to read the descriptions) coming to believe something on the basis of misleading perceptual evidence, in the sense that the perceptual states in question aren’t veridical. But it seems like thinking that beliefs formed on misleading perceptual evidence are less justified/responsibly formed etc. is perfectly compatible with thinking that whether or not a belief is justified is insensitive to its truth, unless you identify the belief with the misleading perceptual state. In fact, something like this problem *might* even come up for the first experiment, if people are thinking of the beliefs as essentially grounded in the belief that the list is reliable or the perceptual evidence is not probabilistically misleading, in the sense that it ought to raise the degree to which one thinks a proposition which is in fact false is likely to be true. Those are both differences between the true and the false beliefs which are not about whether those beliefs are true/false, strictly speaking. Though it’s clearly much more of a stretch to say something like this with the first experiment.

    • Hi, David. Yes, further work is needed to make good on any of these speculative hypotheses. I happen to find the one via blame-insensitivity to be the most promising. And I don’t think it pushes the question back in any problematic way. Pretty early on, some influential philosophers argued for a truth-insensitive theory of justification on the grounds that justification is blamelessness, and blamelessness is truth-insensitive. (Not sure whether that counts as “muddling.”) However, the deontological conception of justification became unpopular starting in the 80s. So lots of philosophers dropped the deontology but kept the truth-insensitivity hypothesis, perhaps without realizing that they had jettisoned the pre-theoretically compelling grounds previously associated with it.

      In general, questions about what we judge and why can be really interesting but also challenging to sort out. Since so much of recent philosophy trades in claims about what’s intuitively obvious, commonsense, and the like, it seems important for the field as a whole to gain more awareness of the potential pitfalls and challenges in this area.

  8. ‘Pretty early on, some influential philosophers argued for a truth-insensitive theory of justification on the grounds that justification is blamelessness, and blamelessness is truth-insensitive. (Not sure whether that counts as “muddling.”) However, the deontological conception of justification became unpopular starting in the 80s. So lots of philosophers dropped the deontology but kept the truth-insensitivity hypothesis, perhaps without realizing that they had jettisoned the pre-theoretically compelling grounds previously associated with it.’

    Ah ok. That makes the idea that people have just got confused here *a lot* more plausible, I think.
