The Epistemic Relevance of Origins of Belief

Most epistemologists recognize that many of the common informal fallacies are epistemically relevant in various ways. Thus, even though tu quoque arguments are irrelevant to the truth of the position being espoused, they are relevant epistemically because one way to provide supporting evidence for a view is to be able to explain away why intelligent people might disagree.

Recently, there has been a flurry of activity in blogdom about the genetic fallacy and whether Dan Dennett committed it in his latest work on the origins of religious belief. Some say he did, some say he didn’t. Those who say he didn’t point out that finding unsavory origins for a belief is epistemically relevant even if it is not relevant to the truth or falsity of a belief.

Well, so it is, sometimes at least. But, of course, any good epistemologist will want a fully general account here, and it is certainly false that finding unsavory origins for a belief always undermines any warrant or justification one might have for the belief.

In short, unsavory origins do not imply irrationality or lack of justification or warrant. My father thinks there were more earthquakes in the 20th century than in any previous century because it fits his wacky eschatology. I grew up thinking so as well, and still do. I inherited the opinion by testimony from the unsavory origins just described. But I also know it to be true: there’s a quite neat plate tectonics explanation for it. There’s nothing at all epistemically untoward that afflicts my belief because of its unsavory origins.

Or think of Foley’s Swampman, who arises fully formed both physically and doxastically out of a lightning strike in the swamp. Ask him anything and he will give you a correct answer. His grasp of truth far exceeds any of ours, and Foley concludes from this that Swampman knows a lot more than you or I. I’m not sure this is correct, but what I do think is correct is that he is epistemically superior in some really important way. I’d prefer to say his understanding far exceeds our own. But whatever we say, we shouldn’t attribute irrationality or lack of justification to the guy because of our knowledge of the origins of his beliefs.

Even so, there are lots of cases where origin makes a belief epistemically suspect. So the question is, for us fans of full generality, what’s the difference between cases where origins cast aspersions on a belief and cases where they don’t?


The Epistemic Relevance of Origins of Belief — 9 Comments

  1. Foley-swampman (Fs) is interesting. So Fs correctly believes that he is a swampman who happens to have only true beliefs. You ask Fs: “Gee, Fs, it seems really unlikely that you would have so many true beliefs and no erroneous beliefs, given that you were formed by a lightning strike. What do you think accounts for that?” Fs says: “Nothing. It’s just a huge coincidence.” Now it seems to me that there really is something irrational about Fs, even though by hypothesis his answer to your question is correct. As a first pass: Fs holds a set of beliefs about where his first-order beliefs came from, on which it is extremely improbable that his first-order beliefs are true (and he acknowledges that); yet he still holds on to those first-order beliefs. That’s a kind of incoherence. (I call this meta-incoherence, because it is the meta-beliefs that fail to cohere with the first-order beliefs.)

    About the informal “fallacies”: I think a lot of so-called “fallacies” are actually perfectly good non-demonstrative reasoning (one reason why most informal logic books are annoying). Btw, I don’t follow your distinction between something’s being “epistemically relevant” and its being “relevant to the truth of” some proposition. I would have thought those two things were the same. So I would say that sometimes, the origin of a belief is relevant to its truth (that is, to the probability of its being true). Specifically, if I know that X is the (only) reason why I believe that P, and I also know that beliefs based on X are unreliable, then that’s an undercutting defeater for my belief that P.

    In your earthquake example, presumably the reason why you now believe there were more earthquakes in the 20th century than in previous centuries is because you read about that in science books, or something like that. Assuming this is a good justification, the fact that you first believed the proposition for some inadequate reason does not defeat the justification of your present belief. On the other hand, the facts about the origins of your father’s belief would provide an undercutting defeater for his belief, as long as he doesn’t have any alternative, adequate reason for belief.

    Similarly, whatever Dennett said about the origins of religious belief might provide an undercutting defeater for the religious beliefs of those individuals for whom what Dennett said was an adequate account of their (only) reason for endorsing religion. But if you have another, adequate reason for endorsing religion, this other reason would obviously not be defeated. (But we have to be careful here: one’s judgement as to what is an adequate reason might itself be subject to undercutting defeaters of the same sort. E.g., maybe I judge the Ontological Argument to be sound, really, because I want to live on after I die.)

  2. Mike, interesting thoughts here. I’ll take them in a bit different order.

    First, it’s the distinction between undercutting and rebutting defeaters that underlies the distinction between epistemic relevance and truth relevance. That’s all I meant. Of course, anything relevant to the truth of a claim is epistemically relevant, but not everything epistemically relevant is truth relevant because undercutting defeaters say nothing about the truth or falsity of the proposition in question.

    I like what you say about informal fallacies; I think it is exactly right. It’s an interesting epistemological exercise to say how and when they are good arguments, but it’s clear they (often) are.

    I like your principle, too, but it makes books like Dennett’s much harder to see as epistemically relevant. Note that the epistemic relevance depends on what reasons a person actually has for what they believe, and it would be rather astonishing to me to find out that an etiological account of the origins of religious belief was exhaustive of the reasons ordinary religious people have for their beliefs. Of course, it is another question whether these other reasons are good ones, and I’m not making any claims about that. I expect that a Dennett story could go some way to explaining how and why people originally develop religious beliefs, but the earthquake story and your quite nice principle undermine any attempt to draw a quick inference from this point to the epistemic impropriety of the beliefs in question. In fact, your principle, given the obvious ways in which what we base our beliefs on develops and changes as we mature, makes it quite hard to find any epistemic relevance for such a Dennett story.

  3. “maybe I judge the Ontological Argument to be sound, really, because I want to live on after I die.”

    I don’t know. Maybe it’s true that had I not wanted to live on, I would not have judged OA to be sound. But maybe it’s also true that had I not wanted to live on, my judgment on the matter would not have been as good or I would not have looked at the argument so closely or I would not have seen so many rational responses to the argument. So, yes, the causal explanation for the judgment is no doubt this unappealing, self-interested desire. And, yes, I see many rational responses because I want to see them. But that does not mean that I don’t actually see these rational responses.

  4. I have a worry about Huemer’s claim about Fs appearing irrational.

    Fs holds a set of beliefs about where his first-order beliefs came from, on which it is extremely improbable that his first-order beliefs are true (and he acknowledges that); yet he still holds on to those first-order beliefs. That’s a kind of incoherence.

    I don’t see the force of the probabilistic argument offered. We reason under conditions of uncertainty precisely when we are not in the same state as Fs, namely when we do not know that our first-order beliefs are true. Fs does, by hypothesis.

    Analogously, it may be equally unlikely for an 18-year-old American female Yale student from an Upper East Side New York family to be married to her second husband. But if Sally is such a student having a second go with Joe, well, she might be irrational or plain crazy, but those of us in the know aren’t irrational to doubt that Joe is her second husband.

  5. ooo, a pile-up of negations at the end of that sentence. ‘believe’, ‘doubt’,…these are close, right?

    Skipping ahead, the question to settle is: in what sense is it improbable that Fs’s beliefs are true? Given all the evidence about Fs, it is not improbable at all. In fact, it is certain. Given typical lightning strikes, it is practically impossible; there are no swampmen. The point I’m gesturing toward is that there is a reference class imagined here, not just a partition of the language into first-order beliefs and their basis. To assess the force of the probability argument, we need to know which statistics hold in the swampman example and which don’t.

    So, I confess to twisting Huemer’s example out of its original context with Sally and Joe. But the conjecture is that the same point crops up if one stays more faithful to the example, though it might be harder to see there. That’s the idea, more or less.

  6. I don’t actually know what Dennett said. However, maybe what he intended was an accusation of bias. Your current reason for believing P might be argument A, but it also might be that you judge A a good argument due to bias on your part, and perhaps Dennett is giving evidence that this is so.

    Mike A, when you say “But that does not mean that I don’t actually see these rational responses,” I’m not sure whether your point is (a) that it does not deductively follow that you don’t actually see these rational responses, or (b) that it isn’t even any more likely that you don’t actually see them. I think that if you acknowledge that you are biased about X, then you should also acknowledge that your judgements as to what constitute “rational responses” to X are less likely to be correct than they otherwise would be. Of course, there are some cases in which bias would have little or no relevance (no matter how much I want to believe I’m a millionaire, that won’t make me actually see “$1,000,000” when I look at my bank statement), but this isn’t true of the Ontological Argument. Philosophical and religious issues are prime cases in which bias may cause people to “see” what is not the case.

    I don’t know what the right reference class is, except vaguely: it’s the class of all beliefs formed by the method you used to form the given belief, though I haven’t got an analysis of what counts as “the method” by which a belief was formed. In other words, say you formed the belief that A by method M. Suppose (for simplicity) that you’re sure of that, and you’re also sure that the probability of a belief formed by M being true is p. Then I would say you should believe A to degree p, in order to be meta-coherent.

    About your 18-yr-old Yale student example: the prior probability of the scenario you describe occurring is low. But my analysis would be that you presumably formed the beliefs that the person is 18, has a 2nd husband, and so on, by some method(s) such that the probability of beliefs formed by those methods being true is high. That’s why you’re meta-coherent. What you believe about the student may be improbable conditional on some set of evidence, but not conditional on the fact of how you actually came by the belief (or, more precisely, of what your belief is now based on).
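    The method-reliability principle above (believe A to degree p when you’re sure A was formed by a method whose reliability is p) can be put as a tiny sketch. The function names, tolerance, and numbers here are my own illustration, not anything stated in the comments:

    ```python
    # Sketch of the meta-coherence constraint (illustrative only):
    # if you are sure belief A was formed by method M, and sure that
    # beliefs formed by M are true with probability p, then the
    # meta-coherent credence in A is p.

    def meta_coherent_credence(method_reliability: float) -> float:
        return method_reliability

    def is_meta_coherent(credence: float, method_reliability: float,
                         tolerance: float = 0.01) -> bool:
        """True when your credence in A matches the reliability you
        yourself assign to the method that produced A."""
        return abs(credence - meta_coherent_credence(method_reliability)) <= tolerance

    # Ordinary perception/testimony: high reliability, high credence -- coherent.
    print(is_meta_coherent(0.99, 0.99))   # True

    # Swampman: certain of his beliefs, yet regards their source
    # ("a lightning strike") as wildly unreliable -- meta-incoherent.
    print(is_meta_coherent(1.0, 1e-9))    # False
    ```

    On this rendering, Sally-and-Joe style cases come out coherent: the believed event is antecedently improbable, but the belief-forming method is reliable, and it is the method's reliability that fixes the coherent credence.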

    Btw, not having read the relevant Foley piece, any more than I read the relevant Dennett piece, I was taking it that F-swampman knows next to nothing. FS has a lot of true beliefs (which Richard Foley mistakenly thinks is sufficient for knowledge), but almost no knowledge, since almost none of his beliefs are justified. What FS should do (though, by stipulation, he doesn’t) is to start questioning almost everything he believes, since he has no coherent account of how he could know those things.

  7. Mike, your assessment of Fs accords with mine, but I’d like to pursue your reasons a bit. But I think I’ll do it in a separate post, since it is so interesting to me.

  8. I have a formal theory paper on exactly this subject, rationality constraints between beliefs and beliefs about their origins:

    Uncommon Priors Require Origin Disputes
    by Robin Hanson

    In standard belief models, priors are always common knowledge. This prevents such models from representing agents’ probabilistic beliefs about the origins of their priors. By embedding standard models in a larger standard model, however, *pre-priors* can describe such beliefs. When an agent’s prior and pre-prior are mutually consistent, he must believe that his prior would only have been different in situations where relevant event chances were different, but that variation in other agents’ priors is otherwise completely unrelated to which events are how likely. Thus Bayesians who agree enough about the origins of their priors must have the same priors.

    It is still under review (after three years and three revisions) at Theory and Decision.

  9. I think there are two different concerns with the genetic fallacy. One is whether the source is reliable; that’s an epistemic issue. Gigerenzer and Todd’s research indicates that we reason fast and furiously based on simple heuristic rules, and one of them, which is generally reliable, is that the source of a claim can be an indicator of the worth of the claim. It’s not indefeasible, though, so we should consider if a genetic claim is worthwhile on other grounds.

    The other issue is whether or not the argument is compelling despite its source. This is a rhetorical matter: if the argument comes from Nietzsche or some other distasteful authority, then for many audiences it won’t matter that the premises are true and the argument is valid.
