In defence of adherence

What more is required of a belief, besides being justified and true (JTB), if the belief is to count as knowledge? In my view, at least two further conditions are required: the belief must meet the conditions of safety and adherence.

Safety is popular these days: it has been defended by many distinguished epistemologists, such as Duncan Pritchard and Timothy Williamson, among others. But adherence – the fourth condition that Robert Nozick imposed on knowledge – has few defenders. Most of the philosophers who have discussed adherence have rejected it. In this post, I defend adherence against its detractors.

First, let me explain how I understand adherence. Let us focus on a case C1 in which a believer believes a true proposition p1 in a doxastically justified or rational manner. Then this belief “adheres to the truth” if and only if every normal case C2 that is sufficiently similar to C1 with respect to what makes the belief rationally held in C1, and with respect to the case’s target proposition p2’s being true, is also similar in that the believer believes p2 in C2.

Adherence explains why knowledge is lacking in cases in which the believer’s environment is full of misleading defeating evidence, which by a fluke the believer never encounters. In a case of this sort, the belief does not adhere to the truth – because there are normal cases sufficiently similar to the actual case, both with respect to what makes the belief rational in the actual case, and with respect to the target proposition’s being true, in which the believer encounters this misleading defeating evidence and so does not believe the proposition.

For example, in Gilbert Harman’s “assassination” case (Thought, p. 143), what justifies the believer in believing that the political leader has been assassinated is the belief’s being based on an experience of hearing the original radio broadcast. So, cases in which the believer hears the radio broadcast, but then also encounters the denials of the original broadcast that are printed everywhere, and so ceases to believe that the leader has been assassinated, will count (in at least some contexts) as “sufficiently similar”.

To take another example, Timothy Williamson (Knowledge and its Limits, p. 62) considers a burglar who ransacks a house all night, risking discovery, because he knows that the house contains a diamond. If the burglar had merely safely believed that the house contained a diamond, the house could have been full of misleading defeating evidence, which would have led the burglar to become agnostic about whether the house contained a diamond. It is only because the burglar’s belief adheres robustly to the truth that it is so unlikely that the burglar will abandon the belief that the house contains a diamond.

(In fact, I would defend a contextualist interpretation of adherence, according to which the context in which the term ‘knowledge’ is used may make a difference to how similar to the actual case these other cases have to be in order to count as “sufficiently similar” in the context; but we may bracket these complexities for present purposes.)

Some philosophers have tried to give direct counterexamples to adherence. Here is an attempted counterexample due to Ernest Sosa (“Tracking, competence, and knowledge”, p. 274):

One can know that one faces a bird when one sees a large pelican on the lawn in plain daylight even if there might easily have been a solitary bird before one unseen, a small robin perched in the shade, in which case it is false that one would have believed that one faced a bird. Prima facie, then, it seems unnecessary that one’s belief be [adherent]; one might perhaps know through believing safely even if one does not believe [adherently].

A second attempted counterexample is due to Saul Kripke (Philosophical Troubles, p. 178):

Suppose that Mary is a physicist who places a detector plate so that it detects any photon that happens to go to the right. If the photon goes to the left, she will have no idea whether a photon has been emitted or not. Suppose a photon is emitted, that it does hit the detector plate (which is at the right), and that Mary concludes that a photon has been emitted. Intuitively, it seems clear that her conclusion indeed does constitute knowledge. But is Nozick’s fourth condition satisfied? No, for it is not true, according to Nozick’s conception of such counterfactuals, that if a photon had been emitted, Mary would have believed that a photon was emitted. The photon might well have gone to the left, in which case Mary would have had no beliefs about the matter.

These cases may be counterexamples to rough and imprecise statements of adherence, but it seems clear that they are not counterexamples to the formulation that I have given.

Consider the case in which I see a large pelican on the lawn in daylight in front of me. What makes my belief that there is a bird in front of me rational? Presumably, it is the fact that I have an experience of a certain sort, an experience that inclines me to deploy my concept of a bird. So the only cases that count as “sufficiently similar” are other cases in which I have an experience of this sort. Clearly, cases in which I have no such experience – even if in fact there is a bird in front of me, a small robin concealed in the shade – are just not “sufficiently similar”.

Kripke’s case suffers from a similar defect, even though Kripke claims of his case that “Here the method is held fixed.” This claim is a mistake: the method is not “held fixed”. In the actual case, Mary’s belief is rationally held because it is based on an experience of observing the detector plate’s responding to the presence of a photon. Cases in which Mary has no such experience are just not sufficiently similar.

In fact, Nozick himself made a similar mistake, assuming that each of the relevant “methods” could be used to answer the question of “whether or not” the target proposition p is true. It is clear, however, that in many cases, the methods that could be used to come to know a proposition are very different from any methods that could be used to come to know the proposition’s negation.

For instance, to know that an existentially quantified proposition is true, one needs only to observe one true instance; but to know the negation of such an existentially quantified proposition, one would have to survey the entire domain of quantification. (E.g. to know that there is a spider in the room, one needs only to observe a single spider; to know that there is no spider in the room, one would have to search the whole room to make sure that no spider is hiding anywhere.) As they say, “proving a negative” is harder than proving the corresponding positive statement.

It seems to me, then, that adherence is not vulnerable to these counterexamples. But Kieran Setiya (Knowing Right from Wrong, pp. 91f.) has suggested a more general kind of objection:

I can know the truth by a method whose threshold for delivering a verdict is extremely high, so high that it virtually always leaves me agnostic. A method of this kind may be epistemically poor in other respects; but it can be a source of knowledge.

This may sound like a single objection, but in fact there are two very different kinds of case that are suggested by what Setiya says here.

In some cases, I may believe a true proposition by a “method” that is the same as an ordinary rational method except that it is arbitrarily restricted in some way. E.g. I believe a proposition that I have proved through rigorous mathematical reasoning, but only on the condition that I also believe that today is a Thursday (suppose that if I had not believed that today is a Thursday, I would have responded to this mathematical reasoning with agnosticism). Or I believe what I seem to see before my eyes, but only so long as there is nothing apparently orange in my field of vision (if there had been anything apparently orange in my field of vision, I would have been totally agnostic about the scene before my eyes).

In these cases, the belief in question seems not to be doxastically justified or rationally held. The believer is basing her belief in crucial part on utterly irrelevant considerations, and so the dispositions that the believer is manifesting do not count as rational dispositions. This sort of irrationality seems to me to be incompatible with genuine knowledge.

In some other cases, it is rational for one to use a high-standards method, or a method that can only be used in a narrow range of cases. Perhaps a physician is trying to diagnose whether a patient has a certain illness, and the only available test is one that yields a verdict only in a very narrow range of cases; but luckily, in the actual case at hand, this test does indeed yield a verdict.

In this case, as with Sosa’s and Kripke’s examples, it seems to me that cases in which the test yields no verdict are just not sufficiently similar to the actual case. So cases of this sort are not counterexamples to adherence.

In short, the only cases of justified true beliefs that fail to satisfy adherence are cases where the thinker’s environment is rife with misleading defeating evidence, which by a fluke the thinker never encounters. Unless it is wrong to deny knowledge in such cases, adherence is not vulnerable to the objections that have been raised against it.


Comments

  1. Does the account give the correct verdict (that is, that the subject knows) in Lehrer and Paxson’s Grabit case? I have my original visual evidence that Grabit stole the book, but there is a relevantly similar situation in which I’m aware of the misleading counterevidence (that Grabit’s mother says that Grabit has a twin). In that case, I don’t believe that Grabit stole the book. The general worry is that if adding awareness of barn façades leaves the cases sufficiently similar, then adding awareness of knowledge-allowing misleading evidence will also leave the cases sufficiently similar.

  2. Hi Jeremy —

    I’m not sure that I agree with you about the “correct verdict” on the Grabit case. If Grabit’s mother has taken out full-page advertisements in all the newspapers, and is constantly being interviewed on national television, repeating her false claims that Grabit has a twin, then the Grabit case seems to me just like Harman’s assassination case — which (at least in some contexts) cannot be truly called a case of “knowledge.”

    On the other hand, if it couldn’t that easily happen that you would encounter this misleading evidence, then the case in which you do encounter it doesn’t seem sufficiently similar to the actual case, and so (at least in most contexts of utterance) it will be true to say that this is a case of “knowledge.”

    I’m not at all sure that I understand your “general worry” — but at all events, I wouldn’t appeal to adherence to deal with barn façade cases. In the barn façade cases, the belief is unsafe: there are relevantly similar cases in which you hold the belief but the proposition believed is false. Indeed, we can stipulate that in these cases, the belief adheres to the truth as tightly as you like (i.e. there is no real chance of your encountering any evidence that would lead you not to believe that the object in front of you is a barn).

  3. Ralph, I was thinking more of a case in which Tom’s mother just had mentioned (say, to the police) that Tom was abroad and that he had a twin brother (so hadn’t broadcast it all across the media). I might not have enough of a handle on what makes a case relevantly similar. If Tom’s mother could be overheard from State St. and I just happened to walk down Main St. instead, the case in which I overhear her doesn’t seem that different. But I still have the intuition that I have knowledge. But I admit it’s not a super-strong intuition.

  4. I see — thanks, Jeremy!

    The case that you have in mind seems to me to call out for a contextualist treatment. In some contexts, the chance of your overhearing Tom’s mother’s testimony will seem to be a real chance, and in those contexts it may not be true to say that you “know.” But in other contexts, the chance of your overhearing this testimony will seem remote, and in those contexts, it may be true to say that you “know.”

    As for what makes a case “relevantly similar” to the case in question C1, this post is meant to be something of a sequel to my last post, Aptness entails safety.

    In general, there are two aspects of similarity that seem important:
    1. On the internal side, the cases must be similar to C1 with respect to the factors that make the belief rational (i.e. doxastically justified) in C1.
    2. On the external side, the cases must be similar to C1 with respect to the extent to which, and the way in which, C1 counts as “normal” for the operation of the cognitive capacities involved.

    Finally, I propose that with respect to both aspects, how similar a case must be to C1 to count as “sufficiently similar” is just determined by context.

  5. Ralph, what about a case in which I just believe my eyes in thinking there’s a tree over there, but I am such that whenever I hear a robin sing the sound makes me extremely skeptical and I refuse to believe my eyes based on a glance from one point of view (I require seeing a thing from various angles before forming a belief about what it is). If, in the actual case, I’m not hearing a robin sing, and I rationally believe my eyes in thinking there’s a tree over there when I see one, isn’t my belief knowledge? Yet it wouldn’t satisfy adherence, would it? For, there are normal cases – it’s plenty normal to hear a robin sing in a tree – similar in respect of what makes my belief rationally appropriate (the tree-ish experience) and similar in respect of there being a tree before me, but in which I don’t believe what I see is a tree.

    I don’t see why the method I use in the actual case would have to be restricted or complex – I’m just relying on visual appearances in forming doxastic attitudes about what’s before me. Nothing about the robin’s sound is part of my method.

    Maybe you’ll want to say that the method is different in cases in which I do hear the robin’s sound. But couldn’t I use the same method even when I hear the robin? Perhaps it’s just that hearing the sound raises my threshold for accepting beliefs about what’s in my environment based on visual appearances. Or perhaps hearing the sound makes me believe irrationally in various defeaters such as “I’m in tree-facade country.”

    Generally, isn’t there a worry about making too much hang on what I would believe in various other normal cases? Maybe I’d have higher standards in those cases; maybe I misapply the method, have crazy beliefs in defeaters, etc. Still, given the case I’m actually in, when all is going just fine, I do know there’s a tree over there.

  6. Thanks, Matt! This is a very nice case. Let me explain how I’m inclined to respond.

    In general (as I explained in the comments on my previous post, Aptness entails safety), I’m assuming that our account of knowledge is given against the background of an independent account of rationality / doxastic justification. On the kind of view that I favour, whether or not a belief counts as rationally held (i.e. doxastically justified) depends on the kind of dispositions that the believer is manifesting in holding the belief.

    So, we need to know what dispositions you manifest in believing yourself to be confronted with a tree. Your belief will count as knowledge only if these dispositions are rational dispositions; and rational dispositions, it seems, must involve some sensitivity to the presence or absence of defeating conditions. So what dispositions are you manifesting in this case?

    There are two relevant possibilities here:
    1. Perhaps the dispositions that you manifest in believing yourself to be confronted with a tree involve treating the sound of a robin’s singing as a defeater for the visual experience as of seeing a tree.
    2. Alternatively, it might not be the case that the dispositions that you actually manifest involve treating the sound of a robin’s singing as a defeater in this way. It is just that you have a separate disposition, to start going crazy in response to hearing the sound of the robin singing.

    In the first case (1), I would say, your belief fails to count as knowledge because it is not rationally held or doxastically justified: it results from your manifesting a crazy disposition that responds to tree-experiences-in-the-absence-of-robins-singing, rather than to tree-experiences-in-the-absence-of-genuine-defeaters.

    In the second case (2), I would say, the case in which you hear the robin singing is not “sufficiently similar” to the actual case. Of course, it is, as you rightly say, “plenty normal” in terms of the external normality of your circumstances. (What is more normal, after all, than a robin’s audibly singing in the vicinity of a tree?!) But it is not at all similar in internal respects, since in the actual case, you are manifesting perfectly ordinary rational dispositions, whereas in this case, you have gone crazy and are now manifesting some different irrational dispositions instead.

  7. I don’t think the account gets the right verdict in this case, from my “Competence to Know” (Phil. Studies, early online):

    ROCK AND HARD PLACE: Annette is taking a walk through the countryside. Looking at what seems to her to be a sheep in the field, she forms the belief that there is a sheep in the field on the basis of her perceptual experience. In fact it is a sheepdog, but there is a sheep standing behind a rock, out of view, that the dog is keeping track of. Unbeknownst to Annette, she is in hard-working sheepdog country, in which it is very rare for a sheepdog to be in a field unless it is keeping close watch on its sheep.

    (The case is further described: “… the case is not a kind of fake barn case. Annette’s perceptual faculties are reliable at discriminating sheep from non-sheep in hard-working sheepdog country. It is just that, when she is in circumstances where a dog in a field looks like a sheep to her, it is highly likely that her perceptual belief that there is a sheep in the field will be true.”)

    Adherence seems to be satisfied because every normal case sufficiently similar to C1 either has a sheepdog following a sheep causing the perceptual experience or has a sheep causing the perceptual experience. Annette would believe truly on the basis of such an experience in all such cases. Nevertheless intuitively she fails to know.

  8. Thanks, Lisa!

    Your case is interesting, but it is not a counterexample to the claim that adherence is necessary for knowledge. (It would be a counterexample to the claim that adherence is sufficient for knowledge, but I certainly wasn’t making that claim.)

    So, I agree that in your case, Annette’s belief satisfies adherence. So why doesn’t Annette’s belief count as knowledge in this case? I would say that it is because this belief is based on a false lemma: Annette’s true belief “There is a sheep in that field” is based on her false belief “That is a sheep” (in your case, the demonstrated object referred to as “That” is not a sheep, but a dog).

    In general, I would say that beliefs that are based on false lemmas in this way cannot count as knowledge. (In the end, I would hope to argue that adherence, safety, and the no-false-lemmas requirement can all be explained on the basis of the right sort of “aptness” — but arguing for that is a task for another occasion…)

  9. Thanks, Ralph. That gives me a better idea of how you’re understanding adherence. I do worry a bit about your (1), though. Couldn’t someone know some humdrum fact but also be disposed to treat a certain consideration that doesn’t really bear on it as a defeater? So, perhaps a person knows P based on evidence E even despite being disposed not to believe P given the combination of E and the knowledge that the tealeaves suggest not-P. More realistic cases might involve people who know based on solid evidence but who are disposed to give too much weight to the word of some authority figure (e.g., a certain sort of radio talk show host or TV news commentator) if that figure suggested otherwise, which did not transpire but could have done in a normal case.

  10. That’s a good point, Matt. I guess my official view is that rationality also comes in degrees, and knowledge doesn’t require perfect rationality; it only requires that the agent’s belief should be sufficiently rational.

    Then (of course…) I will go contextualist about how rational the agent’s belief in the proposition p has to be if the agent is to count as “knowing” p.

    The upshot is a kind of double contextualism:

    1. On the external side, there is a range of cases such that in each of these cases, the knower must have a belief in the case’s target proposition if and only if the proposition is true. (The ‘if’ part of this corresponds to adherence, and the ‘only if’ part corresponds to safety.) But what exactly this range of cases is depends on context.
    2. On the internal side, a belief can constitute knowledge only if the belief is rationally held (i.e. doxastically justified). But the degree of rationality that the belief has to exemplify if it is to count as knowledge also varies with context.

    So, to your question — Can’t one know a proposition p even though one’s belief in p is the manifestation of a less-than-perfectly rational disposition? — I say, like the good contextualist I am: In strict contexts, the true answer is ‘No’; but in more relaxed contexts, the answer is ‘Yes’.

  11. Hi Ralph,

    Thanks! I read your post as claiming that Knowledge = Justified true safe adherent belief. My mistake. Thanks for clarifying!

    I worry that a no-false-lemmas condition overgeneralizes. We can, I think, have knowledge from falsehood. But that is an argument for another day.



    • Lisa —
      You’re absolutely right that any no-false-lemmas requirement has to be very carefully (and quite narrowly) formulated. In some sense of ‘from’, there is indeed “knowledge from falsehood.” I would hope to argue that we can distinguish between lemmas, in a strict sense of the term, and other cases where a piece of knowledge is in a looser way “based on” a false belief. But as you say, it’s an argument for another day…

  12. Hi Ralph,

    At the end you write, “Unless it is wrong to deny knowledge in such cases, adherence is not vulnerable to the objections that have been raised against it.” But it definitely seems that knowledge is present in many such cases. So, for instance, the protagonist in Harman’s assassination case knows that the political leader is dead. And the person in fake barn country knows that it’s a barn.

    So I was wondering, if the view you’re discussing implies that knowledge is absent in such cases, why is there such a strong tendency to attribute knowledge? Is it a matter of individual differences in what’s thought of as “normal” or “sufficiently similar”?

    • John —

      You say: “the protagonist in Harman’s assassination case knows that the political leader is dead.” I am willing to concede that there are (relatively permissive, low-standards) contexts where it is true to say this; but I’m convinced that there are also other (stricter, higher-standards) contexts where it is true to say that the protagonist does not know this.

      In general, adherence seems to me to have genuine intuitive support. E.g. see my earlier post Meno requires adherence, not safety; and as I pointed out above, Williamson’s claims about the causal efficacy of knowledge seem to imply that knowledge must satisfy adherence, at least to some degree.

      So, I would just give a contextualist explanation of the tendency that we sometimes have to attribute knowledge in the assassination cases. In some contexts, cases where the agent encounters the defeating evidence just aren’t deemed relevant. Still, in other contexts, such cases are deemed relevant, in which case it is true to deny knowledge in such cases.

      Incidentally, as I explained above in replying to Jeremy Fantl, I don’t think that it’s plausible to say that adherence explains our tendency to deny knowledge in the fake barn cases (at least in the stricter contexts, where a wider range of cases counts as relevant). In my view, it is safety, not adherence, that explains this.

  13. Hi Ralph,

    Thanks. So is there a principled way to decide when we’d expect low standards as opposed to high standards to predominate and generate positive, rather than negative, knowledge verdicts?

    Lacking that, it doesn’t really seem, at least to me, like the proposed view “explains why knowledge is lacking” in certain cases. Instead, it seems like we’re retrofitting the view with a contextualist theory of relevance/similarity, and this retrofitting is guided by the knowledge verdict. But maybe I’m missing something.

    Incidentally, every time researchers have looked, people tend to attribute knowledge in fake barn cases. So it doesn’t seem like a tendency to deny knowledge in such cases exists to be explained.
