The proto-reliabilist hypothesis

In contemporary Anglo-American epistemology, it is very widely assumed that knowledge must be reliably produced: produced, that is, by abilities (processes, faculties, powers, etc.) that “will or would yield mostly true beliefs,” as William Alston put it. Call this consensus view “knowledge reliabilism.”

One thing I’ve always been surprised by is how little explicit, direct argumentation there is for knowledge reliabilism in the literature. An old paper by Goldman contains a weak explanatory argument, which gets cited sometimes. Aside from that, the main consideration offered in support of knowledge reliabilism is that it’s just common sense. For instance, Edward Craig claims that reliabilism “matches our everyday practice with the concept of knowledge as actually found,” that it is “a good fit to the intuitive extension of ‘know’.” And Ernest Sosa claims that reliabilism is the theoretical “correlate” of “commonsense” epistemology. Call this “the proto-reliabilist hypothesis” about folk epistemology.

The proto-reliabilist hypothesis makes at least a couple of straightforward predictions. First, people will tend to deny knowledge in cases of unreliably formed belief. Second, clear and explicit differences in reliability should produce large differences in people’s willingness to attribute knowledge. These predictions can be tested with some very simple experiments. Below I briefly describe one I ran.

Participants read a brief story about Alvin. While visiting a friend in an unfamiliar town, Alvin needs to pick up a prescription. He’s on his way to the pharmacy and approaches an intersection where he needs to make a turn. Some crucial details of the story differed across conditions. I manipulated whether Alvin was very unreliable or very reliable at remembering driving directions. I also manipulated whether he made the incorrect or correct turn at the intersection. Here is the text participants read, with the two manipulations noted in brackets:

Alvin is very [unreliable/reliable] at remembering driving directions. Today he is visiting a friend in an unfamiliar town. Alvin needs to pick up a prescription while he is there, so his friend gives him directions to the pharmacy. On the way, Alvin needs to make a [left/right] turn at an intersection. Alvin gets to the intersection and turns right.
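For concreteness, here is a minimal sketch of how the four between-subjects conditions fall out of crossing the two bracketed factors. The template string and variable names below are illustrative only, not the study’s actual materials:

```python
from itertools import product

# Illustrative template; the two bracketed factors from the vignette
# are the two format slots.
TEMPLATE = (
    "Alvin is very {reliability} at remembering driving directions. "
    "Today he is visiting a friend in an unfamiliar town. Alvin needs to "
    "pick up a prescription while he is there, so his friend gives him "
    "directions to the pharmacy. On the way, Alvin needs to make a "
    "{turn} turn at an intersection. Alvin gets to the intersection "
    "and turns right."
)

# Crossing the factors yields the four conditions. Alvin always turns
# right, so "left" makes his belief false and "right" makes it true.
conditions = {
    (reliability, turn): TEMPLATE.format(reliability=reliability, turn=turn)
    for reliability, turn in product(["unreliable", "reliable"], ["left", "right"])
}
```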

Participants then responded to an open knowledge probe:

When he got to the intersection, Alvin _____ that he should turn right to get to the pharmacy.

The options were “knew” and “only thought.” Here are the results:

[Figure: percentage of participants attributing knowledge (“knew” vs. “only thought”) in each of the four conditions]

The results from the two conditions where Alvin makes the incorrect turn (the “false” conditions) were exactly as you would expect: the directions called for a left turn, so Alvin’s belief that he should turn right was false, and people overwhelmingly denied that he knew he should turn right. But the results from the two conditions where Alvin makes the correct turn (the “true” conditions) were very different from what the proto-reliabilist hypothesis predicts. When Alvin made the correct turn, people attributed knowledge at similarly high rates, regardless of whether he was very reliable or very unreliable at remembering driving directions (80% vs. 77%).
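To make the size of that reliable/unreliable gap concrete, here is a minimal sketch of the comparison as a standard test on a 2 × 2 contingency table. The cell counts are assumed for illustration (100 participants per condition); the actual sample sizes are reported in the paper:

```python
from scipy.stats import fisher_exact

# Assumed counts, 100 per condition: [attributed knowledge, denied it].
reliable_true = [80, 20]     # 80% chose "knew"
unreliable_true = [77, 23]   # 77% chose "knew"

# Fisher's exact test: does the reliability manipulation predict responses?
odds_ratio, p_value = fisher_exact([reliable_true, unreliable_true])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# Under these assumed counts, p is far above .05: no detectable effect
# of reliability on knowledge attribution in the true conditions.
```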

This same basic pair of findings — high rates of knowledge attribution for beliefs produced by unreliable abilities, and little to no effect of reliability/unreliability on knowledge attributions — replicates across different narrative contexts, cognitive abilities, and ways of measuring knowledge attributions.

Overall, this leads me to conclude that the proto-reliabilist hypothesis is false. Knowledge ordinarily understood does not require reliability.

(A fuller description of these findings and other studies can be found in a paper forthcoming in Ergo.)


Comments


  1. Hi John,

    I think that’s interesting, and I agree that it would be good if there were more arguments for the reliability condition on knowledge (and that, as things actually stand, the arguments that should be there aren’t). One question, though, about the case/prompt/response. I haven’t had the chance to read the other cases, but there’s something about this one that worries me a bit. My initial reaction is something like this. Alvin is unreliable when it comes to remembering directions, but the case described seems to be one in which I might think that he did (somewhat surprisingly) remember the directions. In all the cases where he does remember the directions, there’s a reliable process responsible for the formation of his belief. In response to this prompt, I might think the following things:
    (a) A doesn’t reliably remember directions;
    (b) In this case, A does somewhat surprisingly remember;
    (c) In the cases where A does remember, his beliefs based on the relevant memories will be reliable;
    (d) A knows.

    On this way of reading the case, the unreliability of A’s memory might be like the unreliability of my cell phone reception (i.e., it’s not unreliable because it often feeds me false things; rather, it’s unreliable because it often leaves me with no help BUT when it does provide some guidance, that guidance is reliable):
    (a’) C’s phone doesn’t reliably have service.
    (b’) Still, there are the rare cases where I can get a signal and get on google maps.
    (c’) In such cases, my beliefs based on the reliable results in google maps will be reliable.
    (d’) In such cases, I’d know.

    If this pattern of responses delivers the knowledge verdict and is in keeping with proto-reliabilism, it seems we’d need some reason to think that the participants wouldn’t/didn’t respond in this way. Might some respondents think that the reason A turned right in the case is that this is a case in which A’s unreliable memory did provide some help, and that the conditional probability of getting it right, given some guidance, is high?

    This kind of worry might not arise for the other cases, but it struck me as a potential worry about this one case.

  2. The very same setup could be altered slightly to test the factivity of knowledge: Ask whether a very confident Alvin who turned left when the pharmacy was to his right *knew* that he should turn left. Depending on details of wording, I wager that the majority would say he knew. If this were so, would you conclude that knowledge ordinarily understood does not require truth?

    To me this reveals a general flaw in drawing philosophical conclusions from surveys that elicit knee-jerk reactions in thinly specified hypothetical situations. I’m no opponent of x-phi, but I will only be moved by data from subjects who are reporting on their reflective equilibrium and not their first-pass knee-jerk reactions.

    • Hello David,

      Perhaps you missed this part of the description of the study, but I included control conditions in which it was false that Alvin should turn right. There was an enormous effect of truth-value on knowledge attributions, with rates of knowledge attribution ~10% in false conditions. In other words, this very same experiment provides additional support for the view that knowledge ordinarily understood is factive. So, on the approach I used, you’d lose your wager.

      More generally, you’ll be happy to learn that there is no reason to think that these participants were simply reporting their “knee-jerk reactions.” I included many comprehension checks in this series of studies. Over 90% of participants routinely passed them. And if we eliminate from the analysis all who failed a check, we still see the same basic patterns.

  3. Hi Clayton,

    I think it’s theoretically possible that some participants are conceiving of the case that way.

    One reason to think that this is not what is happening — or, at least, that it’s not primarily driving the observed results — comes from the way participants responded to manipulation checks regarding Alvin’s reliability. For instance, in this particular study, when participants in the unreliable condition responded to the statement, “When it comes to driving directions, Alvin’s memory is reliable,” they tended to disagree.

    I used a variety of other manipulation checks in the other studies, and it always turned out that people were actively categorizing the agent as unreliable, and there were extremely large differences in reliability ratings between unreliable and reliable conditions, despite little to no difference in knowledge judgments.

    • Hi John,

      Interesting stuff! Can you say more about how the checks address Clayton’s hypothesis? The way I see it, he’s drawing a global vs. local reliability distinction. To change the analogy:

      * Birdwatcher A is unreliable because they have very blurry vision.
      * Birdwatcher B is unreliable because even though they have really sharp vision, they are asleep half of the time.

      Both are globally unreliable, but B is locally reliable when they are awake.

      Similarly, you can think of the directions case on model A or B. Model A: some days Al remembers directions perfectly, some days he doesn’t. Model B: every day Al makes errors in the directions. Clayton’s hypothesis is that subjects apply model A and think that knowledge only requires local reliability.

      I find it hard to control for that by saying that *the agent* is unreliable. On the hypothesis that there are these two notions, that phrasing may trigger the global one.

      One possible check would be to add that Al has taken a few wrong turns on the way already.

      • Hi Julien,

        Yes, I can say more about that. Clayton’s hypothesis requires that people answer the knowledge question with one sense of reliability in mind, and then in the very same context answer the reliability question with a different sense of reliability in mind. That is unlikely.

        To put this another way, if people infer that the agent is reliable as part of the knowledge attribution, then when they are asked whether the agent is reliable, they will probably interpret “reliable” in the same sense and agree that the agent is reliable.

        • Hi John (and Julien),

          “Clayton’s hypothesis requires that people answer the knowledge question with one sense of reliability in mind, and then in the very same context answer the reliability question with a different sense of reliability in mind. That is unlikely.”

          I agree that that’s unlikely, but that wasn’t quite the hypothesis that I had in mind. I don’t think I did a great job describing the hypothesis that I did have in mind, so let me take another crack at it.

          How do the readers understand ‘A’s memory is unreliable’? To my mind, a perfectly natural way of reading it is as follows:

          UM1: A’s memory is unreliable insofar as it doesn’t reliably store the kind of information that healthy, well-functioning memory would.

          If A’s memory is unreliable in this way, there might be many circumstances where A is left without any assistance and so is forced to guess or search for evidence. Memory that is unreliable in this way doesn’t create misleading impressions; rather, it is like my cellphone: it simply offers the person nothing to work with and so cannot be counted on. (I get no coverage in London, so I have to generate maps when I have wifi and take screenshots. My phone is unreliable in this sense, but it never falsely represents anything. When it represents things, it always represents accurately.)

          UM2: A’s memory is unreliable insofar as the ratio of accurate to inaccurate representations it offers to A is poor: when A seems to remember that something is so, it isn’t likely that things are that way.

          Memory that is unreliable in this sense isn’t unreliable because it cannot be counted on to offer some representation; rather, it is unreliable because the representations it provides to A don’t reliably correlate with how things are. This memory is unreliable in that it generates too many inaccurate representations (and is thus unreliable in a way that is very different to the way in which my phone is unreliable).

          My suggestion was this: I think that a perfectly natural way to understand an unreliable memory is UM1. If A’s memory is unreliable in this way, it could still be that when A’s memory makes it seem to A as if p, p is very likely to be true.

          Just to fill out my hypothesis:
          * Some respondents who attributed knowledge to A ALSO would disagree with ‘When it comes to driving directions, Alvin’s memory is reliable’ BECAUSE they think that A’s memory is unreliable in the UM1 sense.
          * However, these same respondents think that this is one of the rare occasions on which A’s memory, which doesn’t store information as reliably as a healthy, functioning memory would, did store the information. (This is why A turned the way he did, after all.) If prompted, they might describe this as a case in which A remembered to turn right. (I don’t think they were probed on this.)
          * Provided that these respondents didn’t think that A’s memory was unreliable in the way that UM2 describes, this pattern of responses is consistent with proto-reliabilism.

          For what it’s worth, when I hear people speak of ‘unreliable memory’ in normal situations, I have UM1 in mind, not UM2. UM2 is possible, of course, but it seems much less common than UM1. I don’t see that UM1 is all that problematic for the proto-reliabilist unless we can rule out the hypothesis that, in the vignette described, A remembered to turn right (i.e., that A’s memory, which usually offers no guidance, offered guidance).

          Part of why I’d want to resist the description you seem to be attributing to the folk is that I like the idea that knowledge involves a kind of success that’s attributable to ability, but if we don’t see A as remembering that right was the way to go, it’s hard to make sense of the knowledge attribution on an ability account. But once we assume that A turned right because A remembered that that was the way, it’s harder to see how the case is a threat to proto-reliabilism.

          • Hi, Clayton. Thanks for clarifying. That’s a pretty complicated hypothesis. In other studies, the percentage of true beliefs formed by the relevant ability (e.g. vision or reading comprehension) is explicitly quantified. The difference between 90% and 10% correct made no difference to knowledge attribution. Participants in the 10% condition disagreed that most of Alvin’s relevant beliefs were true. So we see the same basic pattern for different sources and different ways of explicitly indicating (un)reliability. The simplest explanation for this consistency is that knowledge ordinarily understood doesn’t require reliability (in the relevant truth-conducive sense).

            As for your last remark about “an ability account”: overall, the results strongly suggest that this is roughly what the ordinary concept amounts to! (I call the view “abilism.”) It’s just that reliability is not required for ability (i.e. memory and vision can be unreliable and yet produce knowledge).

        • “Clayton’s hypothesis requires that people answer the knowledge question with one sense of reliability in mind, and then in the very same context answer the reliability question with a different sense of reliability in mind. That is unlikely.”

          I don’t find it unlikely. Epistemologists use “reliability” as a term of art, like “safety”. Suppose we ran the experiment with “Al’s opinions about directions are (un)safe” in the prompt instead. I wouldn’t be surprised if the safe/unsafe difference had no effect on answers. Would that show that people don’t take knowledge to entail safety? I don’t think so, because epistemologists use “safety” in their own special sense; non-epistemologists won’t understand “Al’s opinions are safe” in the way epistemologists do. I don’t think “reliability” is different. When I explain reliabilism to students I spell out what “reliably formed” means; I don’t take this to be a well-understood pre-theoretical notion (as opposed to, say, “knows that p” or “sees that p”).

          Hence the following doesn’t seem to me at all far-fetched. When they read “Al is very unreliable at remembering driving directions” people understand it as: you can’t rely on Al to remember driving directions. Some days he will remember them, some days he won’t. That understanding will roughly be coextensive with the epistemologist’s “global reliability”. At the same time, when answering the knowledge question, they are evaluating whether, on that particular day and at that particular time, Al’s belief was produced in a way that guaranteed truth at close cases. And given that Al did get that particular turn right, they may guess that it was. That is roughly coextensive with “local reliability” in the epistemologist’s sense.

          To control for that, I would think that the prompt must make Al’s belief locally unreliable in the epistemologist’s sense, rather than simply *say* “it’s unreliable”. That’s why I was suggesting adding that Al has already taken 5 or 6 wrong turns on the way.

          • Thanks, Julien. I agree that “reliability” and “safety” are not “well-understood pre-theoretical notions,” which is another reason to not expect them to be reflected in the ordinary knowledge concept, which is, after all, well understood pre-theoretically.

            Also, some of the later studies in this paper, and other recent results, make it clear that knowledge ordinarily understood does not imply “guaranteed truth at close cases.” For instance, in Experiment 9, people attributed knowledge while agreeing that the person might have gotten it wrong in a nearby case.

  4. “Hi, Clayton. Thanks for clarifying. That’s a pretty complicated hypothesis.”

    I don’t think it’s really that complicated. When you ask people what comes to mind when they think of ‘unreliable memory’, some think of loss of information and some think of apparent memories that contain false information. On its face, it looks like only one kind of unreliability is a threat to proto-reliabilism. (And I agreed, for what it’s worth, that the other cases might be cleaner challenges to that view, but I had worries about this case.)

    • Clayton, does your hypothesis involve the following? People distinguish between two types of reliability, then apply one sense when answering the knowledge question, and another sense when answering the reliability question.

      I took your original proposal to include something like that. And your latest comment makes it sound like that again.

  5. I’m not sure if this is a version of Clayton’s suggestion, but I offer it in case it is not. It occurs to me that there are two ways to be unreliable. First, one can take appropriate actions to find out a bit of information, but simply fail regularly. Someone with a bad memory, for example, would still be aiming at remembering driving directions, but will often fail to recall them correctly. Second, one can use just the wrong sort of method to get at information. To extend the example, if the driver had simply drawn a slip of paper from a hat containing slips with the letters “R” and “L”, this would be unreliable because the process itself is inappropriate as a way of divining one’s route. The word ‘reliable’ itself tends to be ambiguous between the two without context.

    I think this is relevant, because in examples I hear from people doing epistemology, the cases always sound like they are the second type: random answers drawn from a hat, psychics, etc. But in the driving case here, it sounds like the first: Alvin is actively trying to do the right thing to get to the pharmacy; he just has a track record of doing it not so well.

    Another way to look at it: if Alvin draws “R” out of the hat and gets the direction right, it’s poor dumb luck. I think people would be hesitant to call that knowledge. But if Alvin manages to remember despite his poor odds of doing so, he’s simply beaten the odds and managed to succeed at something he often fails at. I think people would be much more willing to call that knowledge.

    • “if Alvin draws ‘R’ out of the hat and gets the direction right, it’s poor dumb luck. I think people would be hesitant to call that knowledge. But if Alvin manages to remember despite his poor odds of doing so, he’s simply beaten the odds and managed to succeed at something he often fails at. I think people would be much more willing to call that knowledge.”

      Very well put, Brandon! I think you’re exactly right.
