What philosophers think might not be what you think they think

Professional philosophers often appeal to patterns in ordinary thought and talk — “commonsense” — in order to support theories or assumptions. In recent years, the emerging interdisciplinary field of experimental epistemology has revealed many instances where commonsense epistemology has been seriously mischaracterized. But even if professional philosophers misidentify what the folk think about knowledge, certainly they know what they themselves think about knowledge. Right?

Wrong.

In a fascinating paper forthcoming in Philosophical Studies, a pair of researchers tested ordinary people and professional philosophers (“experts”) on a range of cases.* A principal finding concerns knowledge attributions in cases where an agent sees an object that is surrounded by visually indistinguishable fakes.

Here is one case they tested:

(Sculpture) The director of a sculpture museum is so impressed with recent improvements of hologram images that she decides to perform a secret test on the visitors of her museum. To this end, she orders hologram images that even art experts cannot visually distinguish from the real sculptures in her museum, and she replaces all but one of the sculptures by their hologram image. As the director had expected, no one realizes any difference between the hologram images and the real sculptures. One day, the world’s greatest Rodin expert is visiting her museum. The expert is standing in front of a famous marble sculpture by Rodin, which is the only real sculpture that is presently on display in the museum, and she thinks to herself: “I’m facing one of Rodin’s famous marble sculptures now.”

Participants rated their agreement with the statement that “the Rodin expert knows that the sculpture in front of her is one of Rodin’s famous marble sculptures.”

The case is structurally similar to the famous “fake barn” case. Textbooks and review articles tell us that there is “broad agreement” among experts that these aren’t cases of knowledge. This verdict is often treated as a litmus test for theories of knowledge: if your view implies that there is knowledge in a “fake barn” case, that implication is widely taken to be a decisive refutation of your view. Accordingly, we would expect most experts to deny that Sculpture is a case of knowledge.

But that’s not what the researchers found. Instead, a majority of experts attributed knowledge. Surprised, the researchers tested the case again on another group of experts. And, just for good measure, they tested another case with a “fake barn” structure too. Perhaps the initial result was a fluke?

It wasn’t.

Again and again, most experts attributed knowledge. Ongoing work by another team of researchers has returned broadly similar results.**

The researchers also found that in a case structurally similar to a “lottery case,” the majority of experts attributed knowledge, which again contradicts “the textbook consensus.”

Based on these findings, the researchers concluded that “the discipline of epistemology is dysfunctional insofar” as it is “deluded about” its practitioners’ verdicts about cases.

I’ve only covered some of the paper’s interesting findings. Check it out!

*Horvath, J., & Wiegmann, A. (2016). Intuitive expertise and intuitions about knowledge. Philosophical Studies, 1–26. http://doi.org/10.1007/s11098-016-0627-1

**Carter, J. A., Pritchard, D., & Shepherd, J. (ms). Knowledge-how, understanding-why and epistemic luck: an experimental study.


Comments


  1. What happens when you move the observation that “even art experts cannot visually distinguish from the real sculptures in her museum” closer to the observation that “the world’s greatest Rodin expert is visiting her museum”?

    Did the researchers test for this? Because I suspect that there would be a significant difference in the results if this switch were made. In other words, it seems to me that what we are finding here is not something about our commonsense beliefs about knowledge, but something about our attention to, or retention of, such details.

  2. Hi Jonathan,

    No, the researchers didn’t test for that. It might be interesting to see whether ordering the information differently would have an effect. But, overall, I thought that the materials tested in this paper were relatively clear and short. And the “Sculpture” case is only about 130 words long. So I think it’s reasonable to expect that people can retain the details. Wouldn’t you expect professional philosophers to be able to keep track of information through the course of a short paragraph?

    Also, it’s probably worth mentioning that high rates of knowledge attribution have been observed repeatedly in cases like this, including in studies where closely matched controls were judged differently. So I don’t think that inattention or forgetfulness is at the root of this pattern.

  3. Interesting! But I think there might be some details that could change the results. For example, suppose we add this sentence to the story: the expert had false beliefs about some other famous sculptures by Rodin.

  4. To be honest, I’m quite shocked that the tested experts didn’t recognise this as an instance of the fake barn scenario. The structural similarity is quite striking, and one would have expected the philosophers to see it. Then again, they aren’t free from competing intuitions themselves. Whatever the reason, I find this most interesting.

    • We do not really know whether our expert subjects recognized the fake-barn structure of this and one other scenario that we tested. Yet one clearly shouldn’t presume that they didn’t recognize it simply because they gave the “wrong” answer! Also, the issue is not so much whether they recognized these cases as instances of the fake-barn pattern, but rather what they really think about cases of this kind. And it seems that the majority really thinks that these are cases of knowledge – whether they recognize the fake-barn structure or not.

  5. Hi, Turri. I think that if adding such details could change the results, we should ask which of these stories is the best candidate for eliciting the fake barn intuition, or which one is best for testing our hypotheses about knowledge. By telling these short stories we want to capture certain intuitions and beliefs, and to test our hypotheses about knowledge, but the ordering of details may inadvertently change our target. In the Rodin scenario, for example, we want to aim at our intuitions about fake barn scenarios, but maybe the story is aimed at another target – maybe it focuses on the relationship between acquiring knowledge and being an expert.
    Actually, I agree with Jonathan that these tests are also about how to develop a philosophical scenario and about different ways of interpreting stories.

    • Hi, Hadi. That’s a fair point. At the same time, I think that part of what is at issue is what “the fake barn intuition” actually is. In the literature, it has been asserted that we are inclined to deny knowledge in such cases. But every test to date has revealed a central tendency to attribute knowledge. (And some contributors to the philosophical debate have defended simple arguments that these are cases of knowledge.)

    • Thanks for your comment, Hadi. One key issue is whether the material you add to a scenario is epistemologically relevant or not. Your initial suggestion to add the sentence “the expert had false beliefs about some other famous sculptures by Rodin” would clearly make an epistemological difference here, for it suggests that the Rodin expert is not so reliable after all (though I’m not sure whether this is actually consistent with her being the world’s greatest Rodin expert…). And if you add some epistemologically relevant detail, then (a) it might not be the same type of case anymore (i.e., not a fake-barn-style case), and (b) it could be a good thing if subjects change their view about the case – because this might show that they are sensitive to relevant details. This is all very different if you only add epistemologically irrelevant material.
      On a more general note: if you think your suggestions have enough initial plausibility, then why not just go ahead and test them? Otherwise, this is just armchair speculation about empirical questions.

  6. Thanks so much, John, for discussing our work here at Certain Doubts (I only came across your post today)! Since I basically agree with your presentation, and also with most of your replies, there’s not much left to do for me at this point. I’ll still add a few replies of my own to the comments above. And I would like to encourage others to join the discussion!

  7. Very interesting! I have always found the fake barn cases much less persuasive than Gettier cases, and apparently I’m not alone. That someone could have been fooled (but in this case wasn’t) has never struck me as a compelling reason to decline to attribute knowledge.

    • Hi Dan, people who want to defend the fake barn intuition could object that your way of putting things is somewhat tendentious. For the crucial point is not that, e.g., our Rodin expert could have been fooled (which is almost always the case, if only because of evil demons), but that she could *very easily* have been fooled. More generally, I wouldn’t want to rule out that there are good arguments for the standard verdict about fake barn cases that do not at all depend on intuitive judgments about those cases (this much, I think, we should concede to people like Cappelen or Deutsch).

  8. The version of the fake barn case that’s always most moved me is not this one — where you just imagine the subject plopped in front of the real item — but the version suggested by DeRose, where you imagine the subject going from fake to fake to fake to real to fake to fake, saying at each point, “There’s a ____”, “There’s a ____”. It’s in that description of the case that it seems most bizarre to say that, suddenly, the subject knows it’s a (real) ____.

    There also seem to be two features of the Rodin example that diminish my intuitions: 1) the neighboring fakes aren’t fake versions of the very statue that the subject is looking at, whereas in the barn facade case (at least, how I usually imagine it), the fakes are just like the very barn the subject is looking at; the mere presence of other fake things around seems less of a knowledge-destroying defeater than the presence of fake things just like the thing the subject is looking at. 2) the lack of the word “real” in the query seems problematic. In a way, even the hologram of Rodin’s famous statue can truly be said to be Rodin’s famous statue, in the same way that a painting of the statue is the statue in a way that a painting of an avocado isn’t. When I imagine the case, I have a much stronger intuitive “no” response to the statement “the Rodin expert knows that the sculpture in front of her is one of Rodin’s famous marble sculptures” than I do to the statement “the Rodin expert knows that the sculpture in front of her is a real — non-holographic — version of one of Rodin’s famous marble sculptures”, especially when the other two difficulties, above, are fixed.

    • Hi Jeremy,

      Good points!

      I think it would be interesting to figure out what might differ between the iterated-falsehood version of the case, which is rarely discussed, and the original version, which has been tested repeatedly. It could turn out that there is a very simple explanation for why the iterated-falsehood version would elicit a different intuition. For example, one possibility is that if the proposition is false repeatedly, then that undermines one’s confidence that it is true even when it is true. Another possibility is that if the person is repeatedly wrong, then that undermines one’s confidence that the person actually does accept the proposition the next time.

      One main reason the fake-barn case was thought to be important was that it might have pointed to an additional factor that a theory of knowledge needed to account for, beyond things like truth or belief (or, in Goldman’s original discussion, a causal condition). However, if either of the explanations suggested above is correct, then the case would not serve that purpose. Instead, intuitions about the case would then just reflect connections between judgments about knowledge, truth and belief.

      • Hi John. Your two proposals would definitely undercut the original import of the barn facade case. I’m not sure either rings true to me. For one thing, I get similar intuitions when it’s not one person sequentially encountering all the fakes, but a bunch of people, one in front of each item (but only one person in front of the genuine article). Each one says, “That’s a real ____”. I pretty strongly have the intuition that the lucky one doesn’t thereby luckily know. But that lucky one has not been repeatedly wrong, so I’m quite strongly confident that the person accepts the proposition. Nor do I really see why I’d doubt that the lucky one’s belief is true. I — from the outside — see which one is real. I see that the person is standing in front of it. So, I don’t have any doubt about its truth.

        Just introspecting, it seems to me that what explains the stronger intuitions in the iterated falsehood case is that that way of putting the case prevents the luckiness of the person getting it right from being hidden or lost in the description of the case. If that’s the explanation, then it’s particularly suited for doing the work that the barn cases were originally meant to do.

    • Hi Jeremy, thanks for your really interesting points! We also had doubts about our data on the Sculpture case, so we repeated the case in a second study and also added a case, Dollar, that’s much closer to the original fake barn case (e.g., insofar as all the fakes are instances of the same kind of thing, i.e., dollar notes). The results were almost the same for Dollar as for Sculpture (see our paper for the details).

      I’m not so convinced by your second point that we can truly say of the perfect hologram that it is Rodin’s famous statue – at least that’s not consistent with my intuitive ontology of artworks (e.g., artworks are unique objects, due to certain historical properties and a particular origin – a perfect copy of the Mona Lisa just isn’t the Mona Lisa).

      But I’m really fascinated by the DeRose version of the fake barn case, which I wasn’t aware of until now. It would be odd, however, if even epistemological experts changed their verdict about the case when presented with the DeRose version instead of the standard version. Do they really need this kind of salience or framing in order to realize that barn judgments are locally unreliable in this particular environment? And it’s hard to see what else the DeRose version could do for an epistemological expert. By the way, I’m not so sure about inserting ‘real’ into the proposition believed. Couldn’t this somehow affect the standards for knowledge in this case? The proposition ‘that’s a real barn’ might require different evidence than the proposition ‘that’s a barn’. In any case, I take your point about the DeRose version as an intriguing suggestion for future research on this issue!

      • Hi Joachim. That’s interesting about the second study. I do agree that a perfect copy of the Mona Lisa isn’t the Mona Lisa. But I do have the intuition that if someone is looking at a picture of me in a room of pictures of other philosophers, they speak truly when they say, “That one is Jeremy Fantl.” And I wonder if something like that is bleeding into Rodin case. The expert can truly identify the hologram of the Rodin (rather than the Donatello). I agree that inserting a “real” or “non-holographic” into the query doesn’t adequately address it, for various reasons (though, FWIW, I do have the intuition that, in a normal situation, an expert can look at a real Rodin and know that it’s a real, non-holographic Rodin; I know some have skeptical scenarios raised to salience by this kind of thing, though, which undercut their intuitions).

        I don’t know about what the mass of experts would say about the DeRose version of the fake barn case. You say it would be odd if epistemological experts changed their verdicts. I guess that might be so. All I know is that my verdict is much stronger in DeRose’s version, and that apparently DeRose’s is, too, and that when I present the cases to students, I’m much better at getting them to see the intuition when I do it DeRose’s way (but perhaps that’s a function of my own intuitive leanings). If our pattern of intuitions is indicative of epistemologists generally, then I would expect (to some extent) the results you’ve documented.

        • Jeremy, you are right that we say such things as “That one is Jeremy Fantl” in front of a picture of you. However, would we also say something like “The person in front of me is Jeremy Fantl”, while merely standing in front of a picture of you? I’m not so sure about this one – yet this would be more closely analogous to the wording of our vignette.

          If it should turn out that most experts are like you or DeRose, that would be a very interesting thing to know. Still, if those experts cannot point to any epistemologically relevant feature of the case that would explain their different intuitions (or the different strength of their intuitions), then this may point to a certain kind of limitation in their intuitive expertise. For why do they need the DeRose version at all if there is no epistemologically relevant difference between the two cases? Only because they are subject to some kind of psychological limitation or bias, it seems.

  9. Hi, Jeremy. Yes, that’s definitely another possible explanation. I don’t think it’s conclusively ruled out, but previous work on cases like this makes it seem unlikely that high rates of knowledge attribution are due to inattention or forgetfulness. I say this because in previous work, high rates of knowledge attribution were accompanied by significantly lower rates in closely matched control conditions. So it doesn’t seem that people are unable to follow or keep track of details. Rather, it seems, they just have a different view on what those details imply about knowledge. (To put the point another way, for most people, on the natural way of judging the case, whatever luckiness is involved doesn’t inhibit knowledge.)

    At the same time, I agree with your basic point that there are probably things you could do to make people deny knowledge. If that happens, then it would be important to figure out what is responsible for the change. I think that, as a whole, the field has made progress on that question. But it can be pretty challenging and our understanding is only partial.

    • Maybe this is because I’m not sure which studies you’re referring to, but I’m under the impression that the way earlier studies showed that respondents kept track of details was by asking them (for example, with questions like “Is it true that p?”, etc.). If that’s the way it’s done, I wonder if this sort of worry leaves the “masked luckiness” suggestion on a par with your first proposal — that reader confidence in the truth of the proposition is undermined. I’m not suggesting that the luckiness is forgotten about — just that the description of the case diminishes its salience and renders the reader’s intuition immune from its force. This would allow readers to answer questions correctly when asked about the details of the case.

      Of course, I claim that my awareness of the truth of the subject’s belief means that my denial of knowledge to the subject doesn’t depend on lacking confidence that the belief is true. But maybe I just know the details of the case in a way that allows me to correctly answer questions even though my intuitions are immune to my knowing that the subject’s belief is true. Still, at worst this puts the proposals on a par. And, again just introspecting, in the non-iterated-falsehood case, my awareness that the subject’s belief is true seems far more immediate than my awareness that the subject’s belief is only lucky to be true. If I had to say which my intuitions were masked from, I would say the latter.

      But the way you describe the studies, it sounds like you think that they test for whether the readers are keeping track of details, not by asking them, but by directly testing whether the subject’s intuitions are sensitive to those details. So all the above may be beside the point.

      • Hey Jeremy,

        I don’t think it’s beside the point. Quite the contrary — I think those considerations are all worth having in view!

        Theorists will have different reasons for being interested in these cases and judgments about them. For my part, my most immediate interest is whether the luckiness exhibited in a fake-barn-style case naturally leads people to tend to deny knowledge, especially when compared to closely matched cases that exhibit no luck or different luck. All the evidence to date suggests that there is little to no natural tendency to deny knowledge in fake-barn-style cases, that fake-barn-style cases are viewed similarly — though not exactly similarly — to cases exhibiting no luck, and that they are consistently viewed very differently from cases exhibiting other types of luck.

        I agree that explicitly asking people about details always leaves open questions about the consequences or effectiveness of doing so. And, again, I think it would be interesting to learn what factors could lead people to view fake-barn-style cases more like cases of ignorance.

        • Point taken about all the evidence-to-date, John. I think the results are really interesting, insofar as they might create worries for people who rely on the original formulation of the case. As noted, though, I find the evidence-to-date limited because it doesn’t include the only kinds of fake-barn style case that really move me to draw the relevant conclusions. I would have given exactly the same responses in the studies-to-date, as well as most other X-Phi studies. So I take my own intuitive responses as fairly well correlated with the respondents in X-Phi studies. And my intuitive response in the iterated-falsehood case is in line with the traditional response. So I’m still pretty comfortable with that conclusion. To the extent that others have the same pattern of intuitions I have, the studies-to-date shouldn’t move them either.

          • Thanks, Jeremy. I’m curious, if we grant that the original case is naturally judged knowledge while the iterated version is naturally judged ignorance, what do you think that tells us about (the concept of) knowledge?

          • I guess I’d think that it shows that the causal theory of knowledge is problematic, because it seems to me that irrelevant features of the original case are blocking the “luckiness” from having its proper effect on my intuitions, whereas the way the modified case is framed, the luckiness remains — as it should — salient. (Also, I shouldn’t overstate things; while I have the intuition that the original case is one of knowledge, it’s not that strong, and it’s really conflicted, and I can get myself to have the more traditional intuition by really focusing on those other fakes all around. So, the knowledge-intuition in the original case is weak and malleable, whereas the no-knowledge intuition in the modified case is strong and stable.)

      • Hi Fritz, in fact, our operationalization of expertise was much stricter than in most other studies of this kind. For example, in our first experiment we counted only people who (a) hold a PhD in philosophy and (b) have epistemology as one of their areas of specialization or competence. If one requires still more, then the danger is that most professional epistemologists do not count as relevant experts, which we regard as a skeptical result of some sort. Apart from that, we found no significant difference between subjects at different levels of expertise in our study (e.g., MA vs. PhD), so there is no evidence that intuitive expertise in epistemology might be discontinuous above a certain level of expertise.

  10. Thanks for the interesting paper and very nice discussion in the comments here. One point related to Jeremy Fantl’s line of questions. I wonder whether, in x-epist surveys (and I don’t know the literature all that well), follow-up questions can be, or have been, asked that not only gauge subjects’ understanding of the facts of the case (as suggested by Jeremy) but also probe the subjects’ theoretical intuitions about knowledge. One important thing to tease out in the Sculpture case is whether or not the subjects thought that it was a matter of what we’d call epistemic luck (of the variety that most epistemologists take to be inconsistent with knowledge) that she had a true belief in that proposition.

    So one could have asked the subjects, after asking them if it is a case of knowledge, whether they thought it was a matter of luck that the Rodin expert was in front of the only real sculpture rather than one of the holograms, or if it was a matter of luck that her belief is true, to see if their intuitions really disconfirm a plausible anti-luck necessary condition on knowledge. One could have asked them what would have happened if the expert had been looking at one of the holograms – perhaps, unlike everyone else, she would have been able to tell it was a hologram since she is the world’s leading expert. One could also ask them straightforwardly, after these questions, whether knowledge excludes being lucky that one’s belief is true. Obviously one would have to design the test questions better than how I just put them. And the Dollar case may get around some of these concerns but I still do wonder what the subjects are thinking about what we’d think of as the waiter’s epistemic luck.

    In the Monitor case, I wonder what would happen if one asked the test subjects how likely they thought it was that all of the monitors could have been on a tape loop from the previous night. The presence of one faulty monitor might be enough to render all of the monitors unreliable sources of information about whether people are in the building (at some relevant level of generality), even though most of them are in fact operating properly.

    Last, I wonder if the x-phi blog is a good source to recruit test subjects, since I would guess that philosophers who are attracted to x-phi are more likely to be skeptical of standard epistemologists’ intuitive answers in the first place, or are perhaps happy to provide answers to test questions that they know to be contrary to traditional epistemologists’ answers. I am curious what the results would have been had subjects been recruited via solicitation on the Certain Doubts blog. (Maybe this is Fritz Warfield’s concern.)

    (Full disclosure: I typically do find myself in agreement with the majority of test subjects in most x-phi cases, but not in the Sculpture and Dollar ones. Monitor, I am unsure of.)

    • Hi Avram,

      Thanks for sharing your thoughts!

      I’m not sure whether there have been studies that asked follow-up questions about luck or some of those other things, but there have been studies that directly tested people’s judgments about knowledge and luck, and there have been studies that systematically manipulated luck-related structural features of scenarios to test what effect they have on knowledge attributions.

      Perhaps that’s related to Fritz’s concern, though it’s hard to tell given that he didn’t elaborate.

    • Hi Avram, thanks for your interesting comments! Let me just add a few points.

      Concerning the Monitor case, the case description explicitly says that the one faulty monitor is due to “some unusual malfunction”. So I think it’s unlikely, at least in the case of our expert subjects, that they would be led to think that all of the monitors might be unreliable.

      Concerning our recruitment procedure, only a small percentage of our expert subjects came from the x-phi blog – most of them were recruited via the mailing list Philos-L (we cannot tell exactly how many, but the sharp increase in participation numbers after sending out our call for participation on Philos-L gave us some indication that most of our expert subjects were recruited in this way).

      By the way, there probably is no easy and “clean” procedure for recruiting expert subjects in philosophy, because (a) the number of relevant experts is pretty small compared to standard pools of lay subjects, and (b) one cannot really access the pool of philosophical experts in a randomized way. So I’m actually skeptical that asking the readers of Certain Doubts would have resulted in a better sample.

  11. There seems to be something missing in the sculpture example. In order to determine whether the observer “knows” that they are looking at a real sculpture as opposed to a hologram, you have to determine whether the observer knows the difference between the real and the fake; the observer should be able to demonstrate that they can “tell the difference”. You can’t say anything if you’ve got just one case (one statue). It’s not “knowing that this sculpture is a real Rodin”; it’s knowing what distinguishes the one from the other. In general, judgments about categorical problems are always about differences (is x an A, or is x not an A?). If the observer (the expert) has not demonstrated “knowing the difference”, I am not yet going to attribute knowledge to that person.

    (I do not yet know about the “fake barn” examples. Perhaps the test subjects include in their thinking an assumption that the expert has the required ability to tell the difference, but the ordinary person does not, so would be simply making a lucky guess. But the need to demonstrate the ability to tell the difference would apply whether the observer is an expert or an ordinary person.)
