While hard at work deep within the armchair, he was handed a clipboard

In a forthcoming paper, “Is Knowledge Justified True Belief?”, I had occasion to cite James Beebe and Wesley Buckwalter’s admirable forthcoming paper, “The Epistemic Side-Effect Effect,” which reports interesting experimental results about epistemic judgments, related to those reported by Joshua Knobe on intentional action and side effects. There’s a somewhat surprising story behind this, which some people here might find interesting, so I thought I’d share it.

The paper is on the Gettier problem. One thing I do is to present a new argument for the conclusion that Gettier subjects do in fact know. (I don’t ultimately accept this, but the argument is interesting nonetheless.) In making my argument I present cases involving “bad” counterparts of original Gettier subjects. For instance, “Bad Henry” isn’t just gazing at a roadside barn, but out to destroy a barn. He aims his bazooka and fires with deadly accuracy. I say, it’s plausible that Bad Henry knows that he is about to destroy a barn. But to do that, he must know that it is a barn. And if Bad Henry knows that it’s a barn, then so does Good Henry (the protagonist of the original Ginet/Goldman case). So this gives us some reason to mistrust the intuition that Good Henry doesn’t know.

Maybe you buy that; maybe you don’t. But this brings me to my story. One referee for my paper singled out the claim “it’s plausible that Bad Henry knows that he is about to destroy a barn.” It just didn’t seem that way to him or her. The referee then suggested that I run an x-phi experiment to test whether the intuition was widely shared!

This has never happened to me before. Has it happened to others? Will it become a trend? That is, will it become more common for referees to prompt armchair philosophers to run experimental studies to bolster intuitions not shared by referees? Would this be a good development?

I know that the x-phi phenomenon has caused consternation among some philosophers, but I’m not one of them. I continue to find the results fascinating, and I’m happy to see people advancing their research program and enjoying successful academic careers. But at the same time, I don’t think that, in order to get work published, philosophers employing armchair methods should be expected or required to conform to this methodological innovation.

In the present case, I was fortunate that relevant experimental work had been done, and that I happened to become aware of it in time for this R&R. Maybe it was just an isolated incident, and maybe it would have been accepted even if I hadn’t been able to cite that work; but then again, maybe not. Who knows? One thing I do know, though, is that I’m not all that enthusiastic about being handed a clipboard when I’m working hard in the bottomless pit of the armchair.


While hard at work deep within the armchair, he was handed a clipboard — 10 Comments

  1. A survey to back up the plausibility of a conjecture? Foul! To back up a conjecture that is ultimately refuted? Shooting foul!

  2. I have mixed feelings about this. At least the referee gave you a sporting chance. Worse is when someone declares on apriori grounds that no one has the relevant intuition and offers that as the reason for rejection.

  3. Following up a bit on Clayton’s post: just what should a referee do when reviewing a manuscript that crucially relies on a basic premise they find implausible, or at least fail to find plausible? I’ve never been willing to do what John’s referee did (i.e., exile some poor unwilling soul out into the xphilic realms), but it’s not clear to me what would be best to do instead.

  4. If a referee reviews a manuscript that crucially relies on a basic premise they find implausible, or at least fail to find plausible, then the referee is obliged to flag that premise and provide a clear enough reason for the worry that the editor and author can see what is at issue. Good reasons might be a priori, experimental, or exegetical.

    The paper under discussion is within a well-established, speculative branch of epistemology, and I agree that the licenses taken within this area are a legitimate target. However, what is crucial to the paper here is not the truth of the conjecture, and thus not the soundness of the argument. Rather, the contribution of the paper is to highlight logical relationships that appear in the form of (proposed) constraints on the space of possible positions one might take on the Gettier problem.

    Now, one might think that an implausible empirical assumption scuppers the paper. But, that would be an error and, besides that, it loads the dice; after all, you might grant that the premise is implausible and be happy to see this constraint used as a reductio.

    So, as the legendary Dave Zinkoff might have roared, TURRR-eee at the line; THREE, for two.

  5. I think the experimental solution here is completely misplaced. John is making an argument aimed at someone who finds the Gettier example problematic. He’s saying: suppose you, dear reader, will not grant that good Henry knows; then what about bad Henry?

    As it happens, I wouldn’t grant bad Henry knowledge here either. And there really is something distressing about x-phi if an experiment or survey of people’s intuitions could do anything to change the terms of the argument that I might have, as a philosopher, with someone who would defend bad Henry’s knowledge.

    That is, I think the paper should have been published (all else being publishable of course) and making publication contingent on “empirical” inquiry here misunderstands what philosophical papers are supposed to do.

  6. Clayton,
    You’re right, the referee was actually a good sport about it. Overall the report was intelligent and helpful, and I was glad to get a chance to R&R in light of it.

    That’s a good question. I think Greg’s strategy would usually be good. People inevitably bring different perspectives to the table, and it doesn’t seem desirable for work to be held hostage to this fact alone.

    I can imagine situations in which I’d recommend acceptance straightaway, even if I found one of the paper’s basic assumptions to be implausible. One prime example that jumps to mind is Berkeley’s Principles of Human Knowledge. He says some pretty implausible things, especially in those crucial opening passages. But the perspective he brings to bear is so intriguing and original that it deserves to be widely read and discussed. Lots of people feel similarly about some of David Lewis’s work on modality.

    And just to be clear, I’m certainly not comparing my paper to Berkeley’s or Lewis’s work! The question was a general one, and I’m speaking generally in response.

  7. There’s serious x-phi and there’s informal x-phi, which is when one simply asks around (philosophy friends, neighbors, etc.).

    I think that most philosophers do informal x-phi when they’re philosophizing anyway. When I think up a thought experiment and consider my intuition about it, I also run my scenario by both philosophers and nonphilosophers. This plays a role in determining how confident I am in the truth of the content of the intuition and whether it’s the sort of intuition that a referee is likely to accept. Is there anybody reading this who does not do this? It seems that everybody should.

    There are three options I’ll discuss:
    1) everybody (or almost everybody) shares my intuition,
    2) a significant number share it and a significant number don’t, and
    3) nobody (or almost) agrees with it.

    If nobody shares it, then I drop the appeal to intuition in my paper and try to think of further arguments to back up what I think is intuitively true; failing that, I drop the whole argument altogether. If everybody I ask agrees with me, I just say, “Intuitively, p” or “It seems that p” in the paper, assuming that the ref will agree. Lastly, if some agree and some don’t, then I say, “It seems to me that p”, but then I go on to say how I would address someone who did not share my intuition. I might point out in the paper that it is part of the nature of philosophy that not everybody will be convinced by every argument, but at least my paper is worthwhile in that a significant number of people do have the intuition that p, and so my argument will be convincing to them. Great thought experiments like BonJour’s clairvoyance case or Cohen/Lehrer’s NEDP case have had a significant number of dissenters (regarding intuition), but the papers in which they appeared were certainly worth publishing.

    That’s how I try to write. Similarly, if I were a referee and did not share the author’s intuition that p, I would ask around informally and see whether others share it. If everybody does, then I would not push the author to do informal x-phi; instead, I’d explore why my own intuitions are so quirky. If nobody else does, then I would either push the author to do informal x-phi or point out that it is very unlikely that people will share the intuition that p, which is a major weakness in the paper. If I found a mixed response, then I would think that the author has still probably stumbled upon something interesting, but I would ask the author to at least say something that addresses those who do not share the intuition, since at least I, the referee, do not.

    So, it seems that, in general, only informal x-phi should be required or recommended by referees (which isn’t that hard and is probably already done anyway), and there are ways to do responsible paper-writing and refereeing without recommending the serious x-phi.

  8. So sorry for the trouble. I think I agree with what’s been said about this being a pretty unusual referee request, given the specifics of this particular argument, but I’m delighted to hear our article was able to help out with your R&R situation nonetheless!

    For those who haven’t read our paper, the main finding was that surprising non-epistemic factors, like people’s moral judgments, play a role in people’s intuitions about knowledge. When James and I wrote ESEE, I remember thinking at the time that we might be able to get people to ascribe knowledge to an epistemic subject in Gettier conditions by manipulating moral valence (aka “Gettier made ESEE”), and it’s really exciting to see that idea developed so well here. On that note, I thought CD readers might be interested in some cool new data that bears on your paper’s discussion of possible responses to this result, particularly the part against the renewed vigor you’ve instilled into the argument for JTB:

    “A better response to the argument would be to say that the moral dimension of the “bad” cases causes a performance error by polluting our intuitions about whether the bad people know.” We might call this the “blame objection,” or even “the Kripke objection,” traceable to the Nozick on Knowledge lectures (of course it wouldn’t do to say Nixon only had “true belief” but not “knowledge” about those break-ins!). The idea here is that one plausible way to run an error explanation is to say that people count the subject’s beliefs as knowledge simply because they want to blame him.

    This worry came up enough times in subsequent discussions of ESEE that Knobe and Schaffer (see “Contrastive Knowledge Surveyed”) ran a further experiment to test specifically whether that’s what is going on. There they say, “Our follow-up added an additional character, an environmentalist who knew that scientists were predicting helpful or harmful effects and then learned about the chairman’s decision to go ahead with the program. The question then was whether participants would agree that the environmentalist knew that the environment would be helped or harmed. In this new version, there is no question of blaming the environmentalist for the outcome, but…the familiar asymmetry remained. On a scale from 1 (‘disagree’) to 7 (‘agree’), we found a rating of 4.8 in the harm case, and 2.8 in the help case.” So the striking result is that even though there is no reason to blame this other character, people still ascribe knowledge differently to this third person based on the moral valence of what the chairman did. Just thought I would point you to this in case you were interested in developing the argument further!

  9. Thanks, Wesley, both for your excellent work that helped me out in a pinch, and for the further information. That result in the K&S paper is pretty striking.

    I wonder, in the environmentalist case, is there maybe some quick and dirty reasoning going on there, based on something like a “parity of epistemic position” principle? We want to blame the chairman, and (so, we think) he knows. But the environmentalist is in just about as strong an epistemic position as the chairman re the relevant proposition, so (we think) the environmentalist knows too. (This is, in effect, the way I reason in the paper, from the bad Gettier character to the good one.) It’d be interesting to see if there’s a way of running the experiment to block this sort of “parity reasoning” from happening: make it so there’s no one to blame. Or, just to cover all our bases, no one and no thing to blame, since people might think that the corporation knows.

    That’s a sensible way to proceed. But I wonder, what would authors write in their R&R report? Something like, “I asked around, and most of my informants agree. So I’ve settled that objection”? Is that it?

    Isn’t another way to handle it to just say, “If you don’t share this intuition that P, then please treat the argument as conditional, i.e. if P, then Q,” where Q is the relevant conclusion? The stage would then be set for further investigation on the topic. That might not work in every case, but I bet it might in some.

  10. Thanks for sharing this, John! For what it’s worth, as an x-phier of sorts myself, I agree that this currently seems like quite a demand on the part of the referee. At the very least, the methodology is so new that not everyone can utilize it (even if they wanted to). It would, I think, be like asking a historian to do some sort of empirical work typical of some other discipline, even if we assume said work would be relevant to the author’s historical claims and even if some historians were doing it.

    So, despite my affinity for x-phi, I’m not immediately sympathetic to this sort of referee request. I think x-phi is a new methodology that might help with certain problems in philosophy. So it seems legitimate to ask that one make use of it to get a paper published only if it’s standard that people in the discipline know how to do so.

    And that just specifies a necessary condition. Even if met, it certainly seems a bit much to demand only the use of x-phi. My sense is that the appropriate thing to do is something more akin to what one would have done before the introduction of x-phi. For example, the referee could say:

    “Crucial claim X doesn’t seem obvious. Given that your paper hangs on it, you need to provide some independent support for it. This could include, for example: (1) a more theoretical argument, (2) conditionalizing your thesis in a way that keeps it interesting, or (3) doing some x-phi.”

    Since I think of x-phi as an additional methodology, that’s the kind of approach I’d like to see. But that’s just my two cents.
