Does knowledge always suffice for action?

There’s a lot of exciting work being done on epistemic norms lately, including the norms of action. What I’m about to discuss clearly relates, in one way or another, to a lot of that work. But I’m not going to make the connections explicit here — I’ll save that for another time.

The principle I’m interested in is this:

Knowledge Suffices for Action (KSA): If you know that Q, then you may act as if Q.

This is intended as a necessarily true generalization.

I think the greatest threat to KSA comes from cases where you know that Q, something important turns on whether Q, and double-checking is very easy and nearly costless. For instance, consider this case.

(VIAL) Dayna is a medical doctor called to the scene, where a snake-bite victim lies unconscious. If she gives him the wrong antivenin, it will soon cause him serious pain. The park ranger told Dayna all the relevant details over the phone just minutes ago. The ranger clearly saw that it was a cobra that bit the victim. So Dayna brought the cobra antivenin. Dayna remembers that it is cobra antivenin in her pocket (she placed it there just a couple of minutes ago, and there’s nothing else in her pocket). The ranger has a simple test kit, which in just one minute can be used to double-check that it is cobra antivenin. The victim was only just bitten, and it will be hours before any harm comes to him. Dayna kneels down to treat the victim, sees that it is a cobra bite, pulls the vial of antivenin from her pocket, and . . .

So here’s the question. Is it okay for Dayna to just administer the antivenin without double-checking? Double-checking would consist of placing a drop of the liquid into the simple, quick, perfectly reliable, and easily available test kit.

My take on this case: (A) Dayna knows that it’s the cobra antivenin (in virtue of remembering that it is), (B) it’s not okay for her to administer the antivenin without double-checking, and (C) double-checking is not a way of acting as if it is the cobra antivenin.

What do you think?


Does knowledge always suffice for action? — 25 Comments

  1. This looks like a good place to apply Jason Stanley’s “practical interests”. The effect would be to reject (A). If there’s a great deal at stake, Dayna actually doesn’t know that it’s cobra antivenin. Under the circumstances, she hasn’t done enough to make sure. A professional code of conduct may be enough to raise the stakes. Even if it is the right antivenin, her failure to double-check might (if discovered) result in having professional sanctions brought against her. It was an easy precaution that she failed to take.

    Doctors know things by virtue of applying procedures under particular circumstances, not by virtue of “remembering” them.

  2. I first wrote a reply agreeing with Thomas, but now I’m not so sure. If the vial in Dayna’s pocket actually contains antivenin then presumably we would say she knows it. The problem is not that Dayna doesn’t know it’s the cobra antivenin, it’s that Dayna doesn’t know that she knows it’s the cobra antivenin. She can’t make use of the KSA principle because she can’t know whether it applies.

    Am I not up on my modern epistemology? Are there cases where we can know we have knowledge of something as opposed to degrees of certainty in our beliefs? If not, the KSA seems useless as a prescriptive principle.

  3. Er… on thinking that over some more, I don’t think it’s useful to say she doesn’t know that she knows. Thomas’ approach makes more sense. I would guess that Dayna would be more comfortable saying she “knows” she has antivenin in her pocket as she’s leaving than she would be asserting the same as she blows off the ranger’s offer to test. Rather than say she’s wrong in one of those circumstances, it makes more sense to allow some flexibility in the concept of knowledge.

  4. Thomas,

    Yes, that’s one move that could be made. But it does strike me as implausible. It would be very strange indeed if what prevents Dayna from knowing is the fact that the ranger happened to be carrying a test kit!


    Perhaps I’m misinterpreting you, but the principle is not intended to apply only when the agent consciously applies it. The agent needn’t reason from the fact that she knows to an intention to act.

  5. If we focus on the idea that she knows it is antivenin, it becomes hard to deny that it is ok for her to act without double checking. At least that is how it is for me.

    Maybe it helps to think about it like this. Suppose that she pulls up the needle and is about to administer the shot, and that someone goes “stop – you should double check that it is the antivenin”, and that she responds by going “No it is ok; I know this is antivenin”. Hasn’t she adequately justified herself in saying as much?

  6. Double-checking increases our certainty. So the reason to do it is to improve our epistemic situation. If we have to double-check, we don’t know enough.

    But I see your point. How can the accidental presence of a technique for validating a belief in and of itself require that we use it? In this case, Dayna would confidently administer the antivenin if the test kit was not on hand.

    In science, however, this can and should happen all the time. Scientists can confidently know particular matters of fact on the sort of basis Dayna has. But this is only allowed until a cheap and easy way of double-checking the fact is developed. Once that happens, the standard for knowing goes up. Sometimes permanently (certainly under the ideal conditions of the lab).

    Dayna has simply found herself in a situation where the standards to which she might be held accountable suddenly went up. This increase in standards is merely a subtle but everyday activation of skeptical concerns.

  7. Hi Dennis,

    That’s exactly how I imagine myself responding to her. If I saw her about to administer it without double-checking, I’d be thinking, “Hey, wait, double-check!”

    She can say she knows. I don’t feel like that’s an adequate justification for not double-checking when it’s so easy to do so. She says, “But I know this is the right one!” I want to say, “Of course. But you should still double-check.”

    Let me also point out that if we have her say that she knows Q, then she’s representing herself as knowing that she knows. And this complicates the situation. It might be that the intuition that she may administer without double-checking is generated by taking her to know that she knows, rather than her just plain knowing.

  8. Maybe when she says that she knows, she’s pointing out that she meets the standard for adequate action.

    Suppose the dialogue goes like this

    John: Stop, you should double check that it is antivenin

    Dennis: No it is ok, I know that it’s antivenin

    John: Of course you know, but you should still double-check

    Dennis: Why should I double check it if I already know?

    What is the next thing in the dialogue? Presumably, it is something like

    John: Well, even though you know, you still might be wrong, and the costs of double checking are negligible.

    Now, if John said that at this dialectical juncture, I’d respond by going:

    Dennis: What do you mean “you know but you still might be wrong”? The “might” there is the “might” of epistemic possibility. And since I know P, it is epistemically impossible that not-P. Which makes “you know P but you still might be wrong” a contradiction. So, it isn’t true that I know P but still might be wrong. It isn’t true that I might be wrong! I’m not wrong, because I know.

    What comes next?

  9. Thomas,

    I see where you’re coming from. It’s certainly consistent, and attractive in a certain way. But I just lack the intuition that standards for knowledge are increasing, or the ways of knowing are changing, in the sort of cases you mention.

    I don’t see why we should want to say that rather than this: sometimes, appropriate action requires more than knowledge. This is in fact how it seems to me.

    Here’s another way of getting at the same point. In the abstract, this seems true: sometimes knowledge isn’t good enough — sometimes, you need to know that you know, in order to act appropriately.

    On the other hand, perhaps you might say something like this. Maybe I’m just confusing permissible action with wise action. Dayna acts unwisely, but permissibly.

  10. Hey Dennis,

    I agree that having the conversation go like that makes that “John” character sound a bit silly, and that “Dennis” character gets the best of him. But my general point still holds: when people start “going meta” and talking about who knows what, this introduces a different element, one which opens the possibility of confusing representation for fact (especially when we’re being charitable).

    By the way, at the point where character John says, “Well, even though you know, you still might be wrong, and the costs of double checking are negligible,” I myself would say, “Because double-checking is sooo easy, and this is important.”

  11. Oh, I should have added, yes, you’re right to say that she could be pointing out that: she knows, and that’s good enough to justify her action. Nothing I’ve said rules that out.

  12. Ok, so it was an uncharitable conversation. Sorry about that…

    You are right, of course, that by going meta and talking about knowledge, we open up the possibility of confusing representation for fact. It would be nice if there were a way to somehow control for that…

    Here’s one more thought. Suppose that, at the relevant point in the dialogue, you said

    “Because double-checking is sooo easy, and this is important”.

    Suppose I believed you, and double-checked, and then started again to pull up the needle. Suppose you stopped me again and said:

    “But wait, you should triple check. So much is on the line, and the cost of triple checking is negligible”.

    I worry that if I should have been convinced to double-check in the first place, then in the second round I should be convinced to triple-check, and similarly be convinced to quadruple-check, and so on. Which seems wrong….

    What do you think?

  13. No problem, Dennis!

    I don’t know how to control for a potential fact/representation slip. The utility of such a technique would far outstrip the present debate.

    Good point about the unwanted iteration. I’ve worried about this too. One principled stopping point would just be knowledge, and that’s one reason to favor KSA.

    Absent KSA, maybe this will work to stop the ascent in a principled way. We ask, at the outset, how much checking would be very easy and nearly costless? Once, definitely. Twice, maybe. Three times, starting to seem like not. Four times, no way.

    That is of course vague, but we see it won’t go on too long.

  14. Hey John,

    Part of me thinks that it’s fine for KSA if it’s okay for her to double-check, but bad for KSA if we have the intuition that it’s not okay for her not to double-check. Part of me thinks that it’s really bad for KSA if we set up a case where double-checking is costly. If it seems very intuitive that the agent ought to double-check in spite of the significant cost, it is going to be harder to defend the idea that KSA is true. Just between you and me, there are times when I have the problematic intuitions.

  15. I agree that appropriate action requires more than knowledge. But Dayna is using more than knowledge in any case; she must feel some basic human commitment to help the snake-bite victim, etc. I guess I’m trying to keep the analysis confined to “purely epistemic” matters. Your approach looks a bit like Jason Stanley’s “tendentious” formulation: we must reject the assumption, he argues, “that knowledge is a purely epistemic notion.”

    Or we can, as I’d prefer, broaden our view of “epistemic factors” to take into account disciplinary (or, in this case, professional) standards. Science (and science-based professions like medicine) institutionalize skepticism, and therefore socially condition what counts as “knowing”. As an individual, Dayna knows full well that she’s got the right antivenin and she may take a great deal of personal pride in that belief turning out to be true; but as a member of a profession that can also easily interpret the result of the test kit she’s to be held to a higher standard. Her social situation (not her material situation) defines the context for our knowledge ascription.

    I’m not yet decided about whether to side with DeRose or Stanley on the issue of how to analyze these ascriptions, i.e., contextualism vs. interest-relative invariantism. But I do think that some form of social epistemology is the generally right approach. It is because Dayna, as a doctor, is accountable to a “socially constructed” (to use a loaded term) reality that her individual knowledge is undermined by skeptical worries, though only long enough to be shored up again by the test kit, which offers a contextually valid solution to the very doubts it raises. (That’s also characteristic of institutionalized inquiry: it raises doubts it is capable of dealing with in practicable ways.)

    Last point: the only effect double-checking can have is epistemic (in a traditional sense). So its presence in this example can’t (to my mind) be to introduce something “more than knowledge”. Using the test kit can only make Dayna more knowledgeable. That’s why I chose not to reject (C). Double-checking opens the question of whether Q is true or not. It is acting as though Q may be true, but not as though it is true. By applying the test, Dayna is acting as though she doesn’t know that Q. I’m trying to find a way to make sense of the knowledge ascription (actually an ignorance ascription): “Dayna doesn’t (quite) know that Q.” Her role as a socially embedded actor may do the trick. But that comes at a cost that some epistemologists (perhaps rightly) do not want to pay.

  16. That’s a pregnant ‘may’ in KSA. But even with the wiggle room this affords, I don’t think this saves the principle if you allow that an agent may know p yet entertain some doubt whether p, even if just a pin drop. For it is then easy to construct examples that paralyze non-suicidal rational agents: a penny if you’re right about p, and a bullet to the head if you’re wrong, say.

    There are a couple of moving parts in this remark. One is that knowledge alone does not suffice for action; you’ve got to take stock of utilities. Two, focusing on stakes alone is a mistake, at least for non-skeptical programs; instead, you need to balance the risk (stakes) from acting as if p and p turning out false against the rewards from acting as if p and p turning out true. (Exercise: take any stakes-sensitive example, hold fixed the part about the high stakes, yet wiggle the rewards.) Three, linking belief/rational belief/knowledge to a disposition to act does not entail that epistemic justification is pragmatically encroached upon. You’d need to argue that separately, I think, and not simply coast on the fact that you get it for free with the Bayesian machinery.

  17. I agree with Thomas. The view that knowledge is sufficient for action is made much more palatable by combining it with a stakes-sensitive conception of knowledge. In the situation described, the stakes are sufficiently high, and the costs of checking sufficiently low, that it may be that Dayna does not know that it is the cobra antivenin, and hence should double-check.

    What Hawthorne and I say in reply to ‘Objection 7’ in our 2008 paper, “Knowledge and Action” is relevant here. On the one hand, there is the stakes-sensitive strategy – when there are high stakes, you don’t know something that you would otherwise have known. There is also an alternative strategy – to distinguish normative evaluation of a particular episode of practical reasoning from normative evaluation of epistemic character.

  18. Gregory – you are right that there is a pregnant “may” in the statement of KSA. In our 2008 paper, Hawthorne and I try to fill it in – this is the stuff about ‘p-dependence’ in the statement of The Reason-Knowledge Principle.

  19. Here’s an argument:

    (1) If you know that Q, then you justifiably believe Q.
    (2) You justifiably believe Q only if you can justifiably treat Q as a reason for action.
    (3) You can justifiably treat Q as a reason for action only if you may act as if Q.
    (C1) If you know that Q, then you may act as if Q.

    I don’t think (3) is substantive. Justifications are permissions and to treat Q as a reason for action is to act as if Q. I know some question (1), but I don’t know if the reasons for doubting (1) have much to do with what’s at issue here. What about (2)?

    Here’s an argument for (2):
    (4) There’s an epistemic norm that enjoins us to refrain from believing what should not be treated as a reason.
    (5) Whenever there’s a norm that enjoins you to refrain from believing unless C obtains, there’s a reason not to believe unless C obtains.
    (6) When there’s a reason not to believe, it is permissible to believe only if there is a reason that defeats this reason and demands, inter alia, that the subject believes.
    (7) There is no such reason.
    (C2) You justifiably believe Q only if you can justifiably treat Q as a reason for action.

    I guess the controversial claims in this argument are (7) and (4). If you deny (7), you have to identify not just any old reason that would give you the right to believe (in the absence of reasons not to believe) but a reason that makes it prima facie wrongful not to believe, one that overrides whatever reason there is to keep beliefs that don’t belong in deliberation out of deliberation. I don’t believe in such reasons, and I doubt you do. So, is there an epistemic norm that enjoins us to refrain from believing those beliefs that ought to be excluded from deliberation for purely epistemic reasons? Sure. If that means I gotta go stakes-sensitive, so be it. I like the argument.

    (Footnote: since you can’t justifiably treat Q as a reason for action unless Q, there are far fewer justified false beliefs than we’ve been telling the kids all these years.)
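
    As a side note on the first argument in comment 19: it is a pure chain of conditionals, so its truth-functional validity can be checked mechanically. Below is a minimal sketch (the propositional letters K, J, T, and A are hypothetical labels of my own for the four claims involved, not notation from the thread) that enumerates the full truth table and confirms that (C1) follows from (1)–(3):

```python
from itertools import product

# Brute-force truth-table check of the chain argument (1)-(3) |- (C1).
# Hypothetical propositional letters:
#   K = "you know that Q"
#   J = "you justifiably believe Q"
#   T = "you can justifiably treat Q as a reason for action"
#   A = "you may act as if Q"

def implies(a: bool, b: bool) -> bool:
    """Material conditional: a -> b."""
    return (not a) or b

valid = True
for K, J, T, A in product([False, True], repeat=4):
    p1 = implies(K, J)  # (1) knowing entails justified belief
    p2 = implies(J, T)  # (2) justified belief only if treatable as a reason
    p3 = implies(T, A)  # (3) treatable as a reason only if one may act as if Q
    c1 = implies(K, A)  # (C1) knowledge suffices for action
    if p1 and p2 and p3 and not c1:
        valid = False   # a row like this would be a counterexample

print(valid)  # → True: no row satisfies the premises while falsifying (C1)
```

    Of course, this only shows the argument is formally valid; the philosophical work lies in whether premises (1) and (2) are true, which is what the rest of the thread takes up.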

  20. Thanks to everyone for weighing in.

    Gregory, you make some excellent points.

    Jason, thanks for directing my attention to that part of your paper.

    Clayton, I think 3 is underspecified as stated (the same goes for 2). It’s not true that, if you can justifiably treat Q as a reason for action, then you may act as if Q in any way that would count as acting as if Q. Treating Q as a reason for action is just one way of acting as if Q.

    Suppose you know that you have a winning lottery ticket (= Q). Q is a reason to cash the ticket in, and you’re justified in treating Q as such when deciding what to do. But a powerful and observant person will incinerate you and a billion others if you act as if Q by cashing in the ticket. So you may not act as if Q by cashing in the ticket.

    As for 5, what do you mean by “there’s a norm”? Not just any norm will work here, right? For instance, if the law states that A-people may not marry B-people, then there is a legal norm enjoining you to refrain from marrying a B-person unless you are not an A-person. But there doesn’t seem to be a corresponding reason for you to refrain.

    Is there an independent way of stating which norms 5 is true of? (By “independent” I basically mean “independent of invoking reason-giving as a criterion”.)

  21. “So you may not act as if Q by cashing in the ticket.”

    Yes, you can’t cash in the ticket but you can act as if you won. The problem is that the permissible ways of acting as if you won don’t include cashing in the ticket and receiving your winnings.

    You are right that not just any old norm will work here; the kind of norm I have in mind is akin to the kind of norm that is at issue, one that indicates that there is a reason applying to all of us that enjoins us not to treat certain things as reasons under certain conditions. If the norm talk bothers you, can I just use reason-talk? There’s a reason not to believe p if there’s epistemic reason not to treat p as a reason (for some matter that is known to be p-relevant). If there’s reason not to A, the permissibility of A-ing depends upon what other reasons can be found.

  22. Hi Clayton,

    I’m having trouble thinking this through right now. But I’m worried that your C1 isn’t the same as KSA, because of a potential ambiguity. Perhaps this isn’t a problem with your presentation, but a problem with my original presentation, or even a problem with how the discussion in the literature has been conducted till now.

    Here’s one principle: if you know Q, then you may treat Q as a reason for some potential act or other. That doesn’t seem to be what KSA’s proponents want — it’s just too weak. I think it should be more like this (as per p. 578 of Stanley and Hawthorne’s JPhil paper): for any Q-dependent choice, if you know Q, then you may treat Q as a reason.

    What do you think?

  23. Oh, sorry, I seem to have slipped from speaking of “acting as if Q” to “treating Q as a reason.” I hope that doesn’t cause too much trouble. My underlying point (and question) should hopefully be clear enough.
