Where the action is

I heard through the grapevine that Jason Stanley is claiming on Facebook that there is an emerging consensus in the experimental literature. The consensus is that there is a robust stakes-effect on knowledge attributions, and the real debate is whether to explain it in terms of semantic contextualism or interest-relative invariantism. I’m not on Facebook and have no plans to ever be, so apologies to Jason if this is not an accurate portrayal of what he wrote. But since it’s generating enough buzz for me to hear about it second- and third-hand, I figured I’d take to the air here and help to correct any misimpression, even if the misimpression is due to people mischaracterizing Jason’s post.

There is no such consensus. How much is at stake, or how important people judge the situation to be, has an anemic and entirely indirect effect on knowledge attributions. Moreover, the stakes/importance-effect on knowledge attributions is entirely mediated by people’s estimation of whether the proposition in question is true and their estimation of the quality of evidence. Consequently, there is very little if anything for contextualism or interest-relative invariantism to explain.

By contrast, people’s judgment about whether an agent should act on a proposition has a direct and robust effect on knowledge attributions. The effect of actionability on knowledge attributions is as large and direct as the effect of truth and evidence.

In short, a practical factor definitely plays a large role in ordinary knowledge attributions and might even be part of the ordinary concept of knowledge. But that factor is not stakes. It is actionability.

For those who are interested, I’ll be presenting some joint work with Wesley Buckwalter at the CPA in St. Catharines, Ontario later this month, where I’ll walk through the relevant findings from some recent, very large behavioral experiments.

(Cross-posted at the x-phi blog.)



Where the action is — 96 Comments

  1. John – this is wonderful news. Of course, as you probably know, actionability is *precisely* what I claim to be the source of the practical effects on knowledge!! That is why the central principle of my 2005 book, *Knowledge and Practical Interests*, is the knowledge norm for action, and why I spent years refining that principle, culminating in the 2008 paper with Hawthorne (I was writing it for years, and then he made so many essential contributions that he had to become a co-author). As you probably know, I try to spell out the exact mechanism of actionability on pp. 91ff. of my 2005 book – see my discussion of ‘serious practical question’. The idea is that whether or not an agent should act on p depends upon whether p is a serious practical question, a notion I get from Isaac Levi and Jim Joyce’s work on decision theory. So you are tracing exactly the thought route of my 2005 book. The practical component of knowledge is ‘whether the agent should act’, and then my entire explanation of that practical effect in my 2005 book goes into what decision theorists say about bounded rationality as applied to this question. So this is a thrilling result, if true. You are saying that I discovered an effect on knowledge as large as truth and evidence. (I am not sure what you mean by ‘stakes’, but self-evidently it has nothing to do with anything I have ever written.)

    • Hi Jason,

      Many themes from your work are definitely relevant and acknowledged, to be sure, but we are not “tracing exactly the thought route of [your] 2005 book.” (1) Our discussion is rooted in a broader historical and different interdisciplinary context. (2) Your book does not provide empirical evidence about what “the intuitive reactions” to your “High Stakes” and “Low Stakes” cases actually are. (3) We neither assume nor claim to prove that this works through a “serious practical question” mechanism (I’m aware of no evidence for this hypothesis).

      In your co-authored empirical work with Chandra, you claim that interest-relative invariantism is about “the practical costs of being wrong,” that the view predicts that stakes can have “a direct impact” on knowledge, and that the view would be challenged if we did not have “stakes-sensitive intuitions.” If the view really is squarely about action, then why didn’t you test for the relevant sort of judgments, and why didn’t you just tell all the people going on about stakes that they were misinterpreting the view?

      We do not say that you discovered the connection between knowledge and action. Do you think you discovered it?

    • Hi Keith,

      The work I mentioned in the OP does not try to tease apart multiple different versions of the K/A principle. We started with something very simple and readily understood, asking people to make judgments about whether someone “should” act in a particular way. Once it is established that there is a direct connection between knowledge judgments and actionability judgments, then it could make sense to try to more finely discriminate among different ways of spelling out the connection in detail.

  2. There is a big disconnect between the X-phi literature and the non-X-phi literature on this topic. In the non-X-phi literature, epistemologists attacked my claim that actionability is a condition for knowledge. Jessica Brown, Jennifer Lackey, and Keith DeRose all provided difficult challenges for actionability as a condition on knowledge. I am extremely explicit in chapter 5 of my book (and in the PPR symposium) that the issue is the actionability condition on knowledge. I have no stake in whether the bank cases are good cases, but I used them as schematic letters to map out the consequences of actionability, on the undeniable assumption that stakes affect actionability (and so indirectly affect knowledge, as Turri and Buckwalter rightly seem to be arguing). If Buckwalter and Turri are correct, this poses problems for contextualism, because contextualism can only accommodate some versions of actionability. Keith has done an admirable job of spelling this out, and is quite right that deciding between contextualism and IRI will depend upon figuring out the fine grained details here.

  3. John – I couldn’t agree more that stakes have “only an indirect effect” on knowledge! Stakes *only* come in via actionability, and in the detailed spelling out of my view in Chapter 5 of my 2005 book, stakes only come in via an effect on the decision theoretic question of what propositions an agent with limited time should consider in making a decision about what she ought to do. It is exactly right that according to IRI stakes only have an indirect effect, via actionability.

    • Hi again, Jason. We do not say that “stakes only come in via actionability.” Instead, we found that stakes affect knowledge by affecting judgments of truth and quality of evidence. So if “according to IRI stakes only have an indirect effect, via actionability,” then IRI is false.

      I actually don’t understand why IRI would come packaged with such an exclusionary hypothesis. Why do you rule out stakes affecting knowledge via other means?

  4. John I wonder what you mean when you write “Consequently, there is very little if anything for contextualism or interest-relative invariantism to explain.” Obviously if (as you claim) there is little or no “stakes/importance-effect”, then sure there’s little or no such effect for those views to explain. But you say “very little if anything” and yet clearly contextualism and IRI could both explain (in different ways & perhaps not equally well, but that’s another question!) the sensitivity of knowledge attributions to “actionability”.

    • Hi, Geoff. Ah, yes, of course the dispute could continue on other grounds. I only meant the narrower thing — that is, that there is no (direct) stakes/importance-effect to explain.

  5. What I’m wondering, then: Is the result that there is no *direct* effect supposed to go against what anyone working on the question holds? If so, who?

  6. Certainly not me:
    “The best cases for testing whether contextualism is true do indeed feature differences in stakes, but, as I’ve stressed, not because the standards directly follow the stakes, but because …”

  7. Hi John. This is great. I would be very interested in reading the paper. Let me add that in all the experimental papers I have published where I show “stakes” effects, I also, separately, attempt to show that people are adhering to what you are calling “actionability”. I did find some evidence to support that thesis. I mention this because actionability (together with some plausible principles) seems to entail stakes sensitivity. So it would be extremely surprising if people adhered to actionability but did not use “knows” in a way that was sensitive to stakes. You mention that you guys found that there was indeed a stakes effect but that it is entirely mediated by judgments about truth and evidence. I guess I would like to see details on that. I don’t see, off the bat, how a plausible human model can make sense of that result unless “evidence” itself was pragmatically encroached. But of course, I believe, though I could be mistaken, that Jason Stanley accepts this view about evidence, and I am also sympathetic to it. In fact, your result, it seems to me, is nicely explained by theories which accept that epistemic notions including evidence and knowledge are pragmatically encroached. So I agree with Stanley above (though for different reasons) that the result is not bad for IRI or for anything that I have written, for example. Though this comment is perhaps premature, not having read your paper (it is not linked above).

    • oops.. by “actionability” I just mean a principle connecting knowledge and action (for example, if you know P, then it is acceptable to act on P), whose adherence seems to be supported by the data here. Jason interprets me correctly below.

  8. Hi Keith,

    Yes, the in/direct question is relevant to much of the debate as it’s unfolded. But, far more importantly, it’s a principal question that must be addressed if we’re to understand how these judgments actually work.

  9. I got on the computer after a day away to find that Angel had already made the points I wanted to make. I find myself similarly deeply perplexed by John Turri’s responses on this thread. First, you write “We neither assume nor claim to prove that this works through a “serious practical question” mechanism (I’m aware of no evidence for this hypothesis).” It is natural (this is Angel’s point too) to take your talk of “actionability” to refer to the notion of what an agent ought practically to do. Then it is natural to take expected utility theory as relevant for this question. In looking at any formal attempt to make sense out of the propositions an agent with limited time needs to consider in acting in the framework of expected utility theory, one will arrive at the notion of a serious practical question. It’s just a necessary part of any explanation in expected utility theory. I said my account was incomplete because it’s sort of the only part I discuss. But at any rate, I am interpreting you as saying that you have found that actionability has nothing to do with expected utility theory (and furthermore you are aware of “no evidence” that it does!). I guess I think if I were you I would back off the claim that there is *no evidence* that expected utility theory has something to do with actionability – or rather if there really is *no evidence* that expected utility theory has anything to do with what an agent ought practically to do, then that seems to me a much more significant result. I am certainly excited to see what you have come up with. Secondly, yes, as Angel said, my official position in the book is that interest-relativity affects all the epistemic properties and relations; they are all connected with action (final chapter). If one is a traditional epistemologist (not a knowledge-firster), then yes the way to put this point would be exactly “stakes enter in by mattering to the quality of evidence”. 
If one is a knowledge-firster, to say that stakes come in via evidence is like saying they come in via knowledge. Third, you are quite right that I didn’t do any empirical work on the intuitions. It looks like yes, one can do epistemology purely through a priori reflection from the armchair then. Cool.

  10. John – in the contemporary epistemology literature (you know this I’m sure) it was Williamson who first formulated the connection between knowledge and action. However, he does not follow through on its consequences. In fact he seems to suggest that if you know, you can always bet the house. It was in Hawthorne’s and my discussions of this problem for Williamson, while trying to come up with a solution, that we both started thinking about how what is at stake affects what one should do. Fantl and McGrath were the first in print about this, though I am not sure they were initially guided by the connections between knowledge and action, though they do draw those connections later. I don’t think their route was the same as John’s and mine (via Williamson). That’s the history I guess.

      • Ah that makes sense. I thought a lot about Levi’s discussion of practical certainty in writing chapter 5 of my book. And the notion of a serious practical question (“serious practical possibility”) I think I also got from him, via Jim Joyce. I definitely relied on Levi in writing up that material (though it was often filtered through Joyce).

        • There is more in that paper than the seeds of Isaac’s pragmatic notion of serious possibility.

          Here you also find their critique of Braithwaite (1946), the grandfather of the “more you care, the less you know” idea. Indeed, that motto is their critique of Braithwaite’s original stakes-relative proposal, and (best of all) it is delivered with Morgenbesser’s signature wit.

          • This is great Gregory! Thanks so much for bringing this to my attention! I am returning to this topic right now as it happens, so this is super useful.

  11. John – just to emphasize – what you refer to as “the serious practical question mechanism” was simply my incomplete attempt to canvass the literature on bounded rationality and expected utility to see what folks had to say. That’s all. So if you say there is “no evidence that the serious expected utility mechanism has any effect” you seem to be saying that there is no evidence that expected utility reasoning is implicated in actionability. That’s a marvelous and indeed shocking result.

  12. More specifically, the route from actionability to stakes (the “entailment” to which Angel refers) goes via a subsidiary premise. The subsidiary premise is that one cannot bet one’s life on ordinary empirical knowledge (say, including memory). It is this subsidiary premise that Williamson denies.

  13. Great to hear that some behavioral data is coming out on stakes/actionability! (I’m no skeptic about survey methodology. But I’ve always thought that it was not the strongest evidence for or against stakes/actionability effects.)

    I share a bit of the puzzlement expressed above. As far as I can gather from your report, your data indicate the following:

    (i) actionability judgments have an effect on judgments about knowledge ascriptions (or on knowledge ascriptions themselves, if this is what you found).

    But this is entirely consistent with

    (ii) stakes have an effect on actionability judgments.

    And there are experimental and pre-experimental considerations in favor of (ii). However, absent interaction effects, evidence for (i) and (ii) amounts to evidence for an indirect stakes effect on knowledge ascriptions.

    So, I am also puzzled about the claim that “there is nothing to explain.” As a traditional invariantist, I very much feel the pressure to address such an effect – however indirect. My own favored way is to reject that the link between knowledge ascriptions and action is constitutive of knowledge (or ‘knowledge’). Rather, knowledge ascriptions can have the pragmatic function of directing action. This approach involves replacing the brutish knowledge norms of action and assertion with more sophisticated epistemic norms of action and directive speech acts.

    But that seems to me to suggest that the action is more or less where people have thought it is. Namely, pertaining to whether the link between knowledge and action is constitutive or merely a communicative heuristic. Am I missing something? Did you find evidence against (ii)?

  14. Hi Jason,

    Thanks for your further thoughts. I know that this is just a blog post, and I assume that you simply made an unintentional mistake, but please do not use quotation marks to attribute to me words that I’ve never used. Here I have in mind the quoted phrase, “no evidence that the serious expected utility mechanism has any effect.”

    I wasn’t reporting a “formal attempt” to make sense of actionability with expected utility theory. Instead, I was reporting evidence about how these judgments are actually made and relate to one another. If you’re aware of evidence that a “serious practical question” mechanism — your words, quoting yourself, not mine — is doing work here, please do share the reference.

  15. Hi Mikkel,

    You seem to be arguing that our data provide evidence for an indirect stakes-effect on knowledge. But, as I mentioned, we observed a small, indirect stakes-effect on knowledge, so there is no need to argue for this. The data already tell us.

    To answer your question about (ii), stakes had a small, mostly indirect effect on actionability judgments in one experiment and no effect in the other.

    The explanation for an indirect stakes-effect on knowledge is simple. One part of the explanation would go something like this. Sometimes when more is at stake, people become more guarded in their own estimation of the truth. This in turn depresses knowledge judgments, because truth matters for knowledge.
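    That mediation pattern can be sketched with a toy simulation (simulated data for illustration only, not our actual results or analysis pipeline): stakes lower truth-estimates, truth-estimates drive knowledge judgments, and the apparent stakes-effect on knowledge disappears once the mediator is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
stakes = rng.integers(0, 2, n).astype(float)  # 0 = low stakes, 1 = high stakes
# Higher stakes make people slightly more guarded in their truth-estimates (the mediator).
truth_est = 5.0 - 0.8 * stakes + rng.normal(0.0, 1.0, n)
# Knowledge judgments track truth-estimates, with no direct stakes path.
knowledge = truth_est + rng.normal(0.0, 1.0, n)

def ols(predictors, y):
    """Ordinary least squares: returns [intercept, coef1, coef2, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

total_effect = ols([stakes], knowledge)[1]              # stakes -> knowledge, unadjusted
direct_effect = ols([stakes, truth_est], knowledge)[1]  # stakes -> knowledge, mediator controlled
```

    In this simulated data the total stakes-effect on knowledge is clearly negative while the direct effect (controlling for truth-estimates) is near zero, which is what an entirely mediated stakes-effect looks like in a mediation analysis.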

  16. Hi John – thanks for the clarification. I was mainly just trying to find out what your finding was. Like others I was puzzled that your post suggested the alleged stakes effect on knowledge ascriptions was not there to be explained. So, I just wanted to note why the finding you reported struck me as perfectly compatible with such an effect.

    I agree that the explanation of the indirect stakes effect that you offer is an important part of the full explanation. But I doubt that it is the full explanation. So, I am skeptical about the idea that the explanation of the varieties of stakes effects will be “simple.” But I very much look forward to seeing the study when it is ready for consumption!

    PS. I think that there are good reasons to think that the Stakes-Actionability effect (i.e., my (ii)) is fairly hard to generate experimentally especially via survey methodology. (Indeed, this may account for some of the null-results). All the more reason to be excited about a behavioral study on this!

  17. Hi Mikkel,

    When you speak of “the varieties of stakes effects,” what are you referring to?

    If stakes have an effect on people’s judgments about how someone should act, why would it be hard to generate the effect by having people make judgments about how someone should act?

  18. I’m working on a book on this stuff and can send you the relevant chapters when they are readable. But here are some quick answers.

    1. I think that there is a quite wide variety of putative stakes effects to be explored reflectively and experimentally. One manipulation may be the salience of the stakes. In one case-type, the salience of the stakes may be part of the conversational context – conversationally salient stakes. In other cases it may be part of the description of the case – psychologically salient stakes. In some cases the (salient) stakes may pertain to the subject of the knowledge ascription. In other cases to the one who makes the knowledge ascription. In yet other cases, it may pertain to the audience but neither the ascriber nor the ascribee. One might also construct cases in which the stakes pertain to the participants who make a judgment about the case. And there may be interaction effects between these manipulations.

    2. It is not crazy to think that there is a difference between being in a high stakes case and evaluating one for a fee of 25 cents. Evans, for example, argues that considering a case requires dealing with instructions and that this can trigger Type 2 reasoning. (Again, I am not a skeptic about the relevance of survey methodology. But it should be supplemented with behavioral experiments.)

  19. Thanks, Mikkel.

    Sorry, when you spoke of “the explanation” for these stakes-effects, I thought that meant there was evidence of actual effects, as opposed to possible effects that we might eventually get evidence for.

    When the relevant behavior is the attribution of knowledge or actionability, having people make these judgments in an experimental design just is a relevant behavioral experiment. (There are other potentially useful forms of behavioral experimentation that collaborators and I are currently working on.)

  20. I’m OK using “behavioral experiment” for this – that’s just terminological. My point, inspired by Evans and other psychologists, was that we need to distinguish (a) knowledge ascriptions by someone in a high stakes case and (b) knowledge ascriptions about such cases. It is an empirical question whether stakes are more likely to affect the former than the latter. Glad you’re working on it. It is not easy to come up with experimental designs that put participants in a high stakes scenario.

  21. Hey, Mikkel. Yes, it isn’t easy, especially when one cares about ethical guidelines. 🙂

    I definitely agree that the distinction is relevant. At the same time, it’s worth bearing in mind that we don’t need to put people in high-stakes situations to see a strong connection between judgments of actionability and knowledge. In light of that, since you’ve clearly given this some thought, I’d be interested to hear what you think it would teach us if we did find that *putting people in high stakes situations* affects their knowledge judgments. (I suspect it would affect knowledge judgments, along with a bunch of other stuff.)

  22. Knock off the saintliness and run those studies! To answer your last questions – my point here was just that (b), about-case studies, are supposed to be evidence for (a), within-case knowledge ascriptions. So, within-case experiments would provide more direct evidence. I also wanted to urge caution about using null-results in about-case studies to provide strong evidence that there is no within-case stakes effect on knowledge ascriptions.
    I’m also thinking about within-case experimental designs where the stakes for some participant are high relative to a baseline. Haven’t nailed it but will write you if I do.

  23. I find myself increasingly concerned about the ethics of announcing experimental results purporting to show a certain conclusion without prior peer reviewed publication and release of the paper. In this case I am no longer certain the authors have a grip on the relevant theoretical issues. I am concerned about the effects of this blog post on young people writing papers and suggest, before people commit valuable time to trying to explain something that has not been published, that the rhetoric here gets toned down. Lots of experimental effects are not replicable, and the discussion reveals serious concern about interpretation IMO. Let’s wait until peer reviewed publication and hope it is subject to honest peer review.

    • Hi Jason,

      I’m not going to get goaded by the condescension and disrespect. I’m sure that you could still contribute to our understanding of these issues and, in the process, set a better example for the young people out there reading along.

      As I mentioned, I’ll be presenting the work in detail publicly at a symposium later this month, and everyone is welcome. We’ve been sharing the paper informally for nearly a year with peers and have received lots of very helpful and encouraging feedback. Interested parties — including you, of course, if you like — should feel free to email me if they’d like to have a look at the manuscript.

  24. A couple of points to be made here: (i) Keith’s question at 5/11 2:4pm had really already been answered by John in his comment at 10:15am. The relevant issue is not “stakes effects must be direct in every possible sense!” but rather, “there are various possible intermediaries that stakes can’t go through, like simply altering the subject’s estimation of the truth of the putatively known claim”. I take it that this sort of move, like the move that people may withhold belief in high-stakes situations, is a perfectly standard kind of counter-move strategy to both the contextualist and the IRI theorist. (ii) Jason need not have the ethical worries he expressed at 5/13 6:46am; researchers discussing their work in public forums before peer review in a journal is utterly standard, for a great many reasons. (I would note that the FB thread that set this all off was about a non-peer-reviewed report regarding a non-replication of the WNS Gettier case!) And in this case, I do think it was clearly in service of a public good, namely, of making clear that there is in fact not any such consensus about stakes effects of the sort that had been claimed. (iii) John is right that, according to his results, there is a big actionability-knowledge correlation without any stakes correlation, and in this way, he seems to me totally right that he is following a different line of evidence here than that used by Jason and others in earlier intuition-based arguments. However, I think it will be really important to take some of the head-scratching about that in the comments here seriously. We really do expect people’s judgments about actionability to be sensitive to stakes! So, if a putative actionability effect is nonetheless immune to manipulation via substantial increase or decrease in the stakes at hand, that has to pose some kind of at least prima facie worry that perhaps either stakes have not been operationalized well, or that actionability has not.
If it really turns out that people’s psychology of when and how to act is insensitive to stakes, that would be a radical finding, and frankly would be of greater theoretical interest than anything about the concept of knowledge! But this is why, I am certain, people will want to follow up on John & Wes’s excellent paper and explore their general findings with other probes, materials, and so on. Which is exactly what good science looks like – when it makes people want to go out and do even more science.

    • Hi Jonathan,

      Thanks for your insightful critical remarks. Wesley and I definitely welcome and encourage work that does just what you said, including work that ultimately identifies limitations/weaknesses of our findings and methods.

      Just real quickly, I wanted to mention that we do not find that actionability judgments are insensitive to stakes. In one study, there was an indirect stakes-effect on actionability; in another, there was no effect at all. Instead of a point about insensitivity, then, I’d just emphasize that actionability is much more intimately connected to knowledge judgments than stakes, and this A/K connection is *dissociable* from stakes (as evidenced by the fact that we observe it even in the absence of an indirect stakes-effect).

  25. Hi John and Wes,

    First of all, I just wanted to second Jonathan Weinberg’s point that this is a really nice paper you’ve got, and I think Jonathan does a great job in his comment of identifying some of the key issues here.

    But I wanted to ask for some further clarification about your view. You find that actionability judgments predict knowledge attributions, and a lot of the discussion on this thread seems to assume that this means that people’s actionability judgments are somehow impacting their knowledge attributions. But do you think that your results provide any evidence that the effect goes in this direction, rather than the other way around (with knowledge attributions impacting actionability judgments)?

    • Hi Josh,

      Thanks for the kind words and the great question. Basically, this is the main question we end our inquiry with (in this paper). Given that the judgments are intimately and directly related, which judgment, if either, has priority? The mediation analyses by themselves cannot settle the issue. Causal modeling on the data was indeterminate between two models: K→A versus A→K. Numerous follow-up studies continued to yield mixed or indeterminate results on this issue.

      So this is still a topic of active investigation and considerable interest. The work continues.

  26. Hi John,

    This response is really helpful, and I think it points to one way of getting out of the puzzle that Jonathan Weinberg poses in his comment above. Jonathan points out that stakes surely impact actionability judgments and that you are suggesting that actionability judgments impact knowledge attributions. He then asks how it could possibly be that stakes do not, in turn, impact knowledge attributions.

    Your most recent response opens up a very simple answer to Jonathan’s question. It is that actionability judgments do not actually impact knowledge attributions and that the connection is going entirely in the opposite direction.

    • Very good discussion (though Josh, I’m not understanding why you are ignoring the fact that Weinberg is repeating the point Angel originally made earlier in the thread, repeated by me and Mikkel. That’s kinda weird!). Anyway, this is all retracing the route I sketched above after the publication of Williamson’s book *Knowledge and its Limits*. Williamson claims that knowledge is an absolute guarantee of actionability. So, in short, Williamson claimed that knowledge attributions guide actionability, rather than the other way around. Decision theorists reacted to Williamson by criticizing his account, saying that on his view you could bet your life on your knowledge that your car was parked around the corner. This is something decision theorists consider strange. Williamson thinks it is correct (alternatively, one could be a skeptic about empirical knowledge, and thereby preserve the ‘knowledge guarantees actionability’ claim). This is precisely the problem that got me into epistemology, and this is what Hawthorne sought to solve as well. We agreed with the decision theorists that it is irrational to act on ordinary empirical knowledge, say of the form that your car is around the corner, if lots depends on it (for example, your life, or the lives of all humans). That’s expected utility reasoning (the stuff about serious practical questions is simply bounded utility theory). So Hawthorne and I precisely reversed the direction. Take actionability as guided by expected utility theory (or bounded expected utility theory), and then knowledge comes and goes with stakes, since utilities are a measure of what is at stake. Like Angel and Mikkel, I simply could not understand John Turri’s claim that there was no evidence that actionability was governed by stakes – stakes are just utilities, so that is just saying that actionability is not determined by expected utility theory. That was the problem with Williamson’s proposal.
But now I understand the source of the mutual bafflement. It appears that John Turri and Wesley Buckwalter have found evidence that Williamson was right that I can bet my life on my knowledge that I parked my car around the corner yesterday. This is precisely the point in 2002 when everyone was rolling their eyes at *Knowledge and its Limits*. And it explains the mysterious talking past each other above (for which I apologize now, John). I was exactly right to say that John and Wesley had found evidence against expected utility theory! That is precisely the result. It’s a much better fish to fry than Interest Relative Invariantism – it would show that expected utility theory is not intuitively correct. This was Williamson’s claim all along!
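The expected-utility reasoning here can be made concrete with a toy calculation (illustrative numbers only, not anyone’s published example): with very good but imperfect evidence for p, acting on p maximizes expected utility when the downside is modest, but not when you would be betting your life.

```python
def eu_of_acting(p_true, gain, loss):
    """Expected utility of acting on p: you get `gain` if p is true, lose `loss` if not."""
    return p_true * gain - (1.0 - p_true) * loss

p = 0.99  # strong ordinary evidence that the car is parked around the corner
low_stakes = eu_of_acting(p, gain=1.0, loss=5.0)      # modest downside: acting has positive EU
high_stakes = eu_of_acting(p, gain=1.0, loss=1000.0)  # catastrophic downside: acting has negative EU
```

Since utilities measure what is at stake, raising the loss flips the verdict on acting even though the evidence for p is unchanged — which is just the sense in which stakes are supposed to enter actionability via expected utility.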

      • I’m not sure what you think Joshua should have been saying w.r.t. what I said, Jason, but anyway I think if you go back and re-read what I wrote, you’ll see I’m just straight-up endorsing what y’all were saying — that’s the up-thread head-scratching I was adverting to.

        As for the overall interpretation of the results, my own inclination (having had the benefit of reading earlier drafts) has been towards the line of interpretation that Joshua suggested. It seems to me clearly more conservative than other interpretations, given the state of the data at this time.

  27. John Turri – I apologize for the misunderstanding. It is a very radical pattern of intuitions you are describing. Shockingly, and ironically, it’s the very pattern of intuitions that Williamson was defending against multiple attacks after the publication of KaIL. It *does* entail that expected utility theory is false. So you can see why people deeply involved for years in the debate, such as Mikkel, Angel, and me, are a bit taken aback. It is an X-phi vindication (ironically) of the original Williamson position that led to what was thought to be an absurd conclusion.

  28. Jason and Josh, I agree with Jason, for the reasons he gave, that without pragmatic encroachment, knowledge by itself cannot determine actionability. However, maybe one way to follow through on Josh’s idea is that knowledge and stakes both contribute (independently) to judgments about actionability (what one should do). But this model doesn’t make sense of what John said to Mikkel above (9:06, May 12): “stakes had a small, mostly indirect effect on actionability judgments in one experiment and no effect in the other.” How can that be, since what one should do must depend on what is at stake? Josh’s suggestion about the priority of knowledge and action still doesn’t resolve this issue.

    • Angel, that’s why I suspect (as noted earlier) that there may be issues to explore here in particular about how either stakes or actionability are operationalized.

        • Well, this might just reflect some slightly different ways of expressing the same idea, but I’m not sure that we should expect subjects to display a _tight_ connection there so much as a _substantive but highly noisy_ one. (I guess I’m reading “tight” as at least something like “very closely & linearly correlated”, but maybe that’s not what you had in mind.)

  29. Angel – I interpreted Josh as having the view, regarded as extreme, that Williamson had (and probably still has): you parked your car around the corner yesterday. It is still there. By ordinary standards, you know it is still there. You can act on that knowledge even if the existence of all sentient life is at stake.

  30. Hi again everyone,

    Many thanks for these probing comments and insights. Since there’s convergence in the contributions, I’ll offer some general responses.

    (1) We did not find that actionability is insensitive to stakes. We did find evidence that actionability *can be* insensitive to stakes, and that, even in such cases, the connection between actionability and knowledge judgments remains strong.

    (2) I’m trying hard, but I do not share the puzzlement at the dissociability of stakes and actionability judgments. This result does not, as far as I’m aware, contravene an established body of prior findings. Neither does it seem implausible to me that things other than stakes would be relevant to actionability. This is why I’m unable to see this as either puzzling or radical.

    (3) Good to see people raising questions about our stakes manipulations, since this is obviously a really important issue for this work. Quickly in response, I’ll just say that we included a manipulation check for stakes. In particular, we asked people to rate how important the matter was. In both experiments, higher stakes (an independent variable) predicted significantly higher importance ratings (a dependent variable), with a very large effect size. This makes me confident that our stakes manipulations were at least adequate (I do not say optimal).

    (4) I am interested in testing the claim that knowledge is an absolute guarantee of action. While our findings might be suggestive, I think a more pointed test would be prudent.

  31. I’ve been feeling confused about the disagreement between Jason and John, and I feel that my confusion is the result of them not meaning quite the same thing by “stakes”. So, when Jason says that stakes just are utilities and that to deny that actionability needs to be sensitive to stakes is just to deny expected utility theory, I get the feeling that, by the “stakes” in a decision or situation, Jason means something like “expected utility orderings”. When John says that there is no direct stakes-effect on knowledge, he seems to mean that the outcomes of a decision can be assigned much greater utility values, without affecting whether the subjects are intuited to know.

    It doesn’t seem hard to reconcile the two positions: when Jason says that knowledge is sensitive to “stakes” he means that knowledge is sensitive to expected utility orderings. That is, if acting on p turns out to have lower expected utility than failing to act on p, then we’ll have the intuition that the subject doesn’t know that p. That’s what Jason (it seems to me) means by a stakes-effect on knowledge. But that’s just what John seems to be calling actionability. What John seems to require for a direct stakes effect on knowledge is for 1) the ordering to remain the same even though 2) there are changes in the absolute utility values assigned to the various outcomes.

    And, I take it, it is this second thing that Jason is saying he never — in his book — claimed. There was never claimed to be a direct stakes effect in that sense.
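
    Jeremy’s distinction can be sketched numerically. The following toy calculation is not from the Buckwalter–Turri paper; the cases, action names, and utility numbers are all made up for illustration. It separates scaling up the absolute utilities (raising the “stakes” in John’s sense, which leaves the expected-utility ordering of the actions unchanged) from raising only the cost of error (which can reverse the ordering, the kind of change Jason’s usage tracks):

```python
# Toy illustration (hypothetical numbers): absolute utility magnitudes
# vs. expected-utility ORDERINGS over the available actions.

def expected_utility(probs, utils):
    """Expected utility of one action: sum of p * u over its outcomes."""
    return sum(p * u for p, u in zip(probs, utils))

def best_action(actions):
    """Name of the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(*actions[a]))

# Hypothetical low-stakes case: act on p now, or wait and double-check.
low = {
    "act_on_p": ([0.99, 0.01], [1, -1]),    # (outcome probs, utilities)
    "wait":     ([1.0],        [0.5]),
}

# Same probabilities, every utility scaled up 1000x: much higher
# absolute stakes, but the SAME expected-utility ordering of actions.
high_same_ordering = {
    "act_on_p": ([0.99, 0.01], [1000, -1000]),
    "wait":     ([1.0],        [500]),
}

# Raising only the cost of being wrong CAN flip the ordering; on the
# ordering reading of "stakes", only this change is a stakes change.
high_flipped = {
    "act_on_p": ([0.99, 0.01], [1, -1000]),
    "wait":     ([1.0],        [0.5]),
}

print(best_action(low))                 # act_on_p
print(best_action(high_same_ordering))  # act_on_p (ordering preserved)
print(best_action(high_flipped))        # wait (ordering reversed)
```

    On this sketch, John’s “no direct stakes-effect” claim concerns moves like the second case (bigger numbers, same ordering), while Jason’s stakes-effect concerns moves like the third (ordering reversal), which John calls an actionability effect.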

    • Hi Jeremy,

      Thanks for this. Just to clarify on my end, in the paper, a direct stakes-effect requires a change in either (a) the stakes (which is manipulated in the experimental design) or (b) people’s judgments about how important the matter is (which people provide to us) to be directly related to a change in people’s knowledge judgments. By contrast, a direct action-effect requires a change in people’s judgments about whether the agent “should” act a certain way to be directly related to a change in people’s knowledge judgments.

      • My wife is on call and I am alone at home with a three-year-old. So I can’t read the paper yet. But Jeremy is right that we have been talking past each other. As I say way up in the thread, Schiffer falls into a similar misunderstanding in his PPR contribution on my book. The idea of a judgment about how important a proposition is plays absolutely no role in anything I have ever written about this topic. In my book, stakes only come in via the connection between knowledge and action, in the way Jeremy points out. (I need to check whether Schiffer actually included that point in the PPR symposium; that misunderstanding, that absolute importance has something to do with it, was a presupposition of his comments in the APA symposium, and he admitted that it was a misunderstanding of the book, of course, so maybe he took it out of his PPR contribution. But I think he did include it, and I did reply straightening this out. I’ll have to check.)

    • Jeremy, yes exactly. That’s exactly what I mean by a stakes-effect on knowledge. That’s what I’m trying to spell out with the whole serious practical question thing; when acting on p makes a potential difference to the expected utility orderings of the actions at your disposal, you need to gather more information to know.

      • It really can’t be about absolute importance, actually. For one thing, unimportance isn’t closed even though (we argue) actionability is. To use an example from Robert Howell (I may be misremembering the details), that my sister is in LA (unimportant) entails that she’s not dead in Denver (important). Given my current evidence, though, both are actionable. And that’s why emphasizing actionability allows closure to be preserved.

  32. Thanks, John. I thought this was what you meant. I just didn’t think Jason was thinking of “stakes” as meaning the absolute importance of the matter. He’ll correct me if I’m wrong.

  33. Jason – I’m not sure why you take this (or any) empirical study to *entail* or *vindicate* any position. Isn’t the position supported by the findings simply the one that best accounts for them and other relevant considerations? For example (Warning! Hobby horse!), everything seems consistent with the idea that because knowledge is a good but imperfect predictor of actionability and vice versa, (i) knowledge ascriptions may be routinely used to direct action and (ii) judgments about them are affected by judgments about action. What am I missing?

    John – on your (2) above: you’re right that several people have suggested (or presupposed) that factors other than stakes bear on actionability judgments. Some of us have even argued that actionability itself is determined by things other than stakes. (In my series of papers, I mention urgency, alternative courses of action, availability of evidence, and social roles and conventions.) One consequence is that configurations of these parameters may run contrary to the commonly presupposed idea that higher stakes require a better epistemic position. If it is urgent to take action, evidence is costly to acquire, and there is little cost to acting on a false belief, then rising stakes may *lower* the epistemic requirements on action. Another consequence is that encroachers appear to have to accept that considerations of urgency, availability of evidence, alternative courses of action, etc. partly determine whether one knows (the truth values of knowledge ascriptions).

  34. Hi Mikkel,

    Great points! When you point out the “consequence” that these things might partly determine whether one knows, it seems like you’re implying that this is somehow an unpalatable consequence. Am I reading you right?

  35. Yes you are! I think that it is a very bad consequence for the encroachers and I have argued so (in my 2011 paper, Sect. 7, p. 545-6 and elsewhere). Whether it is unpalatable, I don’t know. Encroachers have a different palate than mine and some of them might just accept that urgency, availability of evidence, alternative courses of action etc. are also determiners of knowledge. I’m not sure – hopefully your study will help “smoke them out” on this issue.

  36. Thanks Angél – That had slipped under my radar and I will def take a look. But given what you say, it looks like at least one encroacher takes what I argued to be a “bad consequence” to be a feature not a bug!

  37. Hi Mikkel,

    Here’s why I don’t think it is unpalatable.

    According to our best cognitive science, cognition and action are intimately connected. Some cognitive scientists claim that “action” is “constitutive” of cognition. To date, relevant behavioral and neurological findings pertain to sensory processing, predicting events, object categorization, attention, decision making, memory, social cognition, and language use. Why should coherence with our best science be a bad consequence?

    Neither is this basic approach philosophically radical or new. It harkens back to William James’s admonition — with a big nod to Darwin — that we not “pretend that cognition is anything more than a guide to appropriate action.” Indeed, it goes back at least all the way to Locke, who responded to (professed) Cartesian doubt by appealing to the “assurance” we require to “govern our actions.” Indeed, as best I’ve been able to tell, the idea that knowledge is somehow essentially uncontaminated by practical factors was a radical invention of Descartes’s. (Perhaps the historians will prove me wrong on that one.)

    Now we’ve got this mounting behavioral evidence that the scientifically standard view of knowledge is deeply reflected in ordinary social cognition, and there’s every reason to suspect that there’s more where that came from.

    So, given that knowledge scientifically and ordinarily understood is so deeply connected to action, why shouldn’t we prefer a philosophical view of knowledge that connects it to actionability and its determinants?

  38. Hi John – (those are big questions, so this is gonna be rough): I emphatically do *not* reject that there are intimate, and even constitutive, connections between cognition and action. I just want to urge caution about certain steps. E.g., steps from data indicating that folk judgments about stakes or actionability predict or correlate with folk judgments about knowledge to conclusions that stakes are a constitutive determiner of knowledge or that actionability entails knowledge. While we *should* accept that there is an intimate connection, we should not accept that it is as tight as entailment. Reflection on the knowledge-action principles suggests otherwise.

    As suggested to Jason above: If actionability is a good but imperfect predictor of knowledge and vice versa, then strong correlations between folk actionability judgments and folk knowledge judgments should be expected. Due to our cognitive limitations, judgments about actionability are, from a cognitive heuristics standpoint, very effective proxies for judgments about knowledge and vice versa. While recognizing such cognitive (and communicative) heuristics is important to understand folk epistemology, it should not dictate epistemology. On systematic reflection it is not too hard to find counterexamples to knowledge-action principles.

    On the unpalatable consequence: If all sorts of parameters other than stakes affect actionability judgments, and thereby indirectly knowledge judgments, this seems to me to speak against taking the indirect stakes effect to be constitutive of knowledge (or ‘knowledge’). “It’s not that important. Hence, I know.” is pretty bad. But “We’re not that busy. Thus, we don’t know.” is worse. So is “We could phi instead of psi. Therefore, we do/don’t know.” or “It will be more costly to me to gather further evidence than to act on a false assumption. So, I know.” etc.

    Sorry if this is too rough (some of this is spelled out more carefully in various articles). As mentioned, those are big questions and I do not want to suggest that I’m near the bottom. Also, I’m only half way through your piece and haven’t yet checked the Shin piece.

    • Mikkel – I haven’t responded to you in print, only because I have been working on some other things. However, I have been working on a reply to your excellent articles, which pose I think the strongest challenge to interest-relative invariantism. It puts the pressure on us to explain why the cognitive limitations are the *point* of knowledge. I think Brian Weatherson’s 2012 paper “Knowledge, Bets, and Interests” is good on this point. But very roughly, the way I think of the point of knowledge is as Isaac Levi thinks of practical certainty. As Harman pointed out, it’s impossible for us to be expected utility reasoners, given our cognitive limitations. We need a mechanism that allows us simply to exclude certain propositions from reasoning. The function of knowledge is that. That’s the short version of the account of why knowledge is constitutively connected to the fact that we are cognitively limited agents.

      • Hi Jason – I’m excited that you’re working on a reply! I think that the “bounded rationality” line that you sketch is the most promising meta-justification for knowledge-action principles. As you know, I’m inclined to think that these considerations ultimately speak against taking the knowledge-action links to be constitutive. But I look forward to seeing what you come up with!
        Feel free to run a draft by me anytime.

  39. Hi Mikkel,

    That’s a very fair and not-too-rough response. I agree that the contours of folk epistemology should not simply dictate epistemological theory. At the same time, there’s this very impressive and growing body of work which suggests that cognition is essentially tied to action. If cognition is by its nature tied to action, and this deep connection is reflected in the central tendencies of ordinary social cognition, then I honestly wonder why it would matter much if we found certain consequences odd upon systematic reflection. This is not an unusual occurrence in the history of science.

    Perhaps one reason why those bits of reasoning you mention sound odd is that we don’t typically explicitly reason to conclusions about what we do or don’t know in that way. Even setting aside issues of knowledge and its relation to action, I tend to find “Thus, I do/don’t know” a bit stilted. It’s not clear to me that making it about stakes (etc.) would make it sound any more or less unnatural. Just a thought (and one that’s straightforwardly testable).

  40. Thanks for this, John – I’ll need to think more, but it’s late here and I’ve been summoned by my wife. So briefly:
    1. I agree that it is a desideratum for epistemology to account for the ties – essential and contingent – between cognition and action. I just want to insist that this involves critically examining whether the ties are constitutive or whether they are there because they mark a boundedly rational trade-off between cost-efficiency and accuracy. (This is pretty standard in the developmental literature on belief ascription, for example).
    2. I think the infelicity of the types of reasoning indicates something important. If stakes (urgency, availability of evidence, alternative courses of action, etc.) were partial determiners of knowledge (or ‘knowledge’), then we *would* be able to felicitously cite them as reasons for/against knowledge ascriptions. The fact that we cannot speaks against taking them to be constitutive/semantic. I.e., reconstructing the effects as explicit reasoning (using argument markers ‘Hence’, ‘Therefore’, etc.) may serve as a (fallible) test of whether a given effect on knowledge ascriptions marks a constitutive or semantic feature or something else (bias, pragmatic feature).

  41. I have now read the paper. This issue is now cleared up. As I suspected early in the thread, and as Jeremy Fantl made clear, the notion of stakes Buckwalter and Turri are focusing on is not what epistemologists in the debate about interest relative invariantism have meant by “stakes”. The confusion in their preliminary draft is the same confusion made by Stephen Schiffer in his 2007 contribution to the PPR Symposium on *Knowledge and Practical Interests*, where he makes precisely the objection to me that Jeremy Fantl makes to the Buckwalter and Turri interpretation of stakes; see p. 194 of Schiffer’s 2007 PPR commentary. There, Schiffer argues, as Fantl does above, that every low stakes proposition entails a high stakes proposition; that my son is playing happily in the park entails that my son’s arm isn’t broken. But, Schiffer says, that my son’s arm isn’t broken (or is broken) is a high stakes proposition. Therefore, Schiffer concluded, there is no hope on the view of preserving closure (Fantl’s point). In my reply, I point out that this is simply a misunderstanding of the relevant notion of stakes. At the APA symposium I sat down with Schiffer to work through Chapter 5 of my book, and he saw the point immediately, of course. Our pieces were already in press at PPR, or already submitted, and he graciously declined to change his, because he said it’s ok to be wrong sometimes. Since Schiffer is of course a great philosopher, I had always assumed everyone had read the exchange between me and him. At any rate, I haven’t seen anyone else make this misinterpretation in the whole literature, which explains this thread. What is being tested for in the Buckwalter and Turri paper as stakes has nothing to do with the notion in the literature on pragmatic encroachment, which concerns actionability.
If I were a referee of this paper, I would suggest dropping the stuff about very important propositions, since to my knowledge not a single epistemologist has ever claimed that that notion has epistemic relevance. It has, as Turri points out above, only a highly indirect connection to actionability. It’s a very important proposition that there is a deadly cobra in my living room as I type this, but it has no effect on actionability; it is not a serious practical question in the sense of Chapter 5 of my book. So I would recommend dropping that whole part and focusing on the stuff that provides evidence for interest relative invariantism (impurism, pragmatic encroachment, whatever you want to call it). This of course leaves important challenges, like Keith DeRose’s points about which versions of the knowledge action principle the evidence supports, and Mikkel Gerken’s arguments that the intuitions have to do with the fact that human agents have bounded and imperfect rationality, not with knowledge itself. But as I said in the original Facebook post, this is yet more evidence, very interesting evidence, for pragmatic encroachment. Having read the paper, that’s the only interpretation I see.

    • Hi Jason,

      Thanks for these thoughtful and constructive remarks.

      I completely get how your usage of “stakes” has taken on a lot of extra meaning, but there is no confusion on this in the draft. There are two separate paragraphs where we distinguish two separate ways of connecting knowledge and the practical. On the one hand, there is the group of things clustered as “interests,” “stakes,” “needs,” and the “costs of being wrong.” On the other hand, there is the “related but different” approach which “focuses on knowledge’s normative role in licensing” action or practical reasoning. This leads naturally to the two different ways of testing for a connection. One doesn’t really pan out (stakes); the other works extremely well (actionability).

      That seems like so much water under the bridge. But I was potentially more concerned about another thing you said. I just want to make sure I’m understanding you correctly when you write that what we “tested for … as stakes has nothing to do with the notion in the literature on pragmatic encroachment.” Are you saying that a pair of low/high stakes cases is irrelevant here? I ask because our stakes manipulation was just a low/high pair. Basically every empirical paper on “stakes” has tested pairs like this (including Sripada & Stanley 2012).

      The “importance” question is used as a very natural check on the stakes manipulation (and it ended up working very well). We do not speak of “very important propositions.”

      • I need to think more about this, and it may connect with the issues about the connection between actionability and expected utility. Chandra and I gave cases designed to test the connection between knowledge and expected utility (I talk a lot in my book about the relevant notion of expected utility – cases like ignorant high stakes, for which we found evidence of an effect, suggest, as I speculated in my book, a more objective notion of credence in the expected utility calculation). Talk of importance is not going to be an accurate check on stakes. The proposition that my son is about to be hit by a car is very important. But it is in no sense high stakes for anyone in the literature, as it is not relevant to any decision I am now making (he is sitting next to me inside our house). That is, the proposition that my son is about to be hit by a car is very important, but it’s not a serious practical question for me right now, so it isn’t in the relevant sense high stakes (and, as Jeremy points out, importance is clearly not preserved under closure).

        • Likewise, it’s extremely important to me — and relevant to my current decision to keep sitting in this chair — that a meteor is not about to land precisely on the chair I’m sitting in. But my decision to continue sitting in this chair is not a high stakes decision.

          We may have done ourselves no favors with the use of slogans like “The more you care, the less you know,” which might invite some to think that it’s about overall importance. But those were just slogans, and were always accompanied by qualifications like, “To put it a bit misleadingly…” The views themselves seem pretty clear to me: assuming that there is no meteor about to hit the chair does not change the expected value of acting on that proposition very much. Therefore, the decision to act on that proposition is low-stakes and, in that sense, the proposition itself is low-stakes.

        • This all sounds sensible, Jason and Jeremy, thanks. But I’m not yet seeing how it relates to the low/high manipulations we actually used. Also, if for some reason you don’t like the (successful!) manipulation check that we used, I’d be interested to hear about potential alternatives.

          • Hi John. I’m not sure it’s supposed to relate to the manipulations. And I’m certainly happy to have the empirical confirmation (as I’m sure Jason is, too; hence his “this is wonderful news” in the opening comment). I think the worry was more about relating the results back to the preexisting debate. For example, in your original post, you take the results as evidence against what was reported to you as Jason’s claim about the emerging consensus. But Jason didn’t mean that there was an emerging consensus about the knowledge-relevance of outright importance, of “stakes” in your sense. So I guess the worry was about the degree to which the lack of a stakes-effect was being claimed to pose a problem for some arguments for SSI. On the contrary, I think it was only some critics of SSI who took SSI to be making a claim about absolute importance. (So, we’re happy to have your results handy to battle against that confusion.)

          • Hey Jeremy,

            As I understand the preexisting debate in the experimental literature, there is a history of focusing on low/high cases, with some researchers finding a small “stakes effect” (in the pure low/high sense of “stakes”), others failing to replicate it, and lots of dissatisfaction. I submit that this is not consensus.

            If by “stakes” we instead mean “expected utility orderings,” then there is no consensus either because no one has ever really tested for an “expected utility ordering”-effect on knowledge attributions.

            Wesley and I detected a tight relationship between knowledge attributions and actionability judgments (i.e., whether an agent “should act” in a relevant way). I am cautiously optimistic that our results can help form the basis of a yet-to-emerge consensus about knowledge/actionability judgments. If this helps combat misinterpretations of your and Matt’s and Jason’s views, all the better!

            However, in my view, it is definitely a further question whether/how this connection relates to expected utility orderings. Maybe EU-orderings underlie it, maybe they don’t.

          • Thanks, John. This sounds right about whether there was a consensus in the literature on the claim Jason was trying to make. But I took you to be offering evidence against the claim Jason said there was a consensus about. That claim (I don’t know whether I agree there was a consensus about it) is that there is a stakes-effect on knowledge, by which HE meant (and which all the SSIists have always meant) that there is an actionability-effect on knowledge. So, I think we were just trying to correct the impression that your results were a counterexample to that view — the view we all express when we say that there is a stakes-effect on knowledge. Likewise with the claim that there is no data for the IRI theorist to explain. The data is the intuitions in the cases. This is data driven by taking the HIGH cases as involving failures of actionability.

            Regarding the fact that previous X-phi studies seem to have neglected making the failure of actionability as explicit as it should have been made — well, that’s always been a complaint to be made against those studies, and a reason to like the data you provide.

      • Just defending the “consensus” claim. If you look at Buckwalter and Schaffer’s most recent Nous accepted paper, you will see that they acknowledge, in the light of Angel’s, Chandra’s, and my work, that there is an actionability effect (since, as Jeremy points out, an actionability effect was the actual claim). Then they argue that contextualism explains the actionability effect. This is what DeRose argues as well. They also acknowledge a very important point I have been making on blogs for almost a decade: given Kruglanski’s work in the 1980s, which spawned a large literature about settled belief, it really is very hard to deny an actionability effect. After all, if Kruglanski is right that when the costs are high (the utility), we take longer to settle on a belief that is relevant to the action at hand, then, given that knowledge requires settled belief, it simply follows that knowledge depends on actionability. Nagel has been saying this for a while, so it was nice to see the point finally acknowledged. I am not sure whether X-phi allows results from psychology to be legitimately included in whatever game we are playing, but when I talk of “consensus”, what I mean is the broadening of X-phi results to include the recognition of long-standing replicable results in social psychology. So that’s what I meant. And it looks like this paper is further confirmation of the connection between actionability and knowledge. There is a further assumption Fantl and McGrath and I make, which one could question – as I have said, Williamson questioned it – and that’s that expected utility determines actionability; that expected utility is at least a major factor in determining what an agent ought to do. “High” and “Low” are just meant to be labels for utility calculations. It could be that the examples given as exemplars of utility calculations are bad examples. That is certainly a possibility, one that seems perfectly likely.
A more dramatic claim is that the connection between actionability and expected utility is not what we think it to be (the connection I draw is between bounded expected utility theory and actionability). This is an attack on decision theory. It is quite true that Fantl, McGrath, and I presuppose decision theory to be a guide to actionability. It would be worthwhile I think to distinguish aims. There is the very modest aim, early in the X-phi literature, of showing that the bank and airport examples aren’t very good examples of anything. There is the larger claim of providing further evidence for a connection between actionability and knowledge, of the sort argued for in the pragmatic encroachment literature (btw – Kvanvig I think introduced “pragmatic encroachment” as a slur for the views of me, Fantl and McGrath, and Hawthorne, but like “Obamacare” it seems to have stuck). Then there is the very large claim, aimed at a lot of people, that expected utility theory isn’t a guide to actionability. I think it would be worth it to situate the claims of the paper along these dimensions. I’m having a hard time assessing them myself at the moment along these dimensions (my main interest is the connection between knowledge and actionability actually, a la the beautiful description of Cartesianism and Lockeanism earlier in the paper).

        • Hi Jason,

          I am 100% in favor of including as part of the evidence relevant findings from the cognitive and social sciences.

          Interestingly enough, one thing we found in our studies is that belief attributions were basically irrelevant to knowledge judgments, even as measured with the very liberal cognitive verb “thinks.” This follows in the footsteps of several other recent papers which provide evidence that knowledge, ordinarily understood, does not require belief, ordinarily understood. (Not to mention some really impressive work on knowledge attributions in comparative psychology.) So, when it comes to how these judgments are actually made, I think that the inference drawn from Kruglanski’s findings is interesting, relevant, and still potentially viable, but increasingly questionable in light of research that is directly aimed at answering questions about knowledge attributions specifically. (Clearly, I agree with the conclusion of the inference, but I think the evidence suggests that the relevant pathway is different.)

          Also, I follow Jeremy in applauding the point about clarifying “various aims.”

          • “Think” and “believe” have evidential uses, where they preclude knowledge. Also there is, as commentary on Schwitzgebel’s work has brought out, a dispositional-occurrent ambiguity, where “think” really brings out the occurrent sense. “Know” just seems to require dispositional belief. If the response to Kruglanski’s work on settled belief requires dissociating knowledge from belief, I’d like to see whether the questions etc. are sensitive to those confounds. Furthermore, Kruglanski’s work basically shows that the norms governing settled belief are stakes-sensitive. If, like me, you think that the norm governing settled belief is knowledge, that is another intuition-driven route to the stakes-sensitivity of knowledge.

          • Hi Jason,

            Mine was not a response to Kruglanski’s work per se but rather to a particular argument from his findings to substantive conclusions in the present context regarding knowledge attributions.

  42. Thinking about it more, I guess I am confused about the methodology. In particular, I just don’t understand what it is to have “a hunch that strongly suggests Ivan does not have a nuclear weapon”. “Strongly suggests” suggests that there is strong evidence that Ivan does not have a nuclear weapon. But “having a hunch” is, since Plato’s Gorgias, what is contrasted with having strong evidence. So I have no sense of what the setup conveys to the research subjects. I am not saying this to be weird or anything. I just have no idea what subjects understand by “a hunch that strongly suggests Ivan does not have a nuclear weapon”. Williamson has a paper arguing that one can know that p despite only having a little evidence that one knows that p. But this is considered very radical. Is that what is meant? I really love the spirit of camaraderie, and I love, love the background of this paper and what it tries to do. But I can’t for the life of me imagine what the research subjects are thinking!

    • Thanks again, Jason. Love that you love that about the paper!

      About the potential weirdness, sometimes that can happen simply as the result of thoroughly crossing a bunch of factors. Also, leaving out a cell in a factorial design could look suspicious and it’d be awful to have to re-run a study of this size to satisfy a persnickety referee. I believe that counts as high-stakes, right? 😉
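
      Just to illustrate how odd cells arise: fully crossing even a few two-level factors mechanically generates every combination, sensible or not. A minimal sketch in Python; the factor names and levels here are hypothetical stand-ins, not our actual materials:

```python
from itertools import product

# Hypothetical two-level factors (illustrative only, not the study's design)
factors = {
    "stakes": ["low", "high"],
    "evidence": ["strong evidence", "hunch"],
    "truth": ["p true", "p false"],
}

# Full crossing: every combination of levels becomes one experimental cell
cells = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# 2 x 2 x 2 = 8 cells, including odd-looking ones like high stakes + mere hunch
n_cells = len(cells)
```

      Dropping the strange-looking cells would break the crossing, which is exactly the sort of thing a referee would flag.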

      But, in the end, participants in that condition did respond sensibly, along the lines you’d suspect if they understood it as it was meant: they denied knowledge. In fact, across both studies, people denied knowledge in “hunch” conditions. Hunches are disreputable. Everyone’s on board with Plato.

  43. I’m also through a first read. It is a very interesting study that raises a lot of questions – more than it answers, IMO. But I haven’t fully absorbed it yet. May I ask a few more questions about the data?

    Q1: Before getting to the linear regression analysis, you helpfully start with some preliminary analyses (the t-tests) and report whether there was a significant effect of stakes for each probe (significant for some but not all). Here is what you report for the respective vignettes:

    Belief No/No
    Truth Yes/No
    Evidence Yes/No
    Action ?/?
    Importance Yes/Yes
    Knowledge Yes/?

    I wonder whether you could fill in the blanks? (I would do this in the paper as well. Also, it would be much easier for the reader if you reported the results for the two vignettes in a single results section.)

    Q2: Am I right that you had 15 participants/condition? If so, some of the nulls may be explained by the lack of statistical power (Pinillos and Simpson argue that this might be a problem if there is a robust but small stakes effect).
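
    For concreteness, the power of a two-sample t test at n = 15 per condition can be read off the noncentral t distribution. A sketch, taking d = 0.3 as an illustrative “robust but small” effect size (not a figure from either paper):

```python
from math import sqrt
from scipy.stats import nct, t as t_dist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t test with equal group sizes,
    for a true standardized mean difference d (Cohen's d)."""
    df = 2 * n_per_group - 2
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality parameter
    # Probability the test statistic lands beyond either critical value
    return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)

power = two_sample_power(0.3, 15)  # roughly 0.12
```

    With only about a 12% chance of detecting d = 0.3 at that sample size, nulls are weak evidence of absence; roughly 175 participants per condition would be needed to reach the conventional 80% power.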

    Q3: Did you give the probes in the order you mention (starting with belief and ending with knowledge)? Or did you vary the order? (My worry here is priming effects. Asking about, e.g., evidence before asking about knowledge may not be the best representation of intuitive judgments about knowledge ascriptions. Btw, I have the same worry about Sripada & Stanley and Pinillos & Simpson.) In any case, I’d include info about the order in the methodology section.

    Would it still be helpful to have comments on the paper? If so, I’ll send you some once I’ve thought some more.

    • Thank you, Mikkel. Since we’re well into review on this paper, I’d feel bad asking for or even encouraging you to spend time writing up comments. It’s very generous of you to offer, but you know how it goes when anonymous referees get involved — it tends to crowd out other voices.

      To answer your questions:

      Action: Yes (d = .30)/No (p = .362).
      Knowledge: Yes/No (regression analysis reported on p. 22 of the draft; t-tests were no different)

      We tried hard not to over-report. I’m not sure what the justification would be for reporting preliminary stakes/action tests. No one was proposing stakes-mediation via actionability.

      Yes, that’s the correct N. But what nulls are you referring to? Are you interested in a particular 3- or 4-way interaction on knowledge judgments? I just did a quick power analysis, and in order for the observed unique predictive value of stakes to be statistically significant — explaining less than 1% of variance in knowledge judgments — it looks like we’d have needed a sample of over 1800.
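
      In case it helps, here is roughly how that kind of number can be checked against the noncentral F distribution. The sketch below assumes the stakes coefficient uniquely explains about 0.43% of variance (f² ≈ 0.0043) alongside five other predictors; those inputs are illustrative stand-ins, not our exact figures:

```python
from scipy.stats import f as f_dist, ncf

def n_for_coefficient(f2, n_other_predictors, alpha=0.05, target=0.80):
    """Smallest N at which the F test (numerator df = 1) on a single
    regression coefficient with Cohen's effect size f2 reaches target power."""
    def power(n):
        df2 = n - n_other_predictors - 2  # error df, focal predictor included
        f_crit = f_dist.ppf(1 - alpha, 1, df2)
        return 1 - ncf.cdf(f_crit, 1, df2, f2 * n)  # noncentrality = f2 * N

    hi = n_other_predictors + 10
    while power(hi) < target:   # grow an upper bound geometrically
        hi *= 2
    lo = n_other_predictors + 4
    while lo < hi:              # binary search for the smallest adequate N
        mid = (lo + hi) // 2
        if power(mid) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

      On those assumed inputs the search lands in the mid-1800s, which is the neighborhood of the “over 1800” figure.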

      Definitely! The statements were randomized (p. 16).

  44. Some personal commitments and travel have prevented me from participating in this thread until now. But since the discussion seems to have wound down, I thought I’d just write to thank everyone who wrote in about our paper, and especially Jason, Jeremy, Mikkel, Jonathan, Angel and Josh, for their helpful comments, questions and suggestions. While evaluating the predictions of prior philosophical theory is an important part of these discussions, it also seems from the comments that, more generally, there’s a strong shared interest in reaching a basic scientific understanding of how these judgments work and how they relate to each other, come what may. Personally, I find this very exciting for epistemology, and I can’t wait to see all the new progress sure to continue with so many great researchers now working on this question.

  45. Since Isaac Levi was mentioned in the discussion, I thought it could be useful to quote what he has to say on the topic. As far as I understand, the paper quoted below represents his current view.


    “9. Depositing Paychecks
    Consider the by now overused bank examples discussed by Keith De Rose (1992).

    Case A: Mr. and Mrs. X agree that depositing their paychecks today (Friday) would be an inconvenience due to the long lines. Mr. X suggests that they drive home and deposit the checks on Saturday. Mrs. X points out that many banks are closed on Saturday. X says, “I know the bank will be open.” X reports that he was at the bank a couple of weeks ago on Saturday and it was open until noon.

    Case B: The scenario is the same as before except that the couple have written a very large and important check that might bounce if funds are not deposited in their checking account before Monday. After X declares his conviction that the bank will be open on Saturday and gives his testimony that it was open two weeks ago, Mrs. X asks: “Banks after all do change their hours. Do you know the bank will be open tomorrow?” Remaining as confident as he was that the bank will be open, X replies, “Well no. I’d better go in and make sure.”

    In both cases, the bank is open on Saturday so that X’s conviction was correct.

    De Rose sees a prima facie discrepancy between these two cases that can be explained away by maintaining that X’s declaration that he does know in Case A and admission that he does not know in Case B are uttered in different contexts of knowledge attribution. X’s claim to know in Case A is a different proposition than the proposition denied when X concedes he does not know in Case B. In both cases, De Rose maintains that X speaks truly. De Rose offers these scenarios as intuition pumps in support of contextualist conceptions of knowledge that have acquired a widespread vogue.

    Jason Stanley recognizes that the contextualism on offer here is primarily a linguistic thesis, explores its merits as such, and finds it wanting. Yet he takes the cases seriously and suggests an alternative account according to which the general truth conditions for knowledge attribution are the same in both cases but the truth of a knowledge attribution is relative to several factors including one controlled by practical interests. In Case A, the practical interests make the risks of being wrong in waiting until Saturday to deposit the checks fairly small. So it is true that X knows that the bank will be open. But the stakes are higher in Case B. The risk of being wrong in waiting until Saturday to make the deposit is too great to take. So X does not know that the bank will be open. Both Stanley and De Rose have come up with different rationalizations of what they take to be a real phenomenon typified by cases A and B.

    I have my doubts as to whether there is apparent inconsistency to rationalize. Recall in both cases, X and Mrs. X are making a joint decision so that the deliberation as to whether to go to the bank on Friday or Saturday is one that, if possible, should terminate in a consensus as to what to do. The knowledge that matters for decision-making is that of the joint agent constituted by X and Mrs. X together. Moreover, a clear-headed X would not claim to know in Case A and admit to not knowing in Case B. In both scenarios, X is certain that the bank will be open on Saturday. Mrs. X is not. In Case A, X, in spite of his full belief, which he expresses in the declaration that he knows that the bank will be open, offers a reason for believing that the bank will be open not because he needs one (he does not) but to offer considerations that might persuade Mrs. X to agree with him. Apparently in Case A, X succeeds.

    In Case B, Mrs. X is not mollified by X’s initial argument seeking to persuade her to agree with him. She mentions that banks sometimes change their hours of business. X sees that he isn’t going to persuade Mrs. X. So he suggests that they go and check out whether the bank will be open on Saturday. Both Stanley and De Rose claim X confesses that he does not know that the bank will be open tomorrow. Mr. X must be extremely browbeaten to confess to that. A far more likely scenario is that he agrees to check things out rather than engage in a hopeless debate with his wife. He does not say: “I guess I did not know after all. Let us check it out.” He says simply, “Let us check it out.” X is just as certain in Case B as in Case A. From X’s point of view, X knows that the bank will be open tomorrow in both cases.

    In short, I think the two scenarios fail to provide the kind of data for pumping intuitions that De Rose, Stanley, and many others think they do. The dispute between contextualists and interest-relative invariantists is a tempest in a teapot built on appeal to understandings of the verb “to know” that many of us do not share and do not pretend to comprehend.”
