New Experiment on Bank Cases

In a recent post, Keith DeRose offers some nice criticisms of existing experimental studies of bank case intuitions, along with some very helpful suggestions about how future studies should be conducted.

Wesley Buckwalter has now completed a new study that incorporates all of DeRose’s suggestions. Yet, though the methodology is now more sophisticated, the result remains exactly the same. Once again, one finds that explicitly mentioning certain error possibilities can impact people’s intuitions but that there is no difference between high stakes and low stakes.

At this point, then, we have a whole series of different experimental studies on the bank cases (here, here, here and here), and they all seem to be converging on the same basic result: one can sometimes change people’s intuitions by making error possibilities salient, but one can’t alter people’s intuitions by changing the stakes.

I would be very interested to hear any thoughts about the epistemological significance of these experimental results. What bearing might they have on debates about contextualism, interest-relative invariantism, etc.?


Comments


  1. Josh,

    I’m wondering how these results square with the lengthy literature on stakes in psychology. There is a long tradition that suggests that confidence estimates vary with stakes. As Mayseless and Kruglanski (What makes you so sure? Effects of epistemic motivations on judgmental confidence. Organizational Behavior and Human Decision Processes, 39, 162-183. 1987) write:

    “…subjective probability estimates often depend on specific beliefs and assumptions persons bring to bear on judgmental tasks. Beyond such cognitive factors, confidence estimates might be importantly affected by the motivational context (of costs and benefits) in which judgments are rendered (cf. Christensen-Szalanski 1978, 1980; Fischhoff & Beyth-Marom, 1983).”

    Mayseless and Kruglanski (1987) show that subjective probability estimates are affected by, among other things, what they call “fear of invalidity”:

    “…fear of invalidity…has to do with the desire to avoid judgmental mistakes, in view of their perceived costliness….fear of invalidity may often promote an “unfreezing” (of the epistemic process). By this is meant an increased tendency to generate alternatives to a currently entertained hypothesis, and/or an increased sensitivity to information inconsistent with the hypothesis. A fear of invalidity might be aroused under various situational conditions, e.g., by instructions stressing the importance of correctness and accuracy, via anticipated evaluation of one’s judgments by significant others, via an expectation that target persons would suffer costs if subjects’ assessments were incorrect, etc. (Ibid., p. 165)”

    The hypotheses that they tested and confirmed included the hypotheses that the fear of invalidity would “(a) lower the initial level of confidence in a hypothesis, (b) lower the magnitude of confidence shifts occasioned by new information, and (c) heighten the tendency to seek more information before making a final judgment.”

    The psychology literature suggests that shifts in stakes are what typically *lead* to raised salience of alternative possibilities. At least the papers I’ve looked at make a persuasive case that one’s confidence in the truth of a hypothesis is a stakes-sensitive matter. It would be a bit of a remarkable discovery to find that while one’s confidence about a hypothesis shifts with stakes, one’s judgments about whether one knows do not.

    How do you see the literature written by the philosophers on these topics interacting with the empirical studies done by psychologists? Or is empirical psychology not relevant here?

  2. There is obviously a vast psychological literature on the interaction of stakes with epistemic judgments. As you can see, I’ve only just started reading through it – I’m still over twenty years behind the most current stuff. I don’t mean to suggest that at the absolute end of the day it will turn out that there is decisive intuitive evidence for interest-relative invariantism. But what puzzles me is this: it seems well-established in the articles I’ve read so far that raising the stakes affects one’s judgments about subjective certainty – judgments which presumably reflect one’s beliefs about epistemic certainty – and yet those effects are ones that you guys haven’t been able to reproduce.

  3. Furthermore, there is no unanimity in the papers you cite. In the paper with May, Zimmerman, and Sinnott-Armstrong, you do find that stakes have an effect, while raised salience does not. As you write:

    “An analysis of variance conducted on our subjects’ degrees of agreement with “Hannah knows that the bank will be open on Saturday” revealed a significant main effect of the Stakes variable F(1,237)=4.36, p=.04, such that participants were less likely to agree that Hannah knew that the bank would be open when the stakes were high. However, contra Schaffer, the mention of the alternative (“banks do change their hours”) did not have a significant effect on whether subjects attributed knowledge (F(1,237)=1.16, p>.25), nor did the two variables interact (F(1,237)=.87, p>.25). So, while the mean responses reported in Table 1 may seem to indicate that the two variables are working together in HS-A to lower degrees of agreement among subjects, only Stakes had a main effect.”
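
    (For concreteness, here is a minimal sketch of how a 2×2 between-subjects analysis of this kind can be run – Python with pandas/statsmodels, on fabricated Likert ratings; none of the numbers below come from the study:)

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(0)
        rows = []
        for stakes in ("low", "high"):
            for alternative in ("absent", "mentioned"):
                # fabricated 7-point agreement ratings, one per participant
                for rating in rng.integers(1, 8, size=60):
                    rows.append({"agreement": rating, "stakes": stakes,
                                 "alternative": alternative})
        df = pd.DataFrame(rows)

        # two-way ANOVA: main effect of each factor plus their interaction,
        # mirroring the F-tests quoted above
        model = ols("agreement ~ C(stakes) * C(alternative)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))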

    You give a certain spin to this data that seems unwarranted. Rather than conceding it shows that stakes have some effect on our judgments about whether people know, you say that stakes (but not raised salience!) affect our *confidence* about whether people know. This seems awfully like the authors were antecedently committed to saying that stakes don’t affect our judgments of knowledge.

  4. By the way – I think it’s probably a bad idea to test these intuitions on undergraduates with things like banks and depositing checks and the like. Do undergraduates nowadays even have a clear idea of how these things work? My sense is that it’s really important to try to present people with cases that are similar to high-stakes situations that they themselves might encounter, or at least ones to which they could relate. I mean, that’s like presenting the actual Gettier cases to non-Americans – that would just be silly, because who knows what cultural assumptions people would be bringing to vignettes about Americans and their cars.

  5. Finally, there is recent work in the experimental philosophy tradition that supports the thesis that knowledge is stakes-sensitive, by Angel Pinillos. It is linked to here, on the X-phi site:

    http://experimentalphilosophy.typepad.com/experimental_philosophy/2010/06/new-experiments-support-the-thesis-that-knowledge-is-sensitive-to-stakes.html

    So it seems that we’ve got different results even using the survey methodology employed by philosophers, and then we have a literature in empirical psychology to work through…

  6. Hi Josh,

    I find Wesley’s paper really quite interesting, but not as a piece of evidence that the folk don’t get the pattern of intuitions generally taken to be supportive of IRI (or contextualism). I think the paper instead shows something about certain differences between lay and professional interpretation of epistemological vignettes (and if we could control for these differences, I have every reason to believe that the folk would get the same intuitions as the pros — not least because I think this pattern of intuitions gets generated by some pretty well-known psychological forces, as Jason has suggested).

    In the professional debates, contextualists and advocates of IRI are interested in figuring out whether certain non-traditional factors might play a role in our assessment of knowledge-ascribing sentences. They invite us to consider pairs of stories in which we are supposed to consider the traditional factors fixed (the truth of the proposition believed, the subject’s psychological attitude — confidence, outright belief, etc., and the factors which make it more or less likely that the belief is true — the subject’s evidence, the manner in which the subject is weighing this evidence, etc.). As professionals, we try to play ball in reading the cases, and hold those factors fixed — and then we see if we still get the intuition that a knowledge ascription in one of the paired stories is true, where a knowledge denial in the other is also true. If we get both the ascription and denial seeming true, when we’ve made the effort of reading the cases in the way the author intended, then it’s thought that there’s something fun and special going on. (No one is going to be very excited by a theory which says that sometimes the apparent truth-value of a sentence ascribing knowledge shifts when the subject it applies to gains or loses evidence, or when the proposition she believes turns out to be true in one story and false in the other….).

    I think it actually takes considerable effort to play ball and read the cases (which always involves fleshing out rough thumbnail sketches) in the way that will make them good test cases for contextualism/IRI. One problem with doing a between-subjects design on the lay public is that we are not doing anything at all to ensure that they are playing ball — e.g. we are doing nothing to ensure that readers of the high-stakes and readers of the low-stakes condition will think of the subjects in the stories they are evaluating in the same way as far as the traditional factors are concerned. And we have every reason to think that, left to their own devices, rational readers will NOT take the subjects that they are evaluating to be matched in this way. So, for example, there’s no stipulation in either case that the memory of the recent opening is all the evidence that the subject has for the bank’s being open — and realistically, both Hi and Lo would draw on some huge/open-ended body of evidence about retail operations, posted signs, whatever, and might have been presumed to have done various kinds of checking, etc. And crucially, Hi would naturally be anticipated to have done more checking. Ordinarily, as Jason notes, high-stakes subjects collect more evidence and weigh it more carefully before they make up their minds; participants assigned the assertion versions of the cases would naturally tend to attribute more evidence to the High Stakes protagonist than their counterparts would assign to her Low Stakes counterpart. There’s a pretty simple reason for this: normal evidence-collecting behavior is roughly adaptive, in that it balances expected costs and rewards. (In fact cognitive behavior in general follows the general rules of rational behavior — where it falls under our control, we exert ourselves roughly in proportion to expected pay off.) We go to the effort of collecting more evidence when we anticipate a significant payoff from greater accuracy in judgment on the question at issue (and yes, normal subjects are considered aware of the accuracy/effort tradeoff). Payne, Bettman and Johnson’s classic The Adaptive Decision Maker is a reasonable guide to the early literature on this, or if you want more recent stuff you can look at work by people like Benjamin Newell, Otto Rieskamp, Michael Lee and Tarrant Cummins.
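
    (To make that adaptive-rationality point concrete, here is a toy expected-value model of the checking decision – every number invented; the only claim is that the break-even amount of checking scales with the stakes:)

        def worth_checking_again(stake: float, accuracy_gain: float,
                                 effort_cost: float) -> bool:
            """Check again iff the expected payoff of the extra accuracy beats the effort."""
            return stake * accuracy_gain > effort_cost

        # same marginal accuracy gain and checking cost, different stakes
        print(worth_checking_again(stake=20.0, accuracy_gain=0.02, effort_cost=1.0))    # False
        print(worth_checking_again(stake=5000.0, accuracy_gain=0.02, effort_cost=1.0))  # True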

    So, as an example, consider a high/low stakes pair where Lo has a casual interest in whether a dish contains peanuts and Hi has a life-threatening peanut allergy. If different pools of subjects are given separated stories that include some mention of evidence collection and end in a declaration “I know this dish is peanut-free”, it would not be fatal to IRI to have participants in the two groups report roughly equal acceptability for these different-stake declarations, as long as it is natural for participants who read the Hi version to assume that Hi has more evidence, or thinks about it more carefully (as they naturally would, on the assumption that Hi is normally rational). Epistemological vignettes almost never fully specify the body of evidence a subject is drawing on. (And except with rather artificial cases it’s going to be hard to make them otherwise — precisely what pool of evidence *do* we draw on in judging that a bank will be open on a given day, anyway?) So where it’s high stakes for Hannah, and she says, “I know the bank will be open”, she comes across as having more evidence than has been specified in the story. (Intuitively: if you had to ask one person about whether the bank was open this Saturday, would you rather ask the person who had a huge amount hanging on that question, or the person with only a casual interest?)

    If we don’t go between-subjects: I think that in the one experiment I saw where participants were asked to evaluate cases presented pairwise (Lo right next to the corresponding Hi), the experimenters — Neta and Phelan, if memory serves — did get a pretty robust IRI-type result. I think that in this case participants read the conspicuous stake manipulation as the only difference between the cases, and inferred (see Norbert Schwarz on this) that everything else about the cases was to be held constant. That is, they were cued to construe the cases the same way that the professionals construe them, and they got the same intuitive responses.

    Anyway, it’s clear in the latest Buckwalter experiments that subjects are learning something not only from the basic story, but also from the moment at which the subject of the story makes her declaration. Here we can see some fairly direct evidence that between-subjects, the stories are not being read in a way that holds all the traditional factors invariant across the crucial pairs. The assertion and the denial versions of the various stories are all receiving high scores: it looks like participants who read the version of any given story in which Hannah declares herself to know are construing the facts of the story differently from the way in which their peers read the story in which Hannah declared herself not to know. One suggestion: in versions of the story in which she declares herself not to know, Hannah may come across as not having made up her mind whether the bank will be open or not — she is still looking for evidence, wondering about the matter. But if the crucial pairs are being read as different in whether the subject does or does not possess an outright belief (=traditional factor) then the intuitions they generate aren’t directly relevant to nontraditional and exciting theses like IRI. It’s not impossible to get participants to evaluate cases that they have set up and construed as matched pairwise in the way IRI wants, but it’s not easy to ensure that the construal is right.

    I’m also convinced that when we ask lay subjects to evaluate a sentence which has “I know” as the matrix predicate, they may just end up giving us an assessment of the embedded proposition (professionals are more likely to focus on the key mental state verb, knowing that it is what is relevant here). You mention that “there was a surprising main effect for attribution such that, across the board, people thought that knowledge sentences involving an assertion were more likely true than knowledge sentences which involved a denial (F (1,177) = 8.6, p < .01).” That’s a big effect. I expect people are hearing the “I know that..” in the assertion version of the case as what psychologists of mental state ascription call a “merely conversational” use of the term “know”. Conversational use figures on the denial side too: when someone says, “I don’t know that P”, they can simply be taken to be conveying doubts about P (as in “Gosh, I don’t know that the weather will be good for a picnic tonight”). Unless participants are cued to focus on the mental state verb, they can reject “what the person said” there as unacceptable not because they have any doubts about the truth of the self-attribution of ignorance, but because they want to indicate — if they’ve been told that the picnic weather will in fact be great — their resistance to the doubts the speaker has conveyed about the proposition of interest. Left to our own devices, we read for gist (see Anne Britt’s work on this), and the gist of a sentence in which people are talking about knowing/not knowing that the bank will be open can easily be taken to be something about whether the bank will be open rather than something about the speaker’s state of mind. There are contexts in which people will focus more on mental state verbs (disagreement contexts are good, for example).

    There are various ways of testing whether people are even registering the mental state verb. You could for example try a version of Anthony Sanford's semantic change blindness protocol — e.g. check if people even notice if you change the key assertion from "I know that the bank will be open" to "I think that the bank will be open".

    Sometimes it can seem that one population has very different intuitions from another where in fact the two groups have just the same underlying intuitive mechanisms, but different conceptions of their task. (Hugo Mercier has some great observations on this point in his paper on the universality of argumentative reasoning — when hunter-gatherers initially came across as incapable of syllogistic reasoning, this was because insufficient attention had been paid to ensuring that they were cued to deploy their in fact perfectly adequate syllogistic reasoning skills.) There is a pretty standard professional set of intuitions reported on the classic Bank and Airport cases (and DeRose's soon-to-be classic Office cases) — epistemologists with quite different theories tend to agree on those intuitions, even when it's awkward for them to do so. I think the fact that these cases produce quite similar responses in us is some empirical evidence that there is something uniform in us producing these responses, and I haven't yet seen reason to suspect that this factor which is operative in us would be missing in the general public. I have seen reason to suspect that considerable care is needed to cue laypeople to exhibit just the intuitive capacities that are of interest. But to say that it's hard isn't to say that we should give up (I've been struggling all summer to strip out confounds, and it's a long road).

  7. Hi Josh (and perhaps Wesley can weigh in)

    I’m glad this study was done!
    Looking at the figures (from the short note linked from the xphi blog – there is no full paper, so I had to fill in the details myself), I see that subjects in the experiments tend to think that the knowledge denial in high stakes is true and that the knowledge assertion in low stakes is also true (regardless of error). (If I am reading this right) why isn’t this just straightforward evidence that folk attributions of knowledge are sensitive to stakes?

    I anticipate the following response: “The denial and the assertion bias subjects to give certain answers that favor IRI or contextualism, so we should keep that variable fixed. We have to compare high and low stakes holding the speech act constant, and no good result for stake sensitivity was found here.”

    My response is that keeping the speech act fixed can also create problems (perhaps worse problems). This is why: Consider a high stakes/assertion speech act case. Suppose also that IRI is true and that subjects are aware of this (in some sense). When they see the sincere assertion of knowledge in the vignette they will at this point have some reason to think the person in fact knows. And since they accept that knowledge is sensitive to stakes, they will likely now think that the stakes are lower than previously thought. So now you have an unintended distortion on the perceived stakes in “high stakes”. It is very possible then that adding assertions or denials in the vignettes is causing some problems (this is perhaps a more serious issue for the between-subjects studies, for the reasons Jennifer points out above). After all, isn’t it a little fishy how in every single condition in this latest study (there are eight of them) subjects simply agree with what the protagonist says?

    Actually, if we are interested in cleanly testing IRI, I think it might be good to get rid of assertions or denials of knowledge in the vignettes (since IRI is an invariantist thesis, we can have the knowledge claims outside the vignettes). In my paper on this topic (which Jason Stanley cites and links above) I create new vignettes of this type (not bank ones–I invented new ones) and I get results that folk attributions of knowledge are in fact sensitive to stakes.

  8. Thanks so much to all of you for these very helpful comments and suggestions. I certainly won’t be able to address all of these interesting ideas myself — though I hope that other experimentalists will chime in — but I did want to quickly write back to ask for some further clarification about a remark Jason makes regarding findings in the social psychology literature.

    Clearly, it is important here to distinguish between (a) the epistemic *subject* and (b) the *ascriber* who claims that this subject does or does not have knowledge. (In the special case of self-ascriptions, the subject and the ascriber turn out to be the very same person.)

    Now, what the social psychology literature shows is that epistemic subjects tend to have lower confidence when they are in high-stakes cases. By contrast, what the experimental philosophy literature seems to show is that ascribers do not take the stakes themselves as relevant when arriving at intuitions as to whether a subject has knowledge.

    Jason’s suggestion seems to be that there is a kind of tension between these two findings. I could certainly be convinced that this is correct, but I wonder if someone could spell out a little bit more explicitly where that tension is supposed to lie.

  9. Josh,

    The link is this. Subjective confidence is a guide to what people think that their evidence is. If people take evidence to be stakes-sensitive, then one can easily explain why it is that when we are in high stakes situations, we have diminished evidence – we take the quality of our evidence to be lower in high stakes situations.

    Generally, whatever the experts in epistemology say, I’m pretty convinced that the folk take their evidence to be what they know. So there is your link to knowledge.

    More succinctly – we align our subjective credences with our epistemic credences. As stakes go up, our subjective credences diminish, revealing that we take our epistemic credences to diminish.

  10. At least that’s the story I’m spinning – that’s the explicitly IRI friendly story. It is admittedly a bit controversial.

    Here is a less controversial way to show that it would be shocking if the folk believed that knowledge wasn’t stakes-sensitive, yet took subjective credence to be stakes-sensitive. Knowledge requires *full belief* – in the vocabulary of the social psychologists, “settled belief”. There is overwhelming psychological evidence that settled belief is stakes-sensitive (see Nagel’s work for a summary of some of this research). It follows (as Nagel and Weatherson have pointed out, in different ways) that the folk will take knowledge to be stakes-sensitive. In short, the psychological literature on settled belief entails the presence of stakes-sensitive intuitions about knowledge.

    In contrast to what I said above, this Nagel-Weatherson line is advanced as an argument *against* IRI. In short, Nagel wants to argue that we can predict the IRI intuitions, without impugning epistemology – it’s all about settled belief. This is Weatherson’s line too, in “Can We Do Without Pragmatic Encroachment?”. The intuitions derive from the philosophy of mind, rather than from the epistemology.

    Nagel and Weatherson seem perfectly correct that stakes-sensitivity of settled belief entails stakes-sensitivity of knowledge. The issue in my dispute with them is whether they really avoid some version of IRI in the end.

    What’s radical about what some philosophers claim to have discovered (though not Angel!) is that knowledge is not stakes-sensitive. If this were right, we would have to conclude either that knowledge does not entail belief, or that the empirical psychology literature is wrong.

  11. Josh –

    So the social psychology literature shows pretty clearly that when you raise the stakes on a subject, that subject takes herself to know less than she otherwise would. This is something IRI smoothly explains. And was intended to explain.

    There *are* various intuitive problems with IRI (as opposed to contextualism) with third-person attributions, where the attributor is not the subject. These are well-known in the literature, and are extensively discussed in my own work as well as the work of other advocates of IRI, such as Hawthorne and Fantl and McGrath (as well as the contextualists). None of us needed surveys to realize that we are less sensitive to the subject’s stakes when the subject is not in our practical situation – this was intuitively obvious to all of us. I devote half a chapter of my book to responding to precisely this. Hawthorne and Fantl and McGrath also have developed responses.

    I’m not sure whether you mean to raise this issue by your latest comment. But if that’s what you take the upshot of the surveys to be, we didn’t need surveys for that. This is just the pattern of intuitions all the researchers took as given.

  12. Josh,

    In short – the pattern of intuitions I claimed we have is as follows. When I find myself in a high-stakes situation, I take myself to know less than I take myself to know in a lower stakes situation. This is what IRI is trying to explain – that we have different criteria of evidence when the standards are higher.

    Now all of us have known since the beginning of time that when *I* find myself in a high-stakes situation, I think that subjects in low-stakes situations know less than they take themselves to know. Contextualism can make sense out of a lot of these intuitions (though not all, as I argue), and IRI can’t. I say a lot about this problem in my book. Maybe what I say is unsatisfactory, but if so, it’s unsatisfactory for theoretical reasons.

    The idea with the bank cases is that you’re supposed to put yourself in the position of the knowledge-ascriber. In the case I call “High-Stakes”, you’re supposed to think that you are in fact High-Stakes Hannah. In “High-Attributor-Low Subject Stakes”, you are also supposed to be Hannah – the one making the attribution.

    There clearly seem to be difficulties arising in asking undergraduates to place themselves in the relevant situations. The social psychologists have circumvented this by actually placing the subjects in high-stakes situations. When placed in such situations, the subjects respond as the advocate of IRI says they would respond.

    In contrast to undergraduates, we professionals know what we’re being asked to do when confronted with these cases. Undergraduates don’t. But that’s irrelevant. We already know from decades of social psychology that in high-stakes situations, people take themselves not to know propositions that they otherwise take themselves to know.

    I also suspect that we didn’t even need the social psychology. We could just have imagined how *we* react in high-stakes situations – that is, consulted our own intuitions (like you did in discovering the Knobe-effect…).

  13. I should add that a number of other folks also advanced similar accounts of the IRI intuitions that Nagel and Weatherson develop. I recall both Kent Bach and Jon Kvanvig arguing that this is what is going on with the stakes-sensitive intuitions. I’m currently working on a paper responding to it, among other things (including working out as much as I can of the social psychology literature).

  14. Josh,

    Here is something useful experimentalists could do. Everyone in the literature agrees that when stakes go up, subjects take themselves to know less than they ordinarily take themselves to know. One response to this is the one I discussed above – that this is because in high-stakes situations, people lose the belief. And that’s why they lose the knowledge.

    In response to this, I do rely a bit on a case I call “Ignorant High Stakes”. I argue that if the person doesn’t recognize that they are in a high-stakes situation, but ought to recognize it, we do think that epistemic requirements are higher for them. The response that people lose the level of confidence that is required for full belief is rendered otiose here, because the subjects, by stipulation, retain their level of confidence. The question I would like resolved is whether or not there is any case for “ignorant high stakes” – whether we do in fact think that epistemic requirements are higher for people in ignorant high stakes situations.

    So there really is a use for you guys – ignorant high stakes is a contested case. The fact that some of you guys are having a hard time finding evidence for the *uncontested* cases is what freaks me out a bit though (though as both Jennifer and I have pointed out, there is a wide range of results across different X-phi studies).

  15. Thanks for all these really great comments! As I tried to respond to them, more popped up before I could finish, so this will just be for the first round.

    #3 and 5. Jason, about the unanimity of previous results, maybe the confusion here has to do with the way various researchers have interpreted or represented the relevant data. In my earlier bank paper I don’t detect any variance at all, and this was also similar to the F&Z results here and in a number of cases regarding stakes. Conversely, it is true that May et al. detect (a very small) variance for the factor of stakes, but make a big deal of the fact that mean judgments are significantly in agreement across the board when discussing their philosophical ramifications for certain theories of knowledge. And lastly, Knobe and Schaffer were able to detect a larger variance by the factor of error in the bank cases by making that error more salient. Of course, while all of these studies involve the same general bank concept, they are all different in various ways (for instance they ask different questions or use different stimulus materials). However, it seems like the common thing in most of these studies that Josh might have been alluding to is that overall, people tended to either strongly ascribe knowledge or judge knowledge statements a subject makes true (in cases where philosophers have said they wouldn’t) despite the stakes manipulation.

    Regarding the different results concerning stakes that Angel Pinillos posted on XPHIB, it’s a little difficult to know how to compare them, since what is specifically tested in that particular experiment (and also some of the psych results you discuss) is arguably very different from that of the current discussion. I also think Jennifer Nagel makes some good points in the comments section on the XPHIB post you linked to, which readers here might enjoy checking out. I think though, that the general thing to do here is not just to find some result which says something about mental processing related to stakes, but rather to examine the specifics of that particular study to see how these tests compare, what exactly they measure, and if they are really at odds.

    #4. Jason, I agree that it’s important that the stakes manipulation in the experiment utilize stimulus materials involving situations that participants are familiar with. I think that for this reason, it’s really important to run further tests of the same basic structure using different situations. Though in these particular experiments, the bank cases weren’t done on undergraduates or in classrooms, but on the web. Using the particular recruitment tool we did, mean age is about 30, about 50% have a bachelor’s degree, and participants make over $40k. I’m not sure if that means they are more or less likely to understand check cashing than 20 year-olds, but maybe it helps.

  16. Wesley,

    Thanks for your replies. I’m a bit confused by this comment:

    “Regarding the different results concerning stakes that Angel Pinillos posted on XPHIB, it’s a little difficult to know how to compare them, since what is specifically tested in that particular experiment (and also some of the psych results you discuss) is arguably very different from that of the current discussion.”

    I’m not sure what you mean here. For the reasons I’ve given above, the psych results bear directly at least on the topics on which I’m working (the thesis that knowledge shifts with stakes), and Angel’s paper does too. But maybe I’m not understanding what you mean here…

  17. There is indeed a great deal to discuss here! A few things for what they’re worth:

    (1) On Jason’s Comment #14:
    You say x-phiers can be useful by testing Ignorant High Stakes. Adam Feltz and Chris Zarpentine did (in their paper Josh K. linked to above in the post). They used your exact case but found results problematic for IRI (see pp. 8-9 of their paper draft online).

    (2) On Jennifer (Comment #6):
    You mention Neta & Phelan doing a within-subjects study with juxtaposed cases. They did find an effect of stakes, though that was about evidence and relies on a linking principle which isn’t obviously solid. But, in my paper with Walter, Jay, and Aaron, we did juxtapose low and high stakes cases concerning knowledge. And we did find an effect of stakes (see sect. 3, pp. 269-70). Yet, first, the means were about the same as they were for the corresponding between-subjects cases. So this might count against the claim that juxtaposition of the bank cases will change the folk intuitions.

    Second, although we found an effect of stakes, the means were again on the agreement side of the midpoint. And contra Jason’s ad hominem suggestion, we weren’t out to disprove IRI. Our reason for thinking this holds no matter what our aims were: if the claim is that we should find changes in judgments among the folk, then shifts on one side of the midpoint don’t alone yield a victory for SSI-ers. And this is so especially given that (as we point out, p. 273) intellectualists can explain the results (i.e. a shift in mere confidence) in a plausible way as well.

    (3) On Angel (Comment #7):
    Wesley might have reverse-scored the responses for comparison (as Feltz & Zarpentine did for their denial case), but I’m not sure. Wesley, can you make explicit whether you did or not? (Reverse-scoring is just the mirror-image transformation sketched at the end of this comment.)

    (4) On Alleged Disunity of Previous Results:
    I’d just like to second Wesley’s insistence on looking at the details of the studies and what the researchers are claiming. Many of the differing results are not troubling at all since, as Wesley points out, the studies differ significantly, both in design and in how the researchers interpret their results.
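
    (On the reverse-scoring point in (3): for readers unfamiliar with the term, it is just the mirror-image transformation sketched below – the 1-7 endpoints are an assumption for illustration, not necessarily the scale Wesley used:)

        def reverse_score(rating: int, scale_min: int = 1, scale_max: int = 7) -> int:
            """Mirror a Likert rating, e.g. 1 <-> 7 and 2 <-> 6 on a 1-7 scale."""
            return scale_max + scale_min - rating

        print([reverse_score(r) for r in (1, 2, 4, 6, 7)])  # [7, 6, 4, 2, 1]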

  18. Josh,

    I don’t put any stock in using “my exact same case”. I’m not a trained psychologist, and I don’t have the relevant training to do the relevant controls. Being perfectly aware of my own lack of competence in psychology, I’m very suspicious of anyone who uses my exact same cases. Surely, those people in psychology have degrees for a reason!

    I’m also suspicious of Feltz and Zarpentine because I recall (I have to look back) that they claim that intuitively, knowledge doesn’t vary with stakes. As I said above, this entails either that (a) knowledge doesn’t entail belief, or (b) decades of research in social psychology has been undermined by philosophers with little training in psychology.

  19. Again, to repeat – I did not mean the bank cases to be some kind of scientific experiment. I meant to call our attention to the obvious fact that when you are in a high stakes situation, you take yourself not to know things that you would otherwise take yourself to know. The contextualist is calling our attention to the fact that when certain things are salient to you, you take yourself not to know things that you otherwise take yourself to know. These points are obviously correct. Then there are some controversial cases – for example, ignorant high stakes, high-attributor/low subject-stakes. I’d like to see some work that successfully uncovers what is obviously correct, and therefore gives us some clue into the controversial cases.

  20. I’ll add my support to the notion that Angel Pinillos’s results showing stake-sensitivity really are relevant, and add some specific reasons why they may work better than the bank cases as test materials on people who aren’t epistemologists-instructed-on-keeping-the-relevant-factors-fixed. Angel’s cases involve much simpler evidence-collection situations (how many pennies are in the contest jar?) for which we won’t naturally be inclined to see Hi and Lo-stakes as having different relevant background information. The effort/accuracy tradeoff here is much simpler, too — we know well enough that when you’re counting about 100 pennies you’re going to be more accurate if you check a few times. Again, the bank cases are just fine if you give them to trained professionals who know what you are trying to do (=how to understand the problem, which is not to say that they already get or endorse the conclusion you are trying to draw). But they are pretty open-ended stories when contrasted to the more clean and controlled penny-counting cases.

  21. #7. Angel, this particular experiment was designed to meet some of the challenges Keith raised in his earlier paper about how the previous bank results relate to epistemic contextualism. One of the essential things about this was that characters of the vignettes actually had to make knowledge statements within the story. But you might totally be right that what you suggest could be a better way to test the predictions of other theories of knowledge, like IRI for instance. I should mention that in a previous version of my earlier bank paper, I do exactly that…and there’s essentially no difference in results.

    Regarding your point about truth judgments, you’re right that by looking at the numerical anchors on the Likert scales used, a mean significantly above 3 would constitute an aggregate judgment of “true” for pretty much all eight cells. Though, given the effects detected, I’m not sure why you think this is necessarily fishy. Maybe you could explain what you mean there? Our strategy in reporting the data was just to conduct analyses of variance to see which isolated factors – stakes, error, sentence type, or their various interactions – resulted in significant differences in the way people made the various truth judgments. We then found that, of the answers people gave, basically only the interaction between error and attribution type (and not stakes and attribution type) made such a difference. On the level of reporting these data, I would agree with your anticipated objection when you point out that there are some real technical problems with making inferences from a comparison of one particular attribution cell and a particular denial cell as evidence that it is in fact the *stakes* of the cases that are having an effect on the truth value judgments.
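
    (A minimal sketch of the kind of analysis just described – eight cells from a 2×2×2 design, cell means checked against the scale midpoint, then a full factorial ANOVA; the ratings below are fabricated stand-ins, not the actual responses:)

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(1)
        rows = []
        for stakes in ("low", "high"):
            for error in ("none", "salient"):
                for sentence in ("assertion", "denial"):
                    for rating in rng.integers(1, 6, size=40):  # fabricated 1-5 truth ratings
                        rows.append({"truth": rating, "stakes": stakes,
                                     "error": error, "sentence": sentence})
        df = pd.DataFrame(rows)

        # are all eight cell means on the "true" side of the midpoint (3)?
        print(df.groupby(["stakes", "error", "sentence"])["truth"].mean())

        # which isolated factors and interactions make a significant difference?
        model = ols("truth ~ C(stakes) * C(error) * C(sentence)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))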

    #6. Jennifer. Wow, lots going on here! I take your general point to be that the null stakes result can be explained – not by the notion that stakes don’t play a role in the way people judge the truth conditions of knowledge sentences, as IRI says – but by the possibility that the stakes cells actually manipulated an unintended factor (you speculate that this factor is one involving either (i) the perceived evidence the character has between lo/hi, or (ii) the actual possession of the belief between assertion/denial). I think this is an interesting possibility and agree that it’s often tough to interpret nulls. I wonder, is there a good way to test your alternative hypotheses about this new way you suggest stakes might be working in the bank cases, or a way to narrow in on what exactly this new evidence is that you think people are using in hi to mask the true effect of stakes that IRI assumes?

    You say, “it’s clear that subjects are learning something not only from the basic story, but also from the moment at which the subject of the story makes her declaration. Here we can see some fairly direct evidence that between-subjects, the stories are not being read in a way that holds all the traditional factors invariant across the crucial pairs.” About this, I agree it’s true that the factor of attribution type does have an effect on participant judgments, but why is it obviously the case that this effect derives from your “not playing ball” factor and not from one of the factors the experiment intended to manipulate?

    You also advance another alternative explanatory hypothesis regarding the tendency of the folk to focus on embedded propositions rather than the target mental state verb. I was wondering if you could say a little more about how you take this to account for the results of the study; I’m not sure I was following that part. If the merely conversational thing were true, I would have thought (and still am surprised by this) that people would have found the denial sentences more true than the attribution sentences. Or am I getting this mixed up?

    #16. Jason, all I meant by the comment was that it might be helpful not just to look at whether “knowledge shifts with stakes” but to focus on all the various different conditions under which this phenomenon arises (such as our willingness to attribute, to evaluate truth conditions, our desire to collect sufficient evidence, the subject/attributor distinction), especially if one study is to be used as leverage to problematize the results of another, etc.

  22. Jason,

    I didn’t mean to suggest that Feltz & Zarpentine’s test of your Ignorant High Stakes cases settles the issue, or that there aren’t avenues of response. I simply wanted to make clear that there has been the beginnings of some exploration there.

  23. Hi Wesley —
    The embedded proposition line could explain why people are more likely to accept assertions than denials of knowledge of propositions they have just been reminded are true propositions. To see this, take some proposition you know to be true (say, “Barack Obama was born in the United States”) and imagine reading vignettes about people who end up saying either (a) “I know Barack Obama was born in the United States” or (b) “I don’t know that Barack Obama was born in the United States”. If you are not an epistemologist but just a lay participant invited to evaluate the truth of “what this person just said” — you could well feel that the (b) type utterance comes off as wrong — if you read the “I don’t know that..” in a conversational way (as expressing doubts about the embedded proposition) rather than in a mental-state-self-attribution way. (I mean, if you don’t have your epistemologist hat on, you might say: “What you’ve said is wrong!” to the speaker of a b-type utterance even if you are well aware that this person is the kind of Tea-Partier who really doesn’t know that Obama was born in the USA because he doesn’t even believe that Obama was born in the USA.) We don’t always focus on mental state verbs as indicating mental state attributions. “I think that..” also has pretty heavy conversational use as a hedging device, too — when someone says “I think that p”, we often go right ahead and say “that’s wrong” where we don’t accept p ourselves (even if we don’t doubt that the person we are talking to is a p-believer). Does that help?

  24. Oh right, but what I was wondering was how that would explain the main result of the study, the error*attribution interaction.

  25. Is that the main result? It’s a pretty complicated 3-way interaction with no intuitively obvious explanation, and so I think it would be premature to try to speculate about what is going on. One would want to run follow-ups to see whether it’s robust across stories with different content, and also to weed out some of the noise in your data (like the noise that might be behind the “surprising” asymmetry in assertions and denials, which I think might be coming from people focusing on the embedded sentence rather than the mental state verb as such). The magnitude of that weird asymmetry is statistically greater than the interaction you are claiming as your main result, which should give you some reason to be cautious in drawing conclusions from this study.
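
    (One way to make that magnitude comparison concrete is to compute an effect-size measure such as partial eta squared for each term of the ANOVA table – sketched below with invented sums of squares; the term labels are placeholders:)

        import pandas as pd

        def partial_eta_squared(anova_table: pd.DataFrame, effect: str) -> float:
            """Partial eta^2 = SS_effect / (SS_effect + SS_residual)."""
            ss_effect = anova_table.loc[effect, "sum_sq"]
            ss_resid = anova_table.loc["Residual", "sum_sq"]
            return ss_effect / (ss_effect + ss_resid)

        # invented sums of squares, purely to show the comparison
        table = pd.DataFrame({"sum_sq": [12.4, 21.0, 310.0]},
                             index=["C(error):C(sentence)", "C(sentence)", "Residual"])
        print(partial_eta_squared(table, "C(error):C(sentence)"))  # ~0.04
        print(partial_eta_squared(table, "C(sentence)"))           # ~0.06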

  26. I’m really enjoying the fascinating exchange. Here’s a small comment to add. I’m in considerable sympathy with Jason’s suggestion that it would be very surprising to conclude that knowledge intuitions don’t vary with stakes, given the wide empirical consensus that full belief (and intuitions about full belief) vary with stakes. Nevertheless, I think Jason overstates the point when he says, in comments 10 and 18, that “this entails either that (a) knowledge doesn’t entail belief, or (b) decades of research in social psychology has been undermined by philosophers with little training in psychology.”

    In particular, the (a) horn of the dilemma is stated too strongly. It’s consistent with the stakes-sensitivity of (full) belief and the insensitivity of knowledge to stakes that knowledge entail belief, even full belief. After all, knowledge might entail the highest standard of full belief, which would itself entail weaker full beliefs. (For a related point cogently made, see p. 79 of Stanley (2005).)

    I don’t know that this loophole is exploitable; the view in question doesn’t look especially plausible. (Perhaps the very same cases are firmly established to undermine full belief, but now suggested not to undermine knowledge.) But it should be considered, so long as we’re thinking about what entails what.

  27. Hey Jennifer, I agree with you that the complicated three-way interaction I report has no obvious intuitive explanation (besides luck). Though I’m a little confused why you think it’s premature to speculate about what happened in the experiment. In your comments, I took you to be doing *exactly that* when you offered this further alternative explanatory hypothesis about what people in the experiment tended to do to account for these new bank data. I was simply asking you to clarify how you thought your hypotheses could specifically explain all the results in hand. So?

  28. Angel,

    First of all, I wanted to say that your recent experimental studies are really impressive and intriguing, and I am excited to see how you develop this line of research in further work.

    But I’m not quite clear on what you are saying about the bank case experiments. You seem to be suggesting that the very fact that people agree both with the assertions and with the denials provides evidence for the claim that stakes are having an impact. But this fact about assertions and denials does not initially seem to have anything to do with stakes per se. (In the paper by DeRose linked in the original post, the effect is explained entirely in terms of principles of accommodation.) So could you say a little bit more about how the argument here might work?

  29. Jennifer and Wesley,

    Just wanted to make a quick clarification about Wesley’s comment. Presumably, he wasn’t initially talking about that complex and somewhat bizarre three-way interaction. Rather, it seemed like he was just asking Jennifer for an explanation of the more straightforward effect found for error possibilities, namely, that making the error possibilities more salient causes people to be less inclined to agree with knowledge assertions and more inclined to agree with knowledge denials.

  30. Jason,

    This clarification is very helpful. So it seems at this point that there is a tension between three claims:

    – The claim, backed by research in experimental philosophy, that stakes have little or no impact on ascribers’ intuitions in bank cases

    – The claim, backed by research in social psychology, that stakes do have an impact on subjects’ degrees of confidence

    – The conditional claim that if stakes have an impact on subjects’ degrees of confidence, then stakes have an impact on ascribers’ intuitions in bank cases

    You suggest that the tension between these three claims gives us reason to call into question the first one (the experimental philosophy finding about intuitions in bank cases). I certainly see the appeal there, but one might also go in the opposite direction. Given that we have empirical evidence supporting the first two claims, one might think that we thereby acquire specific reason to deny the conditional in the third claim. (For example, one might think that the subject’s degree of confidence is simply specified in the vignette itself and therefore isn’t inferred in ways that depend on the stakes.)

  31. Hi Josh,
    Thanks for that clarification — that helps. I should say the “embedded proposition” line was just intended as a partial explanation of something surprising that seemed to be going on in the experiment — something I would see as a source of noise, relative to the signal we are looking for.

    I’m actually happy to agree that making error possibilities salient will make people less inclined to agree with knowledge assertions and more inclined to agree with knowledge denials. I’ve been testing that myself and have found a clear effect. If you’re curious, you can see something about my line on this here. Other things being equal, where there’s an open question relevant to the proposition we are judging we expect the agents we are evaluating to resolve it on the basis of some evidence; if Sarah raises the question of a change of hours, we expect Hannah to have or collect some evidence to rule that out. It looks bad if she brushes off the challenge. Actually, it would look even worse if it were really clear in the story that she didn’t have any information on this point — as it stands, her reply: “I was just there; I know that the bank will be open tomorrow” could easily be taken to carry the implicature that she checked that there was no change in hours when she was there (so she’d be represented as having extra evidence which wouldn’t naturally be attributed to her “counterpart” in Low Error Salience). As it happens I don’t think that we need to adopt either contextualism or IRI to explain why intuitive knowledge ascriptions show sensitivity to the mention of error, but I don’t dispute that sensitivity.

  32. Josh,

    I am sorry but I don’t accept the results of the X-phi studies as evidence. There are just too many problems with them. In general as I said I am suspicious about using the actual bank cases to test the view that knowledge is stakes-sensitive or attributions are context-sensitive. There are just too many potential confounds. The cases weren’t written up by trained psychologists.

    I will get back to the conditional claim later. Ichikawa is right about the falsity of the entailment claim, but as he said that position isn’t very plausible.

  33. I too find it puzzling that the intuitions found in these experimental studies diverge so systematically from the intuitions of trained epistemologists. This is definitely a striking phenomenon, worthy of further study, but I thought it might be helpful to make two quick remarks about it.

    First, it may be helpful to keep in mind that other sorts of theoretical claims about people’s epistemic intuitions actually have been systematically confirmed in studies using this same basic methodology. For example, Beebe and Buckwalter have a recent paper showing that people’s intuitions about knowledge can be impacted by their moral judgments. This basic finding has been replicated again and again in further studies, and it now seems clear that it is a real effect, which would have to be accommodated by any correct theory of people’s ordinary epistemic intuitions. So it is striking that the very methodology that so clearly confirmed the Beebe-Buckwalter hypothesis is not confirming this claim about stakes in the same way.

    Second, it seems that the divergence between the judgments of epistemologists and the folk intuitions can be explained using a theoretical framework that has proven helpful in numerous other domains of experimental philosophy. Suppose that people’s epistemic judgments are shaped by two distinct psychological mechanisms:

    (a) an implicit capacity that enables people to arrive at epistemic intuitions in particular cases

    (b) more abstract or theoretical beliefs about which factors are epistemically relevant

    Then it might be that people in general — both epistemologists and ordinary folks — hold a more abstract belief that the stakes should be relevant in cases like the ones under discussion here. However, it could also be that people’s more implicit capacity to arrive at intuitions about individual cases does not take stakes into account in this way.

    (Neta and Phelan provide specific empirical evidence for such an explanation in their paper. They end up characterizing the claim that stakes impact people’s intuitions as ‘neither surprising nor true’ — meaning that people do think that their intuitions are impacted by the stakes but that, in fact, this view is incorrect.)

  34. Hi Josh,
    I don’t quite understand your proposed explanation of the professional/amateur differences, given that you are attributing similar resources to both groups (implicit capacity plus theoretical beliefs). Did you mean to suggest that the theoretical beliefs of the pros guide their judgments more than the theoretical beliefs of the amateurs? One difficulty with that suggestion is that epistemologists with strongly opposed explicit theoretical commitments seem to share a lot of the same intuitions, and to generate similar intuitions when new cases are proposed. Or maybe there’s some special kind of theoretical baggage you had in mind as afflicting professionals in particular? I wasn’t sure.

  35. Josh,

    I didn’t understand your suggestion of how to square the results in social psychology with the results that most of the non-Angel-Pinillos experimental philosophers have gotten. Suppose we think of confidence as stakes-sensitive, as the social psychology shows we do. If the subjects asked to consider the vignettes are reading the confidence levels as fixed across different high-stakes and low-stakes subjects, as you suggest, they are therefore treating the people in higher-stakes situations as having *more* evidence (since that would be what is required to have the same level of confidence as in the low-stakes situation).

    In short, this is exactly the confound that Jennifer Nagel has been worried about throughout. Subjects are assuming that the people in the high stakes situations have acquired more evidence than the people in the low stakes situations.

    In short, if you are right that the subjects in those X-phi papers that show no effects of stakes are holding the confidence levels of the subjects in the high-stakes and low-stakes vignettes fixed, then, given the social psychology, it follows that the subjects are assuming that the high-stakes subjects have more evidence than the low-stakes subjects. That would undermine the desired conclusions these authors want to draw (that knowledge isn’t stakes-sensitive).

  36. Jennifer,

    Yes, you’ve got the basic idea. The thought is that philosophers proceed by considering the two cases and engaging in a kind of reflection about whether the difference between these cases has a real epistemic importance. By contrast, participants in the studies just look at a particular case and have an immediate intuition as to whether or not the epistemic subject has knowledge. These two approaches to answering the question may draw on two very different kinds of psychological mechanisms.

    Incidentally, I think the contrastive effects that Jonathan Schaffer and I find in our paper have exactly the opposite character. In essence, we show that people sometimes think it is true that (a) Mary knows that it was Peter who stole the rubies but that it is not true that (b) Mary knows that it was the rubies that Peter stole. This effect does seem to come out consistently in people’s intuitions, but when I talk with professional epistemologists, they often feel that it is not the sort of thing one could endorse after reflection.

  37. Jason and Jennifer,

    This is a nice hypothesis, which is definitely worth exploring. Just as you say, it could be that people ascribe more evidence in the high stakes cases and that this ascription of greater evidence exactly cancels out the effect of stakes, leading in the end to no discernible effect on people’s overall judgments.

    I just wanted to ask two questions about the idea here:

    First, do you have any suggestions about how one might go about testing it? If so, I’m sure that people would be very interested in pursuing the project and trying to get to the bottom of this issue.

    Second, regardless of what is going on with these stakes effects, I wonder what you think about the various experiments that do seem to show an effect of conversational context. In the studies that Schaffer and I did, people offered different intuitions about the very same epistemic subject, when all that was changed was the conversational context of the ascriber. Similarly, in this new study from Buckwalter, there is no effect of stakes, but there is an effect of making error possibilities salient. Do you have any thoughts about how these results might be explained?

  38. Hi Josh —
    About testing the hypothesis that evidence is not being held constant, one could try simpler stories in which the level of evidence ascribable is better controlled. Angel’s penny-counting case is like this (and he got the IRI-type result).

    I think that making error possibilities salient creates the expectation of particular evidence on a given point, rather than greater evidence in general, which is one reason why it’s easier to find the effect on testing of the kind that Wesley was running.

    I really want to emphasize that epistemologists aren’t resting their cases on intuitive reactions to specific stories, taken out of context, but on intuitive reactions to very specific construals of these stories. The epistemologists’ ways of construing the key stories often involve various stipulations which might not naturally occur to a reader (for example, the stipulation that the subjects in the Bank Case all start with exactly the same evidence). These are stipulations that epistemologists who are aware of what theory is being tested will try to honour (perhaps with effort).

    On the experiments you did with Schaffer, I think there’s reason to believe that participants were not responding differently to the same proposition (in different contexts), but that they were actually representing and evaluating different propositions. Maria Aloni and Paul Egre have a nice paper on that here.

  39. This is what I wonder about the psychology results concerning confidence and stakes (and how they relate to issues of knowledge). Suppose Abe and Zelda are in equivalent financial situations and are generally equally risk averse when it comes to accepting bets. Both are inclined to think that the Magic will not win the Eastern Conference of the NBA next season, but, in large part because he’s put more time into studying the matter, Abe would accept any anti-Magic bet that Zelda would, and some others as well. Zelda has been offered an even-up $200 bet, where she would be betting against the Magic. Abe has been offered a similar bet, except that it’s for $200,000. Zelda immediately accepts the bet, just as Abe would if he were in her shoes. Abe hesitates and asks for more time to look into the matter before deciding on this financially momentous matter, just as Zelda would if she were in his shoes. Zelda isn’t at all worried. Abe is very worried, is sweating a bit, and won’t be able to sleep tonight. Abe wants time to get more evidence about the matter. When Zelda was offered time to look for more evidence, she brashly said she didn’t need any more stinking evidence–she was ready to go right now, sucker. (If they switched positions in terms of the size of the bets they were offered, they’d also switch worry-levels and further-evidence-desiring levels.)

    Now, my wonder: How are the psychologists who are doing these studies (when, for instance, they conclude that stakes impact confidence) construing “confidence”? Would they rule that Zelda is more confident that the Magic won’t win than Abe is, since, for instance, she’s ready to act now without further evidence, while Abe wants to look into the matter further?

    If so, is there not a good sense in which Abe is the one who is more confident that the Magic won’t win? After all, he’ll take every anti-Magic bet that Zelda will, and then some, and Zelda would want more evidence, too, if the stakes were so high for her. Isn’t there some good way of thinking of confidence such that Abe is more confident that the Magic won’t win, but even that higher level of confidence isn’t enough to generate confident behavior, given the gravity of Abe’s situation? Might this way of thinking of confidence be the one most tightly connected with knowledge?

  40. Hey Jennifer,

    Before, you were suggesting the embedded proposition line as a partial explanation of something that happened in the study (the main effect for attribution). The reason I was asking you to elaborate about how this might explain other results here was that it seemed to me that if you were right about that, then we shouldn’t have gotten the interaction for error that we did. In other words, if the study had turned up completely null, then the family of various “not playing the game” objections (like, for instance, not focusing on mental state attribution) could really get off the ground. Yet the fact that the predicted variance was detected for the interaction of attribution and one factor, but not the other, makes me think that we have some reason to believe participants are playing the relevant game in this particular experiment; we just might not all like the score.

    Similarly, though, it seems like in the various combinations of cases, the same line of reasoning could be used to question your objection about the accidental manipulation of the perceived evidence in these cases. That is, if you’re right about this, shouldn’t the amount of perceived evidence accidentally manipulated by high stakes cases also influence participants’ answers when it comes to error, in those same exact cases? I would have thought that if the stakes of the various cases accidentally raised perceived evidence, that difference in evidence would have obscured the error effect. Shouldn’t the fact that this didn’t happen give us reason to doubt your explanation?

  41. Since it was suggested to me that I weigh in on this discussion, and since I doubt what I say in #39 is what was envisioned, I’ll add two very quick things.

    First, as I indicate in the paper Josh links to, I do take the issues tied to survey methodology that I mention in sect. 1.6 (and that Jennifer has been explaining a bit in this discussion) to be the most serious worries affecting the x-phi work in epistemology that we’re discussing (including the new studies now being reported). I was raising distinct issues that I thought should be brought up, but that were not, in my own judgment, as pressing. And as I indicated in that section, I judge the methodological worries that were not my focus to be very substantial. As I write (p. 17), in my judgment those worries “by themselves are pressing enough in the case of the literature we have been looking at to make it unwise for participants in the ongoing philosophical debate over knowledge attributions to significantly alter their (our) debate in reaction to these results – though we should be open to new, better empirical results having a legitimate impact on the debate.” I guess this puts my attitude in the same ballpark as what Jason expresses above in #32 — though not quite so strongly negative. On the other hand, people shouldn’t care much about my opinion on this matter, since, as I stress, I’m very far indeed from being an expert on survey methodology. (And I’m a very interested party, to boot.)

    Second, concerning Josh’s question in the post: Contextualism per se isn’t committed to stakes being relevant, though, of course, particular contextualist theories could be. My own views on the role of stakes (see the bottom 4 lines of p. 9 and p. 10 of the paper) are kinda subtle, and it isn’t clear how much I should be bothered.

  42. Josh,

    About the effects of conversational context – two quick points. First, IRI and contextualism are consistent with one another. One (contextualism) is a semantic thesis, about instances of the predicate schema “knows that p”. The other (IRI) is a metaphysical thesis about the nature of knowledge. One can endorse both the semantic thesis of contextualism and the metaphysical thesis of IRI. Second, all parties to the debate have always agreed that conversational context affects our intuitions (again, supporting Jennifer’s point that experts with very different views nonetheless concede the data). We all agree that there are certain intuitions we have that contextualism can explain, and that an advocate of IRI who denies contextualism cannot explain. Different advocates of IRI have different reactions. Hawthorne combines his “subject-sensitive salience constraint” with an error theory of the relevant cases, I give an error theory, and Fantl and McGrath tentatively endorse both contextualism and IRI.

    As Fantl and McGrath in particular emphasize (and me too at points, though not as explicitly and admirably as Fantl and McGrath), the case for IRI isn’t really one that is based on intuitions. It is based on a rejection of skepticism and philosophically motivated links between epistemic concepts and concepts of practical rationality. Still, if there were *no* stakes-sensitive intuitions, one would worry something had gone wrong.

    In short – the presence of contextualist intuitions is not a big worry for the advocate of IRI. The absence of stakes-sensitive intuitions would be.

    Hey Jason (42), thanks for the helpful theoretical clarifications on conversational context! I was thinking, though, that presumably Josh (37) wasn’t asking Jennifer to explain the CC results in his earlier work with Jonathan Schaffer, or the results of the current bank study here regarding error, in the sense that he thought IRI wasn’t consistent with contextualism or anything like that, but rather in the sense I was suggesting in (40), on the level of interpreting data. For instance, the thought might have been about trying to reconcile how (i) when we collapse across all these cells to make comparisons we could get one beautiful predicted effect (error) but not another (stakes)…with (ii) the particular confoundy hypothesis we were all discussing, that participants’ ascription of greater evidence exactly cancels out the effect of stakes, leading in the end to no discernible effect on people’s overall judgments. What I was saying before was that it seemed like if there was this weird evidence thing happening in the two high stakes/low error and high stakes/high error cells, then the resulting raised evidence would have also cancelled out the error effect.

    I agree with you that all parties in the debate have always tended to agree that conversational context affects our intuitions, even perhaps independently of whether or not it helps advance their particular professional views. But what I was thinking is that those of us who think (i) that experimental epistemology is stacking up all this evidence to the contrary concerning actual folk intuitions or practices, and (ii) that the goal here is to somehow capture or do justice to these actual intuitions or practices, would probably also think that the presence or absence of stakes-sensitive intuitions really is not a matter best decided by whether contemporary epistemologists all agree on it, but rather by going out and running the relevant tests!

    Hi Josh, thanks for your comment. Concerning Josh #28, I don’t want to say that subjects agreeing with assertion (in low stakes) and denial (in high stakes) supports folk stake sensitivity. In my post (#7) I was just asking why you guys didn’t take this to support it. Above, you suggest that it does not support it because the result can be explained by the rule of accommodation. But the most straightforward way of working this out in detail (that I know of) comes from Lewis, and it assumes contextualism. If you are a stake-sensitive invariantist (a non-contextualist), then it is not obvious how you can help yourself to this rule. Are we assuming contextualism when we assess the results of these experiments? Why can’t the IRI non-contextualist explain the results by saying that stake sensitivity is true?

    The main point I would like to make, however, and as a way to also respond to Wesley #21, is that adding assertions or denials in the vignettes is problematic. I discussed some of these reasons in my paper (linked by Jason in #5) and also in my #7. But perhaps the nicest discussion of this appears in Jennifer’s #6. The basic point is that denials in high stakes can have a number of unintended effects (from the perspective of the experimentalist), including an effect on perceived evidence and/or perceived stakes, as I urge.

    However, it is possible to test folk stake sensitivity without adopting this problematic feature. In my paper, I did this and I got the result (in a variety of ways) that the folk really behave as if knowledge is sensitive to stakes. I would love to see more studies where there are no assertions or denials in the vignettes. My bet is that we will start to see some sensitivity to stakes.

  45. Wesley,

    Jennifer has already addressed this – but the fact that professional epistemologists all agree on some data should make you at least consider what professional psychologists say about the issue. That’s all I can say now because the boarding door has closed.

  46. Wesley,

    Ok, now I can address your first point. What we need to do is to go and look carefully at the social psychology literature on confidence and see whether it motivates a change in confidence based on raised salience but not stakes. If confidence is only intuitively stakes-sensitive but not intuitively salience-sensitive, that would support the explanation here. Remember, the really weird thing is the tension between the results in social psychology that confidence is stakes-sensitive, and your failure (but not Angel’s) to find any stakes-sensitivity of knowledge. That really needs to be addressed, given the multiple links between confidence and knowledge.

    I also think one needs to think much more about what salience is. In the social psychology literature, raised stakes go along with heightened awareness of counterpossibilities. It’s natural to think that raising awareness of counterpossibilities might lead people to think more is at stake.

  47. Wesley,

    In general, I’m a bit confused about what these subjects are thinking. The papers I’ve looked at in social psychology suggest that raised stakes go along with an increased consideration of counter-possibilities. This is supposed to be because we use different heuristics for evidence in low and high stakes situations. You’re trying to increase consideration of counter-possibilities, without raising the stakes. Wouldn’t it be natural for a subject to assume that the counter-possibilities are being raised, because the stakes are higher for the subject of the knowledge attribution? Why else are they being asked to consider different counter-possibilities?

    Conversely, one might wonder whether you have successfully gotten the subject to think of the subject of the knowledge-attribution as being in a high-stakes situation. If you have successfully gotten the subject of the experiment to focus on a high-stakes situation, that should lead to increased salience of counter-possibilities. Any disparity you get here is as much evidence of poor experimental design – a failure to get the person to consider a high-stakes situation.

    In general, it really doesn’t seem that these factors are as easy to separate as you guys think.

    You still haven’t addressed head-on why Angel is getting results that contradict yours. Unlike the empirical work in psychology, the philosophers’ surveys don’t seem to deliver such solidly uniform results.

    Hey Jason. As I was suggesting before, I don’t see how these particular bank data support the principal objection born from Jennifer’s #6 that you and Angel #44 have been discussing. I could certainly be convinced of this; I just need someone to explain to me how it’s specifically supposed to account for the data in this particular experiment, given what I say in #40 and #43.

    Maybe, big picture, our principal disagreement is revealed in your exchange with Josh in #30 and #32, when you just deny that x-phi studies count as evidence. Conversely, I don’t think that the fact that results in social psychology in related areas of mental processing suggest that stakes do have an impact on subjects’ degrees of confidence means that we can therefore happily ignore the research in experimental philosophy suggesting that stakes have little or no impact on ascribers’ intuitions. Instead, I think it means we need to take a look at, test, and perhaps reevaluate some of the connections we’ve all assumed exist between subjects’ degrees of confidence and the impact stakes have on ascribers’ intuitions in situations like bank cases.

    Now, about your question of why I think Angel’s results seem to differ from almost all the other tests that examine ascribers’ intuitions. For bloggers who haven’t read the paper: in his experiments Angel gave participants high and low stakes stimuli involving student x who had to proofread an assignment for typos. He then asked people, “How many times do you think x has to proofread his paper before he knows that there are no typos? ____ times.” The result was that participants were more likely to fill in a larger number when the stakes were higher. I was thinking, though, could these differences be subject to predicate blindness in the DV? That is, would answers differ if the relevant measure was “How many times do you think x has to proofread his paper before he believes that there are no typos?” or even just thinks there are no typos?

    I was thinking what we should do is just test this worry, by running the same experiment with these different predicates. Then if we get the same differences for something like ‘thinks’ as the original study did for ‘knows’ between low and high cells, we might have the beginnings of a good explanation for why these results differed from almost all the other x-phi work on ascribers.
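
    Just to make this concrete: here is a minimal, purely illustrative sketch (in Python; all numbers below are invented placeholders, not data) of how the numeric “____ times” responses from such a predicate-swap study might be compared across the low and high stakes cells:

        from scipy import stats

        # Hypothetical responses to "How many times does x have to proofread
        # his paper before he <predicate> that there are no typos?"
        # All values are invented placeholders for illustration only.
        responses = {
            ("knows", "low"):   [2, 2, 3, 2, 3, 2],
            ("knows", "high"):  [4, 5, 4, 6, 5, 4],
            ("thinks", "low"):  [2, 3, 2, 2, 3, 2],
            ("thinks", "high"): [4, 4, 5, 5, 4, 6],
        }

        # If 'thinks' shows the same low/high gap as 'knows', the stakes
        # effect would not appear to be specific to knowledge attribution.
        for predicate in ("knows", "thinks"):
            low = responses[(predicate, "low")]
            high = responses[(predicate, "high")]
            t, p = stats.ttest_ind(low, high, equal_var=False)  # Welch's t-test
            print(f"{predicate}: t = {t:.2f}, p = {p:.4f}")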

  49. Wesley,

    I looked carefully at my messages, and don’t see where I “just deny” that X-phi counts as evidence. There is a cultural difference between philosophers of language and philosophers of psychology. We spend a lot of time reading and summarizing the relevant empirical literature. In philosophy of psychology, you guys don’t seem to draw on the work of the psychologists, I guess figuring that professional philosophers are just smarter or something. If I combine this cultural difference with the differences in results you get, and all the confounds that are present, I lose confidence in the methodology.

    I am struggling now with the relevant psychology literature. It’s hard. I probably would learn more from a really good survey of it from a philosopher of psychology than from another decade arguing about philosophers’ surveys.

  50. Wesley,

    But it’s useless to keep trading general complaints about methods. The only thing we can do is talk about potential confounds in your study. I’ve just raised some additional worries in #47.

  51. Wesley,

    I think I now understand why you think I said I don’t accept X-phi as evidence. When I wrote in 32:

    “I am sorry but I don’t accept the results of the X-phi studies as evidence. There are just too many problems with them. In general as I said I am suspicious about using the actual bank cases to test the view that knowledge is stakes-sensitive or attributions are context-sensitive.”

    I am clearly not speaking of X-phi in *general*. I’m speaking of the X-phi papers done so far on stakes-sensitivity. The rest of my messages cover why – conflict with the social psychology literature on confidence together with links between confidence and knowledge, too much reliance on “bank case” type examples, etc. I’m all for the project in general. I’d just like to see more engagement with the methods and literature of psychology.

  52. Hi Wesley,

    This is a response to your comments (#48) on my paper,

    http://experimentalphilosophy.typepad.com/experimental_philosophy/2010/06/new-experiments-support-the-thesis-that-knowledge-is-sensitive-to-stakes.html

    which I think gives good evidence that folk judgments are sensitive to stakes. To give background, here’s an example of a probe I discuss in that paper. I give subjects either a high or a low stakes situation (high or low consequences for having typos in a class paper) and ask, “how many times do you think that Peter should proofread his paper before he knows that there are no typos? _____ times”. It turns out that for high stakes people give much higher numbers than for low stakes. (This sort of probe controls for perceived evidence, unlike the original ones; there are also no knowledge assertions or denials in the vignettes themselves, which, as many have argued in this thread, causes some problems.)

    Now these results go against your data, so you suggest that a problem with my probes is that subjects are suffering from “predicate blindness”. I am not exactly sure what this is. I assume from the rest of your comments you think that subjects are not even reading the word “knows” in the prompt. Or at least they are *misunderstanding* the predicate part of the question to mean something else.

    I find this to be a puzzling suggestion for a variety of reasons. First, it is not clear why my study should be singled out as one in which predicate blindness is at play. Virtually all English sentences have predicates, so why can’t we object to any survey by saying that predicate blindness is at work? It seems like, absent some further independent reason, this response is not very well motivated. Second, I tested for cognitive reflectiveness/intelligence (CRT) and found no effect on answers. High scores on the CRT are correlated with performance on the SAT and ACT as well as a variety of other related measures. I would expect predicate blindness to affect only low scorers, but this was not found. Third, in most of the surveys I did, I only asked a single question and students had plenty of time to answer it. It seems unlikely to me that most students would fail to properly read the only question they were given. Fourth, for hundreds of subjects I asked just that one question and then asked them to explain their answers. Most of them did answer this question thoughtfully. I find it hard to believe that in this task most subjects ignore or are blind to the predicate in the question.

    Having said this, in the paper I do express support for the idea that subjects are in fact replacing the question concerning ‘knowledge’ with a different one that does not concern that concept. So your instinct is right that something like this is going on. However, I think this replacement is not due to a misunderstanding or flaw in their reasoning. In fact, I test and find support for the idea that subjects are often replacing the knowledge question with one that is about the normativity of action (how many times should Peter check for typos before he turns in his paper). But this just plays into the hands of IRI and stake sensitivity, because it looks like people are accepting something like Stanley and Hawthorne’s RKP: [Where one’s choice is p-dependent, it is appropriate to treat the proposition that p as a reason for acting iff you know that p]. But this principle (and related ones) entails stake sensitivity (assuming fallibilism).

    In sum, I think that your instincts are right that people might be replacing the knowledge question with something else. But we have to ask why they are doing this. The evidence is that they are not just making a silly error. It looks like they are appealing to some principled connection between knowledge and action, and this gives further support for the idea that folk judgments concerning knowledge are indeed sensitive to stakes. If we combine the results from my paper with the facts that (1) most epistemologists seem to also share the intuition, (2) the psych literature that Jason and Jennifer have talked about supports it, and (3) there are possible confounds with the studies aiming to show otherwise, it looks plausible that folk attributions of knowledge are indeed sensitive to stakes.

  53. Hi Jason,

    Here’s a question for you based on the comment you made in #51: “I’m all for the project in general [Experimental Philosophy]. I’d just like to see more engagement with the methods and literature of psychology.”
    I agree with you here. But I think the same could be said about traditional philosophy. One method of traditional philosophy involves presenting thought experiments and reporting our natural reactions to them. The difference between this method and those of x-phiers is that the former “experiments” are done in private, by experts, and without the usual controls. But I would think that some of the problems we raise for x-phi (the confounds we have been talking about in the bank cases) would just as easily arise when traditional philosophers use thought experiments. So it seems like your criticism of experimental philosophy also applies, perhaps even with more force, to traditional philosophy. Do you think that when philosophers present thought experiments and ask readers to share intuitions, philosophers should have read all the relevant psychology literature about the possible confounds, etc.? If so, this is an interesting suggestion and would surely involve a radical change to current practice. On the other hand, my characterization of some of what we do as traditional philosophers is perhaps mistaken. Personally, I have found it very difficult to get straight on what is really going on when philosophers use thought experiments to get “intuitive” reactions.

  54. Dang it, why do these interesting threads always pop up when I’m travelling and away from reliable internet access???

    So much interesting & valuable stuff up there, I’ll just poke at a little bit.

    (1) I share JK’s puzzlement at how the social psych literature that JS is appealing to is supposed to have any obvious consequences for third-personal knowledge attributions, or for our evaluations of other people’s first-personal knowledge attributions. It is a fairly clear prediction from that literature that first-personal knowledge attributions should be stakes-sensitive, but I’m just not seeing why we should _expect_ it, on its face, to have the same impact when evaluating others. It very well might have that impact, but work would have to be done showing that it did so.

    For example, we also know from the psychology literature that inducing negative affect will lower subjective confidence. But we wouldn’t expect (would we?) that simply changing the affective state of a target in an epistemology vignette would lead subjects to change their attributions of knowledge in such a vignette. It seems to me that the psychology literature thus far is fairly silent on third-person knowledge attributions — not completely so, but enough that it is simply not in much of a position to inform these sorts of debates about IRI, etc.

    (2) I also have a concern very near the one raised by KD: “confidence” as used in the psychology literature is not at all obviously the same thing as the philosopher’s notion of something like “that which indicates what a person takes their evidence to be”. E.g., it is often glossed in the psych literature as a concept with a clear practical dimension, e.g., when it is operationalized in terms of betting behaviors. We care about what sort of distinctions, if any, are to be drawn between the practical and the epistemic here, but I don’t think that the psychologists on the whole have been interested in that.

    (The above two points underscore one of the reasons that it is essential to have philosophers involved in x-phi: the distinctions and issues that are important to us just aren’t necessarily, or perhaps even often, the same as those that are important to those who make their homes in more traditionally experimental disciplines.)

    (3) There seems to be an assumption made by several people on the thread that the disparate attributions of knowledge in the bank cases are a consensus view in the epistemology community. Is that really so? My impression has been that it is, while at least a moderately common set of intuitions, not at all a consensus one. (I, for one, have never had those intuitions! But I’ve always had nonstandard intuitions about lots of cases, unfortunately.) Is it really a consensus? Or at least a consensus that it’s a consensus?

  55. Jonathan,

    I should stress that I am a complete novice on the social psychology literature on stakes and confidence – I have read something like three or four articles from the 1980s, which clearly show the effects of stakes on confidence. I don’t think any of us besides Jennifer Nagel knows the work by the scientists well enough to make any sort of judgment about it yet. That’s why I’m crying out for somebody besides Jennifer Nagel to take a look at what the scientists have done. Before we talk about the deficiencies of the scientists, and the superiority of the philosophers, we should study their work carefully (this is what we language people do when we talk about language!). Once we summarize and get a good handle on that literature, we can turn to filling in the questions they don’t address. You guys are the philosophers of psychology – you shouldn’t be leaving it to me to read the scientists’ work. I’ve got enough on my plate reading linguists.

    On to IRI. I take the basic case for IRI to be from first-person knowledge ascriptions. When you are in a low-stakes situation, you take yourself to know things that you take yourself not to know in a high-stakes situation. As you rightly say Jonathan, this is a clear prediction from the psychology literature on stakes and confidence.

    I also take the basic evidence for contextualism to be from first-person cases. When I am outside the epistemology classroom, I take myself to know I have hands. When I am inside the epistemology classroom, I take myself not to know that I have hands. Why is this?

    The function of the bank cases is to provide a non-philosophically controversial example of this phenomenon – that is, one that occurs outside the epistemology classroom.

    As you say, Jonathan, the psychology basically entails that first-person knowledge ascriptions are stakes-sensitive. I’ve got theoretical views that can explain this (namely, it’s because knowledge is stakes-sensitive). These theoretical views predict that certain third-person knowledge attributions are true which intuitively seem to us to be false. I’ve got a theoretical explanation for this too (and Hawthorne has another, and Fantl and McGrath yet another). All three books advocating IRI concede that it has trouble with third-person cases, and go out of their way to provide explanations.

    I also show that for each kind of third-person case IRI has problems with, one can use propositional anaphora to construct a third-person case that the contextualist has problems with (everybody in the literature seems to have missed this argument).

    The fact that there are some intuitions IRI can’t capture would be a problem if the case for IRI was based on a summary of intuitions. It isn’t. The case for IRI is based on principles linking knowledge and action, and claims about the value of knowledge. IRI is a metaphysical claim about the knowledge relation, and there is no reason to think we have deep insight into the metaphysical determinants of the properties and relations about which we speak.

    Of course, if there are the links between knowledge and action that I say there are, then we should see some evidence of that in our behavior. And we do – in high-stakes situations, we are reluctant to act on certain beliefs that we are not reluctant to act on in low-stakes cases. That’s uncontroversial. I think it’s linked to the fact that we (now uncontroversially – thanks Jonathan!) take ourselves to know things in low-stakes cases that we don’t take ourselves to know in high-stakes cases. For some reason, the fact that the knowledge relation constitutively involves stakes is less apparent to us when we are thinking about others. Maybe it’s because we care less about them, and we have a hard time putting ourselves into their practical situations.

    At any rate, we all agree that there are various third-person cases that IRI gives the wrong predictions on, and contextualism gives the right ones. I’ve argued that there are exactly similar third-person cases, using propositional anaphora, where contextualism gives the same predictions as IRI. So contextualism actually doesn’t give a uniform account of third-person cases. At least IRI gives the same result for both explicit third person knowledge-attributions, and uses of propositional anaphora. That’s my argument that IRI is better off – even the contextualist can’t avoid an error theory about many third-person cases.

  56. @39 – I can’t answer Keith right now, since I’m in Budapest running a summer school on context-dependence and linguistic interpretation. I just gave a bunch of lectures, and have to prepare for tomorrow. I’ll address this in August. I don’t think this is how the psychologists I’ve read are thinking about confidence, but I’m going to have to go back to look. Hopefully, Jennifer can address this before I can.

    Hey Angel, I was thinking maybe I shouldn’t have called my idea about explaining the contrary results of your study ‘predicate blindness’. I didn’t mean to suggest that participants were making some kind of error in reasoning or doing anything wrong in answering the question you asked. Instead, what I was questioning was your interpretation: whether these data actually speak to the crucial question of people’s mental state attributions, particularly having to do with knowledge.

    The worry, reminiscent of our earlier discussions of confounding factors in the bank cases, is that you are accidentally manipulating other, more traditional factors (for instance, whether or not the subject in your vignette has the relevant belief). So it’s totally an empirical question whether people are doing this in your study, but the thought here is that if participants display the same high/low asymmetry when asked the same question you asked about knowledge, but about belief instead, then we would have good reason to doubt your inference that considerations of stakes impact the relevant mental state attributions in these discussions about knowledge. Does that make sense?

    Hi Wesley, even if the numbers for the knowledge prompt and the belief prompt you suggest running come out the same, I don’t see why this would cast doubt on my interpretation of the original results. We could explain this by assuming that subjects accept the very plausible idea that Peter will form the belief when he knows there are no typos and not before (otherwise he would be violating a knowledge norm). Perhaps another way to address your worry is by adding a sentence to the vignette to the effect that Peter has already formed the belief that there are no typos (even before he checked for typos). I can’t imagine that this would affect the responses. It seems to me that even if Peter believes there are no typos in his paper, he still needs to gather a lot of evidence (in the high stakes case) before he can count as knowing (compared to the low stakes case). I suspect the subjects will feel the same way. It would be very easy to run this sort of probe.

  59. Hi guys,
    (1) Josh (@8) and Jonathan (@54) have raised the worry that while raised stakes clearly matter to first-person knowledge ascription, it remains to be seen what the effects might be for how we perceive others. My sense of the mental state ascription literature is that on all of the major theories there are very close connections between first- and third-person assessments. In simulation theory (e.g. Goldman) we ascribe states of belief and knowledge to another by pretending to be in his position and running a self-assessment which is then projected onto the other. At the far end of the spectrum (e.g. Carruthers), we gauge our own epistemic position by running on ourselves the same mindreading competence we generally apply to others. Even in fancy positions that do not make either of first- or third-person mindreading straightforwardly derivative of the other (e.g. Apperly), there is enormous overlap in the resources we have for reading ourselves and others. If stakes matter to my epistemic self-evaluations, they are going to matter to my evaluations of others — or at least there’s a heavy burden of argument on anyone who wants to try to show otherwise.

    As for Jonathan’s specific query about the relationship between mood and confidence in first- and third-person assessments, it is going to matter just how closely we attend to the mood of the person evaluated (close enough to suffer emotional contagion?), and we’ll need to be careful to offset a number of complicated interactions involving mood, cognition, and mental state ascription (see Benjamin Converse’s 2008 paper in Emotion on that — sorry, WordPress is freaking on me and won’t let me link). You won’t be surprised to hear that it’s going to be hard to get that experiment right. Many really interesting experiments in mental state ascription stick to amazingly simple scenarios involving whether someone knows a colored ball is in a certain cup, exactly because we don’t know enough yet to generate an accurate picture of what is going on in more complicated settings. This is the sort of worry that makes me reluctant to take on Wesley’s challenge to sort out exactly what’s going on with the interaction between the various conditions he is running. I think the data are pretty confusing because a number of effects are involved, including, but not limited to, effects involving Gricean implicatures, conversational vs. mental state ascriptive uses of mental state terms, accommodation, reading comprehension, attention, variations in attributed evidence, confidence and the possession of outright belief. I understand the desire to “explain all the results”, but I don’t think any simple single theory is going to do this for the particular set of results we’re looking at here, and we’ll learn more by running experiments that isolate these various factors as much as possible. In order to do this, it probably helps to armor up somewhat on the existing literature on mental state ascription.

    (2) Keith (@39) and Jonathan (@54) are concerned that social psychologists might be talking about something other than the thing we care about as epistemologists. First, I have to say that social psychologists talk about many things, and that we as epistemologists also care about many things. We could start with subjective probability, which is perhaps one of the possible meanings of “confidence” in our discussion, and what I take Abe to have more of than Zelda in Keith’s 39. That is, I think Abe is assigning a higher subjective probability to the proposition that the Magic will not win the conference final than Zelda is. Psychologists (e.g. Phil Tetlock) have shown that a shift to high stakes does tend to curb overconfidence, although there are some doubts about the magnitude and significance of the shift (Cesarini, Sandewall and Johannesson). For what it’s worth, I’m not entirely convinced that psychologists would grant that Abe and Zelda would be focused on the same proposition here — Zelda’s low-stakes condition would make it natural for her to think in rough and qualitative terms “The Magic are really pretty likely not to win”, where Abe’s high-stakes condition will trigger more controlled, numerical thinking about the odds of their victory, and the costs of those odds (I’m thinking about dual process theorists like Evans, Stanovich and Sloman). It would be natural enough for Abe to require more evidence to achieve evidence-based belief in his more fine-grained target proposition than what Zelda needs for hers. But worries about whether we’ve got the same proposition naturally attributed here could be set aside: we could write the case up to make it clear that each of these two sports fans is focused just on the categorical proposition “The Magic will not win”. Whatever we think about knowledge, it does have some psychological attitude component, and we may still have a sense that it’s not going to be the same for these two subjects. For example, it’s possible to read the case as one in which Zelda has an outright belief that the Magic will not win, and Abe is still trying to make up his mind about that proposition. Here again the DPT crew can help — Jonathan St. B. Evans, for example, argues in some detail that we typically (like Zelda) think of the single most plausible outcome, but when given (like Abe) suitable incentives, we construct more detailed mental models of what could happen, and subsequently require more evidence to satisfy ourselves about any particular prospect — or, philosophers might say, to achieve plain outright belief in a normatively appropriate evidence-based manner.

    However important subjective confidence might be for knowledge ascription, we can leave room for the idea that the possession or failure to possess a plain outright belief on a given question also matters. Weatherson has argued that stakes could make a difference to the level of subjective confidence one needs to have in a proposition in order to count as an outright believer; I think there’s something to his line of thought. I also think that this something is backed up by a little something in social psychology. It’s not invariably the case that those who continue the hunt for evidence on a question are seen as lacking outright belief in it (one might be after an iteration of knowledge, if one is particularly self-conscious, and there’s some interesting work on self-consciousness and accountability), but the pursuit of further evidence is often taken to indicate that one hasn’t yet achieved the “desired level of confidence” (Rieskamp and Otto) where this level shifts up with a change in stakes. On another way of looking at it, higher-stakes situations lead to something like a drop in the value of our evidence. Benjamin Newell’s work is especially interesting here. If anyone other than Jennifer Nagel has been reading it, let me know. I’m not at all sure I’m getting it right. But I do think that it looks relevant to a number of issues we are interested in as epistemologists.

    Hey Angel, sorry, I was thinking about it more with regard to your earlier discussion with Jennifer (over on the XPHIB under your original post) in terms of confidence to form the relevant belief. Instead of using the ignorance cases, I was thinking that we could just set up a 2×2 to directly test the worry that she, and now I, have been raising about your results. Namely, whether the differences in the amount of evidence collected between low and high stakes subjects arise not because third-person mental state attributions involving knowledge are intrinsically sensitive to stakes or anything like that, but rather because participants are just thinking high-stakes subjects are expected to collect more evidence than low-stakes subjects in order to have an outright belief on the issue at all.

    So, since I am a firm believer in proposing objections to studies that can actually be tested, and which plausibly account for the data in hand, I was suggesting we run the same high/low cases with other predicates besides knowledge, to see whether people are so influenced by this confidence issue, as telegraphed by their displaying more or less the same asymmetry you detected across lots of different predicates besides knowledge.

    When I run both of the (soon to be famous!) proofreading cases on myself, it seems to work for things like believes, decides and thinks. Does it work for anybody else?
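
    For concreteness, here is a minimal, purely illustrative sketch (in Python, with invented numbers rather than real data) of how the proposed 2×2 stakes-by-predicate design might be analyzed once responses are in. Every column name and value below is hypothetical; the confidence objection predicts a main effect of stakes but no stakes-by-predicate interaction:

        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        # Each row: one participant's numeric answer to "how many times must
        # Peter proofread before he <predicate> there are no typos?"
        # All values are invented placeholders, not collected data.
        data = pd.DataFrame({
            "stakes":    (["low"] * 6 + ["high"] * 6) * 2,
            "predicate": ["knows"] * 12 + ["believes"] * 12,
            "times":     [2, 3, 2, 2, 3, 2,  5, 4, 6, 5, 4, 5,
                          2, 2, 3, 2, 2, 3,  5, 5, 4, 6, 5, 4],
        })

        # Two-way ANOVA: if 'believes' shows the same low/high gap as 'knows',
        # the stakes x predicate interaction term should come out negligible.
        model = smf.ols("times ~ stakes * predicate", data=data).fit()
        print(anova_lm(model, typ=2))

    (If the interaction did come out significant, with a gap only for ‘knows’, that would tell against the confidence-confound reading.)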

  61. Hi Wesley,
    Yes, I think your proposed experiment would work as you predict. As I mentioned above, however, this possible result does not tell against my position. The best way of testing your suggestion (and Jennifer’s), IMO, is to simply stipulate in the vignette that Peter already believes/thinks that there are no typos. And now the question we ask subjects is this: “how many times do you think Peter (who already believes there are no typos) has to check for typos before he knows there are no typos?” I think you will still get a difference between high and low stakes. If we don’t get a difference, then your objection would be vindicated. But my intuition about this modified case is that there will still be a difference.

  62. Hi Wesley, one more thing. I got a stat. sig. difference between low stakes and ignorant high stakes (and no stat. sig. difference between high stakes and ignorant high stakes). This would not at all be expected given your suggestion. After all, if you are aware of high stakes, you will gather more evidence (before forming the relevant belief) than someone similarly situated but who was not aware of the high stakes of her situation.
