X-Phi on Bank Cases and Pragmatic Encroachment

Discussed at Experimental Philosophy are two studies: one on bank cases and another on whether knowledge ascriptions are sensitive to practical concerns. The former is being done by Wesley Buckwalter and the latter by Mark Phelan and Ram Neta. The posts contain links to papers with the results, which give some evidence that ordinary folk don’t share the view that when the stakes go up, knowledge (or “knowledge”) goes down.

It’s interesting to ask such questions of non-philosophers, and I do it every time I teach epistemology courses. I don’t share the views, though, that make the results important for epistemological theorizing. If I held a simplistic foundationalism about the role of intuitions in philosophy, and in epistemology in particular, I’d attach greater weight to the results. Since I tend to think more in coherentist terms, when I present cases such as these (as well as Gettier cases), I present them as generating conflicts with some relatively straightforward assumptions we tend to make about knowledge or justification. Without the theoretical background, I expect non-philosophers to respond in ways that are quite confused. So I present some of that background in order to get students to engage in the process of achieving some sort of reflective equilibrium. Thinking of the cases in this way still leaves surveys of the uninitiated interesting, but little more than that. After all, reflective equilibrium takes hardly any work at all until you develop, or bring to the surface, some general commitments.

There are other assumptions that if true would also give the studies much more probative force, turning philosophical theorizing into a semantic enterprise of one sort or another, with elicited intuitions displaying speaker competence when elicited properly. I don’t find such a picture attractive, and have only a historical understanding of why some philosophers think this way. The straightforward account of what epistemologists do is that they think about knowledge and justification and the like, and try to understand the natures of these kinds of things. It takes an argument to get us from an interest in these kinds of things to an interest in the concepts or in words and ordinary meanings of words. There are historical explanations of how 20th century philosophy made those leaps, but no good arguments, as far as I can tell.

All that said, I’m not surprised when students think fake barn cases are cases of knowledge or when they have trouble taking account of misleading defeaters in the Grabit case (just think of the difficulty ordinary jurors had in the O.J. Simpson case sorting through the defeaters and rebutters about the glove that O.J. couldn’t get on his hand). I find it intriguing that there might be cultural differences here as well, and wonder what would explain that. But, as is typical of inquiry, the word of the uninitiated doesn’t carry much weight: teach them some epistemology, get them to think hard about it for a while, and then their opinion about thought experiments will matter more.

One might think these points apply only at the object level, but that when epistemologists move to the metalevel, theorizing about the truth of epistemic sentences in English, the studies will have more significance. That’s not right either. What may be right is that if one is constructing a theory about the ordinary language use of an epistemic term, a theory about what people would say, then of course the armchair response is responsible to such studies (with the caveat that the studies need to be properly designed, follow appropriate methodology, and all the other stuff that social scientists are (supposed to be) experts about). But simply moving to the metalevel doesn’t commit one to ordinary language philosophy of that sort.

It remains true, however, that lots of epistemologists think of what they are doing in precisely the ways necessary to make their views answerable to well-designed and well-run empirical studies of this sort. And that makes the studies interesting in just the way that any difficulty for a view one doesn’t hold is interesting. I just find it fascinating to see what various groups of people think about the kinds of things I theorize about. So I’d like to see what people with PhDs are inclined to say, versus undergrads. And what people with substantive PhDs versus all the rest… 🙂 (No, I won’t divulge my views here about that!) And women versus men, and high IQ versus low IQ, and red-state inhabitants versus blue-state inhabitants, and high income versus low income, etc. Such information would give some data relevant to which types of theories different groups are attracted to, and that would be nice to know.


Comments


  1. Hi Jon,

    Thanks for this post! I’d like to mention that, so far as I can see, Mark Phelan and I would be happy to grant all of the methodological points that you make above, including your central claim that “the word of the uninitiated doesn’t carry much weight: teach them some epistemology, get them to think hard about it for a while, and then their opinion about thought experiments will matter more.”

    Mark and I were NOT attempting to level an argument of the form: People make such-and-such unreflective judgments about these cases, and these judgments must be by and large right, therefore the following epistemological view is true.

    Rather, we were trying to call into question whether the “intuitive data” concerning the Bank Cases are really intuitive data after all. Now, why does this matter? Well, the only reason that it matters is that, in recent epistemology, some argumentative weight has rested on claims concerning the intuitiveness of some verdict about one or another of the Bank Cases. To the extent that those claims have been empirically defeated (and Mark and I are happy to leave it to others to judge that), the arguments that have rested on those claims have been undermined.

  2. Hi Ram, I guess I’d still wonder about the connection between intuitive data and what the folk think. But you are right that lots of philosophers link the two, and to the extent that they do, X-phi data is relevant. I think, on this score, some epistemologists are more committed to a link between what they say is intuitive and what the uninitiated would say or agree to under the right test conditions, but I don’t see why they need to accept such a link.

  3. Hi Jon,

    I certainly didn’t mean to suggest that there isn’t room for a conception of intuitiveness on which claims concerning what is intuitive are not defeasible by appeal to what the folk say under certain test conditions. Perhaps many recent epistemological claims concerning what is intuitive should be understood by appeal to this conception of intuitiveness. In that case, I don’t want to try to challenge those claims; rather, I want to try to understand how I should adjudicate them at all. (And note: it won’t do to say that I should adjudicate them holistically, or by appeal to how they cohere with other claims. That’s because the claims I’m talking about here are not epistemological claims concerning particular cases — rather, they’re claims concerning what epistemological claims it’s intuitive to make concerning particular cases. And what further considerations are supposed to support or undermine claims about intuitiveness?)

  4. Yes, I agree with that, Ram. The obvious answer is that I’m responsible to what seems most obvious to me, and the same for you. If we disagree about what’s obvious about particular cases, that’s just another item for the epistemology of disagreement to deal with. In the face of disagreement, any of a number of things might legitimately occur: one might demur, or defer, or desist, depending on the circumstances. The first occurs when one rationally disagrees with the other; the second, when one rationally acquiesces to the other’s viewpoint; and the third, when agnosticism is appropriate.

    As you probably can tell, I don’t really know what counts as an intuition, and I don’t know if I have any. But some things are obvious to me, both at the particular level and the general level. I think the same is true within other belief systems, of course, and I don’t expect others to work toward reflective equilibrium by assuming what I think is obvious.

  5. Ram: Insofar as this (from your comment #1, above) —

    Mark and I were NOT attempting to level an argument of the form: People make such-and-such unreflective judgments about these cases, and these judgments must be by and large right, therefore the following epistemological view is true. Rather, we were trying to call into question whether the “intuitive data” concerning the Bank Cases are really intuitive data after all.

    — accurately reports your intentions, you should probably change such statements as this (from your paper’s abstract):

    Thus, anti-intellectualism about evidence is not a surprising truth, for it is neither surprising nor true.

    ‘Cause that sure reads like you’re drawing a conclusion about the truth of the matter (that anti-intellectualism isn’t true, and I guess then that intellectualism is true), & not just saying that a certain argument is undermined.

  6. Jon: I agree with you that “I’m responsible to what seems most obvious to me, and the same for you.” But of course what is obvious to me can be affected by what I learn. Once I learned that some judgments that some epistemologists (including me) have been inclined to make about cases like the bank cases are in conflict with judgments that lots of other people would make about those same cases, then, as it happens, this renders those judgments less obvious to me. I am not saying anything about the epistemological question of how I ought to respond doxastically to this disagreement. I am simply reporting a fact about my actual response: namely, certain propositions appear less obvious to me than they did before.

    Keith: Of course you are right that the statement in our abstract is not accurate, even by our lights. Simply because the only argument for an implausible position is undermined, it of course doesn’t follow that the position is false. (Does it even justify the conclusion that the position is false? Probably not.) We just couldn’t resist the pithiness of that overstatement for our abstract. As I recall, we tried to state our thesis more circumspectly in the paper itself.

  7. Ram: Thanks for clarifying your position. It’s interesting that based on your study certain judgments became less obvious to you; that is, you are less confident in certain propositions. I’ll grant you that result, but I would still want to know (though this is not what your study shows) how you would respond doxastically to the disagreement. Further, how would you justify your response? It seems to me these larger issues are unavoidable and this is why Jon is drawn to them.

    Jon: I will try to address some of your larger concerns. It seems you are arguing that intuitions are indefensible when couched in simple foundationalist terms. Instead, you embed intuitions in a coherentist justificatory structure and use intuitions as revisable bits of data, which are justified by being brought into coherence with principles and background theories. In your posts you appeal to the authority of reflective equilibrium (RE) as the mechanism for accomplishing this. If this is correct, I might challenge your appeal to RE in a couple of ways.

    Comment 4 is consistent with reflective equilibrium, which is properly regarded as a Socratic method of self-examination. Yet, as a method of justification, RE should be objective in that rational people can use the method, ceteris paribus, to reach the same correct conclusion. That is, divergence in the conclusion is explainable by divergence in the steps of the argument. So, two people can look at the steps they took to reach equilibrium and choose to demur, defer, or desist. However, as D.W. Haslett has pointed out, this is not possible with RE because the method lacks clear-cut guidelines on how to achieve the best “fit” between judgments, principles, and theories. As Haslett states:

    “Without any solution to the problem of which, from among the innumerable ways of bringing any given set of theories, principles, and judgements into equilibrium, constitutes the best “fit”, reflective equilibrium methodology does not even enable two people with virtually identical starting points to iron out minor disagreements between them. In fact, as a consequence of their differing adjustment decisions they might well find themselves even further from agreement after reaching their respective equilibria than before” (1987, p. 310).

    This brings up a further problem with using RE in the epistemological sense. Recent “impossibility” results in the belief revision literature have shown that coherence methodology is not truth-conducive (see Erik Olsson, Synthese 2007 for a good summary of these results). So, RE as a coherence methodology will not necessarily track truth. Dan Bonevac (2004) has even questioned whether it is possible to reach a state of equilibrium in a finite amount of time. These concerns, among others, call for recognizing that RE is properly conceived as a method of moral, not epistemic, justification (see Sam Freeman, 2007, pp. 29-30). Elsewhere, in a paper entitled “The Moderate Interpretation of RE,” I revised RE along foundationalist lines using criteria for what it minimally takes to be a virtuous inquirer. (http://christopher.cloos.googlepages.com/papers).

    You also mention, “I don’t really know what counts as an intuition, and I don’t know if I have any.” In a couple of recent posts on my blog (http://justiceandjustification.wordpress.com/) I approximate filling-in what an intuition is using the notion of a moral heuristic.

    In conclusion, much positive work needs to be done with RE before the method can carry authority in arguments. Regarding the X-Phi movement, I think many of the results are negative, and the movement labors over providing a positive formulation of intuition. RE accepts that intuitions are not reliable and redeems them by using them in a productive manner.

  8. Christopher, I’m not sure why you took anything I said to involve using RE as a method of justification. I don’t think it is such a method, and didn’t describe it that way. Perhaps you have something special in mind by the phrase, though, that ties something I said to it. But, to be clear, my appeal to RE was simply this: when your particular judgements conflict with your general ones, you need to adjust your views in the usual case (i.e., lottery and preface aside). Nothing in that says that achieving equilibrium guarantees justification for the views that result. In fact, my use of the language of reflective equilibrium doesn’t point to a method in any interesting sense at all. It is, rather, the result of achieving consistency between general and particular judgements. How that connects with justification is a complicated matter, and I didn’t make any commitments above about the connection between consistency and justification. For the record, though, justified inconsistent beliefs are possible: that’s what lottery and preface cases show. That point doesn’t imply that consistency isn’t important, however. It’s still important because you can’t achieve the goal of believing only truths and believing all the important ones without having consistent beliefs.

  9. …the statement in our abstract is not accurate, even by our lights. . . . As I recall, we tried to state our thesis more circumspectly in the paper itself.

    Well, in the paper itself, where you officially state your conclusion (“Conclusion,” pp. 19-20), you close that section with the same irresistible overstatement as is in the abstract.

    Simply because the only argument for an implausible position is undermined, it of course doesn’t follow that the position is false.

    I’m not an anti-intellectualist myself, so this isn’t my fight. But I would have thought they present several strands of support for their position (e.g., also their view’s alleged ability to make sense of plausible connections between knowledge and practical reasoning) that don’t seem to be undermined by your results. And I’m not so sure that what your subjects say about the particular matter of how confident the characters in the examples should be very directly undermines any case SSI-ers make. I may well be forgetting something, though (or perhaps there are defenses of anti-intellectualism I haven’t encountered): do they ever rest any arguments on appeals to intuitions about the matter of how confident subjects in examples should be? I thought it was mostly appeals to intuitions about whether subjects in examples know various things. At any rate, I’m pretty confident in saying that their whole case doesn’t rest on appeals to intuitions about the matter of how confident various characters should be.

  10. Keith, I can understand if you thought the matter rested (at least in part) on whether subjects in various examples ‘know’ various things, since the most prominent example of SSI (particularly, interest-relative invariantism) is a thesis about knowledge. However, as a careful reading of our paper will reveal, we are interested in SSI about evidence, which Stanley has recently endorsed. We support the connection between confidence and evidence in our paper.

    Given that our argument bears on a recently proposed variety of SSI (that relating to evidence, not knowledge), there may also be fewer sources of support for the position than you remember for SSI about knowledge.

  11. Hi Keith,

    Of course Mark is right that the “anti-intellectualist” view that we’re taking on in our paper is not a view about knowledge, but rather about evidence — I should have pointed that out earlier. Now, if there are arguments from anti-intellectualism about evidence that do not start from consideration of case pairs like the bank cases, then we’d have to examine those. One line of argument could start from some principle about a connection between knowing that p and acting on the premise that p, conclude from this principle and some auxiliary premises that anti-intellectualism is true about knowledge, add the premise that knowledge = evidence, and so conclude that anti-intellectualism is true about evidence. Now, that’s not an argument that Mark and I attempt to take on in our paper, but, as you know, all of the substantive steps in that argument have been challenged elsewhere. So, once again, we don’t (contrary to the impression given in our misleading abstract) want to claim that our data establishes the falsehood of anti-intellectualism.

  12. Yes, but in the very passage you guys quote (in your fn. 7), Stanley appeals to his IRI account of knowledge, together with an alleged connection between knowledge and evidence, as his support for anti-intellectualism about evidence. So this would make his case for anti-intellectualism about knowledge part of what he is basing his anti-intellectualism about evidence on (the other part being the connection between knowledge & evidence). So it seems that, so far as appeals to intuitions about cases go (but now the other strands of the case are also brought in), Stanley is ultimately relying on intuitions about whether subjects know things in various cases.

    Does he ever rest his case for anti-intellectualism about evidence on intuitions about how confident subjects should be?

    If not, you would seem not so much to be undermining any case Stanley actually makes as providing an independent argument against his position — an argument based on your survey results, together with some case for the connection between evidence and how confident subjects should be. As for the latter…

    In your comment above, you write:

    We support the connection between confidence and evidence in our paper.

    Just for the sake of those who might be reading this discussion w/o having read your paper, it should be pointed out that by “support,” you don’t seem to mean something like: “provide an argument for,” but rather something like “endorse.” The operative word you use here is “assume”:

    We assume that one point of contact, which will hold for proponents of AIE-S and AIE-Q, is between the epistemic notion of evidence and the ordinary concept of confidence. Specifically, we assume that the more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa.

  13. My #12 was in response to Mark’s #10. Ram’s #11 got in between, but I hadn’t read that. As it happens, 12 also responds to 11, but my use of “you” was referring to Mark, not Ram.

  14. Hi Keith,

    You’re right that we simply assume the principle that you quote stating the connection between evidence and confidence. We were trying to measure the folks’ commitments to claims about evidence, but it seemed to us to be a tricky business designing the right measuring instrument. (Just asking people whether two subjects have the same “evidence” in, say, the two bank cases is bound to seem like a perplexing question to people who never hear the term “evidence” used outside a courtroom or a laboratory.) Now, whenever anyone uses any measuring instrument to measure anything, they make assumptions about the reliability of the instrument, and of course we do that too. But I wonder if people think that our assumption — in particular, the one you quote above connecting evidence and confidence — is false. The assumption strikes me as obviously true, but maybe it won’t strike me that way anymore if I find out that lots of other people disagree, or can find fault with it.

  15. Well, if you want to make that armchair assumption 🙂 in constructing an independent argument against Stanley’s position, that’s your business. (I myself am especially careful about judgments concerning confidence levels, b/c there’s a way that I think we tend to duck/rabbit on confidence: there are two different construals of confidence that come into play in cases involving changing stakes. But that’s a longish story…)

    But if what you mean to be doing is undermining Stanley’s actual argument, then, since that arg (among his other arguments) is based on intuitions about whether subjects know things in various cases, not on how confident they should be, knowledge would seem to be the thing to test intuitions about.

  16. Hi Keith,

    I’m puzzled by one thing here. You talk about Stanley’s “actual argument”, and you say “Stanley appeals to his IRI account of knowledge, together with an alleged connection between knowledge and evidence, as his support for anti-intellectualism about evidence.” But what’s confusing me is that the argument that I sketched in my #11 (and that I take you to be sketching in the passage of yours that I just quoted) is an argument for a version of anti-intellectualism about evidence that is not the same as the version that Stanley explicitly endorses in his book. That argument would support the conclusion that whether p is an element of S’s evidence set depends upon the costs to S of being wrong about p. But the version of anti-intellectualism that Stanley explicitly endorses is not obviously logically related to that conclusion. The version of anti-intellectualism that Stanley endorses is that the strength of evidential support that p enjoys for S depends upon the costs to S of being wrong about p. Neither of those conclusions implies the other. So I think I’m missing what Stanley’s argument for his own explicitly avowed version of anti-intellectualism is. Help?

  17. Clearly, part of the case IRI-ers cite for their position on knowledge rests on our intuitive judgments about the effects of practical facts upon certain knowledge ascriptions. Suppose that an argument from IRI about knowledge, and K=E, also supports IRI about evidence (though, as Ram points out, that argument won’t support the actual version of IRI about evidence Stanley articulates in “Precis of Knowledge and Practical Interest,” which we cite in our paper). Still, shouldn’t our intuitions about the effects of practical facts upon certain evidence ascriptions be relevant to this discussion? If our intuitions support IRI about knowledge but not IRI about evidence, maybe we should reject K=E. Or maybe we should just disregard our intuitive judgments about knowledge ascriptions. Why should knowledge be special in this regard?

  18. Ram,

    Is it clear that the ‘alleged connection between knowledge and evidence’ Stanley has in mind is E=K? That seems to be strongly suggested by p. 180 of the book, where Stanley notes the same point Mark Phelan makes in #17, namely that if knowledge is interest-relative but evidence is not, then we have materials for an argument against E=K. It’s natural then to take Stanley’s argument for the interest-relativity of evidence as flipping this argument around, as you do. But Stanley does seem to be remarkably unspecific when giving the argument concerning the shape of the alleged connection, and that might explain, in part at least, why it’s hard to get an argument for AIE-Q in clear sight.

  19. I’m not sure about Stanley’s argument in the book, but here is an argument he may endorse for a kind of anti-intellectualism about evidence — that is, for pragmatic encroachment on evidence.

    1) Suppose that knowledge can interestingly come and go as practical interests change. (This is motivated only in part by reflection on cases, and also by theoretical considerations about the relationship between knowledge and action.) Call this “pragmatic encroachment” on knowledge. It’s one kind of anti-intellectualism about knowledge.

    2) Assume that another kind of intellectualism about knowledge is true in the following sense: there is no change in knowledge (or, rather, whether one is in a position to know) that p without a change in one’s strength of evidence for/against p.

    It follows that one’s strength of evidence can vary as practical interests change. This is pragmatic encroachment on evidence (though not anti-intellectualism in the same sense that assumption 2 uses the term ‘intellectualism’).

    I take it that this is Stanley’s point in his responses (#11 especially) to Jon at http://fleetwood.baylor.edu/certain_doubts/?p=829#comments:

    “I think that the basic positive formulations of intellectualism all sound quite intuitively plausible, and as such they are claims I’d like to accommodate. So, if all epistemic notions are sullied, then the pragmatic encroacher can also agree with the intuitively plausible claim that when a true belief counts as knowledge depends only upon evidence. In short, if all epistemic notions are sullied, it becomes very difficult to state what intuitive supervenience claim the pragmatic encroachment theorist denies. That is, it becomes very difficult to state the intellectualist position. That removes a significant and weighty objection to pragmatic encroachment.”

    So, I think Keith is right that Stanley would want to ground his kind of anti-intellectualism about evidence in pragmatic encroachment (one kind of anti-intellectualism) about knowledge in conjunction with (another kind of) intellectualism about knowledge. And I don’t think Ram gets it quite right when he implies (at comment 6) that reflection on cases is the “only argument” for anti-intellectualism about evidence.

  20. Hey Jeremy,

    Very helpful: thanks! While I wouldn’t for a moment accept premise 2 of the argument that you’ve presented, it is certainly an argument, and it may be Stanley’s argument as well.

  21. Hi Ram. I don’t accept premise 2, either. Nor do I accept 2*:

    (2*) No change in knowledge that p (your position to know that p) w/o a change in your epistemic position wrt p (your standing on only truth-related dimensions wrt p, e.g. probability, reliability, safety, etc.).

    But if the sticking point in your acceptance of 2 is the restriction to evidence, then you might be more sympathetic to 2*. And if 2* is true in addition to pragmatic encroachment on knowledge, Stanley will at least get pragmatic encroachment on generalized strength of epistemic position (you know, supposing that 1 — pragmatic encroachment on knowledge — is true).

    Are you more sympathetic to 2* than you are to 2?

  22. Jeremy, I think 2* needs restrictions to be even close to plausible. Think of introducing Gettier considerations, or loss of belief, or change in truth value (that last one might be able to be accommodated by claiming propositions can’t do that). But the revised claim is worth considering: no change in Kp without a change in epistemic position wrt p, provided no change in psychology, truth value, or gettierizability.

    That would be a nice statement of intellectualism, except for Jason’s corruptions… 🙂

  23. Ram, you say “Once I learned that some judgments that some epistemologists (including me) have been inclined to make about cases like the bank cases are in conflict with judgments that lots of other people would make about those same cases, then, as it happens, this renders those judgments less obvious to me.”

    This is an interesting point, in several respects. One way to read your psychological response is to view it as an expression of the assumptions I pointed to in the post about connecting epistemology to semantics and speaker competence. You may hold views about the connections here that I don’t, and those views might make your loss of confidence just the right response to have.

    There may be something else going on, and it happens even for people like me who think the connections between ordinary meanings of epistemic terms and speaker competence don’t have much to do with the subject matter of epistemology. What might be going on is that we sometimes learn from epistemic inferiors, to use a slightly emotive term while disavowing the negativity. It’s an interesting, and unaddressed, issue in the epistemology of disagreement. The implications of being epistemic peers are discussed, as is confrontation with an epistemic superior. I suspect that the stronger positions on peerhood and superiority may need to hold also that learning from an inferior is hardly possible (all of this holding fixed empirical information about the subject matter in question). But sometimes great mathematicians learn from lowly students that they made a mistake, and sometimes epistemologists learn about knowledge and justification from the untutored and unreflective and ungifted as well. As John Hawthorne likes to say, “that’s a datum!” And it has interesting implications in the epistemology of disagreement that haven’t been appreciated.

  24. Hi Jon,

    Right. The “position to know” parenthetical is there to cover loss of belief, and the “etc.” in the second parenthetical is there to cover Gettier conditions and truth value, both of which seem to me broadly “truth-related” and certainly “epistemic”.

    -j

  25. Jon: Perhaps set theorists and biophysicists don’t have much to learn about their subject matter from the intelligent undergraduates who are taking their courses. But I very much doubt that there are epistemologists of whom the same is true. The “data” from which we work are either very widely accessible, or else their status as data is not completely certain. So, yes, I think you’re absolutely right to suggest that epistemologists have much to learn from their “epistemic inferiors”.

    Jeremy: In order for me to have strong intuitions about the acceptability of 2*, I think I’ll need to understand your phrase “epistemic position” better than I now do. You say that one’s “epistemic position” with respect to p consists in “your standing on only truth-related dimensions wrt p, e.g. probability, reliability, safety, etc.” But here’s a truth-related dimension with respect to p: the rationality of betting on the truth of p. The greater the extent to which your evidence supports the truth of p, the more rational it will be for you to bet on the truth of p. But if the rationality of betting on the truth of p is included among truth-related dimensions with respect to p, then I think 2* is false. And if it is not included among truth-related dimensions with respect to p, then exactly what IS included among such dimensions?

  26. Hi Ram, you say, “The “data” from which we work are either very widely accessible, or else their status as data is not completely certain.” That’s the part I don’t see any good reason to believe. I can see philosophical perspectives that get one to such claims, displayed prominently in the last hundred years of philosophy, but I don’t see why we should accept those perspectives.

  27. Hi Jon, The only justification I can offer for the claim of mine that you quoted is not anything metaphilosophical. It’s just a broadly empirical, inductive justification. Perhaps some epistemologists have made substantive epistemological discoveries that are both highly certain and not widely accessible. If so, I hope to learn about them.

  28. Sorry, I didn’t get around to seeing this thread until just now, and just quickly read Neta and Phelan’s paper. I’m a bit bemused by it. As Keith and Jeremy point out, the only arguments I have ever considered for anti-intellectualism about evidence have involved theoretical bridge principles connecting knowledge and evidence. The only intuitions I ever appealed to in the book in such arguments were ones about connections between knowledge and evidence (as reflected in various counterfactual claims). I also (in the book) appealed to various theoretical claims about the relation between knowledge and evidence of the E=Kish sort. I certainly never provided anything like bank cases for evidence. And if I had ever even considered doing this (which I didn’t), I wouldn’t have framed the cases in terms of subjective notions like confidence.

  29. Thanks for corroborating that, Jason: I suspected as much in light of the material that Jeremy produced. So do you think that the principles connecting knowledge and evidence are more obviously true than our principle connecting evidence and rational confidence? Or more to the point, do you think it’s easier to read facts about evidence off of intuitions about knowledge than it is to read those facts off of intuitions about rational confidence?

  30. Just in case I need to remind anyone of our principle connecting evidence and rational confidence, it is this:

    (CE) The more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa.

    And the principle that I take Jason to be committed to is (to quote Jeremy):

    (KE) There is no change in knowledge (or, rather, whether one is in a position to know) that p without a change in one’s strength of evidence for/against p.

    As far as I can see, Jason wants to say that EITHER CE is false or people’s intuitive judgments about rational confidence are systematically wrong. And his argument for this disjunction involves appeal to KE.

    Now, do I have all that straight?

  31. Ram, you probably know what I’ll say about your last remark! So here it is for the public: tell me all the important epistemological discoveries there have been and I’ll tell you which are certain and not widely accessible. I suspect that to tell me the first part, you’ll have to talk about what’s true from your own perspective, and if we go that route, I can list lots of things that meet the stipulation, though of course from my own perspective…

  32. So, I just looked at the end of Jason’s book to see what’s going on. What argument does he endorse for anti-intellectualism about evidence? In a way, the answer is: none. In a brief (pp. 179-182) “Conclusion,” Jason considers what moral might be drawn from his conclusion that knowledge is interest-relative. One possibility is that knowledge is quite unlike other important epistemic notions like evidence and justification in that knowledge is interest-relative while those others are not. (Jason thinks that, in that case, some will think epistemologists should stop paying so much attention to knowledge, and instead focus on the “unsullied” notions.) Since some of the ways that Jason presented his case might lead some readers to think that Jason thinks there is such a difference here, Jason wants to point to another possibility: maybe justification and evidence are interest-relative notions, too. What’s more, Jason lets us in on the fact that he’s inclined to go the second way: to think these other notions are also interest-relative. And he does say what thinking is leading him in that direction — that’s where his brief invocation of Williamson’s work linking knowledge with evidence comes in: to explain why Jason suspects these two are linked closely enough to make him think interest-relativism will spread here. But it’s just an indication of why he’s leaning the way he is, not an argument he takes himself to be endorsing in the way he endorses the real arguments of his book. That comes out quite clearly in the closing sentences of the book, where he indicates that, so far as what he takes himself to have established goes, either moral — the one on which evidence is not I-R, and the one on which it is — might be correct.

    Authors who have worked hard arguing for their conclusions have the right to then speculate, without providing arguments that they endorse in the same way, where they think their conclusions are likely to lead. (This is especially true of those authors who actually have provided lots of arguments that they do fully endorse. And that’s certainly true of Jason. As it was put by one reviewer, whose judgments about such matters I almost always find to be correct, Jason’s book is “packed with arguments, both offensive and defensive.”) Such “looks ahead” are often very interesting & valuable. Now, insofar as an author doesn’t just say what s/he thinks about such matters, but gives some indication of why s/he thinks that way, it is certainly legitimate to critique the reasons they give or hint at for thinking the way they do. But it’s probably a good idea to then indicate that that’s what you’re doing, rather than attacking one of the arguments the author fully endorses.

    Insofar as the critique under consideration targets Jason’s defense of I-R about evidence, then, it can’t be targeting any argument that Jason really endorses, but just some thoughts about where things might go from where he leaves them. And, as I’ve been saying, insofar as he gives such reasons for thinking evidence might also be I-R, his reasons go through I-R about knowledge, and so are based (among other strands of argument) on intuitions about whether subjects in certain cases *know* things, not on intuitions about how confident they should be, so the survey in question doesn’t seem to be well-aimed at the case that is hinted at.

    As for comment #32, it’s worth noting that even in the brief hints Jason gives about why he thinks evidence will turn out to be I-R, he does go so far as to indicate that his thinking here is based on or at least influenced by Williamson’s work on the connections between knowledge and evidence, and he isn’t just going by how some proposed connection initially strikes him. This may explain why he would be more inclined to base theses on knowledge/evidence connections than on how-sure-one-should-be/evidence connections. At least, *I* don’t recall that Williamson, or anyone else, has investigated the latter connection as fully as Tim has supported the former.

  33. Keith,

    You say:

    “Insofar as the critique under consideration targets Jason’s defense of I-R about evidence, then, it can’t be targeting any argument that Jason really endorses.”

    But that’s not true. Suppose you argue as follows:

    p
    q

    therefore, r.

    Then, I show that r, in conjunction with some empirical premises, has the following consequence:

    Either it is false that the more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa — or else people’s intuitive judgments about rational confidence are systematically wrong.

    Now, those of us who find the preceding disjunction implausible (and maybe that group doesn’t include you) will thereby have some prima facie reason to think that r is false, and thereby have some prima facie reason to think that the argument from p and q to r is not sound.

    Also, you say “it’s worth noting that even in the brief hints Jason gives about why he thinks evidence will turn out to be I-R, he does go so far as to indicate that his thinking here is based on or at least influenced by Williamson’s work on the connections between knowledge and evidence, and he isn’t just going by how some proposed connection initially strikes him.”

    But now I’m really confused: I thought that we all agreed, in light of Jeremy’s very useful remark 21 above, that the argument that Jason suggests for anti-intellectualism about evidence is NOT based on Williamson’s thesis K=E, but rather on the different thesis that I call KE above, namely: “There is no change in knowledge (or, rather, whether one is in a position to know) that p without a change in one’s strength of evidence for/against p.” Is there some logical connection between Williamson’s K=E and this latter principle that I’m not seeing? Or does Williamson defend KE somewhere?

  34. Keith, thanks for the nice discussion of the actual arguments Jason suggests. I think it’s fair to point out, however, that we don’t in the paper actually claim to be attacking one of the arguments Jason makes (though that may have been the impression from even some of our own comments here and at X-Phi). Rather, we defend our approach by pointing out that SSI/IRI-ers have supported various positions (for example, SSI about knowledge and justification) by appealing to intuitive judgments about epistemic claims in contrast cases. We then write, without attributing the argument to anyone, that:

    Presumably, any argument for AIE will have to come from our intuitive judgments about cases. But while much intuition thumping has taken place with respect to AIK and AIJ, not nearly as much attention has been paid to AIE. So the question that we set out to answer was the following: Do our intuitive judgments about cases support the hypothesis that there is a relation between one’s evidence and the practical costs of being wrong?

    Admittedly, as has been pointed out on this blog, this involves a bit of overstatement. There are other arguments available to the proponent of anti-intellectualism about evidence. But, as Keith also points out, in a way no arguments have been given for the position at all — they have only been suggested. One can then view our paper, perhaps more charitably, as an investigation along lines that a proponent of AIE might propose.

    Now, there may well be reservations about the way we go about our investigation. We would be very interested in hearing such objections. Indeed, one thing that would be very helpful to us would be criticisms of CE, which Ram mentions in 32. Reservations about this bridge principle have been suggested here, but so far as I can see no explicit reason for these reservations has been offered. Perhaps Williamson has offered a better defense of a purported connection between knowledge and evidence than we attempt for CE. We did not offer an explicit defense of CE because such a defense was not the purpose of our paper, and, frankly, the principle seemed pretty intuitive to us. We are certainly not advocating abandoning intuition, if our (partially) experimental methodology made anyone think we were. Perhaps we could be disabused of this intuition. But we would want to see some arguments against it. Surely the move from the claim that one bridge principle regarding evidence is well supported to the claim that a completely different bridge principle regarding evidence is an implausible basis for theses is unreasonable.

    Also, I would be interested in any critique of our hypothesized explanation of people’s (broadly construed, to include philosophers) judgments in juxtaposed cases such as those we discuss. Again, that explanation is that people do not generally (say, in the individual cases they encounter in real life) judge as anti-intellectualists, but some do when high and low stakes cases are juxtaposed, because they have a tacit commitment to pragmatic encroachment which they exhibit when stakes are made salient. If the results we discuss in this paper extend to other epistemic concepts (such as knowledge and justification), where the path from intuitions about contrast cases to anti-intellectualism is more clear, this explanation may end up being more important than the current paper suggests.

  35. I don’t think Jason’s hinted argument takes KE as a premise (which is not to deny Fantl’s point in #21 that one might do so). If you look at the passages in question, Jason seems to be arguing that IRI about knowledge is compatible with KE. The suggested argument for that conclusion is that given IRI about knowledge and an intimate connection between knowledge and evidence (perhaps E=K, perhaps something weaker but still Williamsonian in spirit), we should expect evidence to be interest-relative too. So as I’m reading Jason, he’s conceding that we should hang onto KE, and trying to show adopting IRI about knowledge doesn’t require one to surrender it. KE, then, isn’t acting as a premise of an argument for IRI about evidence. Rather the hinted argument for IRI about evidence is part of Jason’s defense against a certain objection to IRI about knowledge. Here’s the relevant passage:

    ‘It is prima facie difficult to accept that one person knows that p and another does not, despite the fact that they have the same evidence for their true belief that p. But, if knowledge is anywhere near as central to epistemology as the considerations in Williamson (2000) suggest, then one would expect that evidence is similarly interest-relative. If so, one’s practical situation will affect the evidential standing one has with respect to one’s belief that p, and not just whether one knows that p. On this view, the interest-relativity of knowledge does not entail that two people can differ in what they know, despite sharing the same evidence.’ (181)

    Again, Jason’s reply to Ram in PPR suggests he’s trying to show that IRI about knowledge need not commit one to a denial of KE:

    ‘Neta’s first point is that IRI entails that someone with the same evidence for p and q might know that p, yet not know that q, because of her greater investment in the truth of p. Neta is right to emphasize that this kind of consequence is to be avoided, if possible. However, it is only a problem for certain versions of IRI. The purpose of my book was to defend IRI about knowledge, and I was officially neutral on IRI about other epistemic notions, such as evidence. But as I say in the conclusion of K&PI (p. 182), “…my own view is that all epistemic notions are interest relative.” Neta’s discussion provides further support for the view that interest-relativity affects all epistemic notions.’ (201)

  36. Hi Aidan,

    Thanks for this very helpful intervention. It seems that you’re right that what’s moving Jason to adopt some version of IRI about evidence is not KE itself, and not E=K itself, but rather an attempt to accommodate some sort of generic connection between knowledge and evidence. The quote that you provide certainly suggests that. But then, I still want to know, does IRI about evidence claim that

    Either it is false that the more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa — or else people’s intuitive judgments about rational confidence are systematically wrong (and systematically wrong in a way that has not already been revealed in other empirical studies)?

    Also, since you bring up the PPR symposium concerning Jason’s book, I’ll point out that at no place in constructing my putative counterexample to IRI about knowledge did I ever stipulate anything concerning the evidence possessed by the protagonist of my example. If the example works as a counterexample at all (and I leave it to others to judge that), it does that independently of what views we might hold about evidence.

  37. Fair enough, Aidan. But notice that if you’re right about this (and there are no other relevant arguments), Stanley simply assumes IRI about evidence to preserve IRI about knowledge and a connection between knowledge and evidence (I’m not sure from your post if you mean to say he’s defending KE or K=E). Then one can view our argument in the paper as a challenge to Stanley’s assumption, and thereby a challenge to accepting both IRI about knowledge and the relevant connection between knowledge and evidence.

  38. This is in response to #35. My reason for saying in 34 that you can’t be targeting any argument that Jason really endorses for interest-relativity about evidence is that Jason doesn’t really endorse any argument for that conclusion. Certainly that follows! As for undermining some argument or reason Jason hints at without fully endorsing, if your argument goes the way you’re now saying, that wouldn’t be undermining his suggested reasons so much as providing an independent argument that his conclusion is wrong, at least so far as I can see. That happens: One guy argues for P, another produces an argument for ~P. In *some* sense each argument undermines the other: If one is sound, that shows the other is somehow unsound. But I just count that as giving an argument for the opposite conclusion.

    Also, if that’s how your argument goes, that seems to be going against what you say way back in comment #1, where you agreed with Jon that “the word of the uninitiated doesn’t carry much weight” and you insisted you were “NOT attempting to level an argument of the form: People make such-and-such unreflective judgments about these cases, and these judgments must be by and large right, therefore the following epistemological view is true.” For now your argument seems to rest on something like that “If 70 or so undergraduate college students from an elite university say that P, then, probably, P.” Well, I guess what you’re saying in 35 and 38 is that it rests on the disjunction you display there. But that sure seems to be giving much weight to the word of the uninitiated. [[Well, unless, that is, you accept the disjunction simply because you find its first disjunct true. But in that case it’s kinda tricky to use the disjunction instead of just appealing to the first disjunct. For in that case, the first disjunct is what’s making the disjunction true (from your point of view), and the second disjunct’s function would seem to be to make it misleadingly appear that you are undermining, rather than just opposing, Jason’s argument. So, scratch the whole idea inside these brackets: It seems best to read you as giving great weight indeed to the word of the uninitiated, against what you say in #1.]]

    I’ll skip what I was going to say about the second half of #35: Aidan (37) cleared that up pretty well, I think.

  39. Keith says:

    “My reason for saying in 34 that you can’t be targeting any argument that Jason really endorses for interest-relativity about evidence is that Jason doesn’t really endorse any argument for that conclusion. Certainly that follows! As for undermining some argument or reason Jason hints at without fully endorsing, if your argument goes the way you’re now saying, that wouldn’t be undermining his suggested reasons so much as providing an independent argument that his conclusion is wrong, at least so far as I can see. That happens: One guy argues for P, another produces an argument for ~P. In *some* sense each argument undermines the other: If one is sound, that shows the other is somehow unsound. But I just count that as giving an argument for the opposite conclusion.”

    Now, I have no argument with any of that — obviously we were using the word “targeting” differently in our comments 34 and 35 above, and perhaps we count different things as “undermining” reasons: but for the time being I don’t care how we use the word “targeting” or how we should conceive of “undermining” reasons, as long as we agree that what’s going on in the present discussion is the following:

    Jason endorses IRI about evidence. Phelan and I argue that IRI about evidence, in conjunction with some empirical premises, entails a particular disjunctive claim:

    Either it is false that the more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa — or else people’s intuitive judgments about rational confidence are systematically wrong (and systematically wrong in a way that has not already been revealed in other empirical studies)

    Mark and I find that disjunction implausible, and so think it counts against any view that implies it. But perhaps people find the disjunction plausible? I hope to find out…

    Keith continues:

    “For now your argument seems to rest on something like that “If 70 or so undergraduate college students from an elite university say that P, then, probably, P.””

    Absolutely not. But Keith continues:

    “Well, I guess what you’re saying in 35 and 38 is that it rests on the disjunction you display there.”

    Yes, that is exactly right!

    “But that sure seems to be giving much weight to the word of the uninitiated.”

    As I said above in response to Jon, we give weight to their word NOT in thinking that what they say is probably true, but ONLY in thinking that what they say exacts some empirical cost on a philosophical position. In other words, if a philosophical position implies that p, and people’s intuitions tend to say not-p, then the philosophical position had better have some reasonable explanation why people’s intuitions are wrong in this case. (Folk intuitions can be a defeater of a philosophical position, but, if the philosopher can explain away the intuitions, she thereby defeats the defeater.)

  40. In light of the important issue that Jon raised above concerning the epistemology of disagreement, I should clarify that when I said, in 41, that “folk intuitions can be a defeater of a philosophical position, but, if the philosopher can explain away the intuitions, she thereby defeats the defeater”, I didn’t mean to suggest that folk intuitions that disagree with a philosophical position ALWAYS serve to defeat the position. I meant only that they CAN do so. And, it seems to me, they do so in this case (though they, too, may be defeated).

  41. Keith writes: For now your argument seems to rest on something like that “If 70 or so undergraduate college students from an elite university say that P, then, probably, P.”

    For the benefit of those who haven’t read it, this is certainly not a premise on which the argument in Ram’s and my paper depends (regardless of what anyone might have said on this thread to suggest it). We could not simply read an epistemic theory off of people’s responses, since, as we point out in the paper, people’s responses are inconsistent across juxtaposed and non-juxtaposed cases! (If we were employing such a simple method as that, which 70 of the students would we go with: the ones who got juxtaposed cases or the ones who got single cases?) One moral of the paper is this: Some philosophers take juxtaposed cases to reveal something about an epistemic concept generally, but reactions to juxtaposed and non-juxtaposed cases (at least involving the concept of confidence) don’t accord with one another, so the assumption that intuitive judgments regarding contrast cases (at least about confidence) reveal something about an epistemic concept generally is dubious.

  42. Jon kicked this off with a worry about the weight of the judgments of the uninitiated — if they haven’t yet thought very hard about the issues, perhaps we shouldn’t attend so much to what they are saying. So the judgments that count are those produced under caution, and under fuller information. I’d like to stress that philosophical interpretations of these judgments should also be produced under caution and fuller information if we are going to accord them much weight. The Neta and Phelan paper seems to move quite swiftly from a finding about stakes and expected confidence to a conclusion about the constitutive dependence of evidence on stakes. Here are some reasons to worry about the way they interpret their data:
    1. As a matter of empirical fact, do stakes make a difference to our confidence levels, when everything else is held fixed? Yes they do. There’s a large existing body of empirical data showing that higher stakes (e.g. financial penalties for inaccuracy) reduce confidence — for reviews see e.g. Kunda, Z. (1990) “The case for motivated reasoning” Psychological Bulletin, 108(3), 480-498, or Lerner and Tetlock (1999) “Accounting for the effects of accountability”, Psychological Bulletin, 125(2), 255-275.
    2. If stakes ordinarily do have an influence on confidence, why didn’t we see that effect in the first Main Street case? One lesson of the psych lit on this point is that it’s not easy to hold everything else fixed. A number of other variables change with stakes and end up affecting confidence. For example, time pressure can make high-stakes subjects think hastily but with increased confidence (see Kruglanski, A. W., and Webster, D. M. (1996). Motivated closing of the mind: “seizing” and “freezing”. Psychological Review, 103(2), 263-283). Checking your watch when it’s a life-or-death matter that you should be somewhere at a given time might signal time pressure. Or perhaps asking a passerby about Main Street is broadly granted to be a pretty good ground of confidence even for subjects in High Stakes, and so it’s just not the kind of case that’s going to generate high stakes/low stakes contrasts. Or perhaps subjects are taking the expectation rather than the requirement reading of the “should” in the question “How confident should Kate be that she is on Main Street?” in this probe. If so, Jason could take comfort from the observation that people respond differently when the question is made unambiguously normative, as in the last probe (“Kate wonders how confident she should really be that she is on Main Street” … “What factors should affect Kate’s confidence that she is on Main Street?” — the “really” and the italics on “should” in the original force the normative reading).
    3. As for the juxtaposed cases that Neta and Phelan explore, more caution is required to establish that subjects aren’t just producing a contrast effect as the result of compliance to conversational norms (I think this blog had a recent link to Simon Cullen’s nice discussion of Norbert Schwartz’s work on survey pragmatics). And on the first juxtaposed case (“Two Streets”) there’s the further difficulty that stakes aren’t the only thing separating Kate’s attitude to Main and to State: when she is at (near? facing?) the intersection, the passerby tells her “The street you’re on is ‘State Street’ and this other street is ‘Main Street'”. Which street is she on right now? State looks like a better answer than Main, whatever’s going on with the stakes here.
    Anyway, the uninitiated are often smart in ways we may not be thinking about when we first design our probes; we’ve got to be pretty careful in figuring out the upshot of their judgments.

  43. this is certainly not a premise on which the argument in Ram’s and my paper depends (regardless of what anyone might have said on this thread to suggest it).

    Mark: In the second paragraph of #40 (which is what you quote me from), I was responding to the argument Ram suggested in this discussion at 35. I can’t really tell whether that argument was being suggested as the (or an) argument of the original paper. And I messed up my presentation of Ram’s 35 argument, so forget the 2nd paragraph of 40. Here’s how it really seems to go. Ram is arguing — in 35, whether or not this is the argument of the original paper — that AIE is false on the grounds that it entails the following disjunction (call it “D”), which Ram claims is false:

    D: Either it is false that the more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa — or else people’s intuitive judgments about rational confidence are systematically wrong.

    So, in 35, Ram is depending on the falsehood of D. To put things a bit more positively, he is resting his argument on the conjunction of these two claims (D can be false, as Ram claims, only if both of these are true):

    1. The more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be

    2. People’s intuitive judgments about rational confidence are not systematically wrong.

    Now, Mark, you report that:

    People’s responses [about rational confidence] are inconsistent across juxtaposed and non-juxtaposed cases!

    Whatever bearing that may have on other arguments or considerations (including the actual arguments of your original paper), that hurts the argument of Ram’s that I was discussing: Given that inconsistency in people’s judgments about rational confidence, 2, which Ram is depending on here (insofar as he is resting his argument on the falsehood of D — which is pretty clear from 35), isn’t looking very good.

  44. There’s a large existing body of empirical data showing that higher stakes (e.g. financial penalties for inaccuracy) reduce confidence

    Jennifer: Is there a quick and easy (or at least quickish and easyish) answer to this: How was confidence measured in this empirical work?

  45. Hi Jennifer,

    Thanks for the very helpful comment! I’d like you to clarify something. You write:

    “As a matter of empirical fact, do stakes make a difference to our confidence levels, when everything else is held fixed? Yes they do. There’s a large existing body of empirical data showing that higher stakes (e.g. financial penalties for inaccuracy) reduce confidence…”

    Now, do you mean to imply that there is empirical evidence that, when an agent’s stakes are higher, then we ordinarily judge that that agent’s RATIONAL degree of confidence goes down? Or do you mean simply that there is empirical evidence that, when an agent’s stakes are higher, then that agent’s ACTUAL degree of confidence tends to go down?

  46. Mark and Keith,

    Good points! So Keith, I need to be more careful about spelling out the disjunction that makes trouble for IRI about evidence.

    What I had before was this: “Either it is false that the more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa — or else people’s intuitive judgments about rational confidence are systematically wrong.”

    But this, as Keith points out, is not what I should have said. What I SHOULD have said was this: Either it is false that the more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa — or else people’s intuitive judgments about rational confidence in non-juxtaposed cases are systematically wrong.

    Now, you might treat people’s intuitive judgments about rational confidence in juxtaposed cases as evidence of the truth of the second disjunct above. But on what basis? Why not treat people’s intuitive judgments about rational confidence in non-juxtaposed cases as evidence of the falsity of the second disjunct above? What Mark and I do — as Mark carefully points out in his comment — is try to provide an explanation of BOTH sets of intuitive judgments. Now, it’s the IRI-theorist’s turn to provide an IRI-friendly explanation of both sets of intuitive judgments.

  47. Keith – confidence gets measured in many ways. Typical would be what Mayseless and Kruglanski do in their 1987 – “What Makes You So Sure? Effects of Epistemic Motivations on Judgmental Confidence” – subjects were asked to rate their confidence in each judgment they made on a 0-100 scale where 0 was described as “not at all confident” and 100 as “confident beyond a shadow of a doubt”. Others ask people to rate their subjective probabilities on a scale from 0 to 1.0, with 1.0 glossed as certainty, and with varying degrees of further explanation of what that scale means. And yes, there are a bunch of interesting questions about the relationship between confidence and perceived confidence, and about the relationship between empirical and elicited belief; this gets complicated really fast.

    Ram – the data I had in mind is about actual confidence, not perceived rational confidence. But I’d guess we probably do judge rational confidence to be lower for high-stakes subjects given only just enough evidence to make a low-stakes subject happy, given that the reason why high-stakes subjects are typically less confident at that level appears to be that these subjects have a natural psychological tendency to seek out more evidence before making up their minds. If that tendency is blocked or resisted (perhaps due to haste or some countervailing pressure like wishful thinking) then we’ve probably got a less rational subject on our hands. To what extent are we aware of this? Ziva Kunda says in the survey I mentioned before that “people are aware of the effort/accuracy trade-off” (1990, 481), and I think the literature she cites backs her up, but you can see for yourself if you agree.

    Incidentally, about the ‘vice-versa’ side of the first disjunct of the disjunction in dispute between you and Keith – for what it’s worth, greater rational subjective confidence does not necessarily require more evidence. It’s enough that the subject should do a better job of processing his evidence (as in fact high-stakes subjects are ordinarily inclined to do, unsurprisingly expending more cognitive effort and being less susceptible to most biases).

  48. Hi Jennifer,

    Two more questions.

    (1) You write: “But I’d guess we probably do judge rational confidence to be lower for high-stakes subjects given only just enough evidence to make a low-stakes subject happy, given that the reason why high-stakes subjects are typically less confident at that level appears to be that these subjects have a natural psychological tendency to seek out more evidence before making up their minds. If that tendency is blocked or resisted (perhaps due to haste or some countervailing pressure like wishful thinking) then we’ve probably got a less rational subject on our hands.” But if the resisting force (e.g., haste) is irrational, then why should we think that the resisted force is not also irrational?

    (2) You write: “Incidentally, about the ‘vice-versa’ side of the first disjunct of the disjunction in dispute between you and Keith – for what it’s worth, greater rational subjective confidence does not necessarily require more evidence. It’s enough that the subject should do a better job of processing his evidence.” I don’t understand why you believe this. Do you also think that people who understand morality or moral reasons better are thereby more rationally obligated to fulfill those demands, or act in accordance with those reasons? If not, then what’s the disanalogy between the evidential reasons case and the moral reasons case?

  50. This is in response to #44, Jennifer’s three-part post. I’ll respond to each part.

    #1: I too am curious about Ram’s question in 47. Is the empirical data you cite data in which the subjects of the experiments faced penalties? If so, that’s a relevant distinction from our studies.

    #2: You write: “Time pressure can make high-stakes subjects think hastily but with increased confidence (see Kruglanski, A. W., and Webster, D. M. (1996). Motivated closing of the mind: “seizing” and “freezing”. Psychological Review, 103(2), 263-283). Checking your watch when it’s a life-or-death matter that you should be somewhere at a given time might signal time pressure.”

    Once again, Ram’s question is relevant. In the study you cite, is it time pressure on subjects for whom the stakes are high that makes those subjects think hastily but with increased confidence? If so, this data is not directly relevant, since we weren’t subjecting subjects in our studies to either time pressure or high stakes. Furthermore, even if it were relevant, it couldn’t explain the fact that we saw a difference when the cases were juxtaposed, since the same time pressures are mentioned there. Finally, even if it were only relevant in the non-juxtaposed cases, if the IRI-er about evidence were right, why wouldn’t we still expect a gap between high- and low-stakes non-juxtaposed cases, just shifted to higher numbers on the scale, since our results weren’t at ceiling for any of our cases?

    You go on: “Or perhaps asking a passerby about Main Street is broadly granted to be a pretty good ground of confidence even for subjects in High Stakes, and so it’s just not the kind of case that’s going to generate high stakes/low stakes contrasts.”

    Perhaps, but this fails to explain why people responded differently when the passerby cases were juxtaposed. It also fails to explain people’s responses in the non-juxtaposed drunks and street-signs cases.

    And: “Perhaps subjects are taking the expectation rather than the requirement reading of the “should” in the question “How confident should Kate be that she is on Main Street” in this probe.”

    Why would they only take it this way in the non-juxtaposed cases? Or, alternatively, if they take it this way in the juxtaposed cases, why does a difference emerge? (You might say that subjects suspect that Kate’s an interest relative invariantist about evidence, so expect her to have actually different levels of confidence in the two cases…but I doubt it.)

    As a general remark about these possible explanations: I would never claim that someone couldn’t explain our pattern of results by explaining each result as a consequence of some distinct group of hypotheses and the pattern as a random confluence of all those distinct explanations. I would just take our explanation as more parsimonious and powerful and, therefore, preferable.

    #3: You write: “As for the juxtaposed cases that Neta and Phelan explore, more caution is required to establish that subjects aren’t just producing a contrast effect as the result of compliance to conversational norms (I think this blog had a recent link to Simon Cullen’s nice discussion of Norbert Schwartz’s work on survey pragmatics).”

    That is a possibility. Of course, we were taking the similarity of the percentages of those who, when asked directly, say importance should affect confidence and the percentage of those who respond as IRI-ers about evidence in juxtaposed cases to suggest our preferred explanation. But that evidence can hardly be decisive. And, even if you are right, that would be cold comfort to one who would base philosophical theses on reactions to contrast cases appearing in philosophical papers, since someone judging contrast cases when reading a paper seems to be in roughly the same situation as one judging contrast cases when reading a survey.

    The last criticism has some bite, and has been noticed before. We left the case in since it fits with the overall pattern of judgments on all the surveys, but maybe the paper would be stronger if we took it out, or reran it to avoid this problem.

  51. Keith, regarding 45: I knew that you got Ram’s argument from the thread, and am sorry if I suggested otherwise. I just wanted to point out, for the benefit of those who hadn’t read it, that I didn’t take us to be committed to that in the paper.

  52. Hi Ram,
    As far as your point (1) is concerned, I’m confused about your confusion – are you suggesting that the natural tendency for high-stakes subjects to seek out extra evidence is itself irrational? I must be misunderstanding you. Or I haven’t explained the point very clearly. I hate to market myself, but you can check out my paper on knowledge ascriptions and the psychological consequences of changing stakes (on my website) if you want a less compressed explanation. On your point (2), you take issue with my claim that greater rational subjective confidence need not always require more evidence, but might be grounded in better processing of the evidence. I’m thinking of ordinary problems of bounded rationality here. It seems to me that my answer to the question of how confident I should be that p is the case should properly depend not only on the evidence I have for p, but also on how much time I’ve had to think about that evidence, whether I’ve judged this evidence heuristically or systematically, etc. If I’ve had only a few minutes to look at some complex body of data that seems initially to support p I should rationally be less confident in my judgment that p is the case than if I’ve had all day to mull it over, right?

  53. Hi Jennifer,

    Re 1: I did not for a moment mean to suggest that it’s irrational for high-stakes subjects to seek out more evidence. Rather, I meant to suggest that it’s irrational for them to have a lower degree of confidence, given the same evidence.

    Re 2: You say “It seems to me that my answer to the question of how confident I should be that p is the case should properly depend not only on the evidence I have for p, but also on how much time I’ve had to think about that evidence, whether I’ve judged this evidence heuristically or systematically, etc.” So let me ask: do you think that degrees of confidence are rationally required to be probabilistically coherent?

  54. I just emerged from a 13-hour flight and quickly read all the interesting comments on this thread. Starting backwards – in response to Ram’s second question, degrees of confidence for ideal reasoning situations, when one has infinite time to make decisions, should be probabilistically coherent. But one would think things are more complicated for ordinary reasoners, with limited time constraints. I’m assuming there is stuff on this in the theory of bounded rationality, as Jennifer mentioned (I characterize the notion of a ‘serious practical question’ in terms of bounded rationality in my book, so I think I can capture some of the data discussed in the psychology literature.)

    I highly recommend her Australasian Journal of Philosophy paper on this topic. As she makes clear there (and above), there has been a ton of work in psychology showing the effects of stakes on confidence, work that I became aware of by responding to Jennifer’s paper at an APA session. One might, as Jennifer does, seek to explain this data in terms that don’t threaten intellectualism about evidence. But it would be quite surprising if experimental philosophers were able by their distinctively philosophical methods to show that the results in this research were flawed.

    It did not in fact occur to me to use this research in support of anti-intellectualism about evidence. It just seemed to me that anti-intellectualism about knowledge coupled with intellectualism about evidence was a less plausible thesis. As I say in the PPR exchange, there are objections to it that are not objections to a more thorough going anti-intellectualism. But I have yet to make this case out in detail.

    The discussion on the thread has moved a little bit away from whether we in fact have stakes-sensitive intuitions, and more towards whether, if you present undergrads straightforwardly with bank-type cases, they will respond in the way contextualists or SSIers say they will. But let me suggest that the issue of whether the cases as presented in philosophy papers are the best way to test the stakes-sensitivity of epistemic notions is a much less interesting question than whether in fact judgments about epistemic notions are stakes-sensitive. I’m sure real psychologists would scoff at the set-ups we use in our papers.

    My own route to stakes sensitivity was as follows. Over many years of doing and teaching epistemology, I realized that judgments of whether someone knows (or is justified in believing) are stakes-sensitive. In particular, I realized that when I wanted my undergraduates to get the intuition that someone didn’t know something, I would raise the stakes in the situation (‘yeah? But suppose someone had a gun to your head…’). When I became aware of what I was doing, I also noticed other epistemologists regularly eliciting intuitions that someone doesn’t know something by raising stakes. I’ve been sensitive to this phenomenon for so long that I’m absolutely convinced something is going on there. If philosophers aren’t able to detect these effects in their initial forays into survey gathering, I’m much more likely to be skeptical of the experimental set-up than I am of the phenomenon.

  55. Mark – You suggest that the empirical data from the surveys I cite might not be relevant because they involved subjects put under financial penalty for inaccurate judgment. But both Keith and Jason’s bank cases involve subjects for whom a proposition becomes high-stakes exactly on account of the expectation of a financial penalty for inaccurate judgment. So if we’re wondering how we naturally perceive the confidence of subjects like that, then I think my data’s still in. Incidentally I think you’d get a stronger probe if you could make Kate’s stakes more vivid by explaining in the scenario why it matters so much that she should be somewhere on Main Street by noon, rather than elsewhere – the generic claim that ‘her life depends on it’ probably leaves too much latitude for wondering about Kate’s state of mind in calculating her (expected? rationally required?) confidence. (e.g. if we feel she really, really wants it to be true that she’s now on Main Street, we’ll expect her to be more confident than she is.) It’s a great strength of both Keith and Jason’s versions of the Bank cases that the subjects involved are clearly represented as being totally cool and rational throughout.

    Anyway, financial penalties for inaccuracy are one of many manipulations psychologists use to get their subjects to treat a proposition as high-stakes (others include financial incentives for accuracy, generating the expectation that either the judgment or the method of judging would have to be defended in front of either a hostile or sympathetic audience, even just the removal of anonymity from survey responses). There’s a general, across-the-board high-stakes effect (=increased cognitive effort, more accurate but less confident judgment), but there are some mild differences in this effect that do depend on the manipulation involved (like, for instance, we respond slightly differently to accountability measures when we perceive them as illegitimate). But I don’t think these differences give any reason to doubt the general correlation that IRI finds between higher stakes and increased stringency about knowledge (as Jason mentions, I also think this stringency is rational, but I’m not yet persuaded that we have to go to IRI to explain why this is so).

    On time pressure – again, the data are relevant not because your subjects were under time pressure, but because your subjects were being asked to judge the cognition of someone who might quite plausibly be perceived as being under time pressure. Because there is a correlation between people being rushed and people making higher-confidence judgments, we might expect this correlation to be reflected in our intuitive judgments concerning the confidence of rushed people. Just a suggestion.

    One more thing – I’d be a bit more excited about your report that 44% of subjects had the impression that stakes should matter to confidence if you had some explanation of why a sizable minority of subjects would pick up this (by your lights false) belief. Any thoughts?

  56. Wow: too much to respond to all at once. For now, I’ll just respond to an important point that Jason made above, and then I hope to reply to other things soon.

    Jason says: “degrees of confidence for ideal reasoning situations, when one has infinite time to make decisions, should be probabilistically coherent. But one would think things are more complicated for ordinary reasoners, with limited time constraints.”

    Now, let’s recall how this became an issue:

    In our paper, Mark and I assume the principle

    “(CE) The more or better one’s evidence for a proposition, the greater one’s confidence in that proposition ought to be, and vice versa.”

    We do not attempt to argue for CE, but we simply assume it. If CE is false, then that is a problem for our paper.

    Jennifer says that CE is false, because the “vice-versa” direction is not true. She writes “It seems to me that my answer to the question of how confident I should be that p is the case should properly depend not only on the evidence I have for p, but also on how much time I’ve had to think about that evidence, whether I’ve judged this evidence heuristically or systematically, etc.”

    I objected to Jennifer that proportioning degrees of confidence in accordance with such factors as how much time I’ve had to think about my evidence would lead to probabilistic incoherence: I could have less than .5 confidence in each of p and not-p. Since probabilistic coherence is a global constraint on rational degrees of confidence, this cannot be right.
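
    (A minimal sketch, with made-up numbers, of the coherence constraint being appealed to here: probabilistic coherence requires that one’s credence in p and one’s credence in not-p sum to 1, so if limited thinking time pushed both below .5, the resulting credences would be incoherent. The function and figures below are purely illustrative, not anything from the paper.)

        def is_coherent(cr_p, cr_not_p, tol=1e-9):
            # Negation requirement of probabilistic coherence: cr(p) + cr(not-p) = 1.
            return abs(cr_p + cr_not_p - 1.0) < tol

        # Hypothetical numbers: too little time to weigh the evidence either way,
        # so confidence in p and in not-p each sit at .4 ...
        print(is_coherent(0.4, 0.4))  # False: 0.4 + 0.4 = 0.8, incoherent
        # ... whereas credences proportioned to the evidence alone can still sum to 1.
        print(is_coherent(0.8, 0.2))  # True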

    Enter Jason’s comment above. But notice that Jason is careful not to say that probabilistic coherence is NOT a global constraint on rational degrees of confidence (that would be demonstrably false, given several plausible principles of decision theory). Rather, what he says is that “one would think things are more complicated for ordinary reasoners, with limited time constraints.” And I agree that, even though probabilistic coherence is a global constraint on rational degrees of confidence, things ARE more complicated with real agents. One way in which they are more complicated is that there are several other constraints, both global and local, on rational degrees of confidence at a time, besides probabilistic coherence. Another way in which they are more complicated is that not every violation of the requirements of rationality is a violation for which the agent can be blamed. (“OK, so I didn’t see that q followed from the conjunction of p and if p, then q: I was really busy!”)

    But this last point has gotten me to notice an issue that has not been brought up yet, but that really could be a potential problem for me and Mark. The issue is this: when Mark and I endorse CE, we are using the word “ought” to express rational requirement: I will go to the mat for CE so understood. But when students report how confident subjects “ought” to be in various propositions under various circumstances, are they hearing the word “ought” as expressing rational requirement, or are they hearing the word “ought” as expressing blamelessness? And, given the wording of our questions, would it make a difference if they meant one or the other?

    Now, I’m not sure how to figure out whether students are hearing the word “ought” as meaning something like rationally required, or rather as meaning something like blamelessness. So it seems to me that our best bet, if we want to check whether folk intuitions do or do not serve to defeat anti-intellectualism about evidence, is to design questions where it wouldn’t make a difference whether the students hear “ought” as meaning one or the other. Any suggestions as to how to do that?

    (It is at this point that we would be very grateful if the aforementioned real psychologists could suspend their scoffing for a moment to offer some constructive suggestions.)

    More later…

  57. Finally, a few free moments to respond to another of the important points raised earlier.

    Jason writes “It just seemed to me that anti-intellectualism about knowledge coupled with intellectualism about evidence was a less plausible thesis. As I say in the PPR exchange, there are objections to it that are not objections to a more thorough going anti-intellectualism. But I have yet to make this case out in detail.”

    Now, let me see if I understand exactly what’s going on. Back in 21, Jeremy suggested the following argument on Jason’s behalf:

    “1) Suppose that knowledge can interestingly come and go as practical interests change. (This is motivated only in part by reflection on cases, and also by theoretical considerations about the relationship between knowledge and action.) Call this “pragmatic encroachment” on knowledge. It’s one kind of anti-intellectualism about knowledge.

    2) Assume that another kind of intellectualism about knowledge is true in the following sense: there is no change in knowledge (or, rather, whether one is in a position to know) that p without a change in one’s strength of evidence for/against p.

    It follows that one’s strength of evidence can vary as practical interests change.”

    One worry I had about that argument is that I didn’t see anything at all to recommend premise 2. But then, as Aidan pointed out back in 37, what Jason writes in his PPR piece responding to me doesn’t look like the argument above. What Jason writes there is:

    “Neta’s first point is that IRI entails that someone with the same evidence for p and q might know that p, yet not know that q, because of her greater investment in the truth of p. Neta is right to emphasize that this kind of consequence is to be avoided, if possible. However, it is only a problem for certain versions of IRI. The purpose of my book was to defend IRI about knowledge, and I was officially neutral on IRI about other epistemic notions, such as evidence. But as I say in the conclusion of K&PI (p. 182), ‘… my own view is that all epistemic notions are interest relative.’ Neta’s discussion provides further support for the view that interest-relativity affects all epistemic notions.”

    But then, as I mentioned in reply to Aidan, the counterexample that I attempted to construct in response to anti-intellectualism about knowledge stipulates nothing whatsoever about the subject’s evidence. In fact, the stipulations of the counterexample were made in almost exclusively non-epistemic terms (except that I did say that the subject has no “special reason” to believe that either street sign is inaccurate, and no “special reason” to believe that either street sign is any more likely to be accurate than any other street sign — I take these stipulations not to imply anything very specific about the subject’s body of evidence).

    Now, given that the example does not stipulate anything about the subject’s evidence, on what grounds can it be claimed that the example’s effect (if any) against anti-intellectualism about knowledge depends upon presupposing intellectualism about evidence?

    Of course, since Mark and I are criticizing a particular thesis, and not attempting to criticize any particular argument for that thesis, this whole issue is independent of our paper. But, like Keith and Jeremy and Aidan and others, I’m still interested in it.

  58. Mark and Ram,

    Independently of the whole question as to whether you succeed in refuting IRI, I think these are some pretty exciting and intriguing results!

    I was curious to hear whether you might have any hypothesis as to why some people in the within-subjects condition think we should assign lower confidence in the high stakes case. Do you think they might be inferring this conclusion from certain premises about the role of evidence in practical reasoning? Or is there some other mechanism afoot here?

  59. Excellent question Josh! But since I just have a moment now, rather than try to address it (which Mark is anyway more qualified to do than I am), I’ll reply to yet another point of Jason’s in his 56 — a point that is, in fact, closely related to your question.

    Jason writes: “My own route to stakes sensitivity was as follows. Over many years of doing and teaching epistemology, I realized that judgments of whether someone knows (or is justified in believing) are stakes-sensitive. In particular, I realized that when I wanted my undergraduates to get the intuition that someone didn’t know something, I would raise the stakes in the situation (‘yeah? But suppose someone had a gun to your head…’). When I became aware of what I was doing, I also noticed other epistemologists regularly eliciting intuitions that someone doesn’t know something by raising stakes. I’ve been sensitive to this phenomenon for so long that I’m absolutely convinced something is going on there.”

    Well, like Jason, Mark and I are also absolutely convinced that something is going on there. What’s going on there is an effect (in roughly half the population) of JUXTAPOSING the low-stakes and high-stakes cases. So yes, Jason is absolutely right that something is going on there, and we agree. The question is: what is it that’s going on there, and under what conditions does the effect appear or disappear? The problem with answering this question by saying that people’s intuitions about cases are being guided by some appreciation of the truth of an IRI doctrine is that this “explanation” is at odds with a lot of other data (e.g., about non-juxtaposed cases).

  60. I’d like to echo the worry others in this thread have raised about publishing a very strongly worded conclusion on the basis of evidence that, though generally quite interesting, doesn’t support the conclusion. You’ve found a strong anti-intellectualist tendency in the juxtaposed cases. Given that the conclusion you seem to want to draw regardless of the data is that anti-intellectualism is false, you decide to give some kind of error theory of what is going on in the juxtaposed cases. But that is a bizarre reaction to the data you’ve obtained (suggesting that you were committed to a particular conclusion in advance of inquiry). Why think the error is in the juxtaposed cases, rather than the non-juxtaposed cases? In the non-juxtaposed cases, certain important features of the situation aren’t sufficiently salient. So people make mistakes. The juxtaposed cases raise these features to salience, to allow people to be more clear about their judgments.

    The anti-intellectualist doesn’t think that stakes sensitivity of epistemic notions is obvious – otherwise it wouldn’t be a controversial new position. The naive theory of knowledge is intellectualist. The juxtaposed cases raise to salience a feature that isn’t part of our naive theory of knowledge. In these cases, we see the recognition of practical stakes on epistemic notions most clearly. But *why* practical stakes matter to the obtaining of epistemic relations is not a fact determined by intuitions. It emerges from philosophical reflection on the value of knowledge, and in particular, the role it plays in practical rationality.

  61. Ram,

    You write, concerning my reply to you in the PPR piece:

    “Now, given that the example does not stipulate anything about the subject’s evidence, on what grounds can it be claimed that the example’s effect (if any) against anti-intellectualism about knowledge depends upon presupposing intellectualism about evidence?”

    My point was that the anti-intellectualist about evidence has a simple response to your objection in your PPR piece. Whether a method of acquiring beliefs provides good enough evidence for X’s belief that p at time t depends upon what is at stake for X at t. If a lot is at stake in finding Main Street, a certain method M of gathering evidence might not be sufficiently good evidence, not just for X’s belief that p at time t, but for any other belief X might form on the basis of M at t.

    I thought this was clear from the exchange, so I may be confused about what is confusing you?

  62. In response to Jason at #62:

    First, let me point out that we haven’t ‘published’ our strongly worded conclusion. This paper should be viewed as a work in progress, and I think Ram and I would both agree now that we should back off of that strong conclusion since, as Ram, Jason and others have pointed out, it isn’t warranted by our challenge to one strand of evidence for anti-intellectualism.

    Regarding the suggestion that we were committed to a particular conclusion in advance of inquiry, let me assure the readers that we were not COMMITTED to a particular conclusion in advance of inquiry. We did expect a certain pattern of responses to our surveys, but that certainly does not impugn our results. Anyone who runs a study expects a certain reaction to it, or what would be the point of running that particular study? Now, in this particular case, the reaction we expected was not the one we got. What we expected was for those who got non-juxtaposed cases to judge in accordance with IRI about evidence–that is, to accord less rational certitude to the high-stakes case and more rational certitude to the low-stakes case–and we expected the results to be closer together for the juxtaposed case. When we got the unexpected results we did get, we were at a loss as to how to explain these for some time. Finally, we settled on the relevant explanation, and ran the last experiment in the paper in an attempt to test this.

    You also write: “Why think the error is in the juxtaposed cases, rather than the non-juxtaposed cases? In the non-juxtaposed cases, certain important features of the situation aren’t sufficiently salient. So people make mistakes. The juxtaposed cases raise these features to salience, to allow people to be more clear about their judgments.”

    This may be correct. I’m trying to think of some way to test this hypothesis. Maybe by seeing if juxtaposing cases involving differences only between certain features agreed to be irrelevant also elicits a difference in subjects’ responses (though, if it did, it would probably be best to adopt Jennifer’s hypothesis involving survey pragmatics instead of ours of tacit belief–but both are equally problematic to one who would argue for IRI about evidence from intuitions about contrast cases). But, in the absence of such additional experimental evidence, notice that the hypothesis that the error is in the juxtaposed cases has something going for it that your hypothesis doesn’t: As we mention in our paper, it doesn’t take juxtaposition to raise the reliability of the source of information to a level of salience sufficient to cause that clearly relevant feature to make a difference to confidence judgments. Why would that be required for practical stakes? Perhaps stakes aren’t so important to evidence after all–at least they seem much less important than reliability of source. In any case, our view doesn’t face this additional explanatory burden.

  63. “Why think the error is in the juxtaposed cases, rather than the non-juxtaposed cases? In the non-juxtaposed cases, certain important features of the situation aren’t sufficiently salient. So people make mistakes. The juxtaposed cases raise these features to salience, to allow people to be more clear about their judgments.”

    Of course, even when the feature is made salient by juxtaposition, still only about 43% of people exhibit the IRI-tendency.

  64. Mark is in the process of replying to a number of points that have been raised, but the blog is for some reason not accepting his posts. Since I know he’s replying to several points being made, I’ll confine myself to replying to just one for now, and see if I need to add anything else after his posts:

    Jason writes: “Why think the error is in the juxtaposed cases, rather than the non-juxtaposed cases? In the non-juxtaposed cases, certain important features of the situation aren’t sufficiently salient. So people make mistakes.”

    Now, if it is true that “in the non-juxtaposed cases, certain important features of the situation aren’t sufficiently salient”, then of course that is a problem with our study. But we tried hard to make the features in question salient in the phrasing of our questions in the non-juxtaposed cases. Perhaps we should have put the phrases expressing the relevant features into bold, or all caps, or underlining? Any other suggestions?

    More later…

  65. Jennifer, thanks for the insightful comments in 57. I’ll respond to some of these:

    You write: “You suggest that the empirical data from the surveys I cite might not be relevant because they involved subjects put under financial penalty for inaccurate judgment. But both Keith and Jason’s bank cases involve subjects for whom a proposition becomes high-stakes exactly on account of the expectation of a financial penalty for inaccurate judgment. So if we’re wondering how we naturally perceive the confidence of subjects like that, then I think my data’s still in.”

    The reason I thought the motivated cognition data might not be relevant had nothing to do with the fact that it involved financial penalty. My concern with the relevance of that data had to do with the fact (I believe, but there may be studies of which I’m not aware) that it is looking at how subjects (of studies) judged their own confidence when they themselves faced a financial penalty. Keith’s, Jason’s, and our cases all involve asking people (readers or experimental subjects) to judge the rational (hopefully) confidence of agents in cases who face financial penalty. Subjects in the data you cite are arguably judging the probability that something is true, whereas our subjects are–analogous to the relevant philosophical cases–judging a subject’s confidence. That the former kind of judgment changes with financial penalties to the subjects of the studies does not tell us directly whether the latter kind of judgment changes with financial penalties to the agents in the cases subjects are judging. That was the disanalogy I was worried about.

    I still doubt that time constraints on agents in the cases are making a difference in subjects’ judgments, since, as I mentioned in 51, they’re standardized across all cases, and there’s still room for subjects to rank rational confidence higher on our Likert scales. But we could always check just to be sure. And I see your point that the force of my first argument against the relevance of time constraints is somewhat mitigated by the prima facie reasonable claim that: “Because there is a correlation between people being rushed and people making higher-confidence judgments, we might expect this correlation to be reflected in our intuitive judgments concerning the confidence of rushed people.” (Though there might be some reason not to expect this too, given such well-documented effects as the actor-observer bias–a tendency to explain the actions of others in systematically different ways than we explain our own.)

    Your other suggestions are very helpful. We should certainly rerun these studies making the stakes more vivid. We could also easily run one where Kate is as cool as the center seed of a cucumber. If we philosophers had money we could offer a financial incentive to our subjects. At least we could have them put their names on the surveys in an experiment, as you suggest. Thanks to you, and everyone else, who has given such good suggestions to us on this comment thread. This certainly provides a nice blueprint for future research.

    Finally, we may be able to get only limited insight into the question about how a sizeable minority of subjects could have picked up the false belief that stakes matter to confidence, since I doubt that these subjects consciously formed this belief, say, as the result of reading Jason’s work on IRI. But notice that this is presumably not only our problem. If we can assume that failure to select a feature on this survey reveals a belief that the feature is irrelevant, then the proponent of IRI (about evidence) must explain how a majority of subjects formed the (by the proponent’s lights) false belief that stakes don’t matter to evidence.

  66. Mark,

    You write:

    “Subjects in the data you cite are arguably judging the probability that something is true, whereas our subjects are—analogous to the relevant philosophical cases—judging a subject’s confidence. That the former kind of judgment changes with financial penalties to the subjects of the studies does not tell us directly whether the latter kind of judgment changes with financial penalties to the agents in the cases subjects are judging. That was the disanalogy I was worried about.”

    The proponent of IRI about evidence claims that our judgments about the quality of evidence are affected by practical stakes. That is the most straightforward explanation of the data Jennifer reports (though, as Jennifer has argued in her work, it may not be the best explanation). Insofar as your work suggests that in certain kinds of experimental set-ups (in particular, non-juxtaposed situations) we do not make such judgments about *others* in high-stakes situations, it raises a puzzle. There is a prima facie tension between the psychological studies and the studies you philosophers have been doing. Perhaps this is attributable to some kind of first-person/third-person asymmetry. Or perhaps it is attributable to flawed experimental design by either the psychologists or you guys.

    You have been defending your experimental design by arguing that it is similar to (though somewhat different from) the bank-type cases discussed by philosophers. But surely the interesting question is not whether the specific cases discussed by philosophers support IRI about evidence, but whether there is good evidence that stakes undermine rational confidence. And the latter seems strongly supported by extant psychological literature.

  67. Here are some quotes from the 1990 Kunda survey article in Psychological Bulletin brought to our attention by Jennifer Nagel:

    “The work on accuracy-driven reasoning suggests that when people are motivated to be accurate, they expend more cognitive effort on issue-related reasoning, attend to relevant information more carefully, and process it more deeply, often using more complex rules.”

    “In these studies, accuracy goals are typically created by increasing the stakes involved in making a wrong judgment or in drawing the wrong conclusion, without increasing the attractiveness of any particular conclusion.”

    “In sum, the case for accuracy-motivated reasoning appears quite strong. In the above studies subjects had no reason to prefer one conclusion or outcome over another; their sole goal was to be accurate. The evidence that people process information more carefully under such circumstances is considerable and persuasive.”

    In short – according to this summary of a very large body of empirical research in psychology, when you increase the stakes involved in making a wrong judgment, people regard ordinary methods of gathering evidence as less reliable than they ordinarily do. The most *straightforward* explanation of this large body of research in psychology is that IRI about evidence is true.

  68. “But surely the interesting question is not whether the specific cases discussed by philosophers support IRI about evidence, but whether there is good evidence that stakes undermine rational confidence. And the latter seems strongly supported by extant psychological literature.”

    I would agree that the first is not an interesting question. What does strike me as an interesting question is whether the philosophical method of contrasting cases is a good way to get evidence in favor of certain views. Our studies suggest that it may be problematic to claim on the basis of reactions to contrast cases that our intuitions about a certain concept ordinarily (i.e. in non-contrast cases) vary in reaction to certain features which are manipulated in these contrast cases.

    “The proponent of IRI about evidence claims that our judgments about the quality of evidence are affected by practical stakes.”

    The studies you and Jennifer cite suggest that this claim is true of first-person judgments of the quality of evidence. Our studies suggest that this claim is not true of third-person judgments of the quality of others’ evidence. (So far as I can see, neither of you has cited empirical data that contradicts this.) Therefore, our studies present a challenge to the generality of the claim made by the proponent of IRI about evidence.

  69. Hi Jason,

    Concerning your question in note 63 about our PPR exchange. Here’s what you say:

    “My point was that the anti-intellectualist about evidence has a simple response to your objection in your PPR piece. Whether a method of acquiring beliefs provides good enough evidence for X’s belief that p at time t depends upon what is at stake for X at t. If a lot is at stake in finding Main Street, a certain method M of gathering evidence might not be sufficiently good evidence, not just for X’s belief that p at time t, but for any other belief X might form on the basis of M at t.”

    And now here’s what puzzles me: to whatever extent this reply works, what makes it work seems — so far as I can tell — to have nothing whatsoever to do with any view about evidence. Why couldn’t you just as well have said the following:

    Whether a method of acquiring beliefs provides X with knowledge that p at time t depends upon what is at stake for X at t. If a lot is at stake in finding Main Street, a certain method M of forming beliefs might not provide knowledge, not just for X’s belief that p at time t, but for any other belief X might form on the basis of M at t.

    In other words, suppose you eliminate the reference to evidence, just talk about methods of forming beliefs, and you lose … what? Is your reply to me now any worse off than it otherwise would have been?

  70. Jason writes, in 69, “In short – according to this summary of a very large body of empirical research in psychology, when you increase the stakes involved in making a wrong judgment, people regard ordinary methods of gathering evidence as less reliable than they ordinarily do.”

    But I don’t see a single quote in 69 to substantiate this interpretation of the summary. There’s nothing about people regarding certain “methods of evidence gathering” as “less reliable”. What the survey article says is just that, when the stakes go up, people expend more effort with the aim of accuracy. But that’s perfectly consistent with every position (including ours) on the debate concerning anti-intellectualism about evidence! Suppose my current evidence makes it the case that I should be .8 confident that Obama will win the election. Now you ask me to bet $10 at 1:1 odds that Obama will win. Fine by me. But now suppose you ask me to bet my life savings at 1:1 odds that Obama will win. No way! Given the massive cost to me of being wrong, I better go out and gather more evidence before I make such a bet! And that’s NOT because I have less than .8 confidence, or because I ought to have less than .8 confidence. It’s because I’m not willing to take a .2 chance of losing my life savings!

  71. Ram,

    Umm…it wouldn’t really be an interesting and controversial topic of psychological research if the upshot was that people rigorously adhere to expected utility theory. The upshot of the research is rather that two people with different stakes, confronted with the same piece of evidence, report it as being differentially persuasive. In particular (judging e.g. from the description in Kunda’s summary article of the 1979 Petty & Cacioppo study), people with different stakes report different confidence levels in a proposition, when given the same piece of evidence. They do not (as you claim they should) report having the same degree of confidence, but act on that basis differently.

    In short, the data you are getting in your study directly conflicts with this data. So someone is doing something strange. I suspect your non-juxtaposed cases do not really elicit stakes-sensitive intuitions. Perhaps we need something like the techniques used by psychologists (on the other hand, the fact that you are getting substantial stakes-sensitive intuitions in your juxtaposed cases shows that one can perhaps elicit them more cheaply than giving subjects an actual financial incentive).

  72. Jason writes: “The upshot of the research is rather that two people with different stakes, confronted with the same piece of evidence, report it as being differentially persuasive.”

    How does it follow from the thing said here to be the upshot of the research that raising stakes lowers RATIONAL confidence? When people are under pressure, they make all sorts of mistakes.

  73. Ram,

    For better or for worse, the view I hold in epistemology is that it is in fact rational to lower one’s confidence in response to raised stakes. No doubt, you find this philosophical position counter-intuitive. But I thought the dispute we were having on this blog was not over whether philosophers (or psychologists) find this conception of rationality counter-intuitive, but whether the view is reflected in basic folk intuitions.

    If you’re now prepared to concede what the psychological literature appears to show, that, contra your paper, we do in fact lower our confidence in a belief in response to raised stakes, then we can move on to the question about whether the anti-intellectualist account of these intuitions is philosophically defensible.

  74. Jason,

    I don’t think you characterize our disagreement accurately. I agree with everything you say in the first paragraph of 73. I also agree that we do in fact lower our confidence in a belief in response to raised stakes. And I agree that the psychological literature shows this. So on all those points, we are, and have been throughout, on the same page.

    The only point you make in 73 that I do not agree with is the one that you express by including the phrase “contra your paper” in your second paragraph. Our paper makes no predictions at all about how people’s actual degrees of confidence will be affected by raising stakes. Our paper makes predictions only about how people’s rational degrees of confidence will (or will not) be affected by raising stakes.

    So what I was asking you in 72 was just this: how is it possible to glean, from the psychological literature that you cite, any predictions about how people’s rational degrees of confidence will or will not be affected by raising stakes?

  75. Ram,

    Here is how the psychological literature bears on rational credences. First, it shows that people in fact change their degrees of confidence in response to raised stakes. If people try to align their subjective credences with their epistemic (rational) credences, then we have the link to rational credences in place. Since I believe that people do in general try to align their subjective credences with their rational credences, we have the desired conclusion.

    On my view in epistemology, in many cases in which people lower their credences in response to raised stakes, they are in fact correctly aligning their subjective credences with their rational credences. So my view in epistemology gives a charitable interpretation to people’s tendencies to lower their credences in response to raised stakes. The opposing intellectualist view is not charitable. It says that people, when facing raised stakes, make mistakes about their rational credences (since they are trying to align their subjective credences with their rational credences).

  76. Jason, you write: “If people try to align their subjective credences with their epistemic (rational) credences, then we have the link to rational credences in place. Since I believe that people do in general try to align their subjective credences with their rational credences, we have the desired conclusion.”

    As you recognize yourself, the link between a drop in actual credence and a drop in rational credence comes from the claim that people SUCCESSFULLY align their subjective credence with their rational credence. So, you say, your view is more charitable than our intellectualist view because it grants that people’s changes in actual credence are rational.

    Now, that’s a helpful clarification of how you’re thinking about the bearing of the psychological literature on the present issue about evidence. My worry, though, is about whether the principle of charity should be applied to our behavior, to make that behavior come out as rational as possible, or whether it should be applied to our intuitive judgments about rationality, to make those intuitive judgments come out as true as possible. What this exchange shows, I think, is that, in this particular instance, you get incompatible results depending on what you apply the principle to.

    In fact, though, this sort of conflict should be clear from our data, which suggests that people (at least 44% of people) tend towards the view that anti-intellectualism is right, even though their own verdicts about particular hypothetical cases are (assuming CE, which is itself neutral concerning whether intellectualism is true) in conflict with anti-intellectualism. So there’s just no way of reading their thinking as fully consistent.
