Two kinds of pragmatic enroachment

I have to weigh in on what I take to be McGrath’s side on the debate concerning salience and pragmatic encroachment. Kvanvig, in his earlier post, suggested that one could account for “pragmatic enroachment” by appealing to the unfortunate epistemological effects of “having a belief or concern that further inquiry would not be a waste of time”. In particular, Jon proposes that such a belief may serve as an internal defeater of other reasons one may have for a belief. McGrath’s response is to emphasize that this move isn’t going to help when a subject S intuitively doesn’t know that p, because the costs are high for S, even though she lacks the belief that further inquiry would not be a waste of time. So consider a subject S in a DeRose bank case who isn’t aware that she has a check coming due. On certain “pragmatic encroachment” views (e.g. the one I’m working on), it may still be the case that she doesn’t know that the bank is open, though someone with her same evidence does know that the bank is open (since the costs are lower for that other person). Since S isn’t aware that she has a check coming due, she lacks the belief that checking further about the bank’s opening hours would not be a waste of time — in fact, she thinks it would be a waste of time. So Kvanvig’s suggestion is powerless to explain this kind of “pragmatic encroachment”.
In general, there are two versions of Subject Sensitive pragmatic enroachment theories that need to be distinguished. According to the first (which I think one sees in Hawthorne’s chapter 4), what is epistemologically relevant is what the subject believes to be her interests. According to the second, what is epistemologically relevant is what the subject’s interests actually are, rather than just her beliefs about her interests. Those who, like me, are eager to make mere salience epistemologically irrelevant emphasize in our work the latter kind of subject sensitivity, rather than the former (and I think Fantl and McGrath are in this latter camp as well). Kvanvig’s move is only directed towards the first of these subject sensitive approaches.


Two kinds of pragmatic enroachment — 27 Comments

  1. Pingback: Certain Doubts » Actual or Perceived Stakes? – for Contextualists

  2. Jason is exactly right about what my proposal can and can’t explain. It can’t explain intuitions where knowledge isn’t present because the actual costs are significant, such as in the bank case (she has a check coming due, has the same casual evidence about when the bank opens as another person who has nothing particular at stake, and so is claimed not to know when the bank opens).

    These examples, I think, are the hardest on the invariantist–they are just the kinds of examples that conflict with our strong sense that knowledge involves a purer relation to the truth than such views allow. I’m not saying that we might not have to give this idea up, but at least the contextualist can preserve some of the purity of knowledge here, whereas the invariantist has to give it up entirely.

    My view is much more hardnosed here. I can see going contextualist here and adopting the viewpoint of the speaker to account for the loss of knowledge. But if I refuse to take the speaker’s point of view, and insist that knowledge ascriptions must only be subject-sensitive, then the cases where actual interests come into play simply don’t move me at all. That is, I don’t share the intuition even in the slightest in the bank case that Jason has. When I shift perspectives and think like Keith from the speaker’s perspective, I do see the point of the example as a reason for contextualists to insert pragmatic considerations. So, my view is that to the extent that I’m tempted by contextualism, I’m tempted to account for the cases; and if you convince me that contextualism is false, then I think it is a mistake to think that knowledge is sensitive to actual pragmatic factors (as opposed to ones that would factor into some notion such as expected utility).

    Jason, as you somewhat misleadingly put it last fall while here, on your view, “the more you care the less you know.” To the extent that your caring gives you reasons for action from your own perspective, then I think you and Matt hold similar views (and that view is very hard for the epistemicist to respond to). But if your caring does not line up with reasons for action (suppose, e.g., that a meteor will strike New Brunswick next fall with no warning, so even though this would matter to you, you’d have no reason to turn down the offer they made you from your perspective), I don’t have any inclination to diminish the set of things you know, except by adopting my own perspective on your situation, a perspective from which it is assumed that you’re a dead man come fall (I apologize for the morbidity here…).

    There’s a lot of mere confession in these claims, maybe little else, and other people’s intuitive reactions may differ. But these are mine, and I embrace them!

  3. Jon,

    For what it’s worth, I once again agree with your general sentiments. But I think you give up too much. So, you say, of this example

    JS: Since S isn’t aware that she has a check coming due, she lacks the belief that checking further about the bank’s opening hours would not be a waste of time – in fact, she thinks it would be waste of time. So Kvanvig’s suggestion is powerless to explain this kind of “pragmatic encroachment”.

    Kvanvig: These examples, I think, are the hardest on the invariantist–they are just the kinds of examples that conflict with our strong sense that knowledge involves a purer relation to the truth than such views allow.

    I feel no pull whatsoever to say that S doesn’t know that the bank is open. If pragmatic considerations–costs and such–are going to encroach on knowledge, they must do so via justified beliefs S has about the costs and the general circumstances. If S is justified in thinking it would be a waste of time (she has good reason to think she has no check coming due, etc.), then I feel no pull in the direction of saying she doesn’t know when the bank opens. But perhaps I don’t have the same strong sense “that knowledge involves a purer relation to truth than such views allow.” If the subject is justified in believing that these circumstances that would up the cost don’t exist, I don’t see how the costs can encroach on knowledge; indeed, it seems to me that if we encounter a view on which they can so encroach, we’ve a *prima facie* reason to reject the view. So, I would stand up and be proud of the impotence of your view on this issue.

    It seems to me that the only way this additional information about costs might affect knowledge is in the following sense. Perhaps, it might be the case that the speaker truly says “I know when the bank opens.” Then, she comes to have a justified belief that she has a check coming due, and it’s important that she be at the bank exactly when it opens. Perhaps then a second utterance of “I know when the bank opens” might be false. The semantics of “knows” in her idiolect have changed, and the two sentences will express different propositions in light of the new information she has acquired reasonably. But what has happened here is the speaker now is using “knows” to express a relation which requires much more justification for its holding than she did before she came to believe reasonably that there were stakes to her knowing exactly when the bank opens. I can buy this much. I’m not sure what this makes me, other than a philosophical neanderthal.


  4. Well, I agree in large part with your New Brunswick-meteor example; for me, what this shows is how difficult it is to figure out a formula for when practical interests become epistemically relevant. Hopefully, there is some kind of division between cases, and the New Brunswick-meteor example falls on one side, and the more intuitive cases fall on the other. So I see that latter example as evidence to be accounted for in a theory that incorporates the epistemic significance of practical interests. Of course, maybe no theory will be able to place all the good cases on one side, and the bad cases on the other, but the question has only started to be investigated.

    Perhaps my egregious slip-up in the header was sub-consciously due to the difficulties I’ve been having finding such a neat account…

  5. Matt,

    Pardon my interjection, but I was curious about this claim of yours,

    If pragmatic considerations–costs and such–are going to encroach on knowledge, they must do so via justified beliefs S has about the costs and the general circumstances.

    Why think that? If we allow that costs can encroach on knowledge, I don’t see the motivation for such a restriction.

    Consider an analogy. We have no problem in Gettier cases saying that the subject lacks knowledge because of some environmental factor, even when she is completely unaware of said factor. Moreover, even if, say, she is perfectly justified in believing that there are no barn facades around, she still doesn’t know that she’s looking at a barn. In light of this, I imagine that a proponent of Jason Stanley’s approach could say: and likewise, in the bank case, even if she is unaware of the costs, or even if she is justified in believing that there will be no such costs, she still doesn’t know the bank will be open.

  6. John,

    I agree; there is a definite similarity between this and Gettier cases. Very roughly, in each case, there is a true proposition of which the subject is (non-culpably) unaware that plays an important role in undermining knowledge. But, admitting that Gettier cases succeed doesn’t open the door in general for non-internal defeaters. Some externalists will say that this (that there are non-internal defeaters) is an important lesson to be learned from Gettier cases. So, suppose I’m looking at an object, and it appears to be red to me. I have strong justification for this belief; I am justified in believing my perceptual faculties to be reliable and nothing to be amiss with my environment. Suppose though, that there is a red light shining on the object. The object is actually white. I am unaware of the light, and have no reason to believe that there is any such manipulation of my environment (I have reason to think there is no manipulation). Does the truth of the proposition *There is a red light shining on the object* serve as a defeater for my justification for the belief that the object is red? I think not, unless I have reason to believe this proposition is true. Some with more externalist intuitions might concede that I’ve no rationality defeater, but hold that I still have a defeater for my justification (or warrant). My intuitions don’t run in this direction.

    The cost situation is a bit more complicated, because the relevant proposition operates differently than a defeater (or the standard sorts of defeaters in the literature). If I come to know about the high stakes, the original proposition *I know that p* still can be true. But because of my awareness of the stakes, “I know that p” comes to express a different proposition, and it’s false. I don’t see how the semantics of “knows” when I utter it changes unless I am aware of the stakes being raised.



  7. Matt–I think what you’re angling toward is contextualism, where the semantics of “knows” allows it to be true in one context that you know but false in another context (even though your informational state is the same in both). Does that seem to be what you want?

  8. Well, in some obvious sense the semantics of “knows” does change, as I suggested with the questions posed to a 101 class example. The interesting question is whether it changes in the sorts of circumstances the contextualists think it does. I was granting the contextualist point in my response to John. In general I sort of like semantic shifting, and I wouldn’t be averse to it in this case. But I think that two things keep getting run together in the discussion. The *sentence* “I know that p” may be true at one time t and false at a later time t1 in spite of the fact that my evidence base for p is identical. But, no single proposition changes in truth value. The sentence expresses different propositions (if I’m going to follow this line) in the two different contexts. Both propositions can be true at the same time–the proposition expressed by the sentence token at t may be true at t1 because its truth conditions call for less justification than do the truth conditions for the proposition expressed by the sentence at t1. The way contextualism is stated often suggests a lack of epistemic supervenience. This isn’t the case, at least the way I’m spinning this. You don’t have two epistemically identical circumstances, one in which *I know that p* is true and one in which *I know that p* is false. You have two epistemically identical circumstances, one in which “I know that p” is true and one in which “I know that p” is false. So, no lack of supervenience.


  9. I can’t see how salience of error can do the work Jon Kvanvig wants it to do. Consider the airport case again. Suppose we agree that Mary and John ought to check further. Now, imagine a new airport case, just like Cohen’s original, except for the fact that Mary and John never worry; they still ought to check further. In this new case, Mary and John are aware of just how much is at stake, of their various options, etc., but laziness overcomes them. They just refuse to let themselves be worried. I think we have to admit that, if Mary and John ought to check further in the original case, then they ought to check further in this new case.

    If that is right, we have to ask: do Mary and John know that just getting in line to board will have the best results in this new case? If we say they don’t know this, we can’t explain their not knowing by appealing to salience of error. If we say that they do know this, then we’re stuck having to admit that this is a situation in which people ought to do one thing (check further) which they know is worse than another (just getting in line to board). So either you allow pragmatic encroachment of a kind that undermines epistemicism or you reject a very powerful principle relating knowledge to rationality of action. I’m inclined to allow the encroachment.

  10. Matt (McGrath)–I think your examples are very probing and hard to see a way around. In my earlier post, I began to worry a little about the notion of “ought to worry”. A minor concern here is that the notion of worrying about the chances of error is a slogan for a detailed account of how salience is supposed to work, which I haven’t given and don’t have! But notice you leave some room here, since (unlike Jason’s approach), you’ve got John and Mary aware of the risks involved. Once you go this far, I’m not sure how you can also claim that the risks of error are not salient for them, but I guess this is my problem, not yours, since I haven’t given salience conditions.

    A larger issue is about the decision theoretic background here. I take it that the notion of what a person ought to do is a matter of what actions are rational in the sense a good decision theory will capture. So at the very least, there needs to be some role in the Mary/John case for the notion of expected utility. If that were all that is needed, Matt is clearly right that they ought to check when they worry iff they ought to check when they don’t worry.

    But it is plausible to think that the right decision theory is going to include subjective factors beyond attitudes about the likelihood of various outcomes. The one I focused on was one’s constitutional dispositions toward risk-taking: some people are simply much more comfortable with, and willing to take, risks than others. Given this, what a person ought to worry about should take into consideration their degree of risk-tolerance (though there need to be limits here since one’s risk-tolerance can itself be irrationally high or low).

    In the second case you imagine, you say they don’t worry because “laziness overcomes them.” As I imagined the role of risk-tolerance in an account of rational action, I don’t think this would be possible. If your risk-threshold is exceeded, then you show some signs of emotional discomfort; it’s not as if it takes effort to begin to worry.

    So here’s what I see. There are still Jason’s examples where the person is completely unaware of the risks involved, and I don’t have a good answer to those cases (in part because I just can’t see the attraction of making knowledge contingent on such factors). Then there are your cases, where the person is assumed to be aware of the risks, but you want to deny that the risks are salient for the person. Salience requires that the risks strike the person as important ones, so there will be cases where we imagine one person in two distinct settings with the same epistemic parameters, but where the risks strike the person as important in one case but not in another. The way that can happen is that we imagine the person’s risk tolerance changing in the two cases. But here I think the right account of rational action and epistemic justification gives different answers in the two cases: if the degrees of risk tolerance are within some normal range, then what’s rational at the high end of the range need not be rational for those at the low end, and what you’re justified in believing depends on whether you take the chance of being wrong to be significant in a given case. So for now, my reaction to the case you imagine is that I’m not convinced the risks are salient in the original airport case iff they are salient in your amended case.
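    Jon’s appeal to risk-tolerance can be put in a toy sketch. Everything below is an invented illustration (the numbers, the `ought_to_check` helper, and the threshold model are my assumptions, not anything from the thread): two agents with identical credences and identical stakes can rationally differ over whether to check further, simply because their risk thresholds differ.

```python
# Toy model: same subjective probability of error and same stakes for both
# agents; only the risk threshold differs. Illustrative numbers only.

P_ERROR = 0.05          # subjective probability the belief is false
COST_OF_ERROR = 30.0    # disutility of acting on a false belief

def ought_to_check(risk_tolerance):
    """Check further iff the expected loss from error exceeds what the
    agent is comfortable absorbing (a stand-in for a 'risk threshold')."""
    expected_loss = P_ERROR * COST_OF_ERROR   # 1.5, identical for both agents
    return expected_loss > risk_tolerance

print(ought_to_check(risk_tolerance=2.0))  # risk-tolerant agent: False
print(ought_to_check(risk_tolerance=1.0))  # risk-averse agent: True
```

    On this toy picture, whether one "ought to worry" is not fixed by credence and stakes alone, which is the room Jon is claiming for salience.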

  11. Matt (Davidson): I think saying that the semantics of “knows” changes is not the right description. The proposition expressed changes, on your view, from context to context, but we should expect one uniform account of the meaning of “knows” to yield such a result. That, I take it, is just what a contextualist semantics does: it introduces a parameter that can differ in different contexts, with the result that the same semantics allows different propositions to be expressed in different contexts.

    I agree that contextualists should tell us what happens when we disquote, so that we get a theory of knowledge out of the view and not just a theory of ‘knows’. I think the primary exponents of the view–Cohen and DeRose–aren’t confused on this point and don’t confuse the truth of sentences with the truth of the propositions expressed.

  12. I completely agree, of course, with Matt McGrath. I think there is a continuum between the cases — if one agrees that in the original Cohen case, John and Mary don’t know, then one should not think that irrational confidence allows them to know. It is but a short step to see that unawareness of the costs at issue should also not have a positive epistemological effect!
    Once one takes this final step, one can’t do the relevant work with *expected* utility — it just has to be the utility of the agent that is at issue.
    I’m still a little unclear what view Matt Davidson is suggesting. On a subject-sensitive invariantist view, two knowledge ascriptions to two different people with the same evidence base may differ in truth-value, even though the property ascribed to them is exactly the same (e.g. the property of knowing the bank is open at 2 p.m.). Of course the knowledge ascriptions express different propositions, but only because they are ascriptions to different people. In no sense, much less on an obvious one, is it the case that on my sort of view, the semantics of “know that the bank on 96th and Broadway in New York City is open at 11 a.m. on Saturday, June 19, 2004” change from context to context. It always expresses the same property. To think that it does is to endorse contextualism, as Jon pointed out, which is not a semantically innocuous position.

  13. Jason–I like your comment here, in the following way. I think the pragmatic view needs to move to encroachment by actual pragmatic costs, i.e., actual utility vs. expected utility and its variants. If it’s the latter, I think I can formulate salience conditions to account for the cases (maybe the right term is some cousin of ‘formulate’, such as ‘gerrymander’ or ‘weasel’)… Going this strong helps me see exactly what is at stake epistemically, rather than having to worry that it is really decision theory that is the source of dispute. (Only one minor complaint about what you say: you use the phrase “irrational confidence” above, but I would think a proper decision theoretic approach to rationality will make the chances of error salient if the confidence is really irrational, and the irrationality of their confidence level will then block that confidence from being capable of rebutting the subjective defeater present, so my view won’t count them as having knowledge either).

    So, Matt, are you willing to go with Jason’s stronger view (stronger than what I thought you were willing to endorse)?

  14. Here’s a good test case to help me see what Jason (and maybe Matt) are willing to endorse. Take the airport case, and suppose that John visited the doctor recently. He was told everything was OK, but since the visit the doctor has just found a serious illness revealed by his blood test which must be treated immediately or John will die. If the plane is late, the waiting health officials will not be able to help him, but if the plane is on time, they’ll inoculate him and everything will be fine. John, of course, knows nothing of any of this. He doesn’t check further to see if his itinerary is still accurate.

    We assume that apart from the health risk story, John knows that his plane will arrive at 11. Add in the health risk stuff. Are you now inclined to say that he doesn’t know? As you might expect, I’m not inclined that way. My inclination is that he knows in both or neither.

  15. I was putting forth a position which I took to be plausible, (and contextualist, I suppose). We have linguistic data that suggests that to some degree the contextualist is right: the semantics of “knows” (the relation expressed by “knows”) changes in different contexts. An obvious case of this is a Phil 101 student who will a) claim she knows where she lives and b) 30 minutes into discussion of the _Meditations_ will deny that she knows where she lives. Now, it could be that she’s just wrong on one occasion. But further questioning is suggestive semantically. So, you might say to her, “But you just said that you did know where you lived. What happened, did you forget? OK, take your philosopher cap off–do you know where you live?” And, she will say, “Oh, if that’s how you’re using ‘know’, then, sure, I know where I live.” I’ve had similar conversations many times in class. This suggests to me that the semantics of “knows” is changing here–“knows” is expressing different relations at different times. For those who are tempted to say that she’s wrong the second time when she claims she doesn’t know, if asked, she’ll talk about how she could be wrong, deceived, etc., and thus doesn’t know where she lives. This suggests to me that the content of “know” has changed. In fact, a plausible account here is that she was correct when she said she did know where she lived, and she also was correct when she said she didn’t know where she lived. That is, both sentence tokens expressed true propositions (though different). (By the way, I wasn’t accusing Cohen or DeRose of confusing semantic content with sentences–DeRose in particular is very aware of these issues.)

    Jon: I think there is a verbal dispute here: When I say “the semantics of ‘knows’ changes” I mean that the semantic content of “knows”–the relation expressed–changes.

  16. Jason Stanley says

    two knowledge ascriptions to two different people with the same evidence base may differ in truth-value, even though the property ascribed to them is exactly the same (e.g. the property of knowing the bank is open at 2 p.m.)

    I’m uncomfortable with this because of an apparent denial of epistemic supervenience, unless one says that (if we want to talk in terms of relational properties) the relational properties expressed by “knows the bank is open at 2” differ from one case to another–this will be, very roughly, in virtue of a difference in the “knows” relation, which for these purposes, we may think of as a part of the relational property. So, to the extent I’m willing to grant differences in the truth of sentences of (first-person, here) knowledge ascriptions, I would want there to be some sort of semantic difference (grounded in a shifting of the “knows” relation) to explain the following: the truth of one sentence and the falsity of the other.

    Jon: I wouldn’t resort to using terms like “weasel” yet. I think you’re exactly right in your approach: Once the cost is accessible to the subject, account for it in epistemic terms. If it’s not (as in Jason’s example), then there’s no pull to deny knowledge.

  17. Jon:

    I take it that in the case of the illness, you want to say that he knows both (for what it’s worth, I’d claim the same).

    But suppose the following: He’s asked if he knows when the plane arrives. He looks up at the monitor, which is reasonably accurate in most cases, and says that he does, it arrives at 11. Then he’s informed of his illness such that he comes to have an undefeated justified belief that he’s in serious trouble. He begins to panic, saying “I have to find out when the plane will arrive.” He approaches airline workers and says “I have to know when that plane arrives.” The workers look at the monitor and say “It arrives at 11.” He says, “No, this is of the utmost importance. I need to know when the plane arrives–can you find out when it’s due in here, what the weather is like [etc.].” How do you account for what is going on here? It seems to me that in his idiolect “know” has changed content; it now expresses a relation which requires much more evidence for its being instantiated than did the relation which was expressed by “know” before he found out about the illness.

    You might say something like: The semantics for the language give a content to “know”, and when he claims he doesn’t know when the plane arrives, he’s wrong. But we can use pragmatic features of the context (in a Gricean sort of way) to explain his denial.

  18. Matt D.–in the case you describe, the chances of error have become salient and are too high for John to stomach. So he doesn’t know, because he has an unrebutted internal defeater.

  19. Jon,

    I’m now puzzled by the work salience is doing here. When he finds out the costs are high, this doesn’t give him any reason to think that his prior evidence base was faulty, nor does it give him any reason to think his belief was false. I don’t see how he has acquired any sort of defeater as a result of the new information. The propositions he comes to believe are, it seems, epistemically irrelevant. Why not just say that he knows in both cases, and explain away his claim that he doesn’t after he gets the new information (the Gricean move would be an example of this)?

  20. It’s not just finding out that the costs of being wrong are high. You might be a risk-taker personality, and not be bothered by that and not take it seriously. But if you are bothered by it, and the risks become strikingly high, the rational attitude to take is that your inquiry is not sufficient to rule out the chance that further learning might undermine present opinion by revealing additional information that is not misleading. In that situation, the implications of salience give you an internal defeater, so you could only have knowledge if that internal defeater were itself rebutted by further information you possess.

  21. Jon: But how does upping the sense of risk give you any new (epistemic) reason to think that your previous evidence base is flawed, or that the belief for which you claim you have a defeater might be false? I don’t think that your assessment of the probability that further inquiry would lead to a rebutting defeater should go up at all when the sense of risk goes up. So, to think of this decision-theoretically, the subjective probability of that outcome remains the same. What goes up is the cost of that outcome. So, the (practically) rational thing to do might be to look for more evidence. But this rationality isn’t epistemic at all. So I still don’t see how the sense of risk generates any sort of defeater.
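    Matt’s point here can be made concrete with a toy calculation (all numbers and names below are invented for illustration, not drawn from the thread): holding the subjective probability of error fixed, raising only the cost of being wrong can flip the practical verdict on whether to gather more evidence, without any change in the epistemic situation.

```python
# Toy expected-utility comparison: the probability of error never moves;
# only the cost of that outcome does. Illustrative numbers only.

P_ERROR = 0.05     # fixed subjective probability of error
CHECK_COST = 1.0   # fixed inconvenience of further inquiry

def better_to_check(cost_of_error):
    """Compare expected utility of checking (pay CHECK_COST, eliminate
    the risk) against relying on the current belief (risk cost_of_error)."""
    eu_check = -CHECK_COST
    eu_rely = -P_ERROR * cost_of_error
    return eu_check > eu_rely

print(better_to_check(cost_of_error=10))    # low stakes: False
print(better_to_check(cost_of_error=1000))  # high stakes: True
```

    On this sketch, the rationality of checking is purely practical: nothing epistemic (the probability term) has changed between the two verdicts, which is just Matt’s worry about where the defeater is supposed to come from.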

  22. Matt D. (noticed that there’s a lot of Matts here?): So this is the kind of abuse I get for trying to reach out to the lost souls preaching the gospel of pragmatic encroachment (or enroachment, as Jason would have it!)!

    Actually, I think, regardless of how objective a phenomenon positive evidence for a belief needs to be, the account of defeat one should accept should be subjective. Michael Bergmann has a nice discussion of this issue, and shows how the view is fairly standard, though not universal. But I think his explanations are pretty good as to why one should adopt such a view (gosh, I’m now having second thoughts about whether it really is Mike where I got this–so I guess I don’t know who agrees with me on this anymore, and I need to check further…).

  23. Jon: I agree defeat is subjective, or at least we generally understand defeaters as such. In fact, I argue (assert?) this in post 3 in this thread. (Though I suspect you’re not confusing Mike and me as to your source!) So, I’m assuming or willing to grant that defeat is subjective. I still don’t see how this answers the questions I raise. (These are raised, of course, out of deep gratitude in an attempt to shore-up your soul-saving efforts.)

  24. The key is that for justification to be epistemic, it has to justify for you that further inquiry would only undermine present opinion by uncovering misleading information. And that’s what comment 20 above argued was lost when the risks of error become salient for you.

  25. So here’s where I think we are. Hawthorne and I hope to account for any effect of pragmatic features on knowledge by appeal to salience, with John allowing pragmatic encroachment and me resisting it. Stanley and McGrath wish to reject the salience explanation, with Matt attempting to argue for pragmatic encroachment on the basis of perceived stakes and Jason on the basis of actual stakes. What an interesting variety of opinions! I can see another conference/workshop in the making, and no one gets to leave till it’s all sorted out!!

  26. Pingback: Thoughts Arguments and Rants
