The Fantl/McGrath Argument against Fallibilist Purism

Another entry from my grad seminar’s close reading of this deep and important book, but fair warning to the reader: you’ll need to be motivated by the intricacies of the argument here, since this is going to be a complicated and lengthy post. The bottom line is that the argument has a lesser target and a greater target, and though there is some uncertainty as to which is intended, ultimately I think the target is the greater one. And I don’t quite see how the argument succeeds against this greater target.

(Because of the need for strong motivation, I’ll put the rest below the fold.)

So, first, fallibilism is the view that you can know even though there is some epistemic chance of error (and, we assume, there is no such chance of error for anything you are metaphysically certain of), and purism is the claim that sameness of epistemic position implies sameness with regard to being in a position to know. Since “position to know” is notoriously shifty, F/M define it (p. 84): “You are in a position to know that p iff no epistemic weaknesses with respect to p stand in the way of your knowing that p.”
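(For bookkeeping in what follows, here is one rough way to formalize the two theses; the shorthand is mine, not F/M’s: K for knows, Ch for epistemic chance, EP for strength of epistemic position, and PK for being in a position to know.)

```latex
% Rough formalization; notation is illustrative shorthand, not F/M's own.
% Fallibilism: someone can know p even with a nonzero epistemic chance of error.
\text{Fallibilism:}\quad \exists S\,\exists p\;\bigl(K_S\,p \;\wedge\; \mathrm{Ch}_S(\neg p) > 0\bigr)

% Purism: sameness of epistemic position implies sameness of position to know.
\text{Purism:}\quad \forall S_1, S_2\,\forall p\;\bigl(\mathrm{EP}_{S_1}(p) = \mathrm{EP}_{S_2}(p)
  \;\rightarrow\; (PK_{S_1}\,p \leftrightarrow PK_{S_2}\,p)\bigr)
```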

They then note that belief itself isn’t to be counted here as a kind of epistemic weakness: “Were belief among those . . . factors, then . . . it would follow that you are positioned to know iff you know” (p. 84). It surprised me that the truth condition for knowledge was not also singled out here as a non-epistemic factor. I would have thought that one’s epistemic position with respect to p is a function of the conditions for knowledge other than belief and truth, but that is not so here. Instead, an epistemic weakness is anything other than belief. Nothing important turns on this issue, however, at least as far as I can tell.

F/M note in footnote 22 on p. 28 that when I introduced the notion of pragmatic encroachment, I applied it to theories that introduce a practical condition in the account of the nature of knowledge. Even so, a theory could have consequences for rational action while still rejecting such encroachment. F/M note this fact in the footnote, distinguishing between a real pragmatic encroachment principle (PEP), according to which practical stakes of some sort are part of the nature of knowledge, and a pragmatic consequence principle (PCP), according to which being of use in practical situations is a necessary consequence of knowledge but not part of its nature. The example they use to distinguish the two views is infallibilism: if infallibilism is true, whether you know is fixed by strength of epistemic position alone, and any pragmatic constraint on knowledge is merely a byproduct of this fundamental fact about the nature of knowledge.

The same distinction can be drawn among versions of fallibilism as well, for those who think there is a necessary connection between knowledge and practice. Some such fallibilists are PEPers, maintaining that the pragmatic condition is one feature of the nature of knowledge, and other such fallibilists are PCPers, maintaining that any change in what is reasonable to do implies a change in either belief or in what F/M refer to as position to know.

And in that same footnote, F/M insist that “Purism” is the view they want to reject, so the target is wider than just theories that embrace what I termed ‘pragmatic encroachment’. That puts an additional burden on the argument: not only are Fallibilist PEPers being aimed at, so are Fallibilist PCPers.

The distinction between PEPers and PCPers matters to me because, as I see things, it is the PEPers we should resist. PCPers are not the enemy of those of us who resist pragmatic encroachment. So I initially expected Purism to carve the field the same way I was thinking of it when I first talked about pragmatic encroachment; I thought that, perhaps, one could accept the F/M rejection of Purism without going over to the dark side.

I was wrong, but the way in which I was wrong raises concerns about the argument F/M use against Fallibilist Purism.

That argument is on pp. 84-88, and is a standard Stakes argument: you can have LOW and HIGH stakes cases, with all other factors held fixed, so that LOW should board the train and HIGH shouldn’t but should check further to make sure the train is a local one. This argument, as F/M note on p. 86, requires that differences in reasonable courses of action can occur even though strength of epistemic position remains the same. And that is precisely what is denied by both PEPers and PCPers, and why the width of the target makes me question the argument.

To see my concern, think of knowledge in traditional terms: JTB+G. One’s epistemic position covers all of these except belief. Let’s also put truth aside, since LOW and HIGH are explicitly constructed so that the truth value is the same (for the claim ‘this train is a local one’). That leaves justification and the Gettier condition.

Finally, I get to the point I want to make! Here’s the argument that these conditions are the same for LOW and HIGH:

One might object . . . that stakes have a direct bearing on epistemic position. Now, there are obviously some cases in which by raising stakes on whether p your epistemic position with respect to p changes, as when you have some special evidence of a correlation between high stakes and whether p. But it is implausible in the extreme that changing stakes will always make the relevant difference to your strength of epistemic position. Consider probability: if you are offered a suitable gamble on whether the die will come up 6, this does not plausibly change the probability for you that the die will come up 6 . . . The matter seems no different if the chance that not-p is low enough for you to fallibly know that p. (p. 87)

The argument here is this. Fallibilist PCPers have an implausible position, because it is an extreme position to think that changes in stakes necessarily imply changes in epistemic position. The position is extreme because (objective) probability doesn’t work like that, and so we shouldn’t think of epistemic chances as working like that either.

This argument makes me suspect that we are focusing on the justification condition for knowledge rather than the real epistemic position, which involves ungettiered justification. For traditional epistemologists, it is plausible to think of fallibilism as the view that justification is a function of epistemic probability, that knowledge doesn’t require an epistemic probability of 1, and that metaphysical certainty (of the sort that makes one a real infallibilist, whatever that involves) entails an extreme probability. Then, when we compare this notion of epistemic probability with that of the objective probability F/M use in the argument, I acquiesce: there’s no reason I can think of to hold that epistemic probability is always and everywhere changed by changes in practical stakes.

But that’s not the conclusion they need. They need the stronger conclusion that ungettiered justification doesn’t require extreme chances of truth (that’s the fallibilist requirement) and that it is extreme to think that the chances that are a function of ungettiered justification vary with changes in practical stakes.
Here I find the argument questionable. For one thing, many hold that ungettiered justification entails truth (Sturgeon, Lehrer, Zagzebski), and if so, any analogy between the notion of epistemic chance that maps onto ungettiered justification and objective probability will require that the epistemic chance of p, when both ungettiered and justified, is 1 (because the probability of p given everything about one’s epistemic position has to be 1 when one’s position entails that p is true). And, even for those, such as Howard-Snyder et al. (2003), who resist the entailment claim, the failure of the entailment claim doesn’t imply the needed fallibilist assumption that ungettiered justification can obtain for less than extreme probabilities. For example, if it is true that p wouldn’t be false given one’s epistemic position, that by itself may be enough to make the objective probability of p given that position extreme (though Alan Hájek has a worry here that would need to be addressed; see his paper defending the claim that most counterfactuals are false).
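The parenthetical entailment point is just probability theory, assuming (and it is an assumption, given F/M’s justified-confidence gloss on epistemic chance) that epistemic chance obeys the standard axioms, with E standing for everything in one’s epistemic position:

```latex
% If one's epistemic position E entails p, conditioning on E forces chance 1:
E \models p \;\Longrightarrow\; \Pr(E \wedge \neg p) = 0
  \;\Longrightarrow\; \Pr(p \mid E) \;=\; \frac{\Pr(p \wedge E)}{\Pr(E)}
  \;=\; \frac{\Pr(E)}{\Pr(E)} \;=\; 1
  \qquad (\text{given } \Pr(E) > 0)
```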

One way to think about the issue here is this: maybe there are hardly any fallibilists in the world, when fallibilism is interpreted in terms of the totality of what it takes to turn true belief into knowledge. If so, I think we have too strong a requirement on what it takes to be a fallibilist, but since these are terms of art in some sense, that’s not a substantive objection by itself.

But, second, suppose somehow we can defend the view that ungettiered justifications can co-exist with non-extreme epistemic chances. Even so, I think we have no idea how to measure chances here in any way that will let the argument go through against Fallibilist PCPers. Suppose, for example, that the Gettier condition is a modal one, using some combination of safety and sensitivity, where we understand the relevant counterfactuals in terms of similarity of worlds. Which worlds are more similar to the actual world is clearly a function of what is true at the actual world, so changing actual stakes is guaranteed to change which worlds are closer and which are more remote. On what basis can we insist, then, that these changes could not possibly correlate with changes in what is involved in satisfying the Gettier condition?
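To make the worry concrete, here is a sketch using one standard safety formulation; both the formulation and the assumption that closeness is stakes-sensitive are illustrative only, not anything F/M commit to:

```latex
% Safety, one standard formulation: S's belief that p is safe iff p is true
% at every close world at which S believes p.
\mathrm{Safe}_S(p) \;\equiv\; \forall w \in \mathcal{C}(w_@)\;
  \bigl( B^{w}_{S}\,p \;\rightarrow\; p \text{ is true at } w \bigr)

% If the closeness set C(w_@) is fixed by overall similarity to the actual
% world, and the actual stakes are among the respects of similarity, then
% raising the stakes can shift C(w_@), pulling error worlds inside it and
% unsettling Safe_S(p) without any change in justification.
```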

Thus, until we know better what a solution to the Gettier problem involves, I don’t think we are in a position to use LOW and HIGH to undermine the position of PCPers, even if we can use such examples to show that there must be some pragmatic connection between knowledge and action.


Comments


  1. This is pretty interesting, Jon. In that part of the book, we addressed only the thought that raised stakes might impact justification. We probably should have explicitly considered potential impacts on Gettier conditions. Is the main worry that we haven’t ruled out the possibility that raising the stakes might Gettierize a subject? I think Stew Cohen just raised a similar possibility at the Central, though he was targeting what we say about justified belief, not knowledge. There’s even a mechanism there, you suggest: raised stakes might reshuffle possible worlds so as to make the worlds in which p is false closer.

    First, I wonder if this is going to violate some sort of purism about those kinds of Gettier conditions: it won’t be guaranteed that two subjects who stand in the same purely truth-relevant relations to p will either both be unGettiered or neither will be. Merely offering the right kind of bet on whether that’s a barn can Gettierize the subject. I don’t have to actually build any barn facades. (In fact, we might think that one of Gettier’s cases — the job offer case — is a high-stakes case (depending on what sorts of decisions Smith has to make). It turns out that Gettier didn’t even have to have Smith reasoning from claims about coins. Smith was already Gettiered just because the stakes were too high!)

    Second, subjects in high-stakes contexts can have stronger epistemic positions than subjects in low-stakes contexts, so I don’t see why it isn’t possible for HIGH to have a stronger epistemic position than the fallibly knowing LOW. Why then couldn’t HIGH have the same strength of epistemic position as the fallibly knowing LOW? Is the worry that the only way to guarantee that is by raising HIGH’s justification so that HIGH should now act on p? But, again, why? What makes Gettier cases distinctive is that the subject’s safety/sensitivity is tweaked without making a difference to the subject’s justification. So why can’t we just keep making HIGH safer and safer, more and more sensitive, but without upping HIGH’s justification for p? I have no principled argument that we can do this, but it seems in keeping with the nature of Gettier cases that we can.

    Finally, the kind of Gettierization you’re suggesting strikes me as a very weird kind of Gettierization. In standard sorts of Gettier cases, there is some factor beyond the subject’s ken such that, were the subject aware of that factor, it would dramatically impact the subject’s strength of justification. Not so here. I’m trying to think of a way of being aware of whatever it is that decreases my safety/sensitivity that doesn’t also make me less justified. But here HIGH is aware of everything that decreases HIGH’s safety/sensitivity, yet is not less justified (just as being aware of the bet on whether the die comes up 6 doesn’t make it more justified for HIGH to believe that the die will come up 5 than that it will come up 6).

    (Also, I should point out that none of this will impact the argument that purism about justified belief is false: that it’s false that two subjects with the same strength of justification for/against p will either both be justified in believing p or neither will).

  2. Hi Jeremy, this is from chapter three, and there you guys are explicit that epistemic position isn’t about justification, but about what converts belief to knowledge (though I think it wouldn’t matter if it were about what converts true belief to knowledge). I was worried that your arguments suggested you were thinking about justification, though, when the explicit context requires that you be thinking about everything needed beyond belief or true belief. Does that sound right?

  3. Jeremy, one issue is that, once epistemic position is defined to include everything beyond belief or true belief, it’s not clear that it comes in degrees. It has a component that comes in degrees, but of course, so does knowledge, and I’m not a fan of views on which that comes in degrees.

    But even if it did, it isn’t clear that there can be LOW and HIGH when epistemic position is understood this way. That was the point about whether ungettiered justification entails truth, or eliminates the chance of error altogether.

  4. One other thing, Jeremy (and sorry for the multiples here!). You’re right that this still leaves open arguments about pragmatic encroachment concerning justification itself. That’s the next chapter, and though I read it, that was more than a year ago, and I haven’t yet gone through it for the seminar this week. I plan to post on every chapter, since I think the book is so good!

  5. I’m a little confused about the Zagzebski point, Jon. You say, “For one thing, many hold that ungettiered justification entails truth (Sturgeon, Lehrer, Zagzebski), and if so, any analogy between the notion of epistemic chance that maps onto ungettiered justification and objective probability will require that the epistemic chance of p, when both ungettiered and justified, is 1 (because the probability of p given everything about one’s epistemic position has to be 1 when one’s position entails that p is true).” But why identify epistemic chance with the probability of p given everything about one’s epistemic position? I mean, Matt and I include truth in one’s epistemic position, but that doesn’t mean that if one has a true belief then there is no chance that the belief is false. Nor would that follow on an account according to which conditions 2 and 3 were, respectively, that S believes that p and that if S believes that p then p is true.

  6. Hi Jon,

    I’m glad you’re enjoying the book. Thanks for the challenging posts about it. This monster comment just tries to add some clarifications in addition to the ones Jeremy gave.

    First, we do include strength of justification in strength of epistemic position, although you’re right that we don’t equate epistemic position with strength of justification.

    Second, we have some suggestions for thinking about epistemic chance in the book. Here is one which we could just stipulate in the discussion, if it helped: your epistemic chance for p is the degree of confidence you are justified in investing in p. (There are lots of issues about details here, which I don’t think are really relevant, concerning chunky credences, etc.)

    Third, the argument from KJ to the falsity of fallibilist purism goes like this. If fallibilism is true, there is some person (Jeremy) who fallibly knows something p, i.e., who knows p but who isn’t justified in having a confidence of 1 in p. Now, we can imagine a second subject (Matt) exactly like him in respect of the level of confidence he is justified in having and in respect of the other features determining strength of epistemic position, but whose stakes are so high that a higher justified confidence in p is needed to act on p. By KJ, Matt doesn’t know p, whereas Jeremy does, and they have the same strength of epistemic position for p.

    The key thing — I think (but maybe I’m misunderstanding you) — is that fallibilism for us is a doctrine not about epistemic position but about something very much like strength of justification, viz. the amount of confidence one is justified in having in a proposition.

    Think about the following. Consider someone looking at a barn from a normal Missouri road in a normal Missouri county — it’s real, no fakes around, etc. The person knows, fallibly let’s say, that it’s a barn. Her confidence shouldn’t be 1. Now suppose somehow that her life hangs in the balance depending on what she does and whether it’s a barn, and she’s aware of this. Maybe she’s offered a high-stakes bet that pays only a little bit if she’s right and costs her life if she’s wrong. Our claim is that all this could happen (a) without affecting how confident she should be that it’s a barn and (b) without affecting the other features deemed traditionally relevant to knowledge, such as the absence of Gettier-like luck, reliability, truth, etc. I guess I just don’t see why jacking up the stakes should guarantee that any of this changes. (It’s not as if higher stakes bring fake barns into existence nearby, or make it the case you’re relying essentially on a falsehood, for instance.) I can say more about why I think Gettier-like luck is independent of high stakes if you want.

    Finally, the point of the dice example really was about epistemic chance (in the sense of degree of confidence one is justified in having). We were, I think, just assuming that raising the stakes needn’t affect truth-value, Gettier-like luck, and the like. The remaining worry was whether it must affect justified confidence. We were arguing it needn’t. We concluded that raising stakes needn’t affect strength of epistemic position.
    I said finally, but here’s one last thing. It’s not particularly important that we assume epistemic positions come in degrees. It’s convenient but not necessary. All we need to have is a pair of subjects with the same epistemic position with respect to p one of whom is in a position to know p and the other who isn’t.

  7. Hi Matt, yes, I realize how the argument is supposed to go, but to get it to work for knowledge, you have to have LOW and HIGH cases for at least everything beyond true belief. Showing that you can get LOW and HIGH with respect to justification alone won’t show what’s needed when the issue is knowledge itself rather than justification. We just don’t know enough about the Gettier condition to tell whether, in particular, some combination of safety and sensitivity will work here. If raising the stakes alters the ordering of worlds in the right way, then we get no counterexample to Purism from the cases you use. The issue is your characterization of being in a position to know, and what counts as an epistemic weakness that needs to be taken into account. It is clear from p. 84 that failure to satisfy the Gettier condition is one such weakness.

  8. Jon, I think I’m getting the picture a little better. One could have a view like this: Ungettiered JTB is JTB such that one’s belief is safe. One’s belief is safe iff it is true in all close worlds. Raising stakes expands the range of worlds that are close.

    I agree that is a consistent view. I think it’s pretty bizarre and unmotivated by thinking about intuitive ideas of “could have easily believed falsely”, but put all that aside. What if one did accept the view? How could we argue against fallibilist purism, from KJ, on this view of the gettier condition?

    I think that if this view were true, then safety would not be a component of epistemic position. The component would be *being safe with respect to range of worlds R*. And we can imagine that raising the stakes wouldn’t affect whether the person’s belief was true in a certain given range of worlds. So, we could still run the “given a LOW, construct a HIGH” argument to show that if KJ and fallibilism are true, purism is false.

    When we talk about the determinants of strength of epistemic position on p. 27, we say it is a matter of “how strong your evidence for/against p is, how reliable are the belief-forming processes available to you which would produce a belief that p, how strong your counterfactual relations to the truth-value of p (how sensitive, how safe your available basis is for believing p), etc.” If being safe simpliciter can be affected by stakes, it doesn’t count. Take instead safety with respect to a given range of worlds.

    So, I guess I don’t see a problem from these quarters for the derivation of the falsity of fallibilist purism from KJ. However, if this were the right account of the Gettier condition, the “subtraction argument” in chapter 4 would need revision. This may tie in with some of Stew Cohen’s concerns about the subtraction argument. Maybe we can talk about that when you get to chapter 4.

  9. Hi Jon,

    I just saw Matt’s post, so I may just be saying something similar. But I still think that if safety is a function of pragmatic considerations then it won’t be pure. Either safety will be composed of two factors — a purely epistemic factor and a pragmatic component — or it will be ineliminably pragmatic. If the latter, it’s not a part of strength of epistemic position — which is strength of purely epistemic position. If the former, then holding fixed the purely epistemic factor, we can vary the pragmatic factor and so vary whether a subject knows holding fixed all purely epistemic factors. This just seems like pragmatic encroachment to me. This was just the first point of my first comment.
