Another entry from my grad seminar’s close reading of this deep and important book, but fair warning to the reader: you need to be motivated by the intricacies of the argument here, since this is going to be a complicated and lengthy post. The bottom line is that there is a lesser target and a greater target, and though there is some uncertainty about which is intended, I think the target is ultimately the greater one. And I don’t quite see how the argument succeeds against this greater target.
(Because of the need for strong motivation, I’ll put the rest below the fold.)
So, first, fallibilism is the view that you can know even though there is some epistemic chance of error (and, we assume, there is no such chance of error for anything you are metaphysically certain of), and purism is the claim that sameness of epistemic position implies sameness with regard to being in a position to know. Since “position to know” is notoriously shifty, F/M define it (p. 84): “You are in a position to know that p iff no epistemic weaknesses with respect to p stand in the way of your knowing that p.”
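To fix ideas, here is one schematic way to render the two theses (my notation, not F/M’s), writing E(S, p) for subject S’s strength of epistemic position with respect to p and PK(S, p) for S’s being in a position to know that p:

```latex
% Fallibilism: knowledge is compatible with a nonzero epistemic chance of error.
\text{Fallibilism:} \quad \Diamond\,\bigl( K(S,p) \,\wedge\, \Pr_S(\neg p) > 0 \bigr)

% Purism: sameness of epistemic position implies sameness of position to know.
\text{Purism:} \quad E(S_1,p) = E(S_2,p) \;\rightarrow\;
  \bigl( PK(S_1,p) \leftrightarrow PK(S_2,p) \bigr)
```

These are only sketches; nothing below turns on the exact formalization, only on the idea that purism ties position to know to epistemic position alone.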
They then note that belief itself isn’t to be counted here as a kind of epistemic weakness: “Were belief among those . . . factors, then . . . it would follow that you are positioned to know iff you know” (p. 84). It surprised me that the truth condition for knowledge was not also singled out here as a non-epistemic factor. I would have thought that one’s epistemic position with respect to p is a function of the conditions for knowledge other than belief and truth, but that is not so here. Instead, an epistemic weakness is anything other than belief. Nothing important turns on this issue, however, at least as far as I can tell.
F/M note in footnote 22 on p. 28 that when I introduced the notion of pragmatic encroachment, I applied it to theories that introduce a practical condition in the account of the nature of knowledge. Even so, a theory could have consequences for rational action while still rejecting such encroachment. F/M note this fact in the footnote, distinguishing between a real pragmatic encroachment principle (PEP), according to which practical stakes of some sort are part of the nature of knowledge, and a pragmatic consequence principle (PCP), according to which being of use in practical situations is a necessary consequence of knowledge but not part of its nature. The example they use to distinguish the two views is infallibilism: if infallibilism is true, whether you know is fixed by strength of epistemic position alone, and any pragmatic constraint on knowledge is merely a byproduct of this fundamental fact about the nature of knowledge.
The same distinction can be drawn among versions of fallibilism as well, for those who think there is a necessary connection between knowledge and practice. Some such fallibilists are PEPers, maintaining that the pragmatic condition is one feature of the nature of knowledge, and other such fallibilists are PCPers, maintaining that any change in what is reasonable to do implies a change in either belief or in what F/M refer to as position to know.
And in that same footnote (22, p. 28), F/M insist that “Purism” is the view they want to reject, so the target is wider than just theories that embrace what I termed ‘pragmatic encroachment’. That puts an additional burden on the argument: not only are Fallibilist PEPers being aimed at, so are Fallibilist PCPers.
The distinction between PEPers and PCPers matters to me because, as I see things, it is the PEPers we should resist. PCPers are not the enemy of those of us who resist pragmatic encroachment. So I initially expected Purism to carve the field the same way I was thinking when I first talked about pragmatic encroachment; I thought that, perhaps, one could accept the F/M rejection of Purism without going over to the dark side.
I was wrong, but the way in which I was wrong raises concerns about the argument F/M use against Fallibilist Purism.
That argument is on pp. 84-88, and is a standard Stakes argument: you can have LOW and HIGH stakes cases, with all other factors held fixed, so that LOW should board the train and HIGH shouldn’t but should check further to make sure the train is a local one. This argument, as F/M note on p. 86, requires that differences in reasonable courses of action can occur even though strength of epistemic position remains the same. And that is precisely what is denied by both PEPers and PCPers, and why the width of the target makes me question the argument.
To see my concern, think of knowledge in traditional terms: JTB+G. One’s epistemic position covers all of these except belief. Let’s also put truth aside, since LOW and HIGH are explicitly constructed so that the truth value is the same (for the claim ‘this train is a local one’). That leaves justification and the Gettier condition.
Finally, I get to the point I want to make! Here’s the argument that these conditions are the same for LOW and HIGH:
One might object . . . that stakes have a direct bearing on epistemic position. Now, there are obviously some cases in which by raising stakes on whether p your epistemic position with respect to p changes, as when you have some special evidence of a correlation between high stakes and whether p. But it is implausible in the extreme that changing stakes will always make the relevant difference to your strength of epistemic position. Consider probability: if you are offered a suitable gamble on whether the die will come up 6, this does not plausibly change the probability for you that the die will come up 6 . . . The matter seems no different if the chance that not-p is low enough for you to fallibly know that p. (p. 87)
The argument here is this: Fallibilist PCPers hold an implausible position, because it is an extreme position to think that changes in stakes always imply changes in epistemic position. The position is extreme because (objective) probability doesn’t work that way, and so we shouldn’t think of epistemic chances as working that way either.
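F/M’s die example can be put schematically (my rendering, not theirs):

```latex
% The probability of a fair die showing 6 is one in six:
\Pr(\text{die shows } 6) = \tfrac{1}{6}

% Offering a gamble at stakes s changes nothing, absent special evidence
% correlating the stakes with the outcome:
\Pr(\text{die shows } 6 \mid \text{stakes} = s) = \tfrac{1}{6} \quad \text{for any } s.
```

The analogy is then supposed to carry over: if the epistemic chance that not-p is low enough for fallible knowledge, raising the stakes on p should no more change that chance than the bet changes the odds on the die.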
This argument makes me suspect that we are focusing on the justification condition for knowledge rather than the real epistemic position, which involves ungettiered justification. For traditional epistemologists, it is plausible to think of fallibilism as the view that justification is a function of epistemic probability, that knowledge doesn’t require an epistemic probability of 1, and that metaphysical certainty (of the sort that makes one a real infallibilist, whatever that involves) entails an extreme probability. Then, when we compare this notion of epistemic probability with that of the objective probability F/M use in the argument, I acquiesce: there’s no reason I can think of to hold that epistemic probability is always and everywhere changed by changes in practical stakes.
But that’s not the conclusion they need. They need the stronger conclusion that ungettiered justification doesn’t require extreme chances of truth (that’s the fallibilist requirement) and that it is extreme to think that the chances that are a function of ungettiered justification vary with changes in practical stakes.
Here I find the argument questionable. For one thing, many hold that ungettiered justification entails truth (Sturgeon, Lehrer, Zagzebski), and if so, any analogy between the notion of epistemic chance that maps onto ungettiered justification and objective probability will require that the epistemic chance of p, when both ungettiered and justified, is 1 (because the probability of p given everything about one’s epistemic position has to be 1 when one’s position entails that p is true). And even for those, such as Howard-Snyder et al. (2003), who resist the entailment claim, the failure of the entailment claim doesn’t imply the needed fallibilist assumption that ungettiered justification can obtain at less than extreme probabilities. For example, if it is true that p wouldn’t be false given one’s epistemic position, that by itself may be enough to make the objective probability of p given that position extreme (though Alan Hájek has a worry about this that would need to be addressed; see his paper defending the claim that most counterfactuals are false).
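The parenthetical point can be made explicit (again, my notation): if one’s total epistemic position E with respect to p entails p, then conditionalizing on E trivially forces an extreme probability:

```latex
% If the epistemic position E entails p, the probability of p given E is 1:
E \models p \quad\Longrightarrow\quad \Pr(p \mid E) = 1,

% so the fallibilist's non-extreme epistemic chance of error disappears:
\Pr(\neg p \mid E) = 0.
```

So on the entailment view of ungettiered justification, the fallibilist’s characteristic non-extreme chance of error simply has nowhere to live.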
One way to think about the issue here is this: maybe there are hardly any fallibilists in the world, when fallibilism is interpreted in terms of the totality of what it takes to turn true belief into knowledge. If so, I think we have too strong a requirement on what it takes to be a fallibilist, but since these are terms of art in some sense, that’s not a substantive objection by itself.
But, second, suppose somehow we can defend the view that ungettiered justifications can co-exist with non-extreme epistemic chances. Even so, I think we have no idea how to measure chances here in any way that will let the argument go through against Fallibilist PCPers. Suppose, for example, that the Gettier condition is a modal one, using some combination of safety and sensitivity, where we understand the relevant counterfactuals in terms of similarity of worlds. Which worlds are more similar to the actual world is clearly a function of what is true at the actual world, so changing actual stakes is guaranteed to change which worlds are closer and which are more remote. On what basis can we insist, then, that these changes could not possibly correlate with changes in what is involved in satisfying the Gettier condition?
Thus, until we know better what a solution to the Gettier problem involves, I don’t think we are in a position to use LOW and HIGH to undermine the position of PCPers, even if we can use such examples to show that there must be some pragmatic connection between knowledge and action.