Aptness entails safety

Ernest Sosa had two big ideas about knowledge:

  1. In papers like “How to Defeat Opposition to Moore” (Philosophical Perspectives 1999), he argued that knowledge requires safety: for my belief in p to count as knowledge, it must be the case that it could not easily have happened that I held a false belief on the kind of basis on which I in fact believe p.
  2. In his 2007 book A Virtue Epistemology (Apt Belief and Reflective Knowledge, Vol. I), he argued that basic knowledge consists in a belief that is apt – a belief that is not merely both competent and correct, but correct precisely because it is competent.

When he adopted the second idea, he simultaneously abandoned the first idea, arguing that there are cases in which a belief is apt but not safe, and in such cases the belief in question could still be knowledge (A Virtue Epistemology, pp. 29, 41).

It is only this last move that seems mistaken to me. Both of Sosa’s big ideas are fundamentally true. The two ideas are not in tension with each other, since – as I shall argue – no apt belief can be unsafe.

What exactly does an “apt” belief amount to? To fix ideas, I shall assume that the only relevant sort of “competence” is rationality. So an apt belief is a belief that is correct precisely because it is rational. But what might it mean to say that a belief is correct “precisely because” it is rational?

“Because” is, of course, an explanatory term. The following features of explanations seem particularly relevant here:

  • In every explanation, one fact – the explanandum – is explained on the basis of another fact – the explanans. For the explanation to be genuine, this connection between the explanans and the explanandum must be in some way an instance of a more general pattern.
  • Typically, this general pattern has to have some degree of modal robustness. That is, the pattern must hold not just in other cases in the actual world, but also in cases in other sufficiently nearby possible worlds.
  • It seems that all normal explanations presuppose a background of normal conditions. So the relevant cases where this general pattern has to hold are cases where the background conditions are normal to the same degree, and in broadly the same way, as in the particular case in question.

So, for you to have an apt belief in p in a given case C1, it is not enough that in C1 your belief in p is both rational and correct. It is also necessary that in all the sufficiently nearby possible worlds, every case C2 that is similar to C1 with respect to what makes C1 a case of a rational belief (and also similar to C1 in the degree to which it is normal, and in the way in which it is normal to that degree) is also a case of a correct belief.

But this clearly implies a kind of safety: it implies that it could not easily happen that a thinker would have a belief that was rational in a similar way to your belief in p, in conditions that are similar to your actual conditions in the degree to which they are normal (and in the way in which they are normal to that degree), while the belief in question was false. Just to give it a name, we could call this rationality-and-normality-relative safety, or RN-safety for short.
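
To make the structure of the entailment explicit, it may help to regiment these two conditions schematically; the predicate letters below are just my shorthand for the ideas above, not a substantive analysis. Read Near(C2) as “C2 is a case in a sufficiently nearby possible world”, SimR(C1, C2) as “C2 is similar to C1 with respect to what makes C1 a case of rational belief”, SimN(C1, C2) as “C2 is similar to C1 in its degree and kind of normality”, and Cor(C) as “the belief in C is correct”. Then:

\[
\mathrm{Apt}(C_1) \leftrightarrow \mathrm{Rat}(C_1) \wedge \mathrm{Cor}(C_1) \wedge \forall C_2\, [\mathrm{Near}(C_2) \wedge \mathrm{Sim}_R(C_1, C_2) \wedge \mathrm{Sim}_N(C_1, C_2) \rightarrow \mathrm{Cor}(C_2)]
\]

\[
\text{RN-Safe}(C_1) \leftrightarrow \forall C_2\, [\mathrm{Near}(C_2) \wedge \mathrm{Sim}_R(C_1, C_2) \wedge \mathrm{Sim}_N(C_1, C_2) \rightarrow \mathrm{Cor}(C_2)]
\]

Since the final conjunct of the first schema just is the second schema, aptness entails RN-safety by nothing more than conjunction elimination.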

So, why does Sosa claim that apt beliefs can be unsafe? It is simply because he does not consider RN-safety; he only considers what he calls “outright safety” and “basis-relative safety” (A Virtue Epistemology, p. 26). Sosa is right to claim that outright safety and basis-relative safety are not necessary for knowledge; but RN-safety is much more plausibly necessary for knowledge.

Suppose that (i) you could easily have held a belief in a false proposition on the same kind of basis on which you actually believe p, but (ii) if that had happened, it would have been because either (a) your competence was impaired in a way in which it actually was not impaired, or else (b) your conditions were abnormal in a way in which they were actually not abnormal. Then, as Sosa correctly points out, your belief in p does not exhibit basis-relative safety (nor a fortiori outright safety); but since as things actually are, your conditions were quite normal, and your competence was not impaired, it surely could still be a case of knowledge.

However, even if cases of this sort could be cases of knowledge, they are not counterexamples to the claim that knowledge requires RN-safety, since in all these cases, the belief in question is RN-safe.

In short, Sosa was right both times. Both his first big idea – that knowledge requires safety – and his second big idea – that it requires apt belief – are fundamentally true. So far from being incompatible with each other, the first big idea actually follows from the second!

In a few days’ time, I shall produce a few more posts on this blog to explain how this approach can handle all of the Gettier cases, and to comment on some other features of this approach. But I hope that this argument for the conclusion that aptness entails safety will be sufficient for now.


Comments


  1. This is an interesting argument, Ralph. What would you say about the following objection?

    RN-safety, as you have characterized it, is either too hard to satisfy or too easy. It depends on what you mean by “normal conditions.”

    Suppose I have been observing crows for several years and have seen only black ones. I inductively infer that the next crow I will see will be black, too. Is this belief RN-safe? Albinism in crows is a rare but natural phenomenon. Let us suppose that the crows in my surrounding area constitute a genetic pool such that there is a low probability that a fully white albino crow has been hatched in recent years. If that is enough for the existence of a white crow to be normal in your sense, then there are worlds that are sufficiently nearby and sufficiently normal in which I could easily have had a false belief that the next crow I will see will be black. (I am assuming that this belief would be rational in just the way my actual belief is rational.) If so, my actual belief is not RN-safe. I take it this sort of case is not unusual, in the sense that many common phenomena have low-probability natural alternatives similar to albinism. So, this lack of RN-safety would generalize to a wide variety of empirical beliefs.

    On the other hand, if you want to exclude low-probability natural alternatives like albinism from counting as normal conditions, RN-safety becomes too easy to satisfy. Part of what scientists want to do is to rule out such possibilities. To take one extreme example, physicists adopted a 5-sigma standard for claiming that the Higgs boson had been detected. The point in doing so was to rule out the possibility that they were observing low-probability phenomena that looked like Higgs bosons but were in fact something else. Here’s a more prosaic example: in the thousands of times I have parked my car, it has been stolen only once. In this more stringent sense of normal conditions, it is not normal for my car to be stolen. Still, when I want to know where my car is, I do not carve out an exception for its having been stolen. That is, part of what I want to know is that it has not been stolen. Paying attention to RN-safety will not tell me that. So, knowledge cannot be understood in terms of RN-safety (where normal conditions exclude low-probability natural alternatives).
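
    (For concreteness, on the usual one-sided reading of the convention, the 5-sigma standard corresponds to a tail probability of about

    \[
    P(Z \geq 5) = 1 - \Phi(5) \approx 2.9 \times 10^{-7},
    \]

    where \(\Phi\) is the standard normal distribution function; that is roughly one chance in 3.5 million that a fluctuation at least that large would arise if there were no Higgs boson at all.)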

    • Thanks, Baron! Let me reply, all too briefly, to your interesting comment.

      It seems to me that the phenomena that you point to call for a broadly contextualist treatment. In some contexts, it will be true to say that you “know” that the next crow to be observed will be black (just as it is normally true for you to say that you “know” where your car is). These will be “low standards” contexts, where it would also be true for physicists to say (as they doubtless did in informal conversations in the bar, etc.), “We know that the Higgs boson exists”, even before the team at CERN met the 5-sigma standard.

      In other contexts, however, it would not have been true to say that you “know” that the next crow will be black, or that the scientists at CERN “knew” that the Higgs boson exists. These will be “high standards” contexts, where it is also not true for you to say that you “know” where your car is.

      The difference between “high standards” and “low standards” cases depends on how inclusive the relevant class of cases is in which the believer must have a correct belief. The larger this class of cases, the higher the standard of safety becomes; the smaller this class, the lower the standard of safety becomes.
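
      Put schematically, in the notation of my post above: if \(\mathcal{C} \subseteq \mathcal{C}'\) are two candidate classes of relevant cases, then

      \[
      \forall C \in \mathcal{C}'\ \mathrm{Cor}(C) \implies \forall C \in \mathcal{C}\ \mathrm{Cor}(C),
      \]

      so safety relative to the larger class \(\mathcal{C}'\) is the logically stronger, and in that sense higher, standard.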

      As I have formulated RN-safety, the class of cases in question depends both on which worlds count as “sufficiently nearby” and on which cases in those worlds count as “similar”, both with respect to what makes the actual case a case of “rational belief” and with respect to the degree and kind of “normality” that is exhibited by the actual case.

      In the “high standards” contexts, a lot of worlds count as “sufficiently nearby” and a lot of cases count as “similar” in these respects; in the “low standards” contexts, many fewer worlds count as “nearby”, and fewer cases count as “similar”.

  2. Thanks for your reply, Ralph. I think contextualism by itself is not enough to give a satisfactory reply here; more would need to be said about proximity of worlds, too. As I was thinking of the crow case, the genetic pool of nearby crows allows for a low probability of there being an albino. Every genetic combination, whether it produces albinism or not, is individually improbable, but they are each equidistant from the actual world. In this sense, each combination is like a ticket in a large, fair lottery. Each ticket has the same, small chance of winning. Consequently, the various worlds in which these tickets win are equidistant.

    My impression is that the most natural way to construe the stringency of standards in different contexts is that they govern different distances from the actual world. But if the world with a white crow is no more distant from the actual world than the worlds with black crows, lowering the epistemic standards so that more distant worlds are irrelevant is not going to prevent it from being relevant to my actual belief. It isn’t a more distant world.

    I think you would need to have some way of connecting stringency of standards to something like preponderance of nearby worlds, though I imagine that would be a difficult thing to characterize.

    • Thanks, Baron! You’re right that I should have said more about how the kind of safety that I’m proposing (“RN-safety”, as I called it) can be combined with a contextualist approach. However, I suspect that you may be overlooking a detail in the way in which I articulated this kind of safety.

      Specifically, as I articulated it, RN-safety involves, not just quantification over “nearby worlds”, but also quantification over “similar cases” within those nearby worlds. So, even if the world where the next crow that I observe is an albino is just as “nearby” as the world where the next crow that I observe is black, it does not follow that the case in which the next crow is an albino will count in the relevant context as a “similar” case.

      In general, suppose that there is a case C1 in which you believe p, and in C1 no very low-chance events that are relevant to the truth value of your beliefs occur. (Of course, every case involves the occurrence of some very low-chance events, but these events will typically be irrelevant to the truth value of your beliefs.) Then it seems that in some contexts, possible cases in which such highly improbable events relevant to the truth value of your beliefs do occur will not count as “similar” to C1 with respect to the degree to which (and the way in which) the conditions are normal. Obviously, these will be the relaxed low-standards contexts, where many beliefs count as “safe”.

      There will also be other contexts where cases in which low-chance events relevant to the truth value of your beliefs occur do count as “similar”; these will be strict high-standards contexts, where far fewer beliefs count as “safe”.

  3. Thanks, Ralph. I think I have a better idea of what your view is now, though I think more will need to be said about how similarity is determined. In the case of a lottery, aren’t all of the cases in which various tickets win similar to the case that actually occurs? (Assuming the lottery is fair, of course.) If so, then the case in which I will see a white crow is just as similar to the actual case as the cases in which I will see black crows.

    The only reason I can see for denying that the white crow case is just as similar is that the crow’s color is different from the color of the crow I actually see next. But if you tie similarity to the content of the belief in this way, safety is trivialized. Here’s an example that shows why. Suppose I see a white crow and come to believe that the next crow I’ll see is also white. Suppose further that this is actually true; despite being tremendously improbable, the next crow I will see actually is white. Is this belief safe? If this is a “low” context and we look only at the most similar cases, where similarity is determined by matching the content of the belief, then the relevant cases will be those nearby worlds in which the next crow I see is white. Hence, the belief counts as safe. But this seems absurd—if this belief is safe, then any true belief will be safe (at least in “low” contexts).

    • Thanks, Baron! You say:

      The only reason I can see for denying that the white crow case is just as similar is that the crow’s color is different from the color of the crow I actually see next.

      But in my previous comment, I tried to indicate a different reason for denying this:

      1. The relevant dimension of similarity between cases is in what I called “normality” (specifically, both in the cases’ degree of normality and in the way in which the cases are normal to that degree).

      2. In what we could call “highly normal” conditions, no low-chance events that are relevant to the truth value of the agent’s beliefs occur. So if the actual case is highly normal in this way, then no similar case will involve the occurrence of any such low-chance events.

      The case that you describe, in which you irrationally believe that the next crow to be observed will be white, and — amazingly — it actually is white, is clearly not “highly normal” in this sense; on the contrary, it is highly abnormal. So the cases that are “similar” to this case in the relevant respects will include other cases that are abnormal in similar ways — and in most contexts, these cases will include cases in which the next crow is not white.

      That said, I am willing to allow that in principle there could be super-low-standards contexts in which the range of “similar” cases in the “nearby” possible worlds includes only the actual case. In these contexts, safety collapses into mere true belief; and knowledge effectively collapses into rational true belief. I suspect that we do sometimes use the word ‘know’ like this — although I also suspect that usually we require some non-trivial degree of safety instead. I don’t see why this is “absurd” (as you put it)!
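
      In the limiting case, this is immediate from the schema in my post above: if the class of relevant cases shrinks to the singleton \(\{C_1\}\), then the safety condition \(\forall C \in \{C_1\}\ \mathrm{Cor}(C)\) says no more than \(\mathrm{Cor}(C_1)\); at that point safety is nothing over and above truth, which is just the collapse I am describing.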

  4. Thanks again, Ralph! If I understand you correctly, it sounds like you would be willing to accept that I can know my lottery ticket will lose (when it will lose). This is so because it would be abnormal for my ticket to win. That’s fine with me, but you will need to be careful about how you express this point: “In what we could call ‘highly normal’ conditions, no low-chance events that are relevant to the truth value of the agent’s beliefs occur.” No matter which ticket wins, it is a low-chance event. Similarly, no matter which crow I see next, it is a low-chance event that it has the genetic profile it does. Low-chance events happen all the time, even in highly normal conditions.

    Regarding your suggestion at the end, that in super-low-standards contexts “safety collapses into mere true belief; and knowledge effectively collapses into rational true belief,” I think the collapse is going to go a little further than you are allowing. In your initial post, you proposed an explanation of what it means to say that a belief is correct because it is rational. That explanation, you said, implies a kind of safety, which you then characterized in the RN way. If safety collapses into mere true belief, though, then it is hard to see how to go about explaining why a belief is correct because rational. I take it part of the motivation for your initial post is that rationality should be a repeatably successful (in some sense) way of acquiring true beliefs. If we drop the repeatability part of it, then it will be difficult to make the case that rationality is really what explains the success. So, if safety collapses into mere true belief, the danger will be that knowledge likewise collapses into mere true belief. That’s what seems absurd to me. But perhaps you could avoid this consequence by saying more about what rationality is (apart from our ability to characterize it in terms of safety).

    • Thanks again, Baron! You’re right that I need to be careful about how I express my idea about “normality”. Indeed, I should have been more careful than I actually was, since ticket number n’s winning the lottery is undeniably a low-chance event, and one that is “relevant to the truth value” of my true belief that my ticket hasn’t won. So perhaps I should just have said that in “highly normal” conditions, no low-chance events that falsify any of my beliefs occur? (I will have to think about this some more.)

      Of course, if I ever give a complete account of knowledge on the basis of these suggestions, I will have to base it on an account of “rationality”. Obviously I can’t give an account of rationality now… But perhaps it would be helpful for me to make one comment. In fact, unlike Sosa, I’m an internalist about rationality, and so I don’t think that rationality itself is necessarily a “repeatably successful … way of acquiring true beliefs”: after all, evil demons can ensure that their victims’ rationality does not function as a successful way of acquiring true beliefs.

      Indeed, I wouldn’t characterize rationality in terms of safety at all: after all, a whimsical demon might set out to Gettierize all of an agent’s rational beliefs, in which case the agent’s beliefs would be rational, but would not meet any standard of safety — except for the very lowest standard of all. As I conceive of things, there is a completely independent norm of rationality, which can be invoked in giving an account of knowledge.
