Meno requires adherence, not safety

Many recent epistemologists have claimed to find inspiration in the discussion of knowledge and true belief in Plato’s Meno (97a-98a). At the same time, it has become common for philosophers to emphasize safety from error (or the like) as a necessary condition for knowledge – where (roughly) a belief is safe from error if and only if it is held as a result of a process that could not easily yield a false belief.

Besides safety, another feature that a belief can have is what Robert Nozick called adherence to the truth – where (roughly) a belief adheres to the truth if and only if it is held as a result of a process that could not easily fail to yield belief if the proposition in question were true. As I shall argue, Plato’s Meno suggests that knowledge requires adherence – not that knowledge requires safety.

According to Meno (98a), the difference between knowledge and true belief consists in the fact that a mere true belief is like a slave who is liable to “run away from the soul”, whereas knowledge has somehow been more securely “tied down”. In effect, a mere true belief might too easily be lost, whereas knowledge is not so easily lost in this way. It is not suggested that if the true belief is lost it will be replaced by a false belief – the true belief might simply disappear altogether, without being replaced by any belief on the relevant topic at all.

Plato’s suggestion here is presumably not that knowledge is less liable to be forgotten than a mere true belief. The suggestion is surely that knowledge is less liable to be rationally undermined by new evidence that comes to light. There are two ways in which new evidence might rationally undermine a true belief:

  1. The true belief might have been irrational or unjustified all along, and the new evidence might force the believer to realize that he never had any good reason for the belief in the first place.
  2. The true belief might originally have been rational and justified, but the new evidence might defeat that original justification.

Cases of this second kind (2) involve a JTB that falls short of knowledge. But they are not really like the original Gettier cases (or like Carl Ginet’s famous “barn façade” case). Instead, they are more like the “assassination case” that Gilbert Harman presented in Thought (1973, 143f.).

In Harman’s “assassination case”, there is a JTB that fails to count as knowledge, because there is a mass of (misleading) defeating evidence in the believer’s environment, and it is simply a fluke that the believer does not encounter this defeating evidence. But this defeating evidence – we may suppose – consists entirely of “undercutting” (rather than “rebutting”) defeaters. So if the thinker had encountered this defeating evidence, she would simply have given up on having any beliefs about the topic in question – she would not have come to believe the proposition’s negation. I.e., this belief fails to count as knowledge, not because it is unsafe, but because it fails to adhere to the truth adequately.

In order to define ‘adherence to the truth’ more precisely, we need to factor the “process” that yields a belief into two components: the “positive conditions” (which require the presence of certain factors that normally yield the belief in question) and the “negative conditions” (which require the absence of certain factors that would inhibit the positive conditions from yielding the belief in question).

Then we can define the notion as follows:

A belief adheres to the truth iff, for some process P, the belief is held as a result of P, and in all nearby cases in which the believer meets the positive conditions of P, and the corresponding proposition is true, P yields belief in that proposition.
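
To make the contrast with safety explicit, here is a rough schematic gloss of the two conditions (the notation is just shorthand of my own: ‘Pos(P)’ abbreviates the positive conditions of P, and N is the contextually relevant domain of nearby cases):

\[
\textit{Safety:}\quad \forall w \in N \colon \bigl(\mathrm{Pos}(P) \text{ holds in } w \,\wedge\, P \text{ yields belief in } p \text{ in } w\bigr) \rightarrow p \text{ is true in } w
\]
\[
\textit{Adherence:}\quad \forall w \in N \colon \bigl(\mathrm{Pos}(P) \text{ holds in } w \,\wedge\, p \text{ is true in } w\bigr) \rightarrow P \text{ yields belief in } p \text{ in } w
\]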

It is clear that a belief can be safe without adhering to the truth in this way. This will happen whenever there are no nearby cases in which the relevant process leads the believer to believe anything false, but there are nearby cases where the believer encounters undercutting defeaters (so that in those cases, the process fails to yield any belief at all).

In conversation, I have found that many fans of safety deny that knowledge requires this kind of adherence to the truth. In their view, the only way in which the “assassination case” can fail to be a case of knowledge is if it involves a failure of safety (as well as a failure of adherence). As I have argued here, in denying that knowledge requires adherence as well as safety, these philosophers are disagreeing with Plato’s Meno.


Comments

  1. Here’s (perhaps) a case where one knows but one’s belief does not adhere to the truth. Suppose that I’m listening to the radio, while also reading philosophy blogs that happen not to have any popup windows in them. I learn from listening to the radio that Lady Gaga is playing a show tonight. As it happens, there is a nearby case in which I get distracted by a popup window in my web browser, so distracted that I stop paying attention to the radio (without stopping listening to it). In that nearby case, I wouldn’t have formed the belief about Lady Gaga, even though I would still be listening to the radio. But, in the actual case, I nonetheless know. Perhaps, here, I have knowledge without adherence. (I suppose it depends on what the “positive conditions” are of listening to the radio.)

  2. Ralph, great post, I like this interpretation of Socrates’s speculation a lot. I wonder, though, whether Roush’s adherence condition wouldn’t be better here. For one thing, we don’t need any talk of processes or methods of belief formation, and we don’t need recourse to any possible-worlds semantics (though I recognize that your talk of nearby cases doesn’t require it either). What do you think? Her adherence condition requires the probability of believing p given p to be very high and the probability of believing ~p given p to be very low. (Of course, we need to know which probability function is at work in this condition, and part of what I like about Sherri’s book is the careful attempt to address this worry.)
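
    In symbols, roughly (this is my own rendering, with the thresholds left schematic rather than being Sherri’s exact formulation):

    \[
    \Pr\bigl(B(p) \mid p\bigr) \;\ge\; 1 - \varepsilon
    \qquad\text{and}\qquad
    \Pr\bigl(B(\neg p) \mid p\bigr) \;\le\; \varepsilon ,
    \qquad \text{for some small } \varepsilon .
    \]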

  3. Hi,

    Jon: a classic objection to adherence comes from Glimpse cases (I glimpse a bird in the garden and thereby know that there is one). Does the Roush condition avoid these without appealing to methods?

    Ralph: could you clarify for me whether the positive conditions are (a) the conditions under which the process P takes place, or (b) the conditions under which P takes place and yields belief in the target proposition? If the second, it seems to me that the condition is logically equivalent to: no nearby cases satisfy the positive and negative conditions jointly – am I right?

  4. Thanks for those great comments, Dennis, Jon and Julien!

    I am in complete agreement with Julien that we have to introduce something like “methods” (the term that Julien prefers) or “processes” (the term that I prefer) to give a good formulation of either safety or adherence. If Sherri Roush tries to get by without introducing anything like methods or processes, then I believe that she will face a long chain of problems — including the “glimpse” problem that Julien raises here.

    I accept that there is a good question about whether we should prefer the kind of formulation that I have used (which requires that in all of the relevant nearby cases in which the positive conditions of the process are met and the proposition is true, the process yields belief in the proposition) or the more probabilistic formulation (which is effectively equivalent to the idea that the process must yield belief in at least most of these cases).

    My own view is that we should be contextualists about all of these epistemic terms, and so this difference is less important than we might at first assume. But I accept that if contextualism is false, then this could turn out to be an important difference.

    Julien, about your last question — I don’t want to try to offer a complete account of the relevant “processes” here in the comments thread on Certain Doubts. But very roughly, my idea was this:

    1. The process’s “positive conditions” would include the occurrence of mental events — like having the sensory experience of glimpsing the bird in the garden, or hearing the early-morning radio broadcast the news of the assassination — that would normally trigger one’s forming the belief in question.
    2. The “negative” conditions would include the absence of the sort of defeating conditions that would inhibit those positive conditions from triggering the formation of that belief.

    So the process will not yield a belief unless both the positive and negative conditions are met. (Indeed, it is somewhat unnatural to say that any belief-forming process “takes place” at all unless the belief in question is actually formed!)

  5. Hi Ralph,

    Thanks for the clarification. I like the idea of process inhibition, I need to give it more thought.

    “it is somewhat unnatural to say that any belief-forming process “takes place” at all unless the belief in question is actually formed!”: one could for instance have in mind what Nozick calls (if I remember correctly) “double-sided” methods, which are processes that can result in belief in either a proposition or its negation. More generally, what Goldman calls a “process” is far more general than a process leading to a unique proposition. It helps to see that you’re thinking of these in much narrower terms (rightly, in my view!).

  6. Thanks Julien!

    I am thinking of these processes as having a certain kind of generality to them. E.g. one might go through the very same process on several different occasions, to form a belief in the proposition that one might express on that occasion by saying “That’s a barn”; if each occasion involved a different barn (or barn façade…), this would be a belief in a different proposition on each occasion.

    I’m sceptical of the idea of rational processes that are “two-sided” in Nozick’s sense. But in fact, that issue doesn’t matter, since in the cases where the belief-forming process is inhibited by undercutting defeaters, no belief (or disbelief) is formed — either in the proposition or in its negation!

  7. Julien and Ralph, the probabilistic adherence condition isn’t the only condition on knowledge, for Roush. She also has a relevance condition, which is that the probability of not believing p when p is false is very high. That condition won’t help with Glimpse cases, but Roush has a complicated condition on what has to be held fixed when assessing the conditional probabilities in question. It is that fixing condition that is supposed to do the work. My intention wasn’t to vouch for the adequacy of the theory, but simply to raise the issue of whether a probabilistic construal of adherence might be useful here. It is interesting to note that Williamson’s take on the Socratic speculation is itself probabilistic, though not the probabilistic version of adherence.

  8. Pingback: Glimpse Cases and Roush’s Theory | Certain Doubts

  9. Hi Ralph,

    Do you think there is knowledge without adherence in the following case:

    Suppose I want to know how biased a coin is. In particular, I want to know whether its bias is in the interval (x, y), for 0 < x < y < 1. Here's my method(/process) for deciding. I flip the coin 1000 times and then do some statistical tests. If the tests say I can be more than 99% confident that the bias is between x and y, then I believe that its bias is between x and y; if the tests say I can be more than 99% confident that the bias isn't between x and y, then I believe that its bias isn't between x and y; otherwise I suspend judgment about its bias. I take it that this sort of belief-forming method often yields scientific knowledge.

    For some choices of x and y, there exist possible outcomes of 1000 tosses such that the statistical tests will say I can be more than 99% confident that the coin's bias is between x and y, but there are no possible outcomes of 1000 tosses such that the statistical tests will say I can be more than 99% confident that the coin's bias isn't between x and y. Suppose I choose such an x and y, flip the coin 1000 times, do the statistical tests, and learn that I can be more than 99% confident that the coin's bias is between x and y. I come to believe that the coin's bias is between x and y. In fact, the coin's bias is between x and y, and I thereby come to know this.

    My belief is safe: x and y are such that I couldn't easily have used my method on this coin to form a false belief. But my belief violates adherence: I could easily have used my method on this coin and failed to form the true belief I actually formed – i.e., the tosses might easily have landed such that I couldn't be 99% confident that the coin's bias is between x and y. I say this could easily happen not just because it's a possible outcome of a chancy process, but because it might also have high objective probability; I might have been lucky to get just the right proportion of heads and tails to allow me to be more than 99% confident that the coin's bias is between x and y, but, given the coin's actual bias (between x and y), it was highly objectively improbable that I would be so epistemically fortunate. This seems relevant to the probabilistic versions of adherence mentioned above.
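
    For concreteness, here is a minimal sketch of the sort of belief-forming method I have in mind, on a Bayesian reading of the "statistical tests" with a uniform prior over the bias (the use of scipy, the function name, and the exact 99% cut-off are just illustrative choices, not essential to the case):

    import random
    from scipy.stats import beta

    def decide(flips, x, y, threshold=0.99):
        """Return 'believe', 'disbelieve', or 'suspend' about: the coin's bias is in (x, y)."""
        heads = sum(flips)
        tails = len(flips) - heads
        # With a uniform prior over the bias, the posterior is Beta(heads + 1, tails + 1);
        # the "test" asks how much posterior probability falls inside (x, y).
        in_interval = beta.cdf(y, heads + 1, tails + 1) - beta.cdf(x, heads + 1, tails + 1)
        if in_interval > threshold:
            return "believe"       # more than 99% confident the bias is in (x, y)
        if in_interval < 1 - threshold:
            return "disbelieve"    # more than 99% confident the bias is not in (x, y)
        return "suspend"

    # Example: the wide interval (.001, .999) and 1000 tosses of an ordinary coin.
    flips = [1 if random.random() < 0.5 else 0 for _ in range(1000)]
    print(decide(flips, 0.001, 0.999))        # almost always prints 'believe'
    print(decide([1] * 1000, 0.001, 0.999))   # 1000 heads: prints 'suspend'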

  10. Jon —

    As I mentioned above, I favour a broadly contextualist view of these probabilistic accounts. So, in my view, knowledge requires strict safety and strict adherence: i.e. in all relevant nearby cases in which the positive conditions of the relevant process are met, the process yields belief if, and only if, the corresponding proposition is true. However, exactly what domain of cases counts as the relevant “nearby cases” is determined by the conversational context.

    So, in effect, for every version of a probabilistic account of knowledge, there is a context in which this strict safety+adherence account is necessarily coextensive with that probabilistic account. In this way, contextualism dissolves the appearance of any real conflict between the strict safety+adherence approach and the probabilistic safety+adherence approach.

  11. Jeremy —

    I think that you have misinterpreted your example. In fact, your example shows that knowledge does not require that one’s belief-forming process can truly be called “safe” in all conversational contexts; but it does not undermine adherence at all (except on a very tendentious understanding of what the relevant processes are).

    1. Whether your example can be truly described as an instance of “safety” depends on which cases count as “nearby” in the conversational context. There is clearly at least one possible world w where the result of the coin toss in w is an extraordinary cosmic accident, so that statistical analysis of this result in fact fails to reveal the underlying bias of the coin at all. In some conversational contexts, cases in which one uses this process in w count as relevantly “nearby cases” — and in some of these cases, the coin’s bias is not between x and y at all. So in these contexts, it is just not true to say that this method or belief-forming process is “safe” — since in at least one “nearby case”, the process yields a belief that is false!

    2. Suppose that we individuate processes fairly finely, so that the process P that one actually uses involves the specific sequence of types of experiences on which one’s belief is actually based. Then cases in which one observed a different sequence of coin tosses are cases in which the “positive conditions” of this process P are simply not met. So these cases are not counterexamples to the claim that adherence holds in the actual case.

    Moreover, any cases in which one is rational, and has precisely this sequence of experiences, and no undercutting defeaters are present, will be cases in which one forms the belief. Thus, as long as no such defeaters are present in any nearby world in which the coin’s bias is between x and y, adherence will be satisfied: in all nearby cases in which the positive conditions of this specific process P are met, and the proposition that the relevant coin’s bias is between x and y is true, P will yield belief in that proposition. So (on these assumptions) adherence is met in your example.

  12. Hi Ralph,

    Re 1: I agree that what could easily have happened is highly context dependent. But I set up the case so that, no matter how the coin landed, I’d either believe that its bias is within the interval it is actually in or I would suspend judgment about its bias: for no sequence of outcomes would I form a false belief about the coin’s bias. For example, suppose x = .001 and y = .999 and p = .01 is my statistical threshold for belief. For most sequences of 1000 heads/tails I will end up believing that the coin’s bias is between x and y; for all others, such as 1000 heads, I will suspend judgment about whether the coin’s bias is between x and y. But for no sequence will I falsely believe that the coin’s bias isn’t between x and y.

    Re 2: Your proposal doesn’t strike me as “individuat[ing] processes fairly finely”; it strikes me as individuating them fantastically finely – so finely that adherence loses its interest. If, for example, we are psychologically constituted so that our experiences cause our beliefs (with the help, no doubt, of our background beliefs), then adherence will be trivial: In any case where I have my actual sequence of experiences (and background beliefs) I’ll form my actual beliefs, so, if what I believe is true, I’ll believe truly. More generally: If positive conditions are defined so finely that I couldn’t easily have formed different beliefs under the same positive conditions, then adherence is trivial: In all nearby cases in which the actual positive conditions obtain and in which what I actually believe is true, I will truly believe what I actually truly believe.

  13. Jeremy —

    Ad 1: You’re now changing your example. Here is your original description of the case:

    there exist possible outcomes of 1000 tosses such that the statistical tests will say I can be more than 99% confident that the coin’s bias is between x and y, but there are no possible outcomes of 1000 tosses such that the statistical tests will say I can be more than 99% confident that the coin’s bias isn’t between x and y.

    Now you’re adding that there are no possible outcomes of 1000 tosses such that (a) the statistical tests will say that you can be more than 99% confident that the coin’s bias is between x and y, but (b) in fact the coin’s bias is not between x and y. I don’t think that could possibly be true. Surely, if I’m only 99% confident that the proposition is true, I must have a confidence level of 1% that the proposition is false. So why isn’t it possible for the proposition to be false even if the coin toss results in this particular outcome? But of course, if there is a nearby case in which the proposition is false, the process is unsafe.

    Ad 2. Individuating processes so finely does not make adherence “lose its interest”. You haven’t shown that adherence is satisfied in the cases that interest Socrates and Gilbert Harman: viz. cases where the negative conditions are actually met but might very easily not have been met — i.e. where there is a mass of defeating evidence lying around in your environment and it is only a fluke that you don’t encounter this defeating evidence. So adherence is not trivial.

  14. [S]uppose x = .001 and y = .999 and p = .01 is my statistical threshold for belief. For most sequences of 1000 heads/tails I will end up believing that the coin’s bias is between x and y; for all others, such as 1000 heads, I will suspend judgment about whether the coin’s bias is between x and y. But for no sequence will I falsely believe that the coin’s bias isn’t between x and y.

    This example is troubling. To be confident that a coin’s bias is between 0.001 and 0.999 is to be confident that you aren’t dealing with a same-sided coin (i.e., either a double-headed or a double-tailed coin), but to have no idea which way the coin is biased, and to concede that you are not confident that your experiment is powerful enough to distinguish between your claim (i.e., the coin is not same-sided) and the alternative (the coin is same-sided). For the probability of getting 1 or more heads from flipping a 0.001-biased coin 1000 times is about 0.63, and the same is true for getting 1 or more tails, given the symmetry of your constraint.
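
    To spell out that arithmetic:

    \[
    \Pr(\text{at least one head}) \;=\; 1 - (1 - 0.001)^{1000} \;=\; 1 - 0.999^{1000} \;\approx\; 1 - e^{-1} \;\approx\; 0.632,
    \]

    and by symmetry the same figure holds for getting at least one tail from a 0.999-biased coin.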

  15. Ralph-

    Re1: “Now you ‘re adding that there are no possible outcomes of 1000 tosses such that (a) the statistical tests will say that you can be more than 99% confident that the coin’s bias is between x and y, but (b) in fact the coin’s bias is not between x and y.”

    I didn’t say that, although I suppose there’s a scope ambiguity in what I said if you read “actually” non-rigidly. So let me be explicitly de re: For certain x and y (such as .001 and .999), it is possible to form a belief that the coin’s bias is between x and y using the aforementioned method, but it is impossible to form a belief that the coin’s bias isn’t between x and y using the method. For all such x and y, it follows that if I form a true belief that the coin’s bias is between x and y using the method, then (assuming the coin’s bias couldn’t easily have failed to be between x and y) my belief is safe, since I couldn’t easily have formed a false belief that the coin’s bias is between x and y using the method.

    Re2: You defined adherence as follows:

    A belief adheres to the truth iff, for some process P, the belief is held as a result of P, and in all nearby cases in which the believer meets the positive conditions of P, and the corresponding proposition is true, P yields belief in that proposition.

    This definition doesn’t mention negative conditions. Was that a typo? If so, what was your intended definition?

    Gregory-

    I don’t see what’s troubling. Let bias 0 correspond to the coin’s being double-headed. Now suppose I get 1000 heads. True, I’ll suspend judgment about whether the coin’s bias is between .001 and .999. But for a still very small x > .001, I will come to believe that the coin’s bias isn’t between x and 1. (I’m assuming a uniform prior probability measure over possible biases.)
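
    To illustrate with that prior: after 1000 heads, the posterior over the bias (the tails-probability, on my labelling) is Beta(1, 1001), so

    \[
    \Pr(\text{bias} < x \mid \text{1000 heads}) \;=\; 1 - (1 - x)^{1001},
    \]

    which already exceeds 0.99 once x is around 0.005 (a value I pick purely for illustration).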

  16. I should say that if safety is a matter of not easily falsely believing that p, then all the work is done by the coin not easily having had a different bias. However, if one also thought that safety required not easily falsely believing that not-p, then the impossibility of believing not-p by the method–for certain p–is relevant to safety.

  17. Hi Jeremy,

    Now suppose I get 1000 heads. True, I’ll suspend judgment about whether the coin’s bias is between .001 and .999.

    Why would you suspend judgment when there is a better than 1 in 3 chance of your biased coin turning up heads for all 1000 tosses? That’s what I mean by the experiment not being powerful enough to move you from (practically certain) total ignorance to something stronger. (NB: In my earlier post, I skipped past the values between 0 and x and between y and 1 and went straight to the end points.)

  18. Jeremy —
    1. Now you’re explicitly adding the very assumption that I was querying, that “the coin’s bias couldn’t easily have failed to be between x and y”. My point was precisely that in almost every conversational context, there will be relevantly “nearby cases” in which the coin’s bias fails to be between x and y. (If Ginet’s barn could easily have been a barn façade, surely your coin could easily have had a different bias!)
    2. I didn’t explicitly mention “negative conditions” in my definition of adherence because their role was implicit in what I said. As I explained: (i) the relevant processes have both “negative” and “positive” conditions, which are jointly sufficient for yielding the belief in question; and (ii) a process satisfies adherence if and only if the process yields belief in every nearby case in which its positive conditions are met and the corresponding proposition is true. It is implicit in this that adherence will fail if there are nearby cases where the process fails to yield belief even though the positive conditions are met and the proposition is true. Obviously, these must be cases in which the negative conditions fail to be met.

  19. Gregory-

    I didn’t mean to be saying that I’m not moved at all from ‘total ignorance’ – by ‘suspend judgment’ I just meant neither flat-out believing nor flat-out disbelieving. I guess the picture I had in mind was something like a threshold conception of flat-out belief (which it probably shouldn’t have been, since I don’t like that view).

    Ralph-

    1) I see, I guess we just disagree then. I think there are very few conversational contexts in which it would be true to say that this very fair coin could have easily had a bias less than .001 or greater than .999.

    2) OK, but your suggestion was to build “the specific sequence of types of experiences on which one’s belief is actually based” into a belief’s positive conditions. My worry is that, at least when considering only nearby cases, such positive conditions will by themselves be “sufficient for yielding the belief in question” irrespective of whether any further “negative conditions” are satisfied. This would trivialize adherence. But maybe we just disagree: Do you think I could have easily had “the specific sequence of types of experiences on which one’s belief is actually based” without forming the belief I actually formed?

  20. 1. If the believer has a confidence level of 1% that the coin’s bias is not between .001 and .999, won’t it be true that in many conversational contexts (in which we are thinking about the believer’s situation) we suppose that it could easily have happened that the coin’s bias is not between .001 and .999?
    2. My idea was that there is a possible world w1 in which you have the specific sequence of types of experiences on which your belief is based in the actual world w*, even though in w1 you also have various pieces of defeating evidence (which you did not have in w*). Admittedly, we have to understand the phrase “the specific sequence of types of experiences on which one’s belief is actually based” in a particular way to accommodate this idea. I may not have been as clear as I could have been on that point — for which I apologize…

  21. Hi Jeremy,

    Compare two cases:

    a) The coin’s bias is b such that 0 ≤ b ≤ 1, and the agent is confident about that. This means the agent is confident that a tossed coin will land on one side or the other but little else. You might say that the agent is confident in his total ignorance about the coin toss: ignorant about the bias, but also ignorant about the mechanism.

    b) The coin’s bias is b such that 0 < b < 1, and the agent is confident about that. This means the agent is confident that the tossed coin is not a trick single-sided coin which, when tossed, will always land the same way up. But the agent is confident of little else.

    Your example supposes the coin’s bias is b such that 0.001 < b < 0.999, which is an instance of case (b). The additional structure that these numerically determinate endpoints give, I claim, does not help your example. Rather, it seems to obscure the information (or absence of information) that this broad interval represents.

    Now pick whatever propositional attitude you favor to express your endorsement of (b), or your endorsement of the specific version of (b) you’ve introduced. Run the experiment. You are suggesting that your initial attitude toward (b) will change if the experiment happens to yield the same face on all 1000 trials. Specifically, you are suggesting that this information will lead you to suspend the judgment that you (or your agent) are practically ignorant about the coin’s bias. But your specific constraints entail that this experimental outcome, seeing 1000 tosses all landing the same side up, is not a rare event. In short, the outcome is perfectly compatible with your endorsement of (b); I see no reason to change attitudes.

    If I have misunderstood you, perhaps you can show your calculations so that I can see better what you had in mind.

    Regarding

    I think there are very few conversational contexts in which it would be true to say that this very fair coin could have easily had a bias less than .001 or greater than .999.

    The composition of the coin is one thing. The way it is tossed is another. Whether the coin behaves as a fair coin depends on both.

    Example: suppose I pull from my pocket a newly minted 50c Euro coin. We agree it is fair. But rather than toss the coin, I always place it heads up, or always place it tails up. You do not know which. Although the composition of the coin is fair, the uncertainty mechanism I use to determine outcomes ensures that when I “toss” the coin it always lands the same side up. Hence, from your point of view, the coin is biased 0 or biased 1 in favor of heads.

  22. There is an unfortunate overloading of ‘b’ in my comment. If you replace the inequalities by 0 ≤ x ≤ 1, 0 < x < 1, and 0.001 ≤ x ≤ 0.999 that should make things clearer.

  23. Sorry for the delayed reply!

    Ralph-

    1) I take it the idea is something like this: if it’s rational to have Cr(p) > 0 in a context, then it will be true in that context to say that it could easily have been that p? I guess I don’t think that’s true – at least, not if what “could have easily happened” is understood metaphysically. Coins just aren’t the sorts of things that could easily have had biases wildly different from those they actually have.
    2) Thanks, I understand the proposal now. Appealing to specific chains of experiences still seems extremely unpromising to me, though, but I’ll leave it at that.

  24. Thanks, Jeremy!

    On (1), I certainly wouldn’t accept any general principle to the effect that any context in which it’s true to say that it’s rational to have a non-zero credence in p is also a context in which it’s true to say that it could easily have been that p. (E.g., it could surely be perfectly rational for some thinkers to have a non-zero credence in metaphysically impossible propositions, such as “Water is not H2O”, etc….)

    All that I meant was that in the case of credences concerning the outcome of a coin toss, what makes it rational for the thinker to have a non-zero credence in the proposition in question is something like the Principal Principle — i.e. the thinker’s credences are being guided by their beliefs about the objective chances of various outcomes; and we can stipulate that the thinker’s beliefs about the relevant objective chances are not themselves mistaken in any way. And I think I would accept the general principle that any context in which it is salient that there’s a non-zero objective chance of p‘s being true is a context in which it is true to say that it could easily have been that p.

  25. Hi Ralph,

    Let X be the proposition that the objective chance that the coin lands heads is x. You write:

    “we can stipulate that the thinker’s beliefs about the relevant objective chances are not themselves mistaken in any way”

    If by this you mean: if X, then the thinker has Cr(X) = 1 (or at least very high), then this is exactly what we *can’t* stipulate. To fail to have such credences is precisely *what it is* to be uncertain about how the coin is biased, which was the relevant feature of the case! (This is perfectly compatible with the thinker satisfying PP.)

  26. Sorry, I could have formulated that point more clearly: ‘not mistaken in any way’ is much stronger than I need.

    All that I need is something much weaker — viz. the point that the thinker’s beliefs about the objective chances are not radically mistaken, in the following sense: it is not the case that, for some positive real number x, the thinker has a non-zero credence in the proposition that the objective chance of the relevant event’s occurring is x, while in fact the objective chance of its occurring is 0.

  27. Whenever there is a double-headed coin and I’m not certain that it’s double-headed, I’ll violate that condition. That doesn’t strike me as a case of being radically mistaken. But setting that aside, I’m not seeing how the proposal handles my case, since in the case it was stipulated that the coin is in fact neither double-headed nor double-tailed.

  28. The proposal handles your case by ensuring that, in the context in which we are discussing the case, it is salient that there is a non-zero objective chance that the outcome of the coin toss does not reliably indicate the coin’s bias, but is instead the result of an extraordinary fluke. Since it is salient in the context that there is a non-zero objective chance of this being the case, it will be true in this context to say that it “could easily have happened” that this is the case.

  29. 1) In the relevant scientific contexts, I think that’s probably not true: When you rule out a null hypothesis because p is low, you arguably do so because you think p is too low for you to have easily gotten misleading evidence. (Caveat: I’m not making a normative claim to the effect that using classical statistics in this way is reasonable, only a descriptive claim about how scientists seem to use it.)

    2) But suppose I grant that in these cases you could easily have gotten evidence that wouldn’t support the truth that your evidence in fact supports. How does that relate to the issue of adherence? Is the thought that if your evidence had been any weaker then the positive conditions wouldn’t have been satisfied? That seems implausible to me. Let n be the minimal number of heads that need to be observed in order to know (in the context) that the coin isn’t biased towards tails. Suppose I then come to know that the coin isn’t biased towards tails on the basis of seeing it land heads n times. It could easily have landed heads one less time, and suppose that, if it had, I would have suspended judgment about whether it was biased towards tails. So, if you want to maintain adherence, you have to say that the positive conditions wouldn’t have been satisfied if the coin had landed heads one less time than it actually did. But that is to individuate positive conditions incredibly finely.
