Two Arguments that Knowledge Entails Evidential Justification

I still find it hard to believe that anyone would want to deny this, but it happens. 🙂

The Knowledge Entails Evidence Argument (KEE)

1. Knowing entails believing as one ought.

2. Believing as one ought entails believing what fits one’s evidence.

3. Thus knowing entails believing what fits one’s evidence.

I’m not sure I believe 1, but I’m not particularly opposed to it, and it seems like the thing fans of knowledge would like.

2 doesn’t even say that believing what fits the evidence is sufficient for believing as one ought, only that it is necessary for it.  I think it sounds weird to say S’s evidence points to not-p, but she’s believing just as she ought by believing p.  Here’s a more formal presentation of that idea.

The Knowledge and Normativity Reductio (KNR)

1. Suppose for reductio that knowledge does not entail evidential justification.

2. If 1, then possibly, S knows p though her evidence all points to not-p, even after perfectly virtuous inquiry.

3. If all S’s evidence points to not-p after perfectly virtuous inquiry, then S should believe not-p.

4. So if 1—modulo the seemingly undeniable 3—possibly, S knows p though S should believe not-p.

I think 4 would spell trouble for friends of the value of knowledge.  At the very least, I’m much more concerned about what I should believe than what I happen to know.


Comments


  1. Just a quick reply, and I may be missing something, and probably following up on Aidan—2 is too strong; all that seems true to me is 2*:

    2*) If 1, then possibly, S knows p though all her evidence is neutral between p and not-p.

    But 3* has (to me at least) a lot less prima facie plausibility than 3:

    3*) If all S’s evidence is neutral between p and not-p, then S should not believe p.

  2. Sounds just as bad to me, Geoff. I mean, S might have practical reasons for favoring one of a pair of counterbalanced propositions, but, after all, she might have practical reasons for believing what she justifiedly believes to be totally unjustified by her evidence.

    It seems clear to me that when one’s evidence is counterbalanced, the attitude that epistemically fits that is suspension of judgement.

  3. But you presumably don’t just want to preach to the choir. Take a version of an entitlement theory, which says that one ought to believe that there’s an external world even though one does not (and cannot) acquire any evidence for this, in virtue of one’s possession of a non-evidential warrant to believe this proposition. The line isn’t that one’s evidence supports the non-existence of the external world. It fails to support either the proposition that there is an external world or its negation. Such a theorist will deny 2 in both arguments, and on seemingly principled grounds. If the argument is just ‘If 2, then K entails EJ’, then I guess I’m unclear who the target is.

  4. I’m nonplussed at entitlement theory. So that’s mos def not the target.

    The target is “the friends of knowledge” and “those who think knowledge is valuable” per above. Maybe some ET’s are in there–I have no idea–but the majority are not. I have in mind people like Greco and Sosa who tend to like normativity and knowledge, but not evidence, probably Goldman too.

  5. No surprise there Trent; I mean, you’re an evidentialist!

    Anyway I don’t mean to get into a “how does it sound?” debate; that’s pretty fruitless. But I suspect there are good reasons you could give for resisting 3* which you couldn’t give for resisting 3. That’s all I meant. I have loud children I have to attend to, but if I think of the reasons I’ll post them later.

  6. Geoff, I doubt that there are any, but of course I welcome proposals if you think of any. I’m fond of Loud Children, so give them my best! Unsurprisingly, I was a Loud Child.

  7. Trent,

    Two quick questions (1st argument gets ‘a’ and 2nd gets ‘b’). I’m getting a bit tripped up on the should-nots. You say:

    “3b. If all S’s evidence points to not-p after perfectly virtuous inquiry, then S should believe not-p.”

    That seems to be more contentious than you need. Why not switch “should believe not-p” to “should not believe p”? Seems more in line with the 1st argument where the idea is that fittingness is necessary, not sufficient for a “should”.

    Like Aidan, I’m trying to get a better feel for who the targets are. Suppose Goldman says this in response: I’m happy to concede 1a and 2a, but that’s because having evidence is what you get once you manage to believe what you ought. Evidence is just what you justifiably believe. It’s ultimate evidence if it’s a basic belief and derivative evidence otherwise. Having fitting evidence is the result of having justified beliefs and knowledge, not that by virtue of which the result is achieved.

    Does explanatory priority matter to you, or would you and Goldman now be on the same page?

  8. Clayton,

    I agree that 3b is stronger than it would have to be, but nothing weaker is good enough for me. I take 3b as is to be an instance of a general principle with “not-p” in for a schematic “p”.

    Explanatory priority does matter to me. There are moments I’ve blogged about in the past where I want to mix Chisholm and Williamson and say that my evidence consists in the set of propositions such that I am aware that p. But most of the time I think it’s more commonsensical, simple, and natural to take the Conee-Feldman line that ultimate evidence consists in experiences.

    Thus, 3b could read (also modifying per the above):

    3b* If all S’s experience points to X after perfectly virtuous inquiry, then S should believe X.

    That seems obvious to me.

    I think agreeing with experience is one admissible precisification of “evidential justification” and I stipulate that’s what I mean in 1. There may well be other permissible precisifications, but I still think Goldman and I would be in important disagreement (though there may be slippage along the propositional/doxastic justification line).

  9. I’m not sure that any of the principles here (premises 1. and 2. of KEE, and premise 3. of KNR) should stand in unrestricted form; they are probably all at best true with some sort of “generally” or “ceteris paribus” operator in front of them. (“Entails” may have to be switched to a suitably weaker verb as well.)

    Fwiw, you can also add me to the chorus of those who find premise 2. of KNR rather too strong as well; and 2* sounds like something that plenty of people will reject, along the lines Aidan laid out @5. Also relevant would be views like Kornblith’s, in which animals can have knowledge just fine, even if they don’t play the evidence game at all.

  10. Jonathan,

    I don’t understand. Are you aware of a counter-example?

    I assume you’re talking about KEE-2? For the animal example doesn’t seem to even be of the right form for a c-e to KNR-2.

    I don’t know what the evidence game is. Is that from Parker Brothers? Can I get that at Target? 🙂

    I take it that animals have experiences and that there is something cognitively criticizable about them if their beliefs don’t fit the evidence.

    Can you help me understand where we are relevantly different from other animals here? If it helps, I go in for causal basing relations. Maybe that’s where you get off the train.

  11. I had KNR-2 in mind. If S can know p without being evidentially justified in p, but only because S just doesn’t have evidence of any sort for anything ever, then 1 can be true even when the consequent of 2 isn’t possible. (The animal knower just can’t have any evidence to be pointing the wrong way in the first place!)

    So this would be on an account that combines reliabilism about knowledge with some sort of articulate-reasons-having requirement for evidential justification.

  12. Jonathan,

    1. I don’t know what the “articulate-reasons-having” version of evidence is.

    If it’s something like

    S has evidence for p only if S can state a reason for p

    then I don’t even think that’s plausible for most human cases. So you can stipulate that sense if you want–and I think there’s a usage of “evidence” that goes along with that–but that still fails to distinguish between the animal case and the human case.

    2. It seems odd to me that you’d deny animals the evidence bit but grant them the knowledge bit.

    3. re: #1, as I said, you can consider a different argument from the one I’ve presented, but then it’s a different argument from mine. Just as there is a usage of “evidence” that involves the articulation of reasons, there is also one which involves our experiences, the “testimony of the senses.”

    4. I’m still quite unclear on just what it is you think animals don’t have. Maybe you think they don’t have what Chisholm called “takings”. That may well be the case, and we can distinguish between animal knowledge and human knowledge on that basis, but I’m really interested in why you think this:

    JW1 The animal knower just can’t have any evidence to be pointing the wrong way in the first place.

    I can’t myself think of a reason. For example, I think a dog can *recognize* a cat as a cat (and this needn’t, any more than in a human, entail *thinking the thought* “That’s a cat”, much less “I can tell that’s a cat because…”).

  13. Trent,
    What do you think about Plantinga’s counterexample? (in the “Warrant in Contemporary Epistemology” volume ed. Kvanvig, p. 358-361). A person can know that 2+1=3 w/out any evidence. The best evidence that might be necessary for such knowledge is what he calls “impulsional evidence” (which, as I read him, is identical to a seeming that 2+1=3). But he points out that all the things we believe have this sort of evidence (everything we believe seems true to us), and the evidentialist is looking for a condition for knowledge beyond the belief condition. So you can’t appeal to that.

    Senor has some interesting counterexamples too.

  14. Hi Trent

    Re KEE-1, do you think that one can know without knowing that they do? If so, don’t these sorts of cases create trouble for KEE-1? Here’s an attempt:

    Case a: We’re looking for a particular restaurant, but we haven’t checked where it is. A specific and sharp image of a street corner comes to my mind, which I feel is somewhere along the street on our right and connects with the restaurant’s street. I say, “I’m not sure, but I think I know where it is; we should go to the right”. We do, end up at the restaurant, and perhaps I say to our friends: “we hadn’t got a map, but fortunately I knew where it was”.

    Perhaps that’s a case where I knew without knowing that I did. (If you think that I don’t know because I don’t believe, that doesn’t affect the following.) Now:

    Case b: as a, except that I become instantaneously perfectly confident that the restaurant is there.

    Perhaps that’s a case in which you know but don’t believe as you ought, because you don’t know that you do, and you should instead suspend judgement (e.g. you should merely presume and not believe).

  15. Just a couple of thoughts–

    First, anybody who thinks that what you know is (part of) your evidence will have a hard time denying that knowledge entails evidential justification. So your target must be somebody who denies that all knowledge is evidence. Not a revelation but worth pointing out.

    Second, there are all sorts of crazy-ass skeptical hypotheses that entail that I have my actual experiences. Not only do my actual experiences seem clearly not to discriminate between what I actually believe on the basis of those experiences and the crazy-ass hypotheses, but it’s easy to show that my having my actual experiences raises the probability that the crazy-ass hypotheses are true. So at the very least there’s a prima facie problem for the idea that your evidence points to the denials of those crazy-ass hypotheses if you think evidence = experience. (I realize there’s more to “pointing to” than probability-raising or discriminating but what account would block this prima facie problem? Maybe something where E points to P when P is the best explanation for E?) Thus letting evidence = experience seems to lead to a dilemma: either you’re permitted to believe something that your evidence doesn’t point to or else you ought not to believe that the crazy-ass hypotheses are false. People who “like normativity and knowledge” but don’t think knowledge is evidence are (I suspect) going to generally hate the second horn and see this as a reason to reject KEE-2.

    Third, the kind of situation I just described is not one where your evidence *does* point to P but you ought to believe Not-P. I think it’s a lot harder to imagine a situation where your experiences seem to support Not-P over P but where we think it’s okay to believe P anyway. It’s cases like these, where your “evidence” seems neutral between P and Not-P, that make me skeptical about KEE-2 and the revised KNR-3* I gave above.

  16. “I don’t know what the “articulate-reasons-having” version of evidence is.” Just think of a view that works in terms of some sort of Sellarsian picture about justification, but a Kornblithian one about knowledge. Regarding animals, think of the relevant chapter of _Mind & World_ about them.

  17. Andrew,

    Impulsional evidence is evidence, so it’s not a counter-example. Even if every belief brings with it some evidence for it automatically–and conservatives already endorse that anyway (Foley and Chisholm both have versions, but there are many more)–the belief and the evidence are distinct. Mike Huemer also covers that objection in _Skepticism and the Veil of Perception_.

    Julian,

    I don’t think one needs to have the reflective state either to know or to have evidence or to believe as one ought. That over-intellectualizes knowledge and if I bought into that I’d be a skeptic about most things.

    Geoff,

    1. I’m working with the experiential conception of evidence here, so by definition no knowledge is evidence except in some derivative sense.

    2. “I realize there’s more to “pointing to” than probability-raising or discriminating but what account would block this prima facie problem? Maybe something where E points to P when P is the best explanation for E?”

    Exactly.

    3. What situation now?

    Jonathan,

    I can *try* to think of such a view, but I’m not sure I’m successful, and when I think I might be successful, I get FALSE. I’m happy to distinguish between types of knowledge by types of reasons, and I think some reasons are interestingly reflective, but that’s not all knowledge or all reasons. Some knowledge–though none of mine–is based on reasons one can express so eloquently that they could convince almost anyone; that’s pretty good stuff, eh? Some knowledge is nearly like a reflex. But it’s a reflex in response to sensory stimuli, i.e., evidence.

    There are relevant chapters in _Mind and World_???

  18. Let’s think about that KNR premise 3.

    I show you a bent coin and tell you that it is either biased .55 in favor of heads, or biased .55 in favor of tails. Let p be “the coin is biased heads.”

    I virtuously toss this coin twice and observe that it has landed heads on both occasions. Since there is roughly a 30% chance of seeing that two-head outcome if p is true, versus roughly a 20% chance of seeing it if the coin is instead biased to tails (and hence p is false), observing two heads is evidence that points to p.

    Question: Should you believe p? I have my doubts. But suppose you dig in. Then consider adjusting the bias of the coin to a fixed value ever closer to 1/2 and running the same experiment. For any bias of 0.5 + ε, a virtuously observed sequence of two heads will give you evidence pointing to p. But should you believe p for every such ε, however small? That sounds like wild-eyed Lockeanism.
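
    The arithmetic behind the “roughly 30%” and “roughly 20%” figures, in a minimal sketch; it reads them as the likelihoods of two heads under each bias hypothesis, and the even prior over the two hypotheses is an illustrative assumption, not part of the example:

      # Two hypotheses about the bent coin: biased 0.55 to heads (p) or 0.55 to tails (not-p).
      # Likelihoods of the observed two-head sequence under each hypothesis.
      like_if_p     = 0.55 ** 2   # ~0.3025, the "roughly 30%" figure
      like_if_not_p = 0.45 ** 2   # ~0.2025, the "roughly 20%" figure

      # Illustrative assumption only: an even prior over the two hypotheses.
      prior_p = 0.5
      posterior_p = (prior_p * like_if_p) / (prior_p * like_if_p + (1 - prior_p) * like_if_not_p)
      print(round(posterior_p, 3))  # ~0.599: the evidence points to p, but only modestly

    So the evidence does point to p, yet the posterior sits just shy of 0.6, which is the force of the question whether one should believe p on that basis.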

  19. Greg,

    I’m right with you there. I pressed that against Rich and Earl pretty hard. I think I do so in a piece that’s to be published before too long. Still, I often start out talking in binary terms because it’s simpler.

    In this case, in my arguments, we just let p be a statement about degree of belief.

    I do, though, think that’s unfair to Locke because he also talks about proportioning.

    With your denotation for p, I’d just reiterate from above that there is more to pointing than probability raising.

    There’s another way to dig in, too, which is to restrict to perceptual cases. There, it’s clear that pointing is not a matter of probabilities–not logical ones anyway–but rather something phenomenological. So I could restrict the premise to that domain and still cause trouble for reliabilists in the case of perceptual knowledge.

    Another modification suggests itself:

    3*’. If all S’s evidence points SUFFICIENTLY to not-p after perfectly virtuous inquiry, then S should believe not-p.

    For then there will still be cases of the kind I need. And this move is independently motivated–independently of my defending my arguments (which I’m just exploring)–precisely by the .50000000000001 cases.

  20. Hi Trent,

    Others have already said what I was initially going to say about KNR2. So I’ll say something else.

    Do you think that line 2 of either argument is compatible with innate knowledge?

    It’s sensible to speak of innate knowledge and innate beliefs. But innate evidence? Not sure about that one. Sounds weird. But maybe you could work it out.

    Also, just a quick point about formulation. When you say “all S’s evidence points to not-p” and similar things, I think you mean “on balance S’s evidence points to not-p.” Right? When you say “all S’s evidence,” that makes it sound as though there were absolutely no evidence speaking in favor of p, which might needlessly raise hackles.

  21. I agree with Jonathan [@11] that we need ceteris paribus conditions, and this remark would carry over to my example. However, my example does not rely upon probabilistic information so much as simply the real number line: the probabilities are just convenient mechanics. The agent is interested in whether the coin is one way rather than the other, and he has evidence that points one way rather than the other. Further, we can adjust the example to weaken how much the evidence points one way rather than another, and we can weaken the evidence as much as we like. The probabilities just make this fiddling precise (see the sketch at the end of this comment).

    The agent is interested in whether the coin is biased heads or not, he is not interested in the expectation of a coin landing heads; just as, by the way, Fisher’s exact yields a rejection of the null hypothesis, not a small degree of belief that the null hypothesis is false. Besides, changing to degrees of belief would blow apart the argument you’ve invited us to consider.

    Regarding Locke: wild-eyed Lockean belief is to Lockean belief as wild-eyed boy is to boy: there is entailment but no equivalence.

    Finally, I wonder whether “points to sufficiently” would then collapse into “should-believe”, so that you’d have a (trivial) biconditional for premise 3. Then there is a risk of the argument trivializing.
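
    Here is the sketch promised above, making the “weaken the evidence as much as we like” fiddling concrete; again, the even prior is an illustrative assumption, not part of the example:

      # Same two-heads observation, but the coin is biased 0.5 + eps toward heads (p)
      # or 0.5 + eps toward tails (not-p), for ever-smaller eps.
      for eps in (0.05, 0.01, 0.001):
          like_if_p     = (0.5 + eps) ** 2   # chance of HH if biased to heads
          like_if_not_p = (0.5 - eps) ** 2   # chance of HH if biased to tails
          ratio = like_if_p / like_if_not_p  # how strongly HH "points to" p
          posterior_p = like_if_p / (like_if_p + like_if_not_p)  # with the illustrative even prior
          print(eps, round(ratio, 3), round(posterior_p, 3))
      # As eps shrinks, the ratio tends to 1 and the posterior to 0.5:
      # the evidence still points to p, but arbitrarily weakly.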

  22. Correction to [23]: Fisher’s exact yields a rejection of the null hypothesis (when data falls in the rejection region), not a *high* degree of belief that the null hypothesis is false.

  23. John,

    I do think it’s sensible to talk of innate knowledge, but only in the empiricist-friendly way that Carruthers does in his nativism book from some years ago. And that requires having evidence, for example seemings-to-be-true of what experiences trigger.

    I’m happy to weaken the formulation that way, but I think the reliabilist is stuck with the argument in its stronger form. That is, the normative claim seems more clearly true with all evidence, but the reliabilist, I think, has no good ground for adding their evidential defeat condition. But, again, if we weaken it somewhat to “clearly on balance”, then I think it all still works.

  24. Greg, you wild-eyed boy, you,

    1. Maybe your agent isn’t sensible and should be interested in something more fruitful! She should be interested in the expectation and nothing more!

    2. But I do think the “sufficiently”/“clearly” qualifier is necessary and non-trivial (I mean, *obvious* truth does not entail trivial truth; though I’d settle for truth, because I just want the argument to be sound, and then the premise will be non-trivial in the sense that it’s essential to an argument with a true, non-trivial conclusion). The fact is, sometimes we know that the evidence favors p, but not enough to warrant belief. How much is enough? Probably there’s no fact of the matter. But I take it to be obvious that we have to allow for a margin of error. And I see no reason at all to think that trivializes anything. Again, I want that premise to be obvious. I’d even be OK if it were analytic (I’m tempted to think it’s some kind of conceptual truth), but I deny this trivializes the argument. I don’t see how that follows. A substantive conclusion can follow from a set of premises some members of which are analytic, conceptual, obvious, what have you.

  25. 1. Suppose the agent is coughing up blood and wonders whether she has plague. Call this p*. Turns out there is a diagnostic test for plague, the results of which give evidence pointing to whether p* with chance only 0.5 + ε, etc., etc.

    It is the structure of the example which matters, not how it is decorated.

    Also, the agent is interested in whether she has plague, not the expectation of p*. Given overwhelming odds that p*, offering her odds to favor a bet that ~p* would not simply be grotesque but beside the point.

    2. I'm sympathetic to the idea that positive evidence does not legislate belief, and would stick with this through quite a few rounds of qualification. However, I do not think your argument serves the cause, and I've sketched a pincer movement: loosen up the linkage between 'points-to' and 'should-believe' to ensure the argument is substantive, and you risk invalidity; tighten up the linkage between 'points-to' and 'should-believe' to secure validity, and you court triviality.

    This is an indirect way to request that you explain precisely what 'points-to' and 'should believe' mean so that we too can see the undeniability of KNR3.

  26. 1. My guess here is that we’re going to be butting heads on the binary stuff just like Henry and I did. I submit that to be epistemically interested in whether p is just to be interested in the epistemic probability that p. I’m not even sure if I believe in any “acceptance” except some pragmatic act. I don’t know what I can say to make any headway: if I try to think in terms of acceptance of a proposition and the case is one where the probability only slightly tips in favor of p, I get the result SUSPEND. The closer it gets to 1 the more clearly the verdict comes back (in the imperative) BELIEVE. That’s how I’m wired, and it would surprise me if this wasn’t widespread.

    2. I’m afraid I’m just mostly confused by claims of triviality here.

    3. I’d *love* to be able to give precise explications of those terms! But I can’t. I can only advert to paradigm examples. If the only evidence you have concerning whether the grass is green is that it appears green, then your evidence points to GREEN. And if all your evidence points to p, then you should believe it. Same if your evidence clearly points to p. If someone has a theory according to which it’s not the case that you should believe p when your evidence clearly points to p, that person has some baggage. As I understand him, Goldman does not want to deny that. My goal is to turn that sufficiency into a necessity by the arguments I gave.

    KNR-3 is a sufficiency claim I think everyone–including reliabilists–should endorse. The idea is to use 2 to leverage the reliabilist into accepting the necessity claim mentioned in 1. So it’s 3,2,1 BANG: K –> E, if it works. An epistemological pipe bomb to blow up in the face of reliabilists (in a nonviolent, harmless way: this bomb makes one better off).

  27. Well, it would help if I said ‘soundness’ rather than ‘validity’; argh, I am working on something at the moment which involves translating the notion of provability of one system into a (restricted) formula within the object language of another system, and this has me playing fast and loose with soundness and validity.

    Anyway, this was fun to think about!
