Can Justification Just Fall Short of Knowledge?

We all know that justified true belief can fail to be knowledge when funny stuff happens (or at least most of us think this). What I want to ask is whether a JTB can fail to be knowledge for a more mundane reason–because the belief is justified, but it isn’t justified enough to count as knowledge.

Another way, perhaps, to put this is to question a line from section 6 of Ralph’s paper “The Aim of Belief”: “[T]here is no way for a rational thinker to pursue the truth except in a way that, if it succeeds, will result in knowledge.” Is this so?

Here’s a case I’d like to survey you on. Charlie Brown, a baseball general manager, is trying to decide whom to pick in the amateur draft. He looks at the prospects and comes to believe, based on Joe Shlabotnik’s high school performance, that Joe will be a good major league player someday. Indeed, Joe does turn out to be a good major leaguer. So Charlie had a true belief; it also seems as though it may have been justified, because it was based on performance. Yet I would think that it falls short of knowledge, because predicting someone’s eventual major league performance on the basis of his high school performance is too uncertain.

(Apologies to non-baseball fans; the argument probably transfers to any sport, though baseball performance is notoriously difficult to predict.)

Indeed, I’d argue that Charlie is much better off knowing that his pursuit of the truth about Joe’s future performance will not result in knowledge. I’m convinced by Tim Williamson’s argument that one of the advantages of knowledge over JTB is that it is less likely to be abandoned in the face of counterevidence. Yet Charlie should be ready to abandon his belief in Joe’s future in the face of counterevidence. Given the chancy nature of baseball prospects, a general manager has to be prepared to abandon someone who looked promising but who isn’t panning out, or he may damage his team by keeping on an underperforming player. Players you know to be good will be kept in the lineup after a poor start (I remember Barry Bonds batting under .200 one May when he was in Pittsburgh and going on to win the MVP–er, sorry again to non-baseball fans); players you merely think to be good won’t.

Does this case convince you? Do you think Charlie is only justified in believing that Joe will probably be good? Do you think it casts any sort of light on the kind of justification that’s necessary for knowledge?


Comments

  1. Matt–I think you’re exactly right that justification can exist but fall short of what’s needed for knowledge. There’s a paper on my website right now where I argue for this, in the context of defending coherentism against the problem of justified inconsistent beliefs. The heart of the argument is that, for the kind of justification needed for knowledge, the quality of your evidence has to be such as to justify the kind of closure of further inquiry that is involved in knowledge. I.e., something in the neighborhood of this: your evidence justifies for you the claim that further inquiry would undermine the opinion presently confirmed by your evidence only by revealing misleading information.

  2. By the way, I don’t think the point that it’s possible to be justified in believing p without having a justification sufficient for knowledge threatens Ralph’s point. After all, the kind of inquiry relevant here is inquiry aiming for the kind of psychological closure you have when you know that the claim in question is true. (This last statement needs qualification, but the truth is nearby at least, and it supports, rather than undermines, Ralph’s claim.)

  3. Matt, I was hoping I might test your intuitions on a slightly different question. Do you think that someone can justifiably believe something they know couldn’t amount to knowledge? Presumably, if Charlie were a reflective manager, he might form a view about whether such judgments could constitute knowledge. I’m interested in whether you think that his holding the view that such judgments can’t constitute knowledge (or perhaps that second-order view itself amounting to knowledge) must undermine the justificatory status of his judgments about ball players.

  4. I’m always surprised and delighted when it turns out that people have actually read some of my stuff!!!

    Anyway, my answer to Matt’s challenge is (no doubt predictably, since we philosophers are obstinate creatures) not to abandon my principle linking knowledge and justification, but to appeal to a different aspect of my view — viz. my endorsement of a modest form of *contextualism*. (The sort of contextualism that I like is closest to the version that has been defended by Stewart Cohen. But I am very doubtful whether the standards for “knowledge” or “justification” can ever rise so high that sceptical claims like ‘Moore doesn’t know that he has hands’ are true; so unlike Cohen, I don’t think that contextualism is much use for answering sceptical arguments.)

    Thus, I say that in some contexts, it is perfectly true to say that Charlie Brown “knows” that Joe will be a good major leaguer; in other contexts, it is not true to say that he is “justified” in believing that Joe will be a good major leaguer (in those contexts, it is at most true to say that Charlie is justified in believing that Joe will probably be a good major leaguer). What I deny is that the terms ‘justified belief’ and ‘knowledge’ have context-invariant meanings that make “justified belief” require a lesser degree of justification than “knowledge”.

    In short, it is the same kind of justification that is required both for justified belief and for knowledge. The precise degree of justification that is required for the belief to count either as “justified” or as “knowledge” varies with the context; but any degree of justification that can suffice to make it true that a given belief is “justified” can also suffice (if the other success conditions are met) to make it true that the belief in question counts as “knowledge”.

  5. Ralph, your view is a very attractive one, but I wonder how it handles lottery cases. Here’s what I would have thought: the normal standards for justification allow that you’re justified in believing that your ticket is a loser, but the normal standards for knowledge don’t allow that you know this. There may be contexts with unusual standards where justification and knowledge go together in lottery cases, but in the ordinary case, and across ordinary ranges of cases, they don’t seem to.

    What do you think?

  6. I basically think that the contextualist line on the lottery is right: there are contexts where it is true to say ‘Jon knows that he won’t win the lottery’. But I agree with you that these contexts seem a bit odd, whereas it seems much less odd for there to be a context where it’s true to say ‘Jon’s belief that he won’t win the lottery is justified’.

    Perhaps I should restrict my claim to statements like ‘Jon’s belief that he won’t win the lottery is entirely justified / perfectly justified’ — to make it clear that we’re not to hear the justification claim as merely saying that Jon’s belief is justified to *some* degree.

    It’s also important that my claim about the link between justification and knowledge concerns *ex post* justification claims (‘Jon’s belief that p is justified’) as opposed to mere *ex ante* justification claims (‘Jon has [some] justification for believing that p’).

  7. Ralph, I like this way of thinking about justification. The reason I asked is that I think there is a distinction between ordinary justification and the stronger kind relevant to knowledge, and I agree with you that even if we sometimes have the weaker kind, our goals in inquiry and belief-formation are for the stronger kind (which, when we get it, involves a legitimation of the closure experience, where we properly view the issue in question as having been decided, so that further inquiry isn’t necessary or appropriate any longer).

    I’m going to put links to works in progress on the sidebar, and I’ll put up the paper where I argue for this claim (I argue for it in the context of trying to solve the problem of justified inconsistent beliefs for coherentism).

  8. [I don’t think this registered the first time–apologies in advance if it double-posts.]

    Clayton, my view is that Charlie can have the (true) belief that he doesn’t know that p and still be justified in believing that p. In fact, though this accords with about no one else’s intuitions, I think that Charlie may be able to assert this–“Joe’s going to be a good major leaguer. Of course I can’t know this just by looking at his high school records, but wait and see–that’s what’s going to happen.” (Caveat: Even if you think this utterance is OK, the emphasis on “know” is often a sign of a context shift.)

    Ralph–Part of this is a question of how to define the technical term “justified,” but I’m slightly uneasy with contextualism for justification. I think a useful use for “justified” rests on the idea that S would be justified in believing p iff S just plain should believe that p–p would be good for S to believe from the epistemic point of view. And it seems odd to say that the truth of “S should believe p” varies with the context of utterance. (If I remember correctly, this is like a point Hawthorne makes against contextualism with respect to the connection between knowledge and practical reasoning–though I deny that connection, so that argument doesn’t move me!)

    But what I’m doing may just be to insist on Jon’s ordinary justification rather than the stronger kind relevant to knowledge–certainly if I were to replace “justified” with “completely justified” I don’t think the argument of the last paragraph goes through.

    Jon–In relation to comment 1, do you conceive of waiting to see what happens as a kind of further gathering of evidence? I’m envisioning a case where Charlie has gathered all the evidence it is possible to gather, and the only thing to do is to see how Joe turns out. Does that count as closing further inquiry? It seems to me as though it probably shouldn’t; on the other hand, I’m in much the same situation with respect to many ordinary pieces of knowledge about the future–I have a settled belief now, and I don’t feel the need to engage in any more inquiry now, but if what I believe were to fail to come to pass, I would have to revise my belief. (Maybe this is in your paper!)

  9. Matt–Unlike some others, I agree with you that knowledge of the future is possible. So let’s assume that. I also think that your evidence can be sufficient now to justify not inquiring further, even if you know that at some later point new evidence might arise that forces you to change your opinion. The latter point is just an admission of your fallibility, and everyone ought to practice doing that regularly! So the closure experience regarding further inquiry shouldn’t be identified with any denial of fallibility.

  10. Jon–I take it that the situation you describe is one in which we do know that p, even though we know that it’s possible that new evidence might later make us change our minds? (I guess this begs a lot of questions in the area of Kripke’s paradox, but it seems intuitively sensible.) Would you say that this differs essentially from Charlie Brown’s case, in which he’s not going to gather any more information?

    One difference I see is that Charlie Brown would gather more information if he could, while in cases of ordinary knowledge of the future that might not be so–we might not be willing to pay even a small fee for more information. (They’re shutting off the computers so I’d better go!)

  11. I guess, if Charlie were like me, he’d want more information, especially since he’s aware of the poor track record of predicting future success in baseball. So I expect he doesn’t see further inquiry as being closed off, even though he may also know that there is no further information available to him.

  12. Matt — I take back my last post on this thread. You’re right that ‘justified’ is a semi-technical term; so consulting our intuitions about the term won’t take us as far as it will with a term that is heard every day on everyone’s lips, like ‘know’. To some degree, our task is just to stipulate a theoretically useful use for the term ‘justified’, not to analyse our intuitions.

    But it’s not straightforward to define ‘justified’ in terms of the notion of what we “should believe”. The trouble is that there is more than one kind of ‘should’ here.

    In one sense (indeed, in my view, the primary sense), if you are consciously considering a proposition p, you “should” believe p if and only if p is true. (After all, if p is true, then p is the right thing to believe on this question.) But clearly, we can’t use this sort of ‘should’ to define anything like a familiar notion of justified belief.

    So, to define ‘justified’, we would have to appeal to a different sort of ‘should believe’. Roughly, we could try to appeal to a kind of ‘should believe’ that must be capable of “directly guiding” our forming and revising of our beliefs, and which must therefore be an “internalist” ‘should’ (i.e., what we “should believe” in this sense is determined purely by the facts about our internal mental states).

    My problem is that I don’t think that this sort of ‘should’ can be a purely epistemic (or truth-oriented) notion. This is because epistemic considerations alone can’t determine any unique right way to balance the goal of believing the truth against the goal of not believing the false. Non-epistemic (practical) considerations will have to come in to determine some way of balancing these two goals against each other.

    Thus, if these non-epistemic practical considerations end up weighting the goal of believing the true more heavily than the goal of not believing the false (about the question at hand), then our beliefs (about this question) “should” be guided by relaxed epistemic standards that permit us to hold a lot of beliefs; if the non-epistemic practical considerations end up weighting the goal of not believing the false more heavily than the goal of believing the true (about the question at hand), then our beliefs (about this question) “should” be guided by stricter epistemic standards that permit us to hold many fewer beliefs.

    The only way to get a purely epistemic (truth-oriented) sense for the term ‘justified belief’, I think, is to embrace contextualism and allow this notion to be context-sensitive in the way that I describe. Then the non-epistemic considerations get shunted out of the meaning of ‘justified’ itself and into the context in which the term is used. Roughly, the idea is that to say that a belief of yours is “justified” means that your reasons for that belief are “good enough” — i.e. good enough for the contextually relevant practical considerations, whatever they are.

  13. Ralph, I think that showing one can’t have a purely epistemic theory of justification requires more than just noting that the two goals have to be weighted. What needs to be shown, I think, is that this weighting cannot be handled in terms of some notion of risk such that the risk in question becomes salient to the individual in a given case. It may be that what makes risk become salient is non-epistemic, but that’s in the same category as what makes a person engage in inquiry in the first place–neither motivation has to affect the purely epistemic character of justification. In this way, the concept of salience can do just what the appeal to contextualism does: it raises and lowers the epistemic bar, but without requiring that the meaning of justification changes from context to context.

    Of course, as demonstrated in other threads here, it is not at all obvious whether salience can do all the work that needs to be done here. My only point is that an argument is needed against this position in order to sustain the conclusion that only contextualism has the advantage of providing a purely epistemic sense of justification.
