# More on Degrees

Below Ralph asks whether the concept of “degrees of justification” is best measured by a probability function. This worries him because it would mean that all logical truths are maximally justified, and it seems that you may be more justified in believing some than others. (Contrast ~(p&~p) with a page-long tautology.)

I’d like to suggest–or really, fool around with the idea–that if you think that you should be more justified in believing some logical truths than others, then that tells you how to avoid treating degrees of justification as a probability function.

First let me say why you might not think that. Suppose that your conception of degrees of justification is: The more likely your evidence makes a proposition, the more justified you are in believing it. Any cognitive limitations you may have that block you from figuring out how likely your evidence makes a proposition are your own problem. Then it’s no problem to say that all logical truths are maximally justified. They’re all certain, no matter what experiences you’ve had–because they’re independent of experience–and the only difference between ~(p&~p) and the page-long tautology is that your cognitive limitations may keep you from seeing that the latter is certain. But on this view, that’s your problem.

OK, so most of us will probably say that there’s at least a sense of justification that that view doesn’t capture. Maybe it’s something like this: How justified you are in believing a proposition depends on how confident you should feel about it, given the thought processes you’ve just gone through (including gathering evidence). So even after working out the 100-column truth table for the page-long tautology, we should be pretty unsure that it’s actually true–reflecting on that process, we can see that we might easily have made a mistake.

Now, how do we work out that the page-long tautology has to have maximum probability? By going through the probability axioms–pr(~p) has to be 1 – pr(p), pr(p v q) has to be pr(p)+pr(q)-pr(p&q), etc. As we work out each step, we can see that the probability of the tautology must be 1.
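Just to make that step vivid, here’s a toy Python sketch (hypothetical helper names, a minimal check rather than a full derivation from the axioms) of the mechanical fact underlying it: a formula that comes out true on every row of its truth table is exactly the kind of thing the probability axioms force to probability 1.

```python
from itertools import product

def is_tautology(formula, variables):
    """Check a propositional formula (given as a function of booleans)
    on every valuation of its variables."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=len(variables)))

# ~(p & ~p): true on every row of its truth table,
# so the probability axioms force pr = 1.
print(is_tautology(lambda p: not (p and not p), ["p"]))  # True

# A contingent formula, by contrast, fails on some row.
print(is_tautology(lambda p, q: p or q, ["p", "q"]))  # False
```

The page-long tautology is no different in kind–it just has a hundred variables instead of one, which is where the mistake-prone thought process comes in.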

Except–in calculating that probability, we are going through a long, mistake-prone thought process. So on this conception of justification, we aren’t absolutely justified in thinking that the probability of the tautology is 1. We have some reason to believe that the probability is 1, but we can’t be absolutely sure. If you conceive of justification as degree of confidence appropriate to your thought process, then justification will be more like a distribution of probabilities rather than a single probability. And some tautologies will indeed come out less justified than others. (This also looks like it makes justification into a partial rather than a complete ordering, which Ralph seemed to think might be desirable.)
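One way to put (purely illustrative) numbers on that: suppose reflection on my derivation leaves me only 99% confident I carried it out correctly, and I’d give the tautology even odds were the derivation botched. Then my overall confidence lands short of 1, even though the derivation, if correct, says the probability is exactly 1.

```python
# Second-order uncertainty about my own derivation.
# All figures are hypothetical, chosen only to illustrate the point.
p_derivation_ok = 0.99            # confidence the derivation is error-free
p_true_if_derivation_wrong = 0.5  # even odds if I slipped up somewhere

confidence = (p_derivation_ok * 1.0
              + (1 - p_derivation_ok) * p_true_if_derivation_wrong)
print(confidence)  # ≈ 0.995 — high, but short of the probability-1 verdict
```

And since that 0.99 will itself vary with the length and trickiness of the derivation, different tautologies get different overall confidences–which is the partial-ordering, distribution-like picture above.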

The ironic result, if this works, is that it may undercut a certain style of argument for Bayesianism. This is the argument based on which bets it’s practically rational to accept. Practical rationality seems to be tied to how confident it’s appropriate for you to feel about your beliefs rather than to how likely those beliefs are on your evidence. If I’m faced with a bet on p, I’ll think about whether I’ve worked out that p must be true, rather than about whether p is likely on my evidence. (The former is my best shot at getting at the truth of the latter, and the latter isn’t as important as how likely p is, period.) So I should not assign degrees of justification that behave like probabilities–I shouldn’t be willing to stake my life against a penny on some complicated logical tautology. Hence focusing on what is practically rational won’t lead me to Bayesianism.
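The betting point can be made concrete with a toy expected-value calculation (all numbers hypothetical, with a large dollar figure standing in for “my life”): staking something enormous against a penny is rational only if my justified confidence is astronomically close to 1–which, on the thought-process conception, it never is for a page-long tautology.

```python
def expected_value(credence, win_amount, loss_amount):
    """Expected value of a bet that pays win_amount if the
    proposition is true and costs loss_amount if it is false."""
    return credence * win_amount - (1 - credence) * loss_amount

# Stake $1,000,000 against a penny on the tautology.
# Even at credence 0.9999 the bet is a clear loser.
print(expected_value(0.9999, 0.01, 1_000_000))  # ≈ -99.99
```

Only a credence so close to 1 that the residual doubt is smaller than a penny per million dollars would make the bet come out positive–and the whole point above is that reflection on a long derivation leaves more doubt than that.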

This argument depends crucially on our cognitive limitations. It’s because we have reason not to be 100% confident in our thought processes that we wind up with degrees of justification that don’t match the probability laws. So it may be that we would assign probability-like degrees of justification if we didn’t have these cognitive limits–and then we would assign maximum justification to all logical truths. (Or it may not be!)

One of the questions this raises is whether the purely epistemic standpoint should take account of our cognitive limitations. Often I think not, but it also seems to me as though a lot of our cognitive architecture is dependent on those very limitations. If we could keep track of degrees of justification for all our beliefs, would we have any categorical beliefs at all?

#### More on Degrees — 3 Comments

1. Matt W.–I’ve been meaning to ask a question, but was quite caught up with pragmatic encroachment!

A couple of reasons not to set up the problem as you do. First, you start off with the following conception of evidence: the more likely your evidence makes a proposition, the more you are justified in believing it. You conclude that, on this conception, there is no problem saying that all logical truths have the same degree of justification.

I think this argument works only if you read “likelihood” in the premise in some sense that honors the probability calculus. Then of course the argument is trivial. If likelihood means epistemic probability, then I accept the premise, but not the conclusion, since I don’t think epistemic probability honors the calculus.

As a result, I don’t see any need to rephrase the account in terms of how confident one should feel. Unless you’re a deontologist of some sort, you might deny that there is any level of confidence one should have. In addition, one could have a level of confidence slightly different from the degree to which the belief is justified, and yet that degree of confidence could itself be justified.

2. Jon, I agree that I’ve at least elided something in setting up the first part of the argument. It might have been better if I had said “experiences” instead of “evidence.” The conception I was working with is something like this: Say you have a set S of experiences (however “experiences” are defined). S is the raw material for judgments of justification. Then the question we’re asking is “To what extent do experiences S justify belief in p?” Sometimes it may be difficult or impossible for human beings to figure out the answer, but that shouldn’t affect the extent to which the experiences justify belief in p. (Maybe this is something akin to the desire to avoid strong person-relativity that Ralph discusses–the fact that you’re constructed so that you can’t come to believe p doesn’t keep you from being justified in believing p. I think I may be setting out something much stronger here than Ralph is, though.)

Then logical truths will always attain the maximal degree of justification, no matter what experiences you’ve had, because they can in principle be known to be true independently of your experiences. If you can’t figure out for sure that they’re true, that’s a matter of your cognitive limitations.

I only believe in this picture some of the time, and I don’t think that it by itself would force us to take degree of justification as honoring the probability calculus. For instance, you could hold that the degree of justification for p can’t be treated like a probability if your experiences don’t give you any information concerning p.

The crux may be whether one’s favorite view of degree of justification motivates treating logical truths as differently justified without adverting to the thought process that leads up to the belief. That may be the case for the concept of epistemic probability you’re describing–in which case my account of how to avoid assigning maximum probability to all logical truths won’t work.

3. Pingback: Opiniatrety