Below Ralph asks whether the concept of “degrees of justification” is best measured by a probability function. This worries him because it would mean that all logical truths are maximally justified, and it seems that you may be more justified in believing some than others. (Contrast ~(p&~p) with a page-long tautology.)
I’d like to suggest, or really just fool around with the idea, that if you think you should be more justified in believing some logical truths than others, then that tells you how to avoid treating degrees of justification as a probability function.
First let me say why you might not think that. Suppose that your conception of degrees of justification is: The more likely that your evidence makes a proposition, the more justified you are in believing it. Any cognitive limitations you may have that block you from figuring out how likely your evidence makes a proposition are your own problem. Then it’s no problem to say that all logical truths are maximally justified. They’re all certain, no matter what experiences you’ve had–because they’re independent of experience–and the only difference between ~(p&~p) and the page-long tautology is that your cognitive limitations may keep you from seeing that the latter is certain. But on this view, that’s your problem.
OK, so most of us will probably say that there’s at least a sense of justification that that view doesn’t capture. Maybe it’s something like this: How justified you are in believing a proposition depends on how confident you should feel about it, given the thought processes you’ve just gone through (including gathering evidence). So even after working out the 100-column truth table for the page-long tautology, we should be pretty unsure that it’s actually true–reflecting on that process, we can see that we might easily have made a mistake.
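The truth-table check described above can be sketched in code. (A toy illustration of mine, not anything from the original post: the function names and the stand-in "longer" tautology are hypothetical choices.)

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Work out the full truth table: the formula is a tautology iff it
    comes out true under every assignment of truth values."""
    return all(formula(*row) for row in product([True, False], repeat=num_vars))

# The simple case from the text: ~(p & ~p).
simple = lambda p: not (p and not p)

# A stand-in for a longer tautology (hypothetical syllogism, writing
# A -> B as (not A) or B): ((p -> q) & (q -> r)) -> (p -> r).
longer = lambda p, q, r: (not (((not p) or q) and ((not q) or r))) or ((not p) or r)

print(is_tautology(simple, 1))   # True
print(is_tautology(longer, 3))   # True
```

Of course, this is exactly the kind of long, mechanical checking process the next paragraphs worry about: with a page-long formula, every step is a chance to slip.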
Now, how do we work out that the page-long tautology has to have maximum probability? By going through the probability axioms–pr(~p) has to be 1 - pr(p), pr(p v q) has to be pr(p) + pr(q) - pr(p & q), etc. As we work out each step, we can see that the probability of the tautology must be 1.
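For a toy instance of that calculation, here is how the two rules just cited drive pr(p v ~p) to 1 no matter what pr(p) is (a minimal sketch; the starting value 0.25 is arbitrary):

```python
def pr_not(pr_p):
    # pr(~p) = 1 - pr(p)
    return 1 - pr_p

def pr_or(pr_p, pr_q, pr_p_and_q):
    # pr(p v q) = pr(p) + pr(q) - pr(p & q)
    return pr_p + pr_q - pr_p_and_q

pr_p = 0.25              # any value in [0, 1] works
pr_contradiction = 0.0   # p & ~p is a contradiction, so its probability is 0
print(pr_or(pr_p, pr_not(pr_p), pr_contradiction))   # 1.0, whatever pr(p) was
```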
Except–in calculating that probability, we are going through a long, mistake-prone thought process. So on this conception of justification, we aren’t absolutely justified in thinking that the probability of the tautology is 1. We have some reason to believe that the probability is 1, but we can’t be absolutely sure. If you conceive of justification as the degree of confidence appropriate to your thought process, then justification will look more like a distribution over probabilities than a single probability. And some tautologies will indeed come out less justified than others. (This also looks like it makes justification into a partial rather than a complete ordering, which Ralph seemed to think might be desirable.)
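One crude way to model this (entirely my own toy model, not the post’s): suppose each step of a derivation has some small independent chance of being botched, and you trust the conclusion only if every step went right. Then appropriate confidence decays with the length of the derivation, so the page-long tautology earns less justification than ~(p&~p):

```python
def confidence(steps, per_step_error=0.01):
    # Chance that every one of `steps` independent, error-prone
    # steps went right: (1 - e)^steps.
    return (1 - per_step_error) ** steps

print(confidence(1))     # one-step check: 0.99
print(confidence(500))   # page-long derivation: much lower
```

The independence assumption and the 1% error rate are both made up, but the qualitative point survives tweaking them: longer checking processes warrant less confidence.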
The ironic result, if this works, is that it may undercut a certain style of argument for Bayesianism: the argument that starts from which bets it’s practically rational to accept. Practical rationality seems to be tied to how confident it’s appropriate for you to feel about your beliefs rather than to how likely those beliefs are on your evidence. If I’m faced with a bet on p, I’ll think about whether I’ve worked out that p must be true, rather than about whether p is likely on my evidence. (The former is my best shot at getting at the truth of the latter, and the latter isn’t as important as how likely p is, period.) So I should not assign degrees of justification that behave like probabilities–I shouldn’t be willing to stake my life against a penny on some complicated logical tautology. Hence focusing on what is practically rational won’t lead me to Bayesianism.
This argument depends crucially on our cognitive limitations. It’s because we have reason not to be 100% confident in our thought processes that we wind up with degrees of justification that don’t match the probability laws. So it may be that we would assign probability-like degrees of justification if we didn’t have these cognitive limits–and then we would assign maximum justification to all logical truths. (Or it may not be!)
One of the questions this raises is whether the purely epistemic standpoint should take account of our cognitive limitations. Often I think not, but it also seems to me as though a lot of our cognitive architecture is dependent on those very limitations. If we could keep track of degrees of justification for all our beliefs, would we have any categorical beliefs at all?