More on Justified Inconsistent Beliefs

Claudio’s paper reminded me that there are two separate problems regarding justified inconsistent beliefs. The first is the problem of contingent beliefs that are jointly inconsistent, as we find in the lottery and preface paradoxes. The second is the problem of necessary falsehoods and whether one can be justified in believing them.

I think I know what a coherentist can say about the first problem: distinguish between ordinary justification and epistemic justification, as I’ve written about here before (and as appears in the linked paper in the sidebar on works in progress). But what of the second problem? It appears that the distinction in question doesn’t help here, since one can be justified in thinking that gathering evidence beyond one’s current justification for believing a necessary falsehood would turn up only misleading information.

I now think this appearance is misleading.

Let’s distinguish beliefs that are inconsistent from those that are necessarily false. An inconsistent belief is one from which the falsum constant can be derived in the preferred logic. A necessarily false proposition need not be inconsistent, though it may be. So the problem of justified inconsistent beliefs is a problem only for those beliefs from which this constant can be derived.
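
A small worked contrast, with classical logic standing in for the preferred logic; the examples, and the Kripkean gloss on the second, are illustrative rather than from the post:

```latex
% From a belief with contradictory content, falsum is derivable:
\[
  p \land \lnot p \;\vdash\; p, \qquad p \land \lnot p \;\vdash\; \lnot p, \qquad p,\ \lnot p \;\vdash\; \bot .
\]
% By contrast, ``Hesperus is not Phosphorus'' is necessarily false (on the standard
% view that true identities hold necessarily), yet it is formalized as the denial
% \lnot(a = b), from which \bot is not derivable without the further premise a = b:
\[
  \lnot(a = b) \;\not\vdash\; \bot .
\]
% So it is necessarily false without being inconsistent in the relevant sense.
```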

A further constraint is also in order, regarding the question of which logic the inconsistency can be derived in. Suppose that a belief system contains an implicit theory of evidence, or presupposes one. Such a theory of evidence will itself presuppose a preferred logic or group of logics. Coherentists will want these to be the logics relative to which the inconsistency can be derived.

Given this further constraint, I think the earlier distinction is adequate for this second kind of justified inconsistent belief as well. For if the governing theory of evidence for determining justificatory status is one on which a contradiction is derivable from one’s belief, then whatever evidence one has for that belief will not count as adequate evidence for the claim that gathering further evidence would reveal only misleading information. One may have some evidence for this claim about further information, but its confirming power will be defeated by the evidence one also has that this evidence confirms a claim from which the falsum constant follows.

So, it appears to me, the distinction between ordinary and epistemic justification can solve both types of justified inconsistent belief problems for coherentism. Something further may also be true: maybe inconsistent beliefs of this sort can’t even be justified in the first place. For if the theory of evidence that governs epistemic facts about you is a theory on which the falsum constant is derivable from a belief of yours, this fact itself may defeat whatever confirming power any evidence you might have in favor of this belief. But even without this claim, I think the problem of justified inconsistent beliefs in necessary falsehoods isn’t insoluble for coherentists.


Comments

  1. Jon, one way to distinguish the two problems is in terms of the difference between (i) xBp&xB~p and (ii) xB(p&~p). (i) involves inconsistent beliefs, whereas (ii) involves belief in an inconsistency. I’m probably missing something, but I don’t really see how (ii) poses a special problem for coherentism. Believing an inconsistency is never justified, and coherentism does not seem to entail that it ever is, unless we subscribe to the principle that xBp&xBq => xB(p&q). But this principle seems false, if only because of the intensionality of belief (and belief ascription). (One further way the principle can fail, on a threshold picture of belief, is sketched after this comment.)

    (Sorry for the not-so-much-to-the-point post based on probable misunderstanding of the issue at hand due to ignorance of the relevant literature. At least I get the prize for most epistemic vices in one post: irrelevance, misunderstanding, and ignorance all piled up in one. It’s your lucky day!)
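
One concrete way the closure principle mentioned above can fail, on the illustrative assumption (a different route from the intensionality point) that believing a proposition amounts to assigning it credence of at least some threshold below 1, say 0.9:

```latex
% Threshold model (an illustrative assumption): xBp iff Pr_x(p) >= 0.9.
% Each conjunct can clear the threshold while their conjunction does not:
\[
  \Pr(p) = \Pr(q) = 0.9, \quad p, q \text{ independent}
  \;\Longrightarrow\; \Pr(p \land q) = 0.81 < 0.9 .
\]
% In a large lottery, the conjunction of all the individual ``this ticket loses''
% beliefs drops arbitrarily close to 0, so xBp & xBq => xB(p&q) fails on this model.
```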

  2. Uriah, if you’re right in your apology, then I guess we’re even, since I comment from the same level of competence on your blog! (Well, I guess it’s the Arizona blog…)

    Until the mid-’90s, every coherentist said that consistency is required for coherence, and hence for justification. Lottery and preface examples make it hard to swallow that justified inconsistent beliefs are impossible, and Foley’s 1979 piece “Justified Inconsistent Beliefs” is the source of the worry. Foley says that inconsistent beliefs are different from contradictory ones (like believing p&~p), and that the latter category may be impossible. So the focus is on sets of beliefs, where the contents of the beliefs can’t all be true.

    The usual defense of consistency as a requirement for justification is given by Lehrer: if you violate consistency, you cannot achieve the epistemic goal of having all and only true beliefs.

    So there is a perceived problem for coherentism here, but I don’t think it is insoluble. One way to solve it is Lycan’s: make local consistency within compartments in a partitioned belief system a requirement, but not global consistency (and hope someday to have a good story to tell about how to partition the belief system). The idea is really Wayne Riggs’s. Since I don’t know how to partition the belief system, I think it would be nice to solve the problem leaving consistency as a requirement for coherence.

  3. A plug: I offer a motivation for the distinction that Foley mentions (and that Henry Kyburg has discussed since 1961) by discussing how standard methods work in inferential statistics. This point is discussed in my ‘Kinds of Inconsistency’ (2002) and also summarized in ‘Epistemology and artificial intelligence’ (2004), written with L.M. Pereira. The idea is that if we examine the inference techniques common to standard inferential statistics and judge them as justified (or judge some class of statistical inferences as justified), we must recognize that a collection of experiments/measurements/estimates of a population parameter may perfectly well yield a collection of accepted (epistemically justified) propositions that are pairwise inconsistent. (A toy simulation of this point appears after this comment.) There is no getting around this property, unless we’re in a position of not needing to use inferential statistics. Indeed, if anything, we’ve learned how to use this property to our advantage, exploiting it to identify probable sources of errors. This type of inconsistency, which is at times even epistemically beneficial, is distinct from believing a contradictory proposition. If the argument advancing this idea is sound, it might offer a line of reasoning against the standard coherentist line. For if the standard coherentist line would render inferential statistics epistemically suspect, it seems reasonable to suspect that the trouble is with standard forms of coherentism.

    W.r.t. Lycan’s suggestion, I’ve a vague memory of an exchange between Kyburg and Bryson Brown after Kyburg’s 1997 JP paper, ‘The rule of adjunction and rational acceptance’ I think it was called. Peter Schotch and Ray Jennings’s (and Brown’s) preservationist approach to paraconsistent logic turns on this idea of partitioning inconsistent sets into maximally consistent cells and then closing each cell under classical consequence. The idea is that by preserving the maximally consistent fragments of your knowledge base, you in some sense preserve the maximum amount of coherent information in the KB. I remember Kyburg not liking the idea because he thought the partition scheme gutted the idea of rational acceptance: the thought is that a partition on the set of lottery tickets yields the set of singleton sets formed from each individual ticket–or each ticket conjoined to the statement that at least one wins, if I’m remembering this right. Anyway, closing either type of cell under consequence isn’t very informative. This exchange might be interesting to look at—although I am not sure where Bryson’s reply came out, perhaps there was something in NDJFL? (eesh, it has been a few years since then, hasn’t it?)
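
A minimal simulation sketch of the statistical point above, assuming a normal population with known variance, textbook 95% intervals, and illustrative sample counts (none of these details are from the cited papers): repeated estimates of one and the same parameter are each produced by a standard, individually justified procedure, yet with enough repetitions some pairs of the accepted intervals are typically disjoint, hence pairwise inconsistent.

```python
# Toy illustration: accept "mu lies in [lo, hi]" for each 95% interval estimate
# and check how many pairs of accepted estimates are mutually unsatisfiable.
import random

random.seed(1)

TRUE_MU, SIGMA = 0.0, 1.0   # the population being sampled (known here by stipulation)
N_SAMPLES, N_OBS = 500, 25  # illustrative values only
Z95 = 1.96                  # normal critical value for a 95% interval

def confidence_interval():
    """Draw one sample and return the accepted 95% interval for mu."""
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N_OBS)]
    mean = sum(sample) / N_OBS
    half_width = Z95 * SIGMA / N_OBS ** 0.5
    return (mean - half_width, mean + half_width)

intervals = [confidence_interval() for _ in range(N_SAMPLES)]

# Two accepted intervals are pairwise inconsistent when they do not overlap:
# no single value of mu can satisfy both "mu in I" and "mu in J".
disjoint_pairs = [
    (i, j)
    for i in range(N_SAMPLES)
    for j in range(i + 1, N_SAMPLES)
    if intervals[i][1] < intervals[j][0] or intervals[j][1] < intervals[i][0]
]

print(f"accepted interval estimates: {N_SAMPLES}")
print(f"pairwise inconsistent pairs: {len(disjoint_pairs)}")
if disjoint_pairs:
    i, j = disjoint_pairs[0]
    print("example:", [round(x, 3) for x in intervals[i]],
          "vs", [round(x, 3) for x in intervals[j]])
```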

  4. Greg, thanks for the reference on the Kyburg/Brown exchange. Your gloss makes Kyburg sound right–if the partitions are that small, they won’t be any use to the coherentist.

    And you’re quite right that if coherentism renders suspect standard inferential statistics, we’ve got a problem. So here’s what I’m trying to develop. I want to know which sets of beliefs that can’t all be true are incompatible with coherentism and which are not, since I don’t think we should assume that all of them are. Begin by carving impossibilities into two categories: the inconsistencies and the non-inconsistencies. At most, only the inconsistencies need to bother the coherentist. Then carve the inconsistencies into two categories as well. There are the inconsistencies-in-virtue-of-the-theory-of-evidence and there are the other inconsistencies. If one is a subjective coherentist, where the line is drawn will depend on the subjective theory of evidence in force at a particular time for a particular individual. Once we get this far, then I’m interested in examples of inconsistencies. So I need to go read your paper, but maybe you can answer a quick question about the inconsistencies mentioned. What logic do you need for the contradiction to be derived? Classical? Intuitionistic? Relevance? Any of these?

  5. Jon,

    I was wondering what you thought about the following combination of attitudes.

    (i) Believe p
    (ii) Believe p only if q
    (iii) Withhold q

    When discussing inconsistency, it’s easy to focus just on beliefs and disbeliefs and forget about withholdings. Perhaps that’s just a simplifying device. But it seems to me that coherentists should at least have withholding in the back of their minds while working out their solution, lest it prove unable to handle potential inconsistencies involving withholding.

  6. John, the case you describe is surely one with less coherence than if believing q replaced withholding. I would think that usual coherentist notions, such as BonJour’s idea of the extent to which inferential connections exist, would show that withholding decreases overall coherence. And if the beliefs have degrees attached to them, with withholding counted as degree of belief 0.5, then we have probabilistic inconsistency here as well (a worked version follows).
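
A quick worked version of the probabilistic point, on the illustrative assumptions (not in the comment above) that belief corresponds to credence of at least 0.9, withholding to credence 0.5, and ‘p only if q’ to the material conditional:

```latex
% Since p together with (p -> q) entails q, and by inclusion-exclusion:
\[
  \Pr(q) \;\ge\; \Pr\big(p \land (p \to q)\big) \;\ge\; \Pr(p) + \Pr(p \to q) - 1
         \;\ge\; 0.9 + 0.9 - 1 \;=\; 0.8 .
\]
% So no probability function assigns both p and (p -> q) at least 0.9 while
% assigning q only 0.5: the three attitudes are probabilistically incoherent.
```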

  7. Hi Jon,

    I do it in a default logic that I developed in my PhD thesis. The core results are presented in ‘A resource bounded default logic’ (2004) and an informal motivation for the logic is given in (Wheeler and Pereira 2004) mentioned above.

    I want to stress that I don’t think there is a knock-down argument against coherentism in these papers. (I don’t wish to suggest that I thought you were saying otherwise; rather, I mention it by way of setting up the comments below.) The issue turns on many variables, depending critically upon how one chooses to represent the inputs, how to represent belief (or some suitable class of doxastic states), and how entailment works within such a framework.

    Rather, an idea from these papers is this. If one tries to model basic kinds of inference problems made within standard inferential statistics (e.g., hypothesis testing, estimation of a population’s mean, variance…undergraduate textbook stuff), you realize that this is a fallible procedure: your sample could fail to represent the population parameter you’re interested in accurately (i.e., within the bounds predicted by your model), despite best efforts. To be a bit more precise, the behavior of an individual statistical inference is structurally very similar to a default rule (Reiter 1980, AIJ), which is a non-monotonic inference rule. There is a mild paraconsistency property that comes with standard default logic, which is due to the fixpoint semantics: it is possible, given a set of satisfiable defaults, for there to be multiple conclusion sets that are pairwise inconsistent. (A toy example of this multiple-extension behavior appears at the end of this comment.) I wrote the Kinds paper before studying Reiter’s default logic, and I thought at the time that modeling statistical inference was an uncontroversial reason to study paraconsistent logics, which were (are) controversial. Now I’m less sure of this particular claim, if only because it seems that paraconsistency properties crop up in all kinds of logics, which raises the question of what precisely makes a logic a paraconsistent logic. Maybe they aren’t that exotic after all. But that is another issue.

    Since, by hypothesis, each of the satisfiable statistical rules in a collection is well formed, it is reasonable to think there is some type of commitment to their consequences. Now, that leaves open precisely what attitude one should take toward default extensions that are pairwise inconsistent, or toward the elements in extensions that are pairwise inconsistent. But I should note that there often is information in such collections that would be lost simply by lopping off everything that isn’t satisfied by each and every extension, which is what a consistency condition would recommend and what corresponds to the ‘skeptical inference strategy’ in the default logic/belief revision literature. Anyway, this is what motivates me to study operations on such sets and to embrace both inference rules and sources of evidence that appear (to me) to violate the spirit of the standard coherentist line, which is to eliminate an inconsistency as soon as it arises and to finger the rule (or system) that introduced it.

    Your proposal sounds interesting. I’d certainly be interested to learn whether the statistical examples I mentioned would be useful to you.
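
A minimal sketch of the multiple-extension behavior described above, using the stock ‘Nixon diamond’ normal default theory. The example and the brute-force construction are illustrative assumptions rather than anything from the cited papers, and the construction is only adequate for tiny normal theories over propositional literals, not a general algorithm for Reiter’s logic; it also shows how much the skeptical intersection discards relative to either extension.

```python
# Two normal defaults with jointly inconsistent conclusions yield two extensions.
from itertools import permutations

def negate(lit):
    atom, pol = lit          # a literal is a pair (atom, polarity)
    return (atom, not pol)

# W: what is known outright.
W = {("quaker", True), ("republican", True)}

# Normal defaults "prerequisite : conclusion / conclusion".
DEFAULTS = [
    (("quaker", True), ("pacifist", True)),       # Quakers are normally pacifists
    (("republican", True), ("pacifist", False)),  # Republicans normally are not
]

def extensions():
    """Apply the defaults in every order and collect the distinct fixed points."""
    results = set()
    for order in permutations(DEFAULTS):
        facts = set(W)
        changed = True
        while changed:
            changed = False
            for prereq, concl in order:
                # A default fires if its prerequisite holds, its conclusion is
                # consistent with what we already have, and it adds something new.
                if prereq in facts and negate(concl) not in facts and concl not in facts:
                    facts.add(concl)
                    changed = True
        results.add(frozenset(facts))
    return sorted(results, key=sorted)

exts = extensions()
for i, ext in enumerate(exts, 1):
    print(f"extension {i}:", sorted(ext))   # the two extensions disagree on "pacifist"

# The skeptical policy keeps only what every extension agrees on; note how much
# is lost relative to either extension taken on its own.
print("skeptical intersection:", sorted(frozenset.intersection(*exts)))
```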
