managing contradictions – brainstorming

What to do when your beliefs are contradictory – don’t panic!

Some philosophers talk as if there is no rational advice one can give to someone – oneself for example – who has just uncovered a contradiction in her beliefs. This seems wrong to me, and eventually I would like to write something about this. In the meantime, I thought that some brainstorming might help. Anyone want to join in?

Some relevant observations [edited to put them below the fold]. (No references here – but of course these are linked to many other people’s work.)

(1) checking for consistency is cognitively expensive. (A basic bit of meta-logic that seems to be borne out in everyday cognition; see the sketch after (6).) So it is not a good idea to spend a lot of energy rooting out contradictions in one’s beliefs when they are not causing trouble. When they are, there is usually some theme involved, and following it can uncover the worst ones. For example, I know to check whether my beliefs about what appointments I have in the next few days can be reconciled with my beliefs about where I am going to be.

(2) because it is not easy to find contradictions, when one deals with a contradiction by denying one of the conflicting beliefs, one has no assurance that one is not just creating more or worse trouble. It is safer to move from belief to agnosticism. But it is also an aim of inquiry to have usable opinions about things, so there is a tension here. Often the best thing to do is to leave the beliefs in place and guard against trouble by compartmentalizing one’s beliefs, staying wary of conclusions whose roots lie in both compartments.

(3) suppose your beliefs fall into two compartments, and you cannot make a unified system of them. That is no worse than finding that someone who is no less intelligent and well informed than you believes something you deny. That’s a very familiar situation. One usually just lives with it, if there is no easy way to persuade the other, and you don’t find yourself persuaded. But in the one-person, compartmentalised-belief case it is you who is no less intelligent and well informed than you, and who disagrees with you. There is really nothing more shocking about one person letting a contradiction remain than about one person tolerating a disagreement with an epistemic equal.

(4) suppose that you think the other person is better informed than you. Then if their belief contradicts yours, you have more of a problem than when you find a contradiction in your own beliefs. If their belief, added to yours, entails something which you explicitly deny, then even if you don’t automatically add their belief to yours, you would still think you have something to worry about. Now suppose instead that you think that p, on a topic where you feel pretty secure, and think that if p then q, on a topic where you don’t trust your intuitions and don’t have good control over the evidence. Still, that is what you believe. Then the two come together, one romantic evening, and you see the pressure to q, which you would much rather avoid accepting. You could just withdraw the belief that if p then q, but you could also shrug your shoulders and say “always felt I was at sea on those matters: better not let my X opinions mess up my Y opinions.”

(5) it is sometimes the best course to let your beliefs get fragmentary – if you have the virtue of letting them fall into the right fragments – and hope that these fragments combine with the fragments of other people’s beliefs in profitable ways. So I have beliefs of kinds X, Y, Z and so do you, and we strive to make our X- and Y-beliefs consistent with one another’s, but just agree to differ when there’s trouble over Z. Sometimes it’s best to straighten out your own mind, sometimes best to get on the same wavelength as your friends, depending.

(6) so two important intellectual virtues emerge: having a feel for profitable fragmentation, and knowing when to worry about the pattern of your own beliefs and when to worry about their relation to the beliefs of others. In a way the second is a special case of the first.
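Here is a toy illustration of (1), (2) and (5), promised above – just a sketch in Python, with all names my own invention, not a claim about how cognition actually implements any of this. Consistency is checked by brute force over truth assignments, which is exponential in the number of atoms (the expense in (1)), and beliefs live in compartments that are checked separately, with a warning whenever a conclusion needs premises from more than one compartment (the wariness in (2)):

    from itertools import product

    # Beliefs are propositional formulas, coded as functions from a truth
    # assignment (a dict mapping atoms to booleans) to a boolean.

    def consistent(beliefs, atoms):
        # Brute-force check: scan all 2**len(atoms) truth assignments for
        # one that makes every belief true. This is the expense in (1).
        for values in product([True, False], repeat=len(atoms)):
            assignment = dict(zip(atoms, values))
            if all(b(assignment) for b in beliefs):
                return True
        return False

    class FragmentedAgent:
        # Beliefs kept in named compartments; each compartment is checked
        # on its own, and cross-compartment inference is flagged, as in (2).
        def __init__(self, atoms):
            self.atoms = atoms
            self.compartments = {}

        def believe(self, compartment, formula):
            self.compartments.setdefault(compartment, []).append(formula)

        def check(self, compartment):
            return consistent(self.compartments[compartment], self.atoms)

        def entails(self, names, conclusion):
            # Premises entail the conclusion iff the premises plus the
            # negated conclusion are jointly unsatisfiable.
            if len(names) > 1:
                print("warning: conclusion draws on compartments", names)
            premises = [b for n in names for b in self.compartments[n]]
            return not consistent(premises + [lambda a: not conclusion(a)],
                                  self.atoms)

    # "if p then q" lives in compartment A, "p" in compartment B; each
    # compartment is consistent alone, but deriving q crosses the fence.
    agent = FragmentedAgent(["p", "q"])
    agent.believe("A", lambda a: not a["p"] or a["q"])
    agent.believe("B", lambda a: a["p"])
    print(agent.check("A"), agent.check("B"))            # True True
    print(agent.entails(["A", "B"], lambda a: a["q"]))   # warning, then True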

Adam Morton
university of alberta
adam.morton@ualberta.ca


Comments

  1. This is a very interesting topic, and though I am not positive this will be helpful, I would like to partake in the discussion (if you can’t tell already, I am a virgin blog participator).
    What I found to be most intriguing on my initial reading of your post was the realization of the “pressure to q” in (4). It seems to me that person S has a certain epistemic responsibility to acknowledge the relevance of the relationship they have come to understand between p and q (in this case p -> q). Is what you mean by “pressure” that S now has an intuition, or at least good evidence, to believe if p then q? If that is the case, I do not think a person would be legitimate in withdrawing the belief, or even in keeping the X and Y beliefs from interacting, if that means ignoring the consequences of either X or Y. Do you have an example of how a “pressure to q”, that is, an awareness of the plausibility of if p then q, might lead to a profitable fragmentation?

    On a side note: I agree wholeheartedly that people can agree to disagree, as you said about Z in (5); especially with respect to perceptual beliefs like color recognition or tastes.

  2. There is one line of thought about inconsistency and reasonable belief which holds that inconsistency is informative and (potentially) an opportunity to improve one’s epistemic position, rather than a cause for despair or a sign of irrationality.

    van Benthem has expressed this view, for instance, but the idea is implicit in any theory of belief revision that includes a contraction principle – or, more generally, in any cognitive agent that can learn. My colleague Reinhard Kahle has worked on syntactic belief revision operators, for instance, which address some of the options you mention in your post. “Structured belief bases” is what he calls them, I believe.

    The common idea behind these strategies is that inconsistency represents an opportunity for the agent to learn something new: about his own beliefs, about the source of the information, about the content of the message, about his own priorities, … rather than a signal of a crisis in rationality that must be attended to immediately. And inconsistent input is an opportunity rather than a cause of forced resolution, for the reasons you mention. It might be more important to finish eating lunch than to attend to one’s inconsistent beliefs, for instance.
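    To make the contraction idea concrete: this is not Kahle’s structured belief bases, which are not spelled out here, just a minimal sketch of the textbook remainder-set construction on a tiny propositional base, with all names my own. To contract by a formula, keep the maximal subsets of the base that do not entail it; the construction hands you exactly the choice points the post describes.

        from itertools import combinations, product

        def satisfiable(formulas, atoms):
            # Is there a truth assignment making every formula true?
            return any(all(f(dict(zip(atoms, vals))) for f in formulas)
                       for vals in product([True, False], repeat=len(atoms)))

        def entails(base, formula, atoms):
            # base entails formula iff base plus not-formula is unsatisfiable.
            return not satisfiable(list(base) + [lambda a: not formula(a)], atoms)

        def remainders(base, formula, atoms):
            # The 'remainder sets' of AGM-style contraction: maximal subsets
            # of the base that do not entail the formula being given up.
            found = []
            for k in range(len(base), -1, -1):          # largest subsets first
                for subset in combinations(base, k):
                    if (not entails(subset, formula, atoms)
                            and not any(set(subset) < set(big) for big in found)):
                        found.append(subset)
            return found

        # Base: p, and "if p then q". Contracting by q forces a choice between
        # dropping p and dropping the conditional -- the choice in the post.
        atoms = ["p", "q"]
        p = lambda a: a["p"]
        p_to_q = lambda a: not a["p"] or a["q"]
        for r in remainders([p, p_to_q], lambda a: a["q"], atoms):
            print(len(r), "belief(s) kept")             # two remainders, one belief each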

  3. Hi Adam, I find the idea here of comparing rational inconsistent belief to rational disagreement quite interesting. One cost of doing so, though, is that it makes the notion of defeat quite messy, especially the notion of a rebutting defeater. Or maybe it is in tension with this principle: evidence for p is evidence against ~p.

  4. Adam, it may also be useful to consider which norm one takes to be under attack by logical difficulties in a system of belief. Do the envisioned norms counsel against sets of beliefs from which a contradiction can be derived? Or against believing p while at the same time believing ~p? Or against believing (p & ~p)?

    If we begin by controlling for additional elements in belief (e.g., modes of presentation to address examples like the Kripke puzzle and more standard cases such as Cicero/Tully cases), epistemologists typically begin from the latter point and try to work toward the former views. Foley, for example, thinks the latter two norms are true (no justified self-contradictions and no justified contradictory beliefs) but disagrees with the former (no justified inconsistent beliefs). I think it is an interesting question whether the two he endorses stand or fall together. I’m inclined to agree that there can’t be justified self-contradictions, but I’m unconvinced that there can’t be cases where one is justified in believing p while at the same time being justified in believing ~p (still assuming sameness of mode of presentation). The only arguments I have seen and can imagine assume the principle from the outset, or commit one to the impossibility of justified inconsistent beliefs. So maybe the only defensible norm is the one about no justified self-contradictions. To defend that norm, more has to be said about the effects of rational endorsement of dialetheism than has been said to this point, but I’ll ignore that issue here!
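    In symbols – my notation, not Foley’s – writing Jp for “S justifiably believes p”, the three norms run from strongest to weakest:

        \begin{align*}
        &\text{(N1)}\quad \neg\big(Jp_1 \wedge \cdots \wedge Jp_n\big)
            \text{ whenever } p_1 \wedge \cdots \wedge p_n \vdash \bot \\
        &\text{(N2)}\quad \neg\big(Jp \wedge J\neg p\big) \\
        &\text{(N3)}\quad \neg\, J(p \wedge \neg p)
        \end{align*}

    (N1) entails (N2) (take n = 2), and (N2) entails (N3) given that justification distributes over conjunction. Foley’s position, as described above, keeps (N2) and (N3) and drops (N1); the open question in this comment is whether (N3) can be kept while (N2) goes.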

  5. A short reaction to the helpful comments.
    It is not hard to find examples where it makes sense to resist both the temptation to draw a simple conclusion and the temptation to clear up a contradiction. You see that if p then q follows from what seems clear to you about topic A, on which you are an expert, and that p follows from what seems right to you about confusing and novel topic B. But you absolutely disbelieve q. There is a temptation to derive q, see that there is a contradiction with your prior belief, and clean it up by abandoning p, since you knew you were wobbly on B questions. But: suppose you expect to be thinking about B day after tomorrow, or talking to an expert on it. Then you should leave things as they are. When you know your way around B you may find out that p is definitely not something to abandon, or that implausible as q may be it is nothing like as implausible as not p. (This is using what Gregory Wheeler says above.)
    While waiting for things to get clearer, there may be a helpful strategy related to what Jonathan Kvanvig says. Concentrate not on ironing out contradictions when belief is collapsed to a yes/no level, but on getting your degrees of belief to make sense. I suspect that probabilistic coherence is not exactly what we need, but some way of ranking conflicting possibilities so that one can guide action by them while waiting for a deeper resolution is clearly something to aim for. More to think about here, in particular about what ways of massaging one’s degrees of belief can leave open the questions one wants to resolve when one thinks more deeply or becomes better informed.
    Believing an outright contradiction is clearly a more dire situation than having beliefs that jointly entail a contradiction. And of course the latter fix comes in degrees, depending on how much thinking is required to fill out the entailment. (With possibilities of mistakes, hidden premises, and so on.) The essence of compartmentalisation is to prevent the latter turning into the former. But, I suggest, there are some situations in which even an outright p&~p is to be left in place.

    Contradictions are bearable when ambiguity, vagueness, or contextuality is in play. In a standard sort of case one is happy to think “that is green (for ripe/unripe berry classification) and not green (for definite/washed-out photo background choice)”. Now suppose that you expect that there is some such clarification that will make the p&~p your thinking is pushing you towards ok, but you can’t figure out what it is. Mightn’t you just leave it in place while you wait?

    An obvious example for epistemologists turns on “know”. I think that my daughter knows I am her father. She has ample evidence, and it is true. But I also think that there are many mistakes of this kind (it’s a wise child …), and if she reflected on some of her friends’ situations, doubts would pass through her mind. So she just thinks she knows. She knows and she doesn’t. Easily resolved if we become contextualists, but there are reasons not to, and anyway there are so many contextualisms. Just as one can leave an inconsistent set of beliefs in place and lessen the crisis by finding a usable assignment of degrees of belief to them, one can leave a contradiction in place and, as palliation, describe the factors that suggest some semantic subtlety, even though you don’t yet know what the subtlety is.
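    As a footnote to the degrees-of-belief suggestion: one concrete reading of “a usable assignment of degrees of belief” is de Finetti coherence, which for a finite stock of propositions can be checked mechanically. A sketch in Python (my names throughout, and it assumes scipy is available): credences are coherent iff some probability distribution over truth assignments returns each proposition exactly its credence, which is a small linear feasibility problem.

        from itertools import product
        import numpy as np
        from scipy.optimize import linprog

        def coherent(credences, atoms):
            # de Finetti coherence: the credences are coherent iff some
            # probability distribution over truth assignments ('worlds')
            # gives each proposition exactly its credence.
            worlds = [dict(zip(atoms, vals))
                      for vals in product([True, False], repeat=len(atoms))]
            # One equality row per proposition: total probability of the
            # worlds where it is true must equal its credence ...
            A_eq = [[1.0 if prop(w) else 0.0 for w in worlds]
                    for prop, _ in credences]
            b_eq = [c for _, c in credences]
            # ... plus a row saying the world probabilities sum to one.
            A_eq.append([1.0] * len(worlds))
            b_eq.append(1.0)
            res = linprog(c=np.zeros(len(worlds)), A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0.0, 1.0)] * len(worlds))
            return res.success

        atoms = ["p", "q"]
        # Credence 0.9 in p and 0.9 in "if p then q" force credence >= 0.8
        # in q, so pairing them with credence 0.2 in q is incoherent.
        print(coherent([(lambda w: w["p"], 0.9),
                        (lambda w: not w["p"] or w["q"], 0.9),
                        (lambda w: w["q"], 0.2)], atoms))   # False
        print(coherent([(lambda w: w["p"], 0.9),
                        (lambda w: w["q"], 0.85)], atoms))  # True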

  6. I am very sympathetic to Adam’s view that we shouldn’t be blinkered by inconsistency. If you discover that you endorse (however you would like to cash out ‘endorse’) both p and not-p, this means that there isn’t a (standard) model satisfying your endorsements. So what? This doesn’t mean that you haven’t good reason for endorsing each of p and not-p, nor does your discovery that you do license you to go on an inferencing binge.

    It is a fine strategy to put some modal distance between you and your inconsistent commitments w.r.t. p, but I don’t know why we should worry whether this trick can always be pressed into play.

    Logic, and probability for that matter, have very little to do with rational belief and reasoning, after all.

  7. Well, this topic continues to be interesting, but I am getting a bit confused now. Let me make sure I am clear on the proposed situation. With respect to topic A, S is an expert and clearly sees if p then q. With respect to topic B, S sees that p follows but disbelieves q, that is, thinks ~q, and this creates a contradiction. However, there is also the possibility that S could be wrong about q in topic B, but S realizes, or hopes, the matter will be resolved through consultation with an expert and additional information. If all that has been said so far is correct, I would like to add a Peircean turn here and claim that S does not in fact have an actual belief with respect to topic B, but is simply mired in doubt and in need of a resolution to an indeterminate situation. Additionally, it seems to me that topic B presupposes that q must always have an independent relationship with p. Why? Any number of things can follow from p, and ~q does not necessarily have to represent a contradiction as much as it simply points out the absence of q in some given situation with respect to p. Here’s an example, though not as novel as I might like, which I think exemplifies the points. I am curious to see if with a little sympathy this will go through.

    S is an ambitious and dreamy young person who imagines himself saving the lives of other people in heroic scenarios. In particular S is imagining what traits he may need to possess to prevent people from falling to their death. He is envisioning those scenes in movies where the hero is holding a person who is dangling over the side of a building. Now, S is thinking of two basic types of characteristics of heroes, which he takes to be strength and luck. With respect to strength S believes, if he has a weak grip then person q will fall to their death. With respect to luck S believes, if he has a weak grip then person q will not fall to their death. The reason for S not thinking q will fall to their death with respect to luck is that he thinks that by being lucky there will be some type of awning beneath him which will prevent person q from falling to their death. But S does not know very much about physics and formal mathematics, so he thinks that it is possible the awning will not prevent person q from falling to their death. However, S has the opportunity to go and speak with a physics professor in a couple of days, and that individual can give him the information he needs. After this occurs S comes to believe person q will indeed not fall to their death.

    I will freely admit the example has a few weak points and some holes, but I think a little sympathy can patch those up fine, so I will just leave things as they stand. Here’s how I think it breaks down. Prior to speaking with the physics professor, S does not have an actual belief regarding the fate of person q; he merely thinks it is possible the awning can prevent the person’s death. I believe the presence of doubt prevents S’s notion from attaining the status of belief. Now, after S speaks with the professor, he is able to consolidate his thoughts about ~q into an actual belief, and this creates a potential contradiction with the first scenario, where he clearly saw if p then q. However, I do not think this is really a contradiction, because what is actually taking place is not if p then ~q, but rather if p then (r and ~q).

    Ultimately, what I am saying is that I cannot foresee a situation where a person believes if p then q follows from topic A, and then believes p follows from topic B, but has an actual belief in ~q while ~q is considered independent of any other statement in the same way q is considered in topic A.
