Compromise, Political and Epistemic

Two politicians disagree about policy. In the end, some resolution is necessary, since unending paralysis is intolerable (for whatever reason). So they compromise. Both think the result is less than ideal. To understand the result, we need to know not only the history of the process, what they used to think and why, but also what they presently think and why. They think that the compromise is best, in some sense, but also, in another sense, that it is not. Without some such internal conflict, we don’t yet understand the political process in question.

Two cognizers disagree about some claim p. Unlike in politics, resolution of the conflict isn’t necessary. So maybe they disagree forever, even after discussions aimed at resolving the disagreement. Maybe, though, the discussions are fruitful, and they come to agree. When they do, the explanation will cite past disagreement leading to present agreement. After fruitful discussion of this sort, there is no longer any present disagreement, either about p or about what the evidence shows about p. But suppose no such resolution occurs.

Some epistemologists say rationality requires both disputants to give up their views about p. Suppose that is right. Call this result “epistemic compromise.” Now, situations of epistemic compromise are different from situations of full resolution of disagreement. When resolution occurs, the story we tell has present agreement as the outcome of past disagreement and discussion. When epistemic compromise occurs, we expect something different. Perhaps something like this should be said. As the disagreement continues, if the disputants are aware that they are converging on a point where rationality compels them to abandon their beliefs, they will view the approaching event with consternation. They will view it as a loss rather than a gain, and this sense of loss will not leave once the convergence point is reached. Even if we don’t like this particular account, we should expect, as in the case of political compromise, some present mark to distinguish it from cases of epistemic resolution. What might that present mark be?

To answer this question and to see the problem with epistemic compromise, we need two ideas. The first is cognitive self-alienation; the second is the difference between epistemic compromise and mere change of view. On the latter issue, the difference is modelled on the nature of political compromise, and requires some internal mark of regret to distinguish the two. Call “mollificationism” the view that compromise is rationally required under certain conditions, such as shared evidence and equal respect for each other’s ability to analyze the information in question. The best hope for the internal mark in question is something like this: when X and Y disagree and must compromise, then from X’s total point of view p is correct, and from Y’s total point of view ~p is correct.
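
As a rough illustration (the numbers and the averaging rule are one common gloss from the equal weight literature, not anything mollificationism as defined here is committed to): suppose X’s credence in p is 0.9 and Y’s is 0.3. On this gloss, compromise amounts to splitting the difference:

    before: c_X(p) = 0.9 and c_Y(p) = 0.3
    after:  c_X(p) = c_Y(p) = (0.9 + 0.3)/2 = 0.6

(where c_X is X’s credence function). The internal mark would then be that, even after moving to 0.6, X’s total point of view still favors p and Y’s still favors ~p.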

This answer brings us to the notion of cognitive self-alienation. Rationality is perspectival in character, so if the above answer is to be sustained, one’s total point of view must include information that is not legitimately counted as part of the perspectival character of rationality. This result commits the mollificationist to what I will call cognitive self-alienation. We can approach this concept indirectly by distinguishing deferring, demurring, and desisting.

Each of us has certain cognitive abilities and disabilities. Some of us are better at math, some at noticing small details, some at visual detection, and so on. In addition to these differences in ability, there is also the matter of our own view of how able we are in a given domain. This perspective on ourselves can lead us to demur from changing opinion in the face of disagreement, and it can also lead us to defer to others when we view them as better positioned on the matter. Whether to demur or to defer is a matter of our perspective on ourselves, at least in the default case. What to say when that perspective is defective or inappropriate is something we need not address for present purposes.

In between demurring and deferring is desisting in belief, which is perfectly sensible from one’s own point of view when one’s view of oneself falls between a view calling for demurring and a view calling for deferring. Deferring and desisting both involve a change in view, but all three attitudes are understood here in terms of coherence between a first-order response to disagreement and a metalevel perspective on oneself that undergirds that response.

It is important to notice that the difference between demurring and desisting need not be reflected in a perspective on one’s cognitive abilities that goes beyond the present case. For example, one may view another as better in general about the subject matter in question, but not in the present case. There is no incoherence or guarantee of irrationality in distinguishing general from particular in this way: it is nothing more than recognizing, as Pollock characterizes it, the defeasible character of inferring definite from indefinite probabilities.

Mollificationists insist on compromise, however, and compromise is different from desisting as characterized above. In the case of compromise, the claim is that it is rational to give up a belief even though, from one’s total point of view, that belief is correct. It is in this way that the internal mark characteristic of compromise is in place. But now the unity of cognitive self involved in demurring, deferring, and desisting is no longer present. What we have is a person whose total point of view, including the view of self, doesn’t undergird desisting or deferring but rather calls for demurring. And yet compromise is required. What the mollificationist position requires is cognitive self-alienation: one has a view of oneself and one’s abilities that is part of one’s total point of view and yet isn’t part of the perspectival character of rationality that determines which attitude is legitimate at the first-order level. Instead of a happy union and coherence between first-order belief and metalevel attitude toward self, we have internal conflict between the levels. A theory that insists that rationality requires such cognitive self-alienation bears a serious burden: it must explain how it remains appropriately sensitive to the perspectival character of rationality.

The only response I can see here is to claim that the same information (the disagreement itself, by someone with what looks like the same evidence) makes irrational both the belief and the perspective on self that generates the cognitive self-alienation. But that can’t be right. We develop more fine-grained assessments of people’s abilities by finding out that, in spite of their general competence, they are mistaken in more specific circumstances. So we start by granting general competence in some cases, notice shared data, and rely on self-trust to learn that the person in question is not as competent in all the subareas as in the general area in question. To adopt the stronger view reduces the mollificationist to insisting that once shared data is recognized, disagreement rationally coerces abandoning belief, and that view is recognizably mistaken.

So if all of this is correct, the substantive question relative to the literature is this: are the equal weight positions of Feldman, Elga, Christensen, et al. versions of mollificationism? I think the answer is yes, but I won’t argue for that here.


Comments


  1. How is it “recognizably mistaken” to insist that rationality requires actually agreeing, instead of making yourself agree on some basic level but regretting it at some meta level?

  2. I’m not quite sure which sentence you are referring to, but I think it is the last one of the penultimate paragraph. That sentence refers to a stronger position, according to which shared data alone compels abandoning belief (independent of any role for assessment of competence in analyzing the data). That position is clearly mistaken, since it is certainly rational to discount the opinion of someone known to be bad at data analysis in a given area of inquiry.

  3. Let me suggest that knowingly disagreeing is a clear sign of a rationality failure, but that it does not provide a recipe for fixing the failure. Consider an analogy with someone who discovered that his P(A) = 0.1 while his P(not A) = 0.3. His failure to satisfy P(A) + P(not A) = 1 would be a clear sign that his beliefs were not the best they could be, but this sign would not by itself tell him the best way to modify P(A) and P(not A) to fix the failure.
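
    For example, both of the following repairs restore P(A) + P(not A) = 1, and the incoherence by itself does not choose between them (the numbers are only illustrative):

        renormalize:     P(A) = 0.1/0.4 = 0.25 and P(not A) = 0.3/0.4 = 0.75
        keep P(A) fixed: P(A) = 0.1 and P(not A) = 1 - 0.1 = 0.9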

    Similarly, I would not advise someone to force themselves to agree by choosing a belief away from what their best evidence indicates. Instead, I would say that the fact that you knowingly disagree with someone else is a clear sign that at least one of you is not doing the best you can with your evidence.

  4. Robin, there is something right about what you say here, but it is very hard to figure out what it is. First, we’ll have to make sure that the parties to the dispute have the same evidence; otherwise it’s easy for them both to be rational and yet disagree. Even if they have the same evidence, however, there is still a problem. If logical constraints on rationality have to be given up, it is very hard to see how probabilistic constraints are going to survive either. To violate the probability calculus is to have a set of beliefs (or degrees of belief) that is defective in some way, just as having an inconsistent set of beliefs is defective in some sense as well. But such defects need not be defects of rationality. Rationality is perspectival in a way that blocks failures of logical and probabilistic coherence from guaranteeing irrationality. (The quickest argument to this conclusion is to notice that Frege could have been rational in believing his axioms even though they entail a contradiction.)

    The lessons of the perspectival character of rationality haven’t been fully appreciated in epistemology to this point: there are still quite a few who hope to provide logical or probabilistic constraints on rationality. But they usually hedge at the same time, claiming that these are constraints on ideal agents and that actual agents aren’t so constrained. I think this gives away the store, but since I discussed this in the comments on the immediately prior post (about Dutch book arguments), I’ll not repeat it here.

  5. Frege could be excused because he did not know his axioms led to a contradiction. So can’t we describe consistency as an “ideal” of rationality, which people should try to achieve? We can understand the idea of an ideal without imposing the requirement that every reasonable person always achieves every ideal.

    Consider the examples of driving or walking in the middle of your lane or path, or of building walls and floors to meet at right angles. We don’t have to claim that every reasonable person must always meet such ideals in order to make sense of them as ideals. To be reasonable, one need only be trying to reach such ideals; it is unreasonable to choose not to move closer to an ideal when you are aware of the deviation and the cost of correcting it is very low.

    Similarly, probability coherence and not knowingly disagreeing can be considered ideals of rationality.

  6. No, consistency isn’t an ideal; that’s the lesson of the preface paradox, in which an author rationally believes each claim in her book while also rationally believing, on good inductive grounds, that at least one of those claims is false. If some other logical or probabilistic constraint is an ideal, we’d need to see an argument for it. There are lots of attempts, but I think in the end they don’t work. See, for example, the problems I noted for Christensen’s Dutch book argument. The same problems plague every representation theorem argument as well.
