Two politicians disagree about policy. In the end, some resolution is necessary, since unending paralysis is intolerable (for whatever reason). So they compromise. Both think the result is less than ideal. To understand the result, we need to know not only the history of the process, what they used to think and why, but also what they presently think and why. They think that the compromise is best, in some sense, but also, in another sense, that it is not. Without some such internal conflict, we don’t yet understand the political process in question.
Two cognizers disagree about some claim p. Unlike in politics, resolution of the conflict isn’t necessary. So maybe they disagree forever, even after discussions aimed at resolving the disagreement. Maybe, though, the discussions are fruitful, and they come to agree. When they do, the explanation will cite past disagreement leading to present agreement. After fruitful discussion of this sort, there is no longer any present disagreement, either about p or about what the evidence shows concerning p. But suppose no such resolution occurs.
Some epistemologists say rationality requires both disputants to give up their views about p. Suppose that is right. Call this result “epistemic compromise.” Now, situations of epistemic compromise are different from situations of full resolution of disagreement. When resolution occurs, the story we tell has present agreement as the outcome of past disagreement and discussion. When epistemic compromise occurs, we expect something different. Perhaps something like this should be said. As the disagreement continues, if the disputants are aware that they are converging on a point where rationality compels them to abandon their beliefs, they will view the approaching event with consternation. They will view it as a loss rather than a gain, and this sense of loss will not leave once the convergence point is reached. Even if we don’t like this particular account, we should expect, as in the case of political compromise, some present mark to distinguish it from cases of epistemic resolution. What might that present mark be?
To answer this question and see the problem with epistemic compromise, let’s define two ideas. The first is cognitive self-alienation; the second is the difference between epistemic compromise and mere change in view. On the latter issue, the difference is modelled on the nature of political compromise and requires some internal mark of regret to distinguish the two. Call “mollificationism” the view that compromise is rationally required under certain conditions, such as shared evidence and equal respect for each disputant’s ability to analyze the information in question. The best hope for the internal mark in question is something like this: when X and Y disagree and must compromise, then from X’s total point of view p is correct and from Y’s total point of view ~p is correct.
This answer brings us to the notion of cognitive self-alienation. Rationality is perspectival in character, so if the above answer is going to be sustained, total point of view must include information not legitimately counted in the perspectival character of rationality. This result commits the mollificationist to what I will call cognitive self-alienation. We can approach this concept indirectly by defining the differences between deferring, demurring, and desisting.
Each of us has certain cognitive abilities and disabilities. Some of us are better at math, some at noticing small details, some at visual detection, and so on. Beyond these differences in cognitive ability, there is also the matter of our own view of the level of ability we have in a given domain. This perspective on self can lead us to demur, declining to change opinion in the face of disagreement, and can also lead us to defer to others when we view them as in a better position on the matter. Whether to demur or defer is a matter of our perspective on ourselves, at least in the default case. What to say when a perspective on oneself is defective or inappropriate is something we need not address for present purposes.
In between demurring and deferring is desisting in belief, which is perfectly sensible from one’s point of view when one’s view of oneself falls between a view calling for demurring and one calling for deferring. Deferring and desisting both involve change in view, but all three attitudes are here understood in terms of a coherence between a first-order response to disagreement and a metalevel perspective on oneself that undergirds the various responses of demurring, deferring, or desisting.
It is important to notice that the difference between demurring and desisting need not be reflected in a perspective on one’s cognitive abilities that goes beyond the present case. For example, one may view another as better in general about the subject matter in question, but not in the present case. There is no incoherence or guarantee of irrationality in distinguishing general from particular in this way: it is nothing more than recognizing, as Pollock characterizes it, the defeasible character of inferring definite from indefinite probabilities.
Mollificationists insist on compromise, however, and compromise is different from desisting, as characterized above. In the case of compromise, the claim is that it is rational to give up a belief, even though from one’s total point of view, that belief is correct. It is in this way that the internal mark characteristic of compromise is in place. But now the unity of cognitive self involved in demurring, deferring, and desisting is no longer present. What we have is a person whose total point of view, including the view of self, doesn’t undergird desisting or deferring, but rather calls for demurring. And yet, compromise is required. What the mollificationist position in question requires is cognitive self-alienation, where one has a view of oneself and one’s abilities that is part of one’s total point of view and yet isn’t part of the perspectival character of rationality involved in determining what attitude is legitimate at the first-order level. Instead of a happy union and coherence between first-order belief and meta-level attitude toward self, we have internal conflict between these levels. A theory that insists that rationality requires such cognitive self-alienation has a serious burden of explaining how the theory is appropriately sensitive to the perspectival character of rationality.
The only response I can see here is to claim that the same information (the disagreement itself, by someone with what looks like the same evidence) makes irrational both the belief and the perspective on self that generates the cognitive self-alienation. But that can’t be right. We develop more fine-grained assessments of people’s abilities by finding out that, in spite of their general competence, they are mistaken in more specific circumstances. So we start with an assessment of general competence, notice shared data, and rely on self-trust to learn that the person in question is not as competent in some subareas as in the general area in question. To adopt the stronger view reduces the mollificationist to insisting that, once shared data is recognized, disagreement rationally coerces dropping belief, and that view is recognizably mistaken.
So if all of this is correct, the substantive question relative to the literature is this: are the equal weight positions of Feldman, Elga, Christensen, et al. versions of mollificationism? I think the answer is yes, but I won’t argue for that here.