Philosophical blogdom is in a bit of a tizzy over Posner’s remarks as guest blogger on Brian Leiter’s site. Posner insists that the reasons we give for the moral positions we hold can only be rationalizations, maintaining that the real reasons we hold these views have to do with cultural and social factors rather than with the arguments we give for them.
In response at Left2Right, Gerald Dworkin says,
Analogy: I came to believe there were an infinite number of primes because a friend passed on this information to me when I was 11. But he also, I learned later, passed on lots of other things which were not true. But I now actually have a proof of this fact and so hold the belief now on that basis. The claim that my belief is just a rationalization requires showing one of two things. Either that such proofs are not themselves good reasons or that I would have continued to believe the claim even if I had no such proof.
Posner must believe one or both of these things about moral beliefs: either there are no good arguments for moral beliefs, or, even if there are, these do not explain why we hold them.
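Dworkin’s analogy turns on the fact that he now holds the belief on the basis of a genuine proof. The classical proof he presumably has in mind is Euclid’s: given any finite list of primes, the product of the list plus one has a prime factor missing from the list. A minimal computational sketch of that argument (my own illustration, not anything from Dworkin’s post):

```python
# Euclid's argument that the primes are infinite, run as code:
# for any finite list of primes, N = (product of the list) + 1
# has a prime factor that cannot appear in the list, since N
# leaves remainder 1 when divided by each listed prime.

from math import prod

def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n (n >= 2)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n has no factor up to sqrt(n), so n is itself prime

def prime_outside(primes: list[int]) -> int:
    """Given a finite list of primes, produce a prime not in it."""
    n = prod(primes) + 1
    p = smallest_prime_factor(n)
    assert p not in primes  # guaranteed by the remainder-1 observation
    return p

# Starting from [2, 3, 5]: 2*3*5 + 1 = 31, which is itself prime.
print(prime_outside([2, 3, 5]))  # → 31
```

However the belief was first acquired, this construction is the kind of reason that now sustains it, which is exactly the structure Dworkin is pointing to.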
It’s pretty clear from Posner’s post that he doesn’t wish to take on the challenge of showing that all the arguments for moral beliefs are bad arguments. Instead, he wishes to take the second route of impugning the pedigree of the beliefs, which Dworkin characterizes in terms of a counterfactual: we’d believe the claims even if the arguments were bad. This point is certainly relevant to what counts as a rationalization, but not in the way most imagine, I think. The counterfactual in question (and the related question of what explains the belief) should be viewed, at most, as evidentially related to rationalizing rather than constitutive of it.
So here’s the argument.
First, I suspect Posner’s points can be turned against his own arguments, since there is as much reason to think that our conceptions of adequate reasons are culturally conditioned as there is to think that our conceptions of morality are. More important, though, I don’t think such a point needs to bother us as epistemologists. We know that we all suffer from the egocentric predicament, and that we may also be incapable of ridding our assessments of the non-rational causal influences of our surroundings. How should that affect what to believe and which beliefs to act on? For example, suppose I know all the effects of my environment on my manner of assessment, and suppose I’ve got what I take to be the best evidence that you’ve just cheated me at cards. If you can get me to calm down long enough to deliberate, and remind me of cultural conditioning and the like, what should I do and what should I believe, assuming I take full account of the truth of what you’ve said? As far as I can tell, the proper response is that I’ll continue to do my best to stop any pernicious effects I find of such conditioning, but you still need to give me my money back.
So then you ask whether I’d continue to believe what I do even if I were to learn that my methods of assessment are inadequate. My response would be to ask why you think that is relevant at all. You say, “because if you would so believe, then you’re only rationalizing.”
That strikes me as thoroughly wrongheaded. There are many cases where my behavior and beliefs are not ideal in precisely this way. I realize that what I do and believe wouldn’t co-vary with my reasons for belief and action. To that extent, I’m a mess. But when I’m thinking about what to do and believe, my assessments line up with my behavior and beliefs. In some of these cases, I lament that my behavior and beliefs would remain the same even if I lacked the argumentative support I muster on their behalf. I realize that in spite of my inadequate motivations, the behavior and belief are still appropriate for the circumstances. That’s not rationalizing; that’s recognizing part of the human condition. When we look for rationalizations, I think we are looking in the neighborhood of false consciousness. What is distinctive about the above cases is that there is no element of false consciousness in them.
Now if you insist, I’ll let you have the word. Call what I’m doing rationalizing if you wish, but I’ll now insist that it has no negative epistemic connotation. And I’ll try to get you to see that you treat your own points of view in precisely this way, and legitimately feel no guilt or shame or remorse in so doing. So I’d rather say it’s not rationalizing, but if you want to insist, we’ll call it that and say that rationalized beliefs are sometimes epistemically holy. (OK, I’ll come clean on what I mean to endorse: if the beliefs are true and ungettiered, they count as knowledge.)