The Internalist Intuition & Normative Judgment

Green is in the good case so he knows all sorts of stuff about how things are and ought to be.  Green knows that he ought to keep his promises when there’s no overriding reason not to, knows that he cannot keep his promises without visiting his friend Plum, knows that he cannot get to see Plum unless he gets his tickets for the train, and so knows that he ought to get his tickets for the train.  Mustard is in the bad case.  While Mustard is Green’s epistemic counterpart (i.e., the two are in precisely the same non-factive mental states and have been since the cradle), Mustard is deceived at nearly every turn by a deceiving demon.  It seems to Mustard that he has friends and that he’s made promises to them that can be kept only if tickets are purchased, but Mustard’s only companion is the demon.  His beliefs don’t constitute knowledge as they tend to be false.  Unless you like abusing a perfectly good word, you should probably say that the processes that produce his beliefs aren’t reliable.  Things seem precisely the same to them and they reason in just the same way.

Suppose your theory of justification says that there’s some external condition (e.g., truth, reliability, knowledge, proper-functioning) necessary for justification.  It seems that your theory has the unfortunate implication that in some pair of cases like the Green/Mustard pair, the external condition your theory says is necessary for justified belief is a condition that obtains only in the good case.  A typical reaction to this is to say that when we compare the good and bad cases, the beliefs in the good case are just as justified as the beliefs in the bad case.  Someone might go on to say that what does the justificatory work here is some common factor.  Perhaps the phenomenal conservative’s view is the ticket:

(PC) If it robustly seems to S that p, then, in the absence of defeaters, S thereby is justified in believing p.

While (PC) seems to capture some intuitions, it’s worth noting that there’s another view that seems consistent with the comparative judgment that epistemic counterparts always have beliefs with the same justificatory status: the view that internal duplicates of knowers have justified beliefs.

(KD) If S is the same on the inside as someone who knows p, S is justified in believing p.

For a large number of cases, there won’t be much difference between (PC) and (KD).  If someone forms beliefs on the basis of hallucinatory experiences, perhaps if the contents of the beliefs ‘fit’ the contents of the experience, the subject is the same on the inside as some possible subject who knows.  In judging that things are as experience represents them as being, both (PC) and (KD) will classify the perceptual beliefs as justified.  When it comes to moral matters, however, it seems that (KD) may not deliver verdicts in cases where (PC) delivers verdicts, and I worry (well, not really; it’s not my theory!) that the verdicts aren’t all that intuitive.

I. The Rationality Rationale

I’ve seen this sort of rationale offered for (PC) and for something in the neighborhood of (KD).  Suppose our counterparts Mustard and Green both believe that they ought to keep their respective promises to Plum.  Suppose they both believe that keeping this promise takes precedence over some competing duty we’d all agree doesn’t give an overriding reason.  (Green’s belief constitutes knowledge, but Mustard’s beliefs about the external world are based on hallucinations caused by our demon.)  Suppose both Mustard and Green believe that to keep that promise they have to buy tickets, and suppose that they both believe that they have no (undefeated) reason not to buy them or keep the promise.  Suppose that your theory of justified belief and justified action says that there’s something necessary for justification that Green’s actions and attitudes have but Mustard’s lack.  And suppose that Mustard doesn’t perform the action he takes to be necessary for doing what he ought to do.  Suppose he doesn’t form the belief that he should perform the action.  Instead, Mustard forms the intention to do what he takes to be the overridden duty.  If pressed, he might say there’s better reason to do the other thing, and yet he presses on, going for what he judges isn’t best.  According to the externalist theory, it may well be that he’s not failing to do what he should.  But, as it seems that he’s deeply irrational, that’s bad for the externalist theory.  Just as what’s rational is fixed by your perspective, what there’s most reason to do or believe is fixed by the very same facts that determine what your perspective is like.
Externalists who say otherwise are committed to the unfortunate view that there can be most reason to do what it would be deeply irrational to do and to maintain attitudes that are similarly deeply irrational (e.g., to refrain from believing that you should take the necessary means to the end you rationally judge is the end you ought to pursue).  Call this the ‘rationality rationale’.

II. Rationality, Justification, and Normative Judgment

Here’s where things get tricky.  Suppose White is a cannibal and Peacock a terrorist.  Maybe White understands that there are all sorts of agent-relative reasons not to eat people, but White simply lacks the seeming states that would rationalize the judgment that there’s good reason not to eat a stranger.  He’s clearly missing something he shouldn’t be missing.  Peacock’s problem is not that she has too few attitudes; she has too many.  Her moral intuitions tell her that violent acts that take the lives of innocent persons are perfectly justifiable because of the kind of relationship there is between the creator and her victims.  I see no reason to think that there couldn’t be coherent sets of attitudes (including both intuitions and beliefs) such that it robustly seems to the subjects who have them that their acts of terrorism and cannibalism are morally right.  This is where I’m curious about intuitions, because I find it intuitive to say two things.

First, I feel no pull towards the view that the moral attitudes of (coherent) cannibals and terrorists are justified.  Here’s the nice thing about (KD).  Our cannibals and terrorists aren’t the same on the inside as someone who knows that they should eat someone from the neighboring village or commit acts of terrorism, so (KD) doesn’t classify their beliefs as justified.  This is where I think (KD) enjoys a distinct advantage over (PC).

Second, I feel some pull towards saying that these cannibals and terrorists are perfectly rational in their normative judgments.  Suppose our cannibal judges that he’s within his rights to eat someone who isn’t a neighbor and, thinking it best to do that, forms the belief that he should do what is necessary to secure a meal.  If instead of setting his nets he does the sort of things that would prevent him from eating another person (banging pots and pans together, handcuffing himself to a rail, encasing his feet in cement), it seems he’s deeply irrational.  If (KD) doesn’t classify as justified our terrorist’s and cannibal’s beliefs that tell them they ought to pursue the necessary means to their ends, then it doesn’t seem to be a view that can be supported by the rationality rationale.

If I’m right that it is counterintuitive to classify the normative judgments of terrorists and cannibals as justified, and right that (KD) is consistent with this, it seems (KD) isn’t motivated by the rationality rationale.  It also seems that, if I’m right, (PC) might accommodate some intuitions, but it’s not clear it accommodates the sort of internalist intuitions we want to accommodate.  (KD) accommodates the supervenience intuitions just fine.  Here’s a possible lesson to take from this.  I think it’s a mistake to say that when it seems that p and there’s no reason to think ~p or to think that the seeming isn’t to be trusted, you thereby count as having an adequate justification for believing p.  Perhaps there’s the further question as to whether it’s possible that by taking appearances at face value you’ll end up conforming to certain sorts of norms.  There are norms that enjoin us not to act as cannibals and terrorists, and that’s why we aren’t impressed by the defense of their conduct that it seemed to them that it was right.  (Of course it did; that’s probably why they did it!)

As much as I’d like to think that that’s right, this probably needs some modification.  Non-normative beliefs in propositions that are necessarily false seem to be beliefs that we could nevertheless justifiably accept in spite of the fact that it’s not possible to take appearances at face value and get these beliefs right.  As I’m not much inclined to accept either (PC) or (KD), I’m really interested in knowing what people’s intuitions are about the rationality and justifiedness of the moral beliefs of our (coherent) cannibals and terrorists.  Is there a sense in which they are rational?  Am I right that these beliefs are without justification?


Comments


  1. The White (cannibal) case seems described in such a way that there is nothing going for White’s belief that cannibalism is morally good and there are reasons for him to think that cannibalism is morally bad (“agent relative reasons not to eat a stranger”). Hence, White seems neither rational nor justified in believing that cannibalism is morally permissible.

    I assume that the Peacock (terrorist) case was supposed to be such that Peacock had no mental states that counted against her terrorist beliefs. If so, then I’m inclined to allow that her terrorist beliefs can be epistemically justified. You say you “feel no pull toward” that position. I wonder whether either of the following two things explains your resistance to saying they are justified.

    The first possible explanation of your resistance is the ambiguity of “justified.” I feel no pull toward saying that her terrorist beliefs or actions would be morally justified. But I do feel pull toward saying her beliefs are epistemically justified (as long as she really has no defeaters for her terrorist beliefs). Believing and acting on P might violate moral norms without violating epistemic ones.

    The second possible explanation is that we tend to find it hard to imagine that someone would have no reason to reject terrorist beliefs. So it is hard for us to imagine Peacock as having no defeaters for her terrorist beliefs. Hence, it is hard for us to imagine that her terrorist beliefs are justified.

    Maybe neither one of those two things explains your resistance. In any case, I tend to think:
    i) you can have justified belief in false contingent propositions.
    ii) you can have justified belief in necessarily false propositions.
    iii) you can have justified belief in necessarily false normative propositions (even moral ones).

    If those three things are right, why wouldn’t it be possible for someone to be epistemically justified in believing that some action is morally good when, unbeknownst to the subject, it is morally atrocious?

  2. Hey Chris,

    Good questions and comments.

    I didn’t spell out the details of the cannibal case, sorry about that. I’m thinking of morally right actions as actions that the agent is obligated or permitted to perform. There’s no moral reason _to_ eat the people, but in the absence of moral reasons to refrain from eating something (e.g., ice cream, strangers), I’d classify the eating of something there’s no reason not to eat as right. As for the reasons to think he shouldn’t eat the stranger, I’ve basically stipulated that he’s unaware of any. All the reasons he can think of not to eat people stem from relations between him and these people that he doesn’t stand in with respect to the stranger.

    Anyway, I’m primarily interested in people’s intuitions about the rationality of their moral judgments and whether we’d want to go so far as to say that the cannibals and terrorists are justified in their moral judgments. It seems we have one vote for justified and rational.

    I’m trying to stay out of the discussion of the correctness of these verdicts, but I think that a few of the authors I have in mind will have to take issue with something you said here:

    The first possible explanation of your resistance is the ambiguity of “justified.” I feel no pull toward saying that her terrorist beliefs or actions would be morally justified. But I do feel pull toward saying her beliefs are epistemically justified (as long as she really has no defeaters for her terrorist beliefs). Believing and acting on P might violate moral norms without violating epistemic ones.

    If I’m reading you correctly, you think that when the terrorists judge that they should engage in acts of terrorism, the belief isn’t one they necessarily ought to abandon, but they ought nevertheless to refrain from doing what they believe they ought to do. Suppose that they manage to do this. They seem pretty deeply irrational. Suppose that among the considerations that motivate (PC) is the thought that the thing you ought to do can never be the thing that you’d be deeply irrational to judge that you ought to do. I worry that you’ve just undermined the rationale for (PC).

    Also, I’d want to know why you’d deny that the actions are justified. It seems that if you do what you judge you ought to do when your judgments about what you ought to do are themselves epistemically justified, you are at the very least blameless for acting in accordance with your normative judgment. What’s the difference between a justified action and a blameless action rationalized by justified mental states? (Whatever your answer to that question is, I’d wonder why we can’t draw the same sort of distinction on the epistemic side and say that the attitudes of the terrorists and cannibals are similarly unjustified but have some other thing going for them, like rationality.)

  3. Thanks for the thoughtful replies. I’ll try to respond to some of the other comments later, but here I’ll focus on this point: “Suppose that among the considerations that motivate (PC) is the thought that the thing you ought to do can never be the thing that you’d be deeply irrational to judge that you ought to do. I worry that you’ve just undermined the rationale for (PC).”

    I’m not sure, as stated, that anyone has accepted such a motivation for PC (if you have a reference, I’d be interested). Off the top of my head, Huemer’s “PC and Internalist Intuition” (APQ?) paper says something similar, but I don’t remember it being completely general in the way (I think) you suggest. In any case, one can motivate PC with a more restricted version of that claim. Compare:

    RR1: the thing you ought to do can never be the thing that you’d be deeply irrational to judge that you ought to do.

    RR2: the doxastic attitude that you are justified in taking can’t be one that would be deeply irrational for you to take.

    I think RR2 is enough to support PC in the way that you are imagining. (I worry about the counterfactual element in RR2 causing problems with this formulation, but I’ll ignore them.) As far as I know, nothing I said ruled out RR2. I confess that it does seem strange to me to say:

    Ex 1: It is epistemically irrational for me to believe P, but I am epistemically justified in believing it anyway.

    When we mix normative statuses, my intuitions are less clear. For example, this sounds less weird to me:

    Ex. 2: It is epistemically irrational for me to believe P, but I am morally justified in believing it anyway.

    To the extent that Ex 2 sounds less weird to me than Ex 1, I find RR2 is more plausible than RR1. Even supposing RR1 is true, I think that what I said was compatible with RR2 and that is enough to get the kind of motivation you are imagining.

  4. Hey Chris,

    There’s an argument for the claim that what there’s reason to do and believe is determined by your perspective in Gibbons’ “Things that make things reasonable” (here), which proceeds from the assumption that it can’t be that you ought to do/believe what you’d be irrational to do/believe. He’s not committed to (PC), but he’s sympathetic to something more in the neighborhood of (KD). (There might also be similar moves made by Fantl and McGrath in their new book, but I need to double check that.)

    Both Huemer and Silins seem sympathetic to the idea that it can’t be that the attitude you ought to take (believe, disbelieve, suspend judgment) is one that is deeply irrational for you to take. You’re right that neither of them seems to make the general assumption that “the thing you ought to do can never be the thing that you’d be deeply irrational to judge that you ought to do” because they don’t discuss action. Here’s Huemer on p. 151 of his APQ article:

    The preceding argument suggests a natural characterization of the central intuition of internalism about justification. It is that there cannot be a pair of cases in which everything seems the same in all epistemically relevant respects, and yet the subject ought, rationally, to take different doxastic attitudes in the two cases–for instance, in one to affirm a proposition and in the other to withhold.

    It would be odd to think that this assumption (i.e., that there can’t be epistemic obligations that you’d be irrational to fulfill) is one we should accept without argument unless there’s something about the relationship between rationality and obligation generally that Huemer thinks we can all appreciate. (As Gibbons puts it, if there are these differences between theoretical and practical rationality, we’d want some explanation as to why such differences arise. The similarities have built-in explanations, since when we’re talking about practical and theoretical rationality, reasons, and oughts, we’re talking about rationality, reasons, and oughts.)

    At any rate, I think you can undermine the rationale for a view by showing that the view can be defended only by saying things that are deeply implausible or seemingly ad hoc (e.g., that while you can never be obliged to have irrational doxastic attitudes you can nevertheless be obliged to perform irrational actions without changing your attitudes). Anyone who holds such a view has to explain why it seems that those who believe they must A but don’t A aren’t as they ought to be while maintaining that there’s no reason for them not to be this way.

    Your examples are interesting, I’m not quite sure what to say about Ex. 2. I suspect that’s because I’m not entirely sure I buy the idea of beliefs that are or aren’t morally justified. (Are there non-instrumental reasons to refrain from believing on moral grounds? I can’t think of any.) Suppose I’m wrong. Here’s an account of morally unjustified beliefs:

    MJ: A belief is morally unjustified when the belief tells you to do something wrong.

    Then Ex. 2 would come to something like this: It is epistemically irrational to believe that it’s right for me to eat a stranger but it’s right for me to eat a stranger. That seems as incoherent as Ex. 1. Maybe there’s something other than (MJ) that captures the idea of a morally unjustified belief.

  5. This reply is a bit late and perhaps of little interest to anyone, but here are a few more thoughts on the topic.

    1. Thanks for the paper suggestion. I’ve printed it out, but I haven’t read it yet.

    2. Your application of Ex. 2. I agree that it sounds bizarre to say “It is epistemically irrational to believe that it’s right for me to eat a stranger, but it’s right for me to eat a stranger.” Yet the problem here seems entirely epistemic. When one asserts “it’s right for me to eat a stranger,” one seems to express the belief that it is right for that person to eat a stranger. But it is (epistemically) irrational to hold a belief when one regards that belief as irrational. The intuitions, then, concerning your application of Ex. 2 are intuitions about epistemic statuses, not moral ones.

    3. Some Disconnects between Moral and Epistemic Value. I agree that it seems weird to talk about being morally justified in believing something. Suppose that being morally justified can’t be properly ascribed to beliefs. Then we have found a relevant difference between epistemic and moral justification, because epistemic justification clearly applies to beliefs. I’m also inclined to think that it is far more plausible that the “ought implies can” principle applies to moral justification than to epistemic justification. Suppose a subject’s evidence strongly supports P. Due to malfunction, the subject is determined to believe ~P in light of that evidence. The subject can’t believe P, but he epistemically should have, i.e., he had epistemic justification to do so. I submit, then, that our intuitions concerning moral and epistemic justification may suggest some deep differences between the two kinds of normative status. We shouldn’t assume by default that moral and epistemic normativity are parallel in every, or even most, respects.

    4. If I had to make moral and epistemic normativity parallel, I might endorse the following view (which may take back some things I said previously):
    E1: You are epistemically justified in believing P only if it isn’t irrational to believe P.
    M1: You are morally justified in doing A only if it isn’t irrational to do A.
    However, I also would be careful to say the following.
    E2: A belief can be blameworthy even if it is epistemically justified.
    M2: An action can be blameworthy even if it is morally justified.
    Blamelessness and justification will come apart when my perspective justifies me in believing something, and I only have my perspective due to some forgotten past sin. I’d also say that:
    E3: Epistemic justification is not sufficient for warrant.
    M3: Moral justification is not sufficient for moral rightness.
    Is this the sort of thing you were trying to push me toward at the end of your first reply to me?

    5. Based on your clarification in the first reply, I think I misunderstood your cannibal case. Sorry about that.
