Self-Deference

Sometimes we defer in opinion. When we should so defer is an interesting question, but here’s one possibility I haven’t seen discussed. Should we defer to ourselves? Consider the following deference principle:

Self-Deference: C-me-now(p/C-me-now(p) = x) = x.

This principle says that my present credence for p, conditional on that credence being x, is x.
One might think of Self-Deference as a constitutive rule of rationality: if you violate it, you are automatically irrational. You might even think of it as expressing the minimal self-trust needed for rationality.

Maybe not, though. Do you really think that your credences are always appropriate? Presumably not. So you think that for some values of p, the credence you should assign to p differs from the credence you actually assign to p. So you think that for some values of p:

Normative Non-Self-Deference: C-me-now(p/C-me-now(p) = x) should not be x.

So, you are committed to the view that it is OK to satisfy:

Non-Self-Deference: C-me-now(p/C-me-now(p) = x) = y (where y is not equal to x).

And perhaps you actually have some conditional credences that satisfy Non-Self-Deference. Perhaps, for example, you view yourself as being too confident whenever you hear a neat aphorism and consider the claim that it is one of Ben Franklin’s. Maybe you are even so bold as to have a measure of your degree of overconfidence. So your credence, say, that aphorism X is Ben’s, conditional on your credence that it is Ben’s being .7, is .6.

So what goes wrong in this argument, if anything? There’s a Dutch book result related to it, but it takes some other assumptions, so I’ll save the full story for a later post.
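
To give a feel for how such a book might go, here is one toy version, using the aphorism numbers and a couple of extra assumptions: that your credence that aphorism X is Ben’s really is .7, that you are certain that it is, and that you regard bets priced at your credences, conditional and unconditional alike, as fair.

Bet 1: you sell the bookie a bet on “X is Ben’s,” conditional on “my credence that X is Ben’s is .7,” at your conditional credence of .6. You collect $0.60 now, you pay $1 if the aphorism is Ben’s, and the bet is called off (money returned) if the condition fails.

Bet 2: you buy from the bookie a bet on “X is Ben’s” at your unconditional credence of .7. You pay $0.70 now and collect $1 if the aphorism is Ben’s.

Since the condition actually holds, Bet 1 is never called off. If the aphorism is Ben’s, you are down $0.40 on Bet 1 and up $0.30 on Bet 2; if it isn’t, you are up $0.60 on Bet 1 and down $0.70 on Bet 2. Either way you are out $0.10. The extra assumptions, especially the certainty about what your own credence is, are doing the real work here.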


Comments

  1. Hi Jon,

    Similar issues arise with the reflection principle, which says you should treat your future self as an expert. It seems clear that for certain notions of belief you can treat both your present and future selves as non-experts (e.g., when drunk). But for other notions of belief, like Lehrer’s notion of acceptance, it’s less clear that you can treat your current acceptances as misleading, conditional, of course, on your having acceptances.

  2. Hi Ted, yes the connection with Reflection is interesting. If you know that some of your credences are irrational, then Self-Deference won’t hold, and it would be irrational to satisfy it in some cases. But suppose your credences are all rational. Then won’t Self-Deference be satisfied? The argument I gave is still supposed to be an argument for the opposite.

    Maybe that’s the idea behind the Lehrer reference. Acceptances are something like beliefs purified of non-epistemic motivations. They are what you commit yourself to in terms solely of getting to the truth and avoiding error. So, put in these terms, the argument above says this: can’t you know that even in these terms your credences are too high for certain kinds of propositions, like whether an aphorism is one of Ben’s?

    Maybe the way out is to grant that one can know this, but in so knowing, we have information that implies that the credence (or acceptance) isn’t rational. But how would that argument go? We have what Pollock calls an indefinite probability, which is a (defeasible) reason to think that the credence doesn’t track the frequencies in nature very well. That reason is either a reason to think that Self-Deference shouldn’t be honored or a reason to think that the credence in question is irrational. So which is it? Here’s a try at the former: you know the calibration point, and you think that this aphorism has the glow of being different. You can either violate self-deference (because you’ve been here before) or change your credence. Changing your credence, however, doesn’t honor the special glow. So you refuse to change your credence.

    This, with a few other assumptions, will make your bookie very happy! But if, like me, you don’t think much of Dutch books, put that on hold. So is the credence irrational in the above situation?

  3. My money is, paradoxically, with my bookie in this case! Can you spell out the counterexample a bit more? By the way, I hope that normative non-self-deference doesn’t imply non-self-deference, because the latter requires one to have two different current credences in p.

  4. Ted, non-self-deference doesn’t require two different credences about p. If you have a current credence for p of x, it requires a different credence for p conditional on the first credence, but that’s not two different credences for p.

    One can get this result with some further assumptions, such as these: first, an unconditional credence is a conditional one, conditional on background information; second, any unconditional credence specifies a claim that is included in that background information (the derivation is spelled out just below). Lots of room to complain here, though.
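
    Spelled out, shrinking the background (just for illustration) to the single claim about the credence: suppose C-me-now(p) = x. By the first assumption, C-me-now(p) = C-me-now(p/background); by the second, the background includes the claim that C-me-now(p) = x; so, given Non-Self-Deference, C-me-now(p) = C-me-now(p/C-me-now(p) = x) = y, even though C-me-now(p) = x. That gives two values for one credence.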

    But your remark suggests that you really like Self-Deference, and that is the intuition I’m trying to get working. In your first comment, though, you said Self-Deference is pretty easy to avoid, and that isn’t consistent with the assumptions needed to generate two different credences for p (which I assume is impossible).

  5. Jon, thanks for the clarification. I’ll have to think more about the formulation of non-self-deference once I get rid of this end-of-the-semester headache. I can’t shake the thought that my credence *now* for p, conditional on my credence *now* for p being x, couldn’t be anything other than x. To be sure, it can be something other than x at a later time, but at the same time?

    Anyway, I do like self-deference, at least understood along the lines of Lehrer’s notion of acceptance. I’d attempt the same justification you mention in the opening post: that it expresses a minimal sense of self-trust needed for rationality. If you are going to have acceptances at all, then you have to trust yourself at some level; you can’t rationally defer all the way down. This reminds me of what Locke said about divine revelation: if God said “do x,” then I should do x, but I have to be justified in believing that that’s what God said.

  6. Jon,
    Is a violation of self-deference inconsistent with a principle of inferential justification, where credences are acceptances? Here’s the idea. The principle of inferential justification says that an inferentially based belief is justified only if the premise beliefs are justified. Suppose one accepts p (one thinks that one’s acceptance tracks the epistemic reasons one has for p). So one’s credence that p at time t is x. Suppose one considers the credence for p one should have conditional on one’s credence for p at t being x, and this credence is different from x. What explains the change? Either it’s based on an epistemic reason or it’s not. If not, then the premise belief (whatever it is) isn’t justified, and so the resulting credence isn’t rational. But if the change is based on an epistemic reason, then that reason is already included in the original credence. So one’s change in credence for p isn’t justified (assume that the change in credence falls outside the margin of error, which, I take it, is required for a violation of self-deference).

  7. I’m not quite sure what the question “what explains the change?” means. Your credence for p at t is x. You reflect on what your credence for p should be conditional on your credence for p being x. I would think the answer depends on what kind of proposition p is and how confident you are in your ability to hit the mean between the extremes of being too optimistic and too pessimistic about that kind of proposition. Now, add the parenthetical information that you are also certain (?) that your credences are the right ones, given your evidence. Then you hold that, for any kind of proposition, you will hit the mean between the extremes. So now if you reflect and arrive at a credence other than x for the conditional probability, you have an inconsistency. In the usual case at least, either the conditional probability just assigned should be given up or you should give up your parenthetical remark.

    Now I think you intend to identify acceptance with the parenthetical remark, so you can’t give it up. But the identification is problematic. What you commit yourself to, or endorse, for purposes of getting to the truth and avoiding error is different from the parenthetical remark.

  8. The parenthetical remark (“one thinks that one’s acceptance tracks the epistemic reasons one has for p”) wasn’t that one is certain that one has hit the mean; rather, the idea is that by one’s own lights (at that time) one has done one’s level best to take the epistemically appropriate attitude. Holding that state fixed, it seems one should think that any change would be inappropriate; any change would go against what one thinks is the best attitude to take towards p. If you think you place too much credence in a certain field of propositions, then that’s an epistemic reason to take account of. Of course, if you think this only after reflecting on your current credence (but note this takes place in time), then that changes the state under consideration, and thus it’s not a counterexample to self-deference. Maybe, though, the notion of acceptance I’m working with is too strong. I’ve always thought Lehrer’s notion of acceptance similar to Foley rationality, in that what I accept expresses what I think I’m Foley-rational in believing, where my goals are the epistemic ones.

  9. Ted, this is the direction I’m thinking things need to go, but the meta-attitude you’ve got here isn’t part of partial belief or partial acceptance. So what I think such considerations show is that Self-Deference is a constitutive principle of rationality only given certain other assumptions, which of course is to admit that it is false. It is true when meta-attitudes are in place that require it to be true. Maybe?

  10. Jon, that sounds right, though it’s tempting to think that the sense in which self-deference fails is exactly right; it’s just the sense in which one shouldn’t defer to anyone when one doesn’t think that person is an expert.

    Also, re my concern about the formulation of non-self-deference, there’s a simpler way of putting it that I think clarifies the issue. The concern was that the formulation violated the uniqueness condition for functions, by letting C assign different values to the same triple: subject, time, proposition. The reply, I take it, is that C has (at least) four argument places (subject, time, body of evidence, and proposition), and that non-self-deference says it’s OK to assign different values to the following quadruples: (S, t, e, p) and (S, t, e*, p), where e* differs from e only in that e* includes ((S, t, e, p), x). (I’d use the standard notation for ordered pairs, but html code messes it up.) If this is the claim, though, my intuitions are less clear on whether a violation of self-deference is rational. I’d at least want to add to e* a proposition to the effect that one thinks ((S, t, e, p), x) either overshoots or undershoots the mean.
