The Principal Principle

David Lewis thought the following is true:
The Principal Principle (PP): P_s(A | P_o(A) = x) = x (where P_s is a rational subjective probability and P_o is some objective probability).

Here’s a gloss of this claim, more or less accurate: if you know that the objective probability of A is x, then you should assign degree of belief x to A given that information.
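
For concreteness, here is the principle instantiated for a fair coin (a toy example of mine, not Lewis’s): letting Heads be the proposition that a given toss lands heads,

P_s(Heads | P_o(Heads) = 1/2) = 1/2.

That is, conditional on the information that the objective probability of Heads is 1/2, my degree of belief in Heads should itself be 1/2.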

So, first, a confession: I know there’s been considerable discussion of this principle in the literature, but I haven’t read as much of it as I should. So there may be easy answers to the things that concern me, and pointing them out in the comments would be useful.

Second, my problem: I think, to go straight to the bottom line, that I may not know how to interpret conditional probability claims.

Suppose I think of a conditional probability as implicitly referring to some background knowledge in the given condition, so that PP implicitly quantifies over every cognitive agent and their systems of information. Then the principle is false, regardless of what kind of probability is included here, since the background information might conflict with the stated condition.

One alternative here is to abstract from one’s actual system of information, ridding it of everything epistemically relevant to the truth or falsity of A before adding (just) the information contained in the condition. Then PP says something like this: if the only relevant information you had about A was that its objective probability is x, then your subjective probability for A should be x as well.

I’ve never been comfortable with abstractionist interpretations of conditional probability claims. What is the conditional probability that you exist given that I tell you so? (This is sloppy: I don’t want the report of testimony to guarantee your existence but just to be a sincere and honest assertion of it on my part.) The abstractionist interpretation requires you to consider a situation in which your only information about your existence is my testimony. I don’t see how such a situation is possible, or conceivable, or imaginable.

Perhaps one should put the claim in the form of a conditional, as my gloss of PP above did, where I reported the principle as claiming that if one knows that the objective probability of a claim is x, then one’s subjective probability should be x as well. But that won’t work for probabilities conditional on one’s own non-existence. What is the conditional probability of anything given that you do not exist? If we understand probability in the subjectivist sense, this makes little sense: it’s hard to imagine your knowing of your non-existence. Of course, this might be an argument for a different understanding of probability, but we can’t take that route for the principle above, since it explicitly appeals to subjective probability.

So, assume that the condition in the principle is that the objective probability of my existence is zero. Note that this supposition is compatible with my existence, so we can’t rely on such a claim to generate an answer here concerning the application of the Principal Principle to such a case. What should my subjective opinion be about my existence?

Is there something I’m missing here? Maybe so, but if not, here’s what I think the idea is that makes the principle look plausible. It’s a claim about evidence or confirmation. It makes a claim about prima facie evidential support, to the effect that if you learn that the objective probability of a claim is x, that is evidence that the total evidence supports the claim to degree x. That allows one to say that the degree of confirmation provided for p by the evidence that the objective probability of p is x is precisely x as well (even though all such evidential claims are only prima facie).


Comments

  1. Jon, you write,

    “Suppose I think of a conditional probability as implicitly referring to some background knowledge in the given condition, so that PP implicitly quantifies over every cognitive agent and their systems of information. Then the principle is false, regardless of what kind of probability is included here, since the background information might conflict with the stated condition.”

    I don’t think I follow this. How does the principle come out false? If the objective probability of A is x, then why would that objective probability change on the background information? For instance, if the objective probability of tossing Heads is 1/2, then adding background information k will not alter that objective probability. Now suppose k states that the coin is biased. In that case it is not true (and never was) that the objective probability of tossing Heads was 1/2. It is not that the objective probability *changes* in that case.

  2. Sorry, Mike, that wasn’t very clear, was it? What I’m thinking is that K might include the opposite of the objective probability claim. So, in effect, we’ll get the following form, in such a case:
    Pr(A | p & ~p).
    I wouldn’t expect the value of this probability to vary depending on what was involved in the claim that p.
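
    To put the worry in terms of the usual ratio definition (just one way of spelling it out), the condition gets probability zero, so the value isn’t even well defined:

    Pr(A | p & ~p) = Pr(A & p & ~p) / Pr(p & ~p), and Pr(p & ~p) = 0.

    Whatever we say about such a quantity, it can hardly be required to equal whichever value x the stated objective probability claim happens to assign.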

  3. You are missing the time-indexing of chances.

    The Principal Principle is intended to capture the meaning of ‘chance’. Chance encapsulates how all the complicated facts about the past (and some other stuff I’ll leave out) bear on a single outcome. The core idea is that anything you learn about facts before the outcome should affect your degree of belief in the outcome only by way of affecting your degree of belief about the chance. You are rationally permitted to let your degree of belief in the outcome differ from the chance only when you learn about facts after the outcome (like watching the result of a coin flip or looking in a magic crystal ball). I’m simplifying here by assuming you have a definite belief about the chance. If not, your belief in the outcome should equal a weighted sum (put in symbols at the end of this comment).

    So when you ask about the objective probability of my non-existence, the question is, “When?” 1,000 years ago the objective chance was very high (assuming indeterminism here). I now have a zero degree of belief that I am non-existent, but there is no conflict because I’m using information from after the time of the chance we’re considering.

    If you ask me what the chance is right now that I don’t exist, I would say 0%. My degree of belief that I don’t exist is also 0%. They match!
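
    In symbols (a rough sketch, using the notation of the post plus a time index): write P_o,t(A) for the chance of A at time t. The time-indexed principle is then something like

    P_s(A | P_o,t(A) = x & E) = x,

    where E is any further evidence that is admissible at t. And in the case where you aren’t certain what the chance is, the weighted sum I mentioned is

    P_s(A) = sum over x of x * P_s(P_o,t(A) = x),

    again assuming your other evidence is admissible at t.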

  4. Hi Doug, I’m not so much worried about getting the numbers to work out right as I am about how to think about conditional probability itself. But I also think that your gloss makes the principle pretty obviously false, since it relies on distinguishing factors prior to the outcome from those you learn about after the outcome. Suppose the Infallible Predictor says that the coin will come up heads. I know what ‘chance’ means, and I know that the chance is 1/2 that the coin comes up heads, but I also know that it will come up heads, because I know that the IP is infallible.

    I seem to recall that Lewis has a footnote about something similar voiced by Teller, I think(?), though put in subjunctive mode and dealing with electrons, if I remember correctly. Lewis claimed he had no idea what Teller was talking about…

  5. Jon,
    You’re right that it doesn’t matter when you learn the facts. What’s important is whether the fact you learn is a fact about the future (of the chance event) or about the past (of the chance event). That’s because knowledge of the future might contain knowledge about the outcome that goes beyond what its chance is. So your degrees of belief about the outcome can diverge from the chance when you have an infallible predictor, assuming you construe the infallible predictor as providing you with evidence about the future. Of course, you might say your evidence about the outcome is just coming from listening to the predictor, which is safely from the past, and your knowledge that the predictor has always been correct is safely from the past, and so information from the predictor should be admissible. So then the question is, “Is there a principled distinction between admissible evidence and inadmissible evidence?” Lewis did offer a theory that wasn’t perfectly precise, and infallible predictors seem to be problematic for it because they make information from the future into information from the past. I have a sense that the kind of answer Lewis endorsed for such a predictor would depend on the details. If the predictor got his predictions because the future facts caused him to believe the right outcome, then the information would be inadmissible (meaning your degrees of belief about outcomes could diverge from your degrees of belief about the chances). If the predictor got his predictions by scanning the current or past microstate, or just by being lucky, then it would be admissible (and your degrees of belief about outcomes would have to match your degrees of belief about the chances). You can come up with other mechanisms that make it even less clear. What if an angel tells the predictor…? To the extent that the mechanisms of knowledge acquisition depart from the usual mechanism of gaining knowledge solely from the past (and knowing the future only insofar as we can make the right kinds of inferences based on our knowledge from the past), it becomes unclear how the principal principle applies.

    If you doubt whether the theory of what’s admissible can be made rigorous (and non-circular) in general, I think you’re right. What I think this shows is that our concept of chance implicitly relies on a naive theory of linear time, and we find it unclear how to apply notions of chance when there are certain kinds of time travel, crystal balls, or magical connections between the past and future.
    But to get back to your main argument…
    You seemed to be thinking that there is something unclear (to you) about conditional probability. I don’t think there are any special problems about conditional probability that the principal principle raises. Whatever problems there are, I think, are about whether you can carve up the totality of facts into the admissible and inadmissible in a principled way.

  6. Doug, on the last issue about conditional probability, I was just guessing where someone might think I wasn’t understanding Lewis correctly. It is true that I find conditional probability a troubling notion to understand, but I didn’t mean to suggest that there are special problems here raised by PP. I only meant to say that my concerns about the principle might trace to difficulties I have with the notion of conditional probability. I like what you say here, especially about admissible and inadmissible evidence. I think we agree that there’s a problem for the principle here, though the intuitive idea I gave at the end of the post about defeasible evidence survives even the infallible predictor case.

  7. A couple of points:
    First, Jon, I don’t tend to think of conditional subjective probabilities in either of the ways you suggested above. My subjective P(X|Y) is just my degree of belief in X on the supposition that Y. If one wants to make normative claims about subjective P(X|Y) values, those are just normative claims about what degree of belief the agent should assign to X on the supposition that Y; not claims about what the agent should do on the condition that she knows Y. And of course, such claims should depend on what the agent already believes.
    Note that this interpretation of conditional probability leaves both of the following as substantive normative claims (put in symbols at the end of this comment): first, that for any agent, P(X|Y) should equal P(X&Y)/P(Y); and second, that an agent who learns Y should then assign a P(X) equal to her previous P(X|Y).

    Second (admittedly somewhat picky) point: I don’t think the admissible/inadmissible distinction is precisely about future/past. Lewis’s idea seems to be that for chance events, there are various facts that affect the setup of the chance process; the process then occurs and generates an outcome in a way that couldn’t have been predicted from that setup. (Lewis is explicitly talking about genuinely chancy processes here, not just processes that we can’t predict because we lack enough information about the setup.) Inadmissible evidence is evidence about how the process came out; admissible evidence bears only on the setup of the process.

    Thus one can get prior-to-process inadmissible evidence if there are seers of the future, as Lewis makes clear in his Sleeping Beauty paper. But one can also get information about events that happen after the process that is still admissible information. Suppose I walk into a room where a coin flip has just occurred and someone is complaining about the outcome. Their interlocutor responds, “Yes, but the flip was fair!” The interlocutor’s response happens temporally after the chance outcome, so I have information about events after the outcome, but it is still admissible information because it is evidence about the setup of the chance process, not about its outcome.
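
    To put the two normative claims from my first point in symbols (a minimal sketch): the first, synchronic, claim is that at any one time

    P(X|Y) = P(X&Y)/P(Y) (whenever P(Y) > 0),

    and the second, conditionalization, is that on learning Y (and nothing stronger) one’s new unconditional probability should be

    P_new(X) = P_old(X|Y).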

  8. Mike, I think we agree here about the admissible evidence idea. I only meant to say that you can’t limit inadmissible evidence to things about the future. I didn’t mean to say that anything about the future would be inadmissible.

    I also like the result about substantive normative judgements. I think nearly everyone realizes this point about conditionalization (and maybe also realizes that it is false!), but the former point is less widely appreciated. Some of my concerns about conditional probability are the result of requiring whatever we say to honor the division in question.

  9. Oh, Mike, I meant to address your suppositional point as well. I’m happy with that account, except that I don’t think it will allow us to endorse PP. But if we already have a reason to reject PP, then the problem I was worried about disappears. The problem for me was that the value of the conditional probability changes depending on what one already knows, and then PP is threatened. But if PP can’t be sustained anyway, that wouldn’t be a problem.

  10. Jon,

    I don’t tend to think of the problems with precognition, infallible predictors, etc. as reasons to reject PP. The way I see it, PP is an analysis of chance, and as an analysis of chance it does fairly well. It’s just that chance itself is a concept whose meaning (in virtue of its conceptual role) depends on the knowledge asymmetry (or something similar: the division of evidence into admissible and inadmissible), which is itself vague and which results in part from our being epistemologically limited in certain ways.

    If we lived in a world without the knowledge asymmetry (or something like it), we wouldn’t have use for the concept of chance. If there were occasional violations of the knowledge asymmetry, like with some kinds of infallible predictors, we would have a useful notion of chance, but it would just have unclear application in a few particular cases. This wouldn’t be any more problematic than having a notion of temperature that is extremely useful and applies perfectly well to large collections of interacting molecules, but where it’s unclear what it would mean to say that a single particle has such-and-such temperature, since there are no properties of a single particle that play the conceptual role of temperature.

    I think the interesting problem is to get a grip on what it is about us (our epistemological access to the world) and about the physics of our world that grounds the knowledge asymmetry and the usefulness of the chance concept. Thinking about how PP would work with precognition is useful, but I think we need to avoid assuming we have enough reliable, precise intuitions about precognition cases to reject PP. We need a theory of the knowledge asymmetry that explains a wide range of phenomena. I suspect it will turn out that PP (perhaps with some slight tweaking) is a good normative principle in just those special cases where the physics of us and our environment is such that our best guide to future outcomes comes by way of making inferences using this theoretical concept ‘chance’, and using laws that relate chances to stuff we have access to, etc.

  11. Prof. Kvanvig,
    I know that Plantinga devotes his second chapter on probability in Warrant and Proper Function to the question of how to interpret conditional probability claims. His answer, unsurprisingly, is something like: “The conditional probability of A on B is x iff a properly functioning individual would believe A to degree x if he were to learn B.” I think that he also deals with your nonexistence objection in his “objections” section. Unfortunately, I don’t have access to the book from where I am, so I don’t remember how adequately he addresses it. I also know that you probably won’t be thrilled with a proper functionalist account of conditional probability, but I thought I’d at least mention that it’s there.
