I want to know, but I lack the resources for figuring it out. Here’s the data: roughly 90 billion people who have been born on earth have died. Roughly 7 billion people (all those who are alive today) have not. That’s over 7 percent of people who have ever been born who have not died. So, pretty good odds I’ll die someday (though of course I most resemble those who have not ever died), but not astronomically high. Except that all those alive today are in stages of life that at least some of the 90 billion dead were in before they died. So, how do I calculate this?

Here’s a relevant analogy (after the fold):

Suppose you and your friend are drawing balls from a tub and you notice that all of the ones you’ve drawn have the following pair of features: they start out red and then, at some point between 10 and 20 minutes after being withdrawn from the tub, turn green. Suppose you draw roughly 90 such balls from the tub and all of them share these two features. Now you draw 7 more balls from the tub. Your friend doesn’t look at the balls. You do. You see that they are red (as were the previous 90 balls). How certain should you be that the balls will end up green? Both you and your friend should be fairly certain; you should assign the proposition that the 7 new balls will end up green a very high probability. But your friend’s probability should be higher than yours. After all, as far as your friend knows, the balls are already green; they might just be normal green balls that end up green. But you know that, if the balls are normal balls, then they are normal red balls that end up red. So, the probability for you that the balls will end up green is lower than it is for your friend. After 20 minutes, if your friend looks and sees that the balls are now green, your friend should assign an even higher probability to the proposition that the 7 new balls will end up green. So, in terms of relative probability assignments, the proposition that the balls will end up green is highest for your friend after your friend looks at them after 20 minutes. It is second highest for your friend before your friend looks. And it is third highest for you while you’re looking at them before 10 minutes is up and you see that they are red.

Suppose that the number of balls initially picked is not 90, but 90 billion. And suppose that the number of new balls picked is not 7, but 7 billion. Here, the probability for each of you that the 7 billion new balls will all turn green is much higher. But the relative probability assignments remain the same: it is highest for your friend after looking after 20 minutes, second highest for your friend before looking, and third highest for you, when you look before 10 minutes is up and see that they’re all red.

What are the relevant probabilities for you, and for your friend both before and after looking? Is this something that can be calculated without additional information?

Hi Jeremy: interesting post! I have a concern about this:

I guess I don’t understand what the relevant analogue to the “normal” balls is supposed to be in the likelihood-of-dying case: after all, when it comes to how likely it is that I will die, my data is only of people who (i) have lived and “end up” dead, or (ii) are currently living. This exhausts my data, for I don’t have any data of the “normal” balls sort—namely, someone who began alive and “ends up” (remains thereafter) alive.

So to make the analogy a good one, won’t we have to set the stage such that one has never encountered any “normal” balls?—in which case, what is normal, within the analogical example, is rather that balls always (thus far) turn from red to green at some point.

Thanks, Matt. Yes, I guess the analogy isn’t perfect. I wasn’t trying to make too much hinge on the normalcy of the balls. It was more that in observing that the balls are red, I’ve eliminated some possibilities that the friend hasn’t eliminated — that the balls are already green prior to 10 minutes. So, one way that the balls can end up green is by being green prior to 10 minutes and staying that way. And, by observing that they’re red, you’ve eliminated that possibility. Of course, you’ve also eliminated the possibility that they’re green and will turn red. Your friend hasn’t eliminated that one either. But we have background info that the dead don’t come back to life, so I think we can allow the background info into the tub/ball case that green balls don’t turn red. (I hope we can allow the background info that it’s possible for balls to be green prior to 10 minutes, even though none from this tub have been. After all, we have background info that people have died very soon after being born.)

Okay. Add in the shared assumption, between you and your friend, that once green, balls stay green; the analogous assumption for red balls (that they end up red) doesn’t hold.

But since the question is “How certain should you be that the balls will end up green?”, and since every ball you’ve already pulled has ended up green, both you and your friend should be equally confident that the 7 you just pulled will end up green. Your eliminating the possibility that the 7 balls are already green has no bearing on your probability that they’ll end up green, right? (Except insofar as you’re considering the “normal” possibility that a red ball could end up red: but that’s just as unlikely for you as for your friend, viz., it’s never happened.) So I guess I’m still not seeing how you get your original claim, that

the probability for you that the balls will end up green is lower than it is for your friend.

Though it’s highly probable for my friend that the balls just drawn are red (now) and that if they’re red, they’ll end up green, it’s not certain that they’re red, nor that if they’re red, they’ll end up green. It’s more certain for my friend that if they’re green, they’ll end up green than that if they’re red, they’ll end up green. At least, I think these were my assumptions going in. (Because it’s more certain for us that if X is dead, X will stay dead than it is for us that if X is alive, X will end up dead.) So, my friend thinks, “Well, if the balls are red (which they probably but not definitely are), then it’s highly likely they’ll end up green. But if they’re green — which they might be — then it’s super highly likely that they’ll end up green.” So, the probability, for my friend, that the balls will end up green is somewhere between highly likely and super highly likely. But for me, who knows that the balls are red, I can only reason like this: “Well, they’re red. So, it’s highly likely they’ll end up green.” So, it’s more likely for my friend than for me that the balls will end up green.
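For what it’s worth, the relative sizes here can be checked with made-up numbers. Everything below is an illustrative assumption (the thread fixes no actual values): say the friend is 90% sure the just-drawn balls are still red, and that red and green balls end up green with chance 0.95 and 0.999 respectively.

```python
# All numbers are illustrative assumptions; the discussion fixes none of them.
p_red_now = 0.9          # friend's credence that the new balls are currently red
p_green_if_red = 0.95    # P(ends up green | currently red)
p_green_if_green = 0.999 # P(ends up green | currently green): near 1, since
                         # by assumption green balls stay green

# The friend hasn't looked, so mixes over both possibilities:
p_friend = p_red_now * p_green_if_red + (1 - p_red_now) * p_green_if_green

# You have looked and seen red, so only the red branch remains:
p_you = p_green_if_red

print(p_you, p_friend)  # the friend's probability comes out higher
```

Plugging in the same value for both conditional probabilities makes the two credences equal, which matches the worry above: the gap exists only because ending up green is more certain from green than from red.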

I see it now; thanks!

If we try to apply the metaphor of “red balls that turn green” to the original question about dead people and living people, the result is rather… gruesome.

Happy belated Halloween!

Oh, I guess I didn’t read quite properly. What I meant is that if a green ball ever turned red, that would be gruesome. I guess I was in a hurry to be the first to make the joke.

Pingback: The Stone Philosophy Links - NYTimes.com

Jeremy,

This is a very interesting article. I am a professional mathematician, so I might be able to say something about it.

Basically the answer is that determining the probability that you’ll die requires that you first define what “probability” means in this context. Furthermore, when it comes to this sort of question, there is no consensus about what probability means, even among probabilists. Finally, philosophers should read about the artificial intelligence work of Ray Solomonoff. Here are some possible definitions of “probability”:

There is the “frequentist” account of probability in which the probability that X happens given Y is simply the fraction of times in the past that X happened following Y. Unfortunately, this is completely unhelpful for your question, since Y is the event “90 billion people have died and 7 billion have not”, which is completely unprecedented, so there aren’t statistics available for computing the probability.

Then there is the Bayesian approach, in which you assign a probability to every model of the world before you learn any information, and then adjust those prior probabilities as information comes in and eliminates possibilities. Specifically, you adjust them using Bayes’ Rule.
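A classical special case of this recipe is Laplace’s rule of succession: assuming (and this is a substantive assumption) a uniform prior over the unknown chance that a drawn ball turns green, observing n balls that all turned green gives posterior probability (n+1)/(n+2) that the next one does. A minimal sketch:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Posterior probability of success on the next trial, under a uniform
    prior over the unknown success chance (Laplace's rule of succession)."""
    return Fraction(successes + 1, trials + 2)

# 90 balls drawn, all of which turned green:
print(rule_of_succession(90, 90))  # 91/92

# 90 billion balls, all green: overwhelmingly likely the next one turns green too
print(rule_of_succession(90_000_000_000, 90_000_000_000))
```

Note that the answer never reaches 1, no matter how many all-green draws you observe, which is in the spirit of the original post’s “pretty likely, but not certain”.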

Now, which prior probabilities you assign to models of the world depends on the type of person you are. If you are a radical nihilist, you might think that all models are equally likely. In the ball example, this might translate to a belief that the probability that a ball will turn green is 1/2, no matter what observations you have made about balls changing color in the past.

On the other hand, a scientist might assign probabilities according to Occam’s Razor: shorter models get a higher prior probability. “Every ball turns green” is shorter than “Each of the first 90 balls turns green, but then there is one that doesn’t”, so “Every ball turns green” is the more likely model. Ray Solomonoff gave a precise explication of this idea in the 1960s. Amazingly, he also proved that applying Bayes’ rule to a prior distribution based on Occam’s razor will converge on the actual model underlying the universe. Now, that underlying model has to exist in the first place, and there is a precise sense in which it has to consist of finitely many rules for Solomonoff’s proof to work. Nonetheless, it is a very compelling fact. It’s worth noting that in the ball example, the scientist’s prior distribution would converge on the nihilist’s model if it were the truth about the universe, but not vice versa.
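As a toy illustration of that convergence (the prior weights below are invented for the example; Solomonoff’s actual prior weights models by description length): compare the model “every ball turns green” against the nihilist model on which each ball turns green with chance 1/2, after 90 all-green observations.

```python
# Invented prior weights standing in for an Occam-style, length-based prior:
prior_all_green = 0.7  # "every ball turns green" -- shorter, so higher prior
prior_nihilist = 0.3   # "each ball turns green with chance 1/2"

n = 90  # number of observed balls, all of which turned green

# Likelihood of that data under each model:
lik_all_green = 1.0      # the model predicted green every single time
lik_nihilist = 0.5 ** n  # each green observation had chance 1/2

# Bayes' rule:
post_all_green = (prior_all_green * lik_all_green) / (
    prior_all_green * lik_all_green + prior_nihilist * lik_nihilist
)

print(post_all_green)  # indistinguishable from 1.0 after 90 draws
```

Had the nihilist model been true instead, roughly half the observed balls would have failed to turn green, and the same update would have swung just as decisively the other way.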

These are some definitions for “probability”, but there are infinitely many more. For my money, I’m going with the one that has been proven to lead to the truth about the universe.

So what does Occam’s Razor tell you the probability is that you’re going to die? Unfortunately, it’s not practically computable! Furthermore, if you were to try to compute it, it would rely on way too much special information about you: is your family prone to long (or short) life, so that you might live to a time when medical breakthroughs can extend life indefinitely? Do you have access to special information from scientist colleagues that cancer will be cured soon (or that it never will be)? Etc. The answers to these questions all affect the computation, but unfortunately I suspect that whatever the answers to THOSE questions, the answer to YOUR question is “pretty likely”.

I really like this article, as it’s made me question the certainty of death in a fun kind of way.

But are you really saying that if you only see dead people, you would think it more likely that people will die? Doesn’t that also entail your being dead?

Thanks, Josh. Sorry for the delay responding. Grading hit at just the wrong time. That’s very helpful. I was hoping to abstract from any special information about me, since what I’m really wondering is how I should estimate the chances of some randomly chosen currently living person being such that he or she will die someday. I, too, sadly suspect that the answer to my question is “pretty likely.” But I’ll take “pretty likely” and even “not practically computable” over the “certainty” that is alleged to apply to only death and taxes.

Tom: it does complicate things that I’m one of the specimens under observation. If I imagine an alien wondering how sure to be, of some randomly selected person currently alive on earth, that the person will die someday, I think the alien should be less sure that this is so the more people on earth are currently alive. I mean, 7% of all people who have ever been born have never died. And, by amazing coincidence, those happen to be all the people who currently are alive on earth! Something must have changed in the last hundred years or so.

Also, Lenoxus: nice!

Yeah but, er, but, eh? I mean any self-respecting alien statistician would ask how many humans are, say, 200 years old. There must be a way of posing the question that excludes the living. If you asked what the probability of a chrysalis turning into a butterfly was, you wouldn’t include gestating chrysalises, would you?

Hi Tom. Actually, I have similar questions about the chrysalis.

It’s just that I can think of two categories of people/aliens who should be more certain than our hypothetical alien statistician that some randomly selected currently living person will die: the people/aliens who are around to see that all the other people currently alive are dead and the people/aliens who have seen that all the currently dead people are dead, but who are ignorant regarding the current state of all the currently living people. So, it’s not certain for me that I will die, and I just want to know, then, how certain it is.

The standard inductive argument — “all people who have ever been born have died; therefore, I will die” — is demonstrably unsound, because its premise is false. Another version — “all people who have ever been born and are not currently alive have died; therefore I will die” — is also unsound, because its premise is as close as can be to a tautology without being one.

Now, there is the argument you allude to: “all people who have ever been born have failed to reach 200 years old. So, I will fail to reach 200 years old.” This is more promising. But I don’t think it’s decisive, because if “failed to” just means “has yet to” then, of course, the conclusion is true. But we knew that already. I have not yet reached 200 years old. If “failed to” means “will never”, then we don’t know if the premise is true: it’s not something we’ve observed. What we’ve observed is that all people who have ever been born and who are now dead have failed to reach 200 years old. But now there’s a real difference between the observed sample and the people about whom we’re drawing the conclusion — the members of the observed sample are all dead and the people about whom we’re drawing the conclusion are not.

So, I still don’t see a decisive argument that I’ll die someday, and it seems really hard to calculate the probability. So, I’m holding out hope.

Of course this calculation ignores the fact that we are animals and not just humans; if you weigh up the percentage of animals that have lived and died against those still living, it would be close to 99.99…% that you’ll die. The flawed logic here also asserts that if you’re a member of an extinct species your chances of dying are 1, whereas if you aren’t, your chances are <1. Which basically means that the chance of one organism dying differs from that of another, and that this chance is proportional to the age of the species.