I was just at Fordham talking about assertion, knowledge, and lotteries, and Bryan Frances and I were talking about a particular case that it might be interesting to get others’ reactions to. I won’t tell you who took which position on the case, but here it is.

Suppose you have a lottery in which each ticket has a probability of winning of one in ten to the billionth power. There is no guarantee of a winner, and presumably there will be significantly fewer tickets bought than could be bought. Suppose, for example, that there are a hundred thousand tickets bought.

The drawing is held today, and tomorrow the newspaper (one as trustworthy as, say, the NYTimes) reports two things. First it reports the winning number, and second it reports that Lucky Louie L’Amour is the holder of the winning ticket.

Should you believe either, or both, of the newspaper reports? Should you believe that LLL won the lottery, and should you believe the report of which number is the winning ticket? Before answering, note the incredible improbability of anyone winning, and note the incredible improbability of any given number being the number of the winning ticket. OK, enough priming of the pump: should one believe the newspaper? And if you do believe the newspaper, do you now know, on the basis of the information you have, who won the lottery and what the winning number is?

This is America: we believe everything the newspaper tells us.

Sam is right of course. But how could the newspaper ever print the winning number? Presumably, it is a billion digits long. That is a lot of paper.

Okay, I’ll let loose with my initial thoughts.

After reading about the lottery in the NYT, as you described it, I would suspect that something like one of the following happened:

a. LLL did get the winning number but the drawing was rigged.

b. LLL did get the winning number but that’s because he somehow got a high percentage of all the possible numbers.

c. LLL didn’t get the winning number and the NYT reporter has been fooled.

d. LLL didn’t get the winning number and my copy of the NYT is a forgery; so someone is trying to fool me (is this the NYT or the Onion?).

And so on. What I DO know is that not all the following conditions are satisfied:

1. The drawing was fair.

2. LLL got just one ticket.

3. Only one ticket is the winner.

4. There are 10-to-the-billion ticket numbers.

5. LLL isn’t God.

Something like that anyway; maybe another condition or two should be added. I know that something went amiss, although I don’t know what it is. I guess the relevant point is this: I know that LLL didn’t win the lottery in the normal, upstanding way. I know that because I know what 10-to-the-billion means. If the odds are 10-to-the-billion to 1, I assume that means that LLL had to choose a billion-digit number. Note that if some crooked lottery official simply gave LLL 99.9% of the winning digits, LLL would still have to guess a million more! I’ve never played a real lottery, but don’t they have just six or so digits to guess right?

What if he really did win in the normal, upstanding way? Then of course I don’t know that he didn’t win in the normal, upstanding way. Now things get complicated.

Hi folks!

Since I normally don’t read all the pages of a newspaper carefully, though I do read all the headlines, I think the following would happen to me: if I got to the page where the lottery result was printed and read only the headline ‘Lottery x has a winner: LLL, from NY, won the first prize’, I believe I would know that LLL won the lottery without knowing the winning number – supposing it is true that LLL won.

On the other hand, if I read the news carefully, then I believe I would end up, like Bryan, thinking something like his (a)-(d). After thinking about it for a while I would stop believing what the headline says and would start believing (and, why not, knowing) that the following disjunction is the case: either the NYT made a mistake or I simply haven’t heard about such a strange lottery.

Not sure about Bryan’s conclusions. Why exactly should we doubt the NYT report in the case of the winning ticket? Is it because it reports that something wildly improbable occurred? That just begs the question, doesn’t it? The NYT report itself is evidence that something improbable occurred. So given the NYT report PLUS our previous evidence, the occurrence of the event is much less improbable. So what if it is improbable on the partial evidence we have prior to the report? We should doubt the report only if we assume that NYT reports are just unreliable. But I don’t see that we are assuming this. Hume famously begs the question in this way in his argument against miracles. But then maybe it is being assumed that the NYT is antecedently not reliable and so their report is not evidence that the event occurred (or evidence for much else). But if that is not being assumed, then the fact that the event is wildly improbable is not evidence against the reliability of the NYT report. Not as far as I can see. Otherwise, the evidence for any proposition p would also constitute evidence against any counterevidence against p. But that’s mistaken for obvious reasons.

Hi Mike,

You say:

‘But if [the unreliability of the NYT] is not being assumed, then the fact that the event is wildly improbable is not evidence against the reliability of the NYT report.’

I am not sure the analogy is adequate, but it seems to me that just as the report by the NYT of a ‘wildly improbable’ lottery would be a sign that something is wrong (either with my beliefs about lotteries or with that particular report), the fact that someone I know for years tells me she is in fact a robot would be a sign that something is wrong (either with my beliefs about robots or with that particular person). In both cases, I think, it would be irrational not to reconsider the evidence we have in favor of the source of the information (the NYT or the person). Sometimes, after a brief reconsideration of our evidence, the best policy is to believe that the source is wrong in that particular occasion. As we all know, reliability and infallibility are different matters.

Mike, you also say that

‘[If the fact that the event is wildly improbable is evidence against the reliability of the NYT report], [then] the evidence for any proposition p would also constitute evidence against any counterevidence against p. But that’s mistaken for obvious reasons.’

If I am not mistaken, you are underrating a very important epistemic principle: the underdetermination principle (UP). But if that is so, I guess I have lost an important lesson, since you say that such a principle is ‘mistaken for obvious reasons’. Are you arguing against something in the vicinity of the following version of (UP)?

(UP) If S’s evidence for believing that p does not favor p over some incompatible proposition q, then S does not know that p

If you are arguing against (UP) (or something near enough), then I would appreciate hearing about why it is false. If you are not arguing against (UP) (or something near enough), what am I getting wrong?

Intuitively, I would go through the following train of thought:

No way! I can’t believe someone won that! That was a billion-digit number. No, that can’t be right. Then again, it’s being reported as true in the NYT, and why would they make that up? Still, the odds are against it…

I would end up withholding belief about whether or not LLL won the lottery until I obtained more evidence, which I would seek.

But say I have two friends, Fred and Bill, and Fred believes that LLL won and Bill denies that LLL won. I’m not sure that I would think either of them justified in their belief. But I would say, based sheerly on the improbability of anyone winning, that Bill is more justified, if either of them is.

I’d say the only relevant consideration here is the proportion of tickets sold. If only one ticket was sold, then the report is highly dubitable; if 100% of the tickets were sold then the report is highly credible (because there has to be a winner if all of the tickets were sold and it’s plausible to think that the NYT is able to track down that winner). If the credibility of the report is merely a function of the proportion of tickets sold, then I don’t think that there is a fact of the matter as to whether the report should be believed or not when the proportion is some arbitrary value between 0 and 1, say 0.05.

Here’s an argument. Suppose the NYT report goes as follows:

“The winner is Michael Allen, who bought n% of the total tickets.”

If n=100, there is very good reason to believe the newspaper: after all, if one buys ALL of the tickets that could be bought, then s/he must be the winner. On the other hand, if n=(1 / ten to the billionth power * 100), there is very good reason not to believe the newspaper, for obvious reasons (For example, since presumably the probability that NYT is mistaken is much higher than (1/ten to the BILLIONth power * 100) (!), it is more reasonable to believe that NYT is mistaken than to believe that it is correct).
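Yu Guo’s comparison of the two probabilities can be made concrete in log space; a minimal sketch, with an assumed (not reported) NYT error rate:

```python
# Sketch of the comparison: even a generous one-in-a-million error
# rate for the NYT dwarfs a 1-in-10**(10**9) chance of winning.
# Work in log10, since the raw numbers overflow any float.
log10_p_error = -6        # assumed probability the NYT is mistaken
log10_p_win = -(10**9)    # one ticket's probability of winning

# The mistaken-report hypothesis is more probable by an enormous margin:
print(log10_p_error - log10_p_win)  # 10**9 - 6 orders of magnitude
```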

However, the report then continues:

“The name ‘Michael Allen’, however, does not stand for a single individual, because it is actually the common pseudonym used by BuyTogether, the nation’s largest lottery players’ organization, which uses its members’ money to buy huge numbers of tickets and then randomly selects a single member who receives all the lottery gains. Moreover, the organization has become so popular that every lottery player in the US now belongs to it, which means that, this time, all of the tickets were sold to ‘Michael Allen’. According to the organization, the randomly selected winner of this round is Lucky Louie L’Amour, a senior member who…”

You can raise many doubts about the report. For example, you may wonder what the point of such an organization is, and you may suspect that LLL is a pseudonym for a secret oligarch within the organization, etc. But these doubts are irrelevant here because they have nothing to do with the probability of winning. The relevant doubt (if any) is to be raised against the first part of the report, which claims that the BuyTogether organization (“Michael Allen”) was able to obtain the winning ticket by buying n% of the total tickets. This claim is highly plausible when n = 100, and highly implausible when, say, n = (a hundred / ten to the billionth power * 100). So its plausibility depends solely on n%, the proportion of tickets sold to the BuyTogether organization, which ex hypothesi is the proportion of tickets sold.

Rodrigo, thanks. You write,

‘the fact that someone I know for years tells me she is in fact a robot would be a sign that something is wrong (either with my beliefs about robots or with that particular person)’

I can’t see it. Are you assuming that the person you know is not someone whose testimony is reliable? If the testimony of the person you know is reliable, then if it is possible that someone like her should be a robot (and this is where the case is weak, I think), then what she tells you is evidence (and might be considerable evidence) that she is a robot.

On your second point, I don’t think I said anything contrary to UP. I said this,

‘Otherwise, the evidence for any proposition p would also constitute evidence against any counterevidence against p. But that’s mistaken for obvious reasons.’

What I’m urging is that if e and e’ are respectively the evidence for and against some proposition p, it does not follow that e is also evidence against e’. We are taking the improbability of the event described in p as evidence e against p. But e need not be evidence against the NYT report. The NYT report is the evidence e’ that p occurred. The relation between e and e’ depends on how reliable we are assuming (and it isn’t stated) the reporting of the NYT is. Suppose for instance that the NYT is rarely wrong on these particular matters.

Mike,

I agree that the fact that the NYT is quite reliable and has said that LLL won the lottery is good evidence that LLL won the lottery. And I don’t think we should conclude that the NYT is wrong in reporting that LLL won; neither should we conclude that the NYT isn’t reliable. But if the NYT says this:

(a) LLL won the lottery. (b) The odds of LLL’s single ticket winning were 10-to-the-billion to 1. (c) LLL bought just one ticket. (d) It was a fair lottery.

Then I know that either the paper has made a mistake, it is a gag in the NYT, or it isn’t the NYT I’m reading. (Here I ignore possibilities like LLL is God, or has supernatural powers.) It is much, much more probable that one of those three things happened than that all of (a)-(d) are true. I mean: it’s not even remotely close! I think an appreciation of the magnitude of the odds is important here.

Yu Guo,

You say that the relevant consideration is the proportion of tickets sold. I am not sure about that. There are 10-to-the-billion tickets. If no person buys more than one ticket, then we need 10-to-the-billion people. But for one thing, our universe would have to be very different for that to happen! When I was a kid, physicists thought that there were about 10-to-the-80 particles in the universe. If they were off by a factor of a googol, then there are 10-to-the-180 particles, still nothing compared to 10-to-the-billion. And even if there were 10-to-the-billion people, each with one ticket, I don’t see how the NYT could find the winner. Even if the winner were shouting from the rooftops about his win, he would most likely be billions of light years away from earth. Furthermore, how did he even get a ticket, being so far away in spacetime? This just shows the ridiculousness of the numbers involved.

Now suppose that LLL bought 90%, say, of the tickets. But how much does a ticket cost? There won’t be enough money in the universe. Or, he got virtually all the tickets for free. Now I have doubts about the fairness of the lottery. But suppose all the tickets are free. Then it’s certainly fair in one respect. But how did LLL get 90% of them? I’m still wary about fairness. Maybe only two people were able to enter the lottery, LLL and MMM, and LLL was able to gobble up 90% of the tickets while MMM was able to claim just the other 10%. Now maybe I can believe the corresponding NYT report. But now we’ve changed the problem around so much that we’re ignoring the original issues!

Bryan

I’ll ignore the logistical obstacles that Bryan Frances identified.

I agree with (at least) the first part of what Yu Guo says. And we’re told that only 100,000 tickets were purchased, so the prior probability that any of them wins is piconanomicroscopic: the reciprocal of (ten to the [billion minus five]).
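That arithmetic can be checked in log space; a minimal sketch (the raw numbers are far too large for ordinary floats):

```python
import math

# Each of the 100,000 tickets sold wins with probability 10**-(10**9),
# so the chance that *some* sold ticket wins is 100,000 / 10**(10**9),
# i.e. the reciprocal of ten to the (billion minus five).
log10_p_per_ticket = -(10**9)             # one ticket's chance of winning
log10_tickets_sold = math.log10(100_000)  # = 5

log10_p_some_winner = log10_tickets_sold + log10_p_per_ticket
print(log10_p_some_winner)  # 5 - 10**9, i.e. 1 / 10**(10**9 - 5)
```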

Whether we should believe the Times’ report that somebody won depends on how careful we think they are about these things. It would take a lot of careful checking to reduce the editorial error rate to one in (ten to the [billion minus five]), but I imagine it could be done. Suppose it is done; suppose, that is, that the Times has ‘SUPERCHECKERS’.

Now let’s look at Mike Almeida’s point. Knowing that the Times has SUPERCHECKERS and having read the report, we know that something very surprising has happened. Either the SUPERCHECKERS have erred or the super longshot lottery has come in. If the SUPERCHECKER error rate is low enough, then the former is even more surprising. So we should believe the latter. Bayes’ theorem will give us an exact credence if we know the SUPERCHECKER error rate.

As to Louie: I think I may not understand the intended question, since it seems perfectly obvious to me that our credence for ‘LLL won’ should be almost exactly the same as our credence in the report of the winning number.

One last thing: I think the scenario is polluted by the presence of the unearthly number, ten to the billionth. Our intuitions about such numbers are supremely unreliable, and its presence means we have to think about possibilities that we ordinarily ignore (as Bryan pointed out). So consider this simpler variant: you have bought a ticket in the Powerball. The morning newspaper in Waco prints a report of the winning number: it’s yours! You think the Tribune takes only ordinary care with this type of story. Somebody reads the AP wire and types in the report, taking a quick second look to be sure he didn’t misread. You, on the other hand, look at your ticket and the newspaper eleven times, rub your eyes, and ask your sharp-eyed daughter to check for you… Do you believe that you’ve won?

Jamie, no, of course I wouldn’t believe it. But the explanation of that is the immense significance of the claim in question. It would be too much of a disappointment to find out it was a mistake, so I’d withhold judgment out of self-protective motives. This explanation leaves open, of course, whether the pragmatic grounds for suspension of belief line up with what evidence shows.

This is getting more interesting than the conversation Jon and I had a few nights ago! We didn’t go into any of this.

I recommend that we stick with the assumptions Jon and I came up with. Not because the other assumptions aren’t interesting! But just for the sake of focusing on one problem, the one Jon and I wanted to set up. So let’s assume that LLL owned just one ticket number, there were 10-to-the-billion ticket numbers, and only 100,000 tickets were bought. Let’s assume further that it’s a fair lottery. If LLL really had picked the winning number, it would be easy for the NYT to discover this: there is a database that lists the people who bought tickets along with the numbers they bought. And let’s assume further that the NYT report went as I suggested (adding Jon’s stipulation about 100,000 tickets):

(a) LLL won the lottery. (b) LLL bought just one ticket. (c) The odds of LLL’s ticket winning were 10-to-the-billion to 1. (d) It was a fair lottery. (e) Just 100,000 tickets were sold.

Thus, our assumptions entail that (b)-(e) are true. Our assumptions say nothing about (a)’s truth-value. My initial thought is this:

1. After reading the NYT report we should not, if we have any appreciation of the probabilities involved, conclude that all of (a)-(e) are true. In other words, don’t accept what the paper says, even knowing of its general reliability.

Here are the more controversial and I think interesting claims:

2. If (a) is false (so LLL didn’t win) then before reading the NYT report (or any other report) I know that: if the lottery is fair and LLL bought just one ticket and only 100,000 tickets were sold and the odds of any given ticket winning were 10-to-the-billion to 1, then LLL didn’t win.

3. If (a) is false (so LLL didn’t win) then after reading the NYT report I know that: if the lottery is fair and LLL bought just one ticket and only 100,000 tickets were sold and the odds of any given ticket winning were 10-to-the-billion to 1, then LLL didn’t win.

(3) is more controversial than (2). I think we must stick with the absurdly large numbers, because only then do things get interesting. Only then do we begin to doubt assumptions commonly made about the epistemology of lottery beliefs. For instance, I take it that most epistemologists would say that (2) is false if the odds are just a million to 1. Is that accurate? But when the odds are much worse, it seems to me that (2) is true. In fact, (3) looks pretty good to me too.

Bryan, you write,

‘(a) LLL won the lottery. (b) The odds of LLL’s single ticket winning were 10-to-the-billion to 1. (c) LLL bought just one ticket. (d) It was a fair lottery. Then I know that either the paper has made a mistake, it is a gag in the NYT, or it isn’t the NYT I’m reading. (Here I ignore possibilities like LLL is God, or has supernatural powers.) It is much, much more probable that one of those three things happened than that all of (a)-(d) are true. I mean: it’s not even remotely close! I think an appreciation of the magnitude of the odds is important here.’

All of (a)-(d) are also true under the assumption that every ticket was sold (imagine a rather large world of gamblers). In that case we have all of (a)-(d) true and not much reason to doubt the NYT report, right? So (a)-(d) alone do not make the report incredible.

You say,

‘It is much, much more probable that one of those three things happened than that all of (a)-(d) are true.’

This seems to me question begging. It is more likely that one of those three things happened prior to including the NYT report in your total evidence. But to suggest that it is more likely that one of those things happened after learning about the NYT report is to beg the question against the reliability of the NYT. It is not to include the report of the NYT in your total evidence for/against those three things. The report from the NYT is itself evidence (and, for anything that has been said so far, very strong evidence) that none of those things happened.

Mike,

I disagree. As I mentioned earlier, if all the tickets are sold, there is no way the NYT will ever discover the winner, assuming one ticket per person. It’s even hard to imagine how every ticket could be sold; the numbers involved aren’t merely big but bring up issues about breaking laws of nature. And if it’s not one ticket per person, and LLL got 50% or so of the tickets say, then it’s hard to see how the lottery is fair, for the reasons I gave in comment 10. Am I all wet here?

Bryan,

You’re discussing the initial worry, and I am less concerned about that than the latter. But I don’t off hand see why there could not be such a world. Maybe the numbers would have to be altered a bit, but the point would be exactly the same.

But that aside, I’m more concerned with whether the reasoning noted earlier (at 10:25, I can’t read the post numbers) begs the question. I can’t see why it doesn’t.

Mike,

I’m sure I could be making a big mistake here, but I still don’t see it. Let’s settle on the lottery as described earlier: one ticket per person, fair, about 100,000 tickets sold, database of purchasers with their ticket numbers purchased, 10-to-the-billion odds for each ticket. Then I read the NYT article that says this:

(a) LLL won the lottery. (b) LLL bought just one ticket. (c) The odds of LLL’s ticket winning were 10-to-the-billion to 1. (d) It was a fair lottery. (e) Just 100,000 tickets were sold.

At this point I can conclude any of the following:

(i) (a)-(e) are true,

(ii) the article is supposed to be putting me on or otherwise humorous,

(iii) this isn’t really the NYT I’m reading but some joke paper, or someone’s hacked the NYT website,

(iv) this is the non-modified NYT but they made an error somewhere in (a)-(e).

I know that the NYT has access to the database mentioned earlier. So I know how easy it is to determine whether a purchased ticket has the same number as the winning number chosen by the official lottery computer.

I don’t see any reason to opt for (i). Any of (ii)-(iv) strike me as much more reasonable. Hell, let’s say that I’m the reporter. So I can rule out (ii) and (iii). The next thing I the reporter would do is make sure that I’m not being fooled, right? I wouldn’t opt for (i).

In fact, suppose I’m the sole lottery official. It’s my lottery; I run the whole thing from top to bottom. To make it even easier, suppose that there are just 1000 tickets sold. I wrote the computer programs and had them tested by many genius programmers, etc. I have the computer generate the winning number and I see that it matches LLL’s. At this point I’m going to think the program’s been messed with.

Virtually no matter what happens, I don’t see myself reasonably opting for (i). But if I understand you, you’re saying that I should opt for (i). Is that right?

Hi Bryan,

Now I have to agree with you that the proportion of tickets sold is not the only relevant consideration, but this is for a different reason than that the “unearthly” number entails any nomological impossibility. And I believe there is a problem that needs to be addressed even before one considers the exotic cases involving huge numbers. Let me explain.

Suppose there is a lottery in which each ticket has a probability of winning of one in 100 thousand (which is entirely reasonable). And suppose that there were 100 thousand tickets (which is entirely possible).

Case 1:

ONLY ONE ticket was sold, and the next day NYT reports that LLL is the holder of the winning ticket.

My intuition: As lucky as THAT? That’s one in 100,000. I won’t believe it unless the NYT editor can convince me that their error rate is somehow less than 1/100,000. So I would at least withhold my belief as to whether LLL won the lottery.

Case 2:

ALL of the tickets were sold, one to each of a total of 100,000 players, and the next day the NYT reports that LLL is the holder of the winning ticket.

My intuition: I don’t think in this case there is any special reason to doubt the credibility of the NYT report. Isn’t this just one of those ordinary happenings? So I would believe that LLL won the lottery.

Case 3:

ALL of the tickets were sold to just two people: LLL, who holds just one ticket, and Bill Gates, who holds all the remaining tickets (this would be plausible, given there are only 100,000 tickets). The next day the NYT reports that LLL is the holder of the winning ticket.

My intuition here would be very much like that in Case 1: the probability that Bill Gates wins the lottery is extremely high, and that of LLL winning is extremely low. If the NYT says otherwise, that calls their credibility into question. So I would, once again, withhold my belief about whether LLL won the lottery.

So Case 3 is a case in which, even though ALL of the tickets were sold out, still I have reason to doubt the credibility of the NYT report. However, I think there are some tricky issues with our intuitions in such cases. For can’t we regard the other 99,999 players in Case 2 as a group, and say that the probability that this group wins is extremely high? And what if these folks are all hired by Bill Gates to buy tickets for him? (I believe Hawthorne addressed this issue somewhere in his book, though I can’t remember.) I think this issue has to be settled before one even starts addressing cases involving huge numbers, because the intuition that the huge number case (as Jon describes it) trades on is essentially similar to that in play in Case 1, which is either undermined or trivialized, as I show below.

Presumably, a settlement of the issue involves trumping either the Case 2 intuition or the Case 3 intuition. That is, if one should believe the report in Case 2, then one should believe it in Case 3; and if one should not believe the report in Case 2, then one should not believe it in Case 3. (This is because, as I said, there does not seem to be a sufficiently important difference between Cases 2 and 3 to warrant separate treatments.)

Suppose one should believe the report in both Cases 2 and 3. Then the Case 1 intuition is undermined, because Case 3 and Case 1 do not seem to be importantly different: what if Bill Gates, after the draw, returns all of his tickets? Should we count this as Case 1 or Case 3? If we agree that one should believe the report in Case 3, why should we deny that one should believe it in Case 1? If the Case 1 intuition is thereby undermined, then there is little reason to worry over the original, huge-number case, because that case depends on essentially the same kind of intuition (both involve probability considerations).

On the other hand, suppose one should NOT believe the report in both Cases 2 and 3. It follows that one should not believe the report even in Case 2, and this is to say that one should not believe the NYT at all in lottery matters. If so, it comes as no surprise that one does not believe the NYT in more exotic kinds of lottery matters.

Yu,

That’s a really interesting set of cases to think about. I worry that my probability intuitions are pretty unreliable, and so I hesitate to say anything about your cases. I remember when I first found out about the Monty Hall problem (that’s what it’s called, right?) about switching one’s guess when one door has been opened. Boy was I thrown for a loop!

I guess I’m not confident about almost anything concerning your three cases. But right off the bat, I don’t think I’d disbelieve the NYT reports in either case 1 or 3. I’d withhold judgment, but I wouldn’t conclude that they got it wrong. At least, this is what I would do with my limited knowledge of probabilities. In my lottery described in earlier comments, I will disbelieve the NYT report that asserts all of (a)-(e), as I suspect the issues are simpler there.

Bryan,

As the case is described, I think that we don’t know whether to believe the report of the NYT or not. What we have is the testimony of the NYT to the effect that something (call it p) happened that is antecedently (that is, prior to the testimony of the NYT) highly improbable. What is in question is the probability of p after the report of the NYT. So, let e be the evidence beforehand about the fairness of the lottery etc., and let Pr(p/e) = very low. Let e’ be the testimony of the Times to the effect that p occurred. We cannot conclude that Pr(p/e & e’) = very low. We just don’t know.

It looks to me like your argument goes something like this: since Pr(p/e) is very low, and e’ is evidence for p, it is true that Pr(e’/e) = very low. That is, since the NYT is presenting evidence e’ for a highly improbable event p, we therefore have reason to believe that e’ is not much evidence at all for p (recall, you say, we have reason to believe it is a gag or joke, rather than evidence for p). But of course it does not follow that since Pr(p/e) is low, and e’ is evidence for p, Pr(e’/e) is low.

But once we recognize that e does not impugn the evidence from NYT we are left with the question of how strong the evidence is from NYT. We don’t know in the example. I say it could be strong enough for you to believe your (i).

Mike,

You say:

So, let e be the evidence beforehand about the fairness of the lottery etc., and let Pr(p / e)= very low. Let e’ be the testimony of the Times to the effect that p occurred. We cannot conclude that Pr(p/ e & e’) = very low. We just don’t know.

I don’t see why you think we don’t know that. Given the probabilities involved, we do know it. I agree that Pr(p/e & e’) is higher than Pr(p/e), but it’s easy to rig the numbers so we can see that both probabilities are extremely low. Is 10-to-the-billion not big enough for you?

I think the fact that the NYT says (a)-(e), is evidence for (a)-(e) (i.e., for (i)), since (in our story anyway) the fact that the NYT says anything is good evidence for that thing. But that evidence is reduced to mush by the evidence that (a)-(e) aren’t all true.
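One way to put Bryan’s point in numbers (a hedged sketch, not his own calculation; the error rate eps is an assumed figure): in log10 odds, a report can raise the prior odds by at most roughly the reciprocal of the reporter’s error rate.

```python
import math

# Posterior odds = prior odds * likelihood ratio of the report.
# With error rate eps, the likelihood ratio is at most (1 - eps)/eps.
def log10_posterior_odds(log10_prior_odds, eps):
    return log10_prior_odds + math.log10((1 - eps) / eps)

log10_prior = -(10**9)   # LLL's single ticket: 1 in 10**(10**9)
eps = 1e-9               # assumed NYT error rate: one in a billion

# The report buys about 9 orders of magnitude -- nowhere near enough:
print(log10_posterior_odds(log10_prior, eps))  # ~ 9 - 10**9
```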

‘But that evidence is reduced to mush by the evidence that (a)-(e) aren’t all true.’

Great. This is exactly the problem. Do you have an argument that it reduces to mush? What is being assumed (and this I alluded to above and am entirely indebted to David Johnson, Hume, Holism and Miracles, for the observation) is that the report of even so few as one genuinely reliable source cannot evidentially outweigh the admittedly considerable evidence against p. I don’t know of any argument that shows that that cannot happen or establishes why it cannot happen, even in the sort of case you’re discussing.

Mike,

I’m still not clear on this point you’re making. Let me try again (I’m doing my best, I promise) and you tell me what I’m missing!

At 6pm today the NYT website reports that at 5pm today a million hippos materialized out of thin air in the space of one second in Times Square. The report goes on to conjecture that the materialization might have happened as the result of a bizarre quantum accident. Maybe the article includes pictures, if you like. What do you conclude:

(v) the report is true,

(vi) the article is intended to fool readers or be otherwise humorous,

(vii) this isn’t really the NYT website I’m reading but some joke website, or someone’s hacked the NYT website,

(viii) this is the non-modified NYT website but they made an error.

If this really happens to me tonight, at 6pm, then I’ll reject (v) out of hand; and I can’t imagine any reason for doing anything else. It’d be ridiculous to accept (v). My reasoning: the other options ((vi)-(viii)) are much, much more probable than (v).

Now suppose I’m a NYT reporter so I can rule out (vi) and (vii). Now I’ll conclude that one of my colleagues, or a group of them, is either fooled or trying to put me on. If I were in Times Square myself, and had the sense perceptions as if I were witnessing this event, I might (I’m not as confident of this, but I don’t see how it matters to my lottery claims) not reject the idea that a million hippos materialized in front of me. Even so, it would take a lot more sense perception and input from other extremely reliable sources before I’d plump for (v).

Is the lottery case importantly different? (I assume they’re both physically possible.) I suppose that if we add on enough OUTLANDISH assumptions (I run the lottery myself, only one ticket was bought, the computer program is amazingly simple, I built the computer from scratch and no one knew about it, etc.), we can construct a scenario in which I will plump for (i), the claim that (a)-(e) are true.

(Actually, I can’t imagine how to do this. Eventually, I would start thinking that LLL had magical powers. But maybe that’s just an embarrassing fact about me. I’ll let it go for the sake of argument.)

But that’s going to be pretty hard! And it amounts to changing the subject, as the way we set up the lottery scenario before there were no such outlandish assumptions. Am I still missing something?

Say the NY Times’ reliability was like this. If LLL wins, there’s a 1/1,000,000,000 chance it won’t report that he won. If he doesn’t win, there’s a 1/1,000,000,000 chance it’ll falsely report he won. Given this assumption, we can calculate by Bayes’ theorem the probability that LLL did win given that the Times reported he won. It’s this:

999,999,999/(999,999,998 + 10 to the billion)

That’s a tiny, tiny, tiny number. And the assumption about the Times’ reliability was quite generous I would’ve thought. (And you can decrease the chance of error considerably and still get a really tiny number for the probability that LLL actually won.) Doesn’t this show that we shouldn’t believe that LLL won after reading the report?

Should we believe that there is some winner after reading the report? The probability’s going to be higher for that, but it’s still going to be really low, so it seems the answer’s no.
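This Bayes calculation can be checked with exact rational arithmetic. A billion-digit-odds number is too large to compute with directly, so the sketch below substitutes stand-in odds of 10-to-the-100 to 1 (that substitution is my assumption, purely for illustration); the structure of the theorem is unchanged:

```python
from fractions import Fraction

def posterior_win(odds_against, error_rate):
    """P(LLL won | Times reports a win), by Bayes' theorem."""
    p = Fraction(1, odds_against)     # prior probability that LLL wins
    report_if_win = 1 - error_rate    # Times reports a win that happened
    report_if_loss = error_rate       # Times falsely reports a win
    return (report_if_win * p) / (report_if_win * p + report_if_loss * (1 - p))

# Stand-in odds of 10**100 to 1 (10**(10**9) is too large to manipulate);
# the Times is assumed to err only one time in a billion.
post = posterior_win(10**100, Fraction(1, 10**9))
print(float(post))  # still astronomically small, on the order of 10**-91
```

By contrast, the same function applied to an ordinary million-to-one lottery, posterior_win(10**6, Fraction(1, 10**9)), comes out around 0.999, which is why believing routine lottery reports is unproblematic: the paper’s error rate swamps the base rate in one direction for ordinary lotteries and in the other direction for the extreme one.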

If this really happens to me tonight, at 6pm, then I’ll reject (v) out of hand; and I can’t imagine any reason for doing anything else.

It is not at all obvious that you should do that. Or rather, it is not at all obvious that you should do that under the assumption that the person who testifies to the event is antecedently (as Johnson says) “. . .serious, sober, capable, honorable and sincere. . .” The evidence of the testimony of that person so characterized might outweigh (I know you find this wildly counterintuitive) the counterevidence, as great as it is. But there is no argument–or I know of none–that this cannot happen. Let me commend to you Johnson’s very brief book published with Cornell in 1999. The point I’ve been making (or trying to) is his. And he does it briefly and I think compellingly.

Dylan, thanks, that’s interesting.

I don’t know, is that really reliable? I guess I can’t tell. What is that reliability compared to a reliability of getting it wrong 1/10 to the billionth? Seems comparatively unreliable. So I’m not sure what the numbers mean here.

On the other hand, I’ve no doubt that we can assign values to get either result.

Mike,

It’s very reliable. I was imagining that the Times only makes a false report one out of every billion times. That’s pretty good, right? And the point is that you could increase the imagined reliability a lot before the probability that LLL won on the condition that the Times reported that he won stopped being a tiny, tiny number.

Of course, the newspaper that only makes a mistake one out of every 10 to the billion times is even better, but that doesn’t change the fact that the newspaper I was imagining was really reliable.

Mike,

I still think you’re wrong. I mean c’mon! If you had read in the NYT that a million hippos, for God’s sake, instantaneously materialized in Times Square, and your best and most trustworthy friend was the NYT reporter, you sure as hell wouldn’t believe him. You would reject what he said as false and then you’d fish around for explanations, such as the possibility that he is part of a grand joke. It hardly matters that your friend is known by you to be generally serious, sober, etc. In fact, even if you could somehow peer into his mind to conclusively establish that he wasn’t trying to fool you, you’d conclude that he had been fooled, or maybe coerced. A million hippos materializing out of thin air in Times Square? How about a billion, all with altered physiology and singing “Helter Skelter” by the Beatles? What’s more probable: someone’s trying very hard to fool you or there was the absurd quantum accident or you’re going nuts? You can try to develop the story in such a way that accepting the report is a live option, but I don’t see how this has anything to do with the scenario as Jon and I set it up.

Here is the main issue these reflections bring up, at least by my dim lights:

I take it that most epistemologists have assumed this is false:

(M): Assuming LLL didn’t win, I know that: if the lottery is fair and LLL bought just one ticket and only 100,000 tickets were sold and the odds of any given ticket winning were a MILLION to 1, then LLL didn’t win. And since I can come to know the antecedent, I can come to know the consequent: I know that LLL didn’t win.

I’m omitting all the stuff about reports as to winners. The knowledge in question is supposed to be had after the drawing has been held but before the winning number has been announced. I take it that the philosophers who have rejected (M) have implied or said that increasing numbers won’t change things (someone please correct me if I’m wrong). Yet the extreme lottery case (again omitting the stuff about reports, which I think Jon was interested in) seems to show that this is true, letting one gazillion = 10-to-the-billion:

(G): Assuming LLL didn’t win, I know that: if the lottery is fair and LLL bought just one ticket and only 100,000 tickets were sold and the odds of any given ticket winning were a GAZILLION to 1, then LLL didn’t win. And since I can come to know the antecedent, I can come to know the consequent: I know that LLL didn’t win.

So the task for those philosophers is to see how (G) can be true while (M) is false. But that looks pretty hard, for the arguments against (M) look just as good when applied to (G). That’s one of the things I find puzzling about this stuff. And this is a problem for everyone who thinks (G) is true, as substituting ’40’ for ‘million’ in (M) generates a claim that is clearly false. One is immediately reminded of sorites beliefs.

Hi Bryan,

I think there is an important disanalogy between the hippo case and the original lottery case (at least if the lucky number is deterministically generated): we can have a pretty good sense of how the improbable events in question are bound to OCCUR in the lottery case, whereas we cannot in the hippo case. That is, the fact that the computer program generates the number it actually does, say, 987987, (whose probability is very low), can conceivably be shown to follow deterministically from the initial seed and the computational algorithms of the program that generates it. And the fact that LLL bought the no. 987987 ticket is also something that can be so explained. What is implausible is not the OCCURRENCE of the event of 987987 being picked out as the lucky number, nor the OCCURRENCE of LLL buying the no. 987987 ticket. What is implausible, it seems to me, is their COINCIDENCE.

Now, I think the implausibility that you attribute to the hippo case is of a very different nature: the hippo case is implausible not because the COINCIDENCE of one event with another is implausible, but because the mere OCCURRENCE of a certain event is implausible. How could a million hippos materialize out of thin air? No explanation of the occurrence of this event is available, I think. So I’m not sure it’s fair to compare the hippo case to the lottery case, because even though they are both implausible, they are so for very different reasons.

On the other hand, it’s true that the “random number” generated by deterministic methods cannot be regarded as really random. Perhaps we need a lottery whose randomness could be traced to laws of quantum mechanics. Even so, I think my initial point still holds: the implausibility of the lottery case is due to the coincidence of a certain number being picked out with LLL’s buying a ticket of that number, whereas the implausibility of the hippo case is due to the very occurrence of hippos materializing out of thin air.

When discussing this post with my roommates tonight, I thought of an analogous situation that helped me get my mind around these huge numbers. It brought me from withholding to denying that LLL won, so I’ll try it out on you guys; maybe it will help some of you.

So imagine that you lined up everyone who lived in India (call it a billion people) and assigned each a digit from 0 to 9. Then you took the entire population of Helena, MT (roughly 100,000 people) to India and had each Helena citizen guess the digit of each Indian in turn. Approximately 10,000 of them would correctly guess the digit of the first Indian. Approximately 1,000 would correctly guess the first two. Approximately 100 would correctly guess the first three. About 10 would get the first four, and one would get the first five. That then leaves 999,999,995 Indian citizens, each assigned a digit between 0 and 9, for the remaining Helena citizen to guess right.

That’s about as close to impossible odds as I can think of. It seems far, far, far more likely that the NYT made an (uncharacteristic?) error.
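The expected counts in this analogy are just 100,000 divided by successive powers of ten. A quick sketch, using the comment’s numbers and its simplifying assumption of an Indian population of exactly one billion:

```python
# Expected number of the 100,000 Helena guessers whose first k
# digit guesses (each a 1-in-10 shot) are all correct.
guessers = 100_000
for k in range(1, 6):
    print(k, guessers // 10**k)

# After five correct guesses a single guesser remains, facing
# 1_000_000_000 - 5 == 999_999_995 digits still to get right.
remaining_digits = 1_000_000_000 - 5
```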

Perhaps we can settle the debate between Mike and Bryan by narrowing down what reliability is.

Let this improbable lottery be called the Gazillion Lottery, and now ask the narrow question of whether the NYT reliably reports winners and winning numbers from the Gazillion Lottery. If the NYT is veritably reliable for reporting winners and winning numbers from the Gazillion Lottery, then we definitely come to know on the grounds of the NYT report that LLL is the winner.

Perhaps, Bryan, you are arguing that the NYT is not and maybe could not be a reliable reporter for the Gazillion Lottery; there are good arguments to this conclusion. I am inclined to believe that it does not really matter how reliable the NYT is in reporting other news; this will not make it a reliable source for winning Gazillion Lottery numbers and winners. But if this is your argument–which I am sympathetic to–then it is merely an argument that in the narrow case of the Gazillion Lottery, the NYT is not reliable.

Jack,

I’d put it a little differently. I’d say the reliability of the Times is purely a matter of how frequently it makes mistakes. Given that such-and-such has happened with the Lottery, what’s the probability that the Times will report that such-and-such has happened?

But then say the Times is highly reliable about lotteries–it rarely makes mistakes. That doesn’t mean we should believe what it says. To determine how likely it is that a particular ticket actually won, given the fact that the Times reported that it won, we have to take into consideration not only how reliable the Times is, but we also have to consider the base rate. In the Gazillion Lottery the base rate is so freaky that we shouldn’t believe the Times even though the Times is highly reliable.

What do you think?

I still think you’re wrong. I mean c’mon! If you had read in the NYT that a million hippos, for God’s sake, instantaneously materialized in Times Square, and your best and most trustworthy friend was the NYT reporter, you sure as hell wouldn’t believe him.

We’re assuming that such an event is possible, right? No laws would be violated, etc. Let’s assume that either that’s so or some other possible-yet-highly-improbable event M allegedly happened and my best, most reliable friend testified to M. Let t(M) be that testimony to M’s occurrence. So what we want to know is the value of this,

1. Pr(M /t(M))

Now you say that the value of Pr(M & t(M)) is extremely low. I agree. That prior probability is low. But it is very important here that the value of Pr(M & t(M)) is low against a background of information that does not include the fact that my friend testified to M. Otherwise, you’re just assuming that t(M) constitutes little or no evidence for M. You’d be assuming that M is extremely improbable despite t(M). But that’s just the question, isn’t it, how much evidence is there for M in t(M)? So the background information cannot include the fact that my friend testified to M. But this of course means that the conditional probability Pr(M/t(M)) also exists only relative to background information that does not include knowledge that t(M) holds. So Pr(M/t(M)) is not strictly a posterior probability for M. After all,

2. Pr(M/t(M)) = Pr(M & t(M)) / [Pr(M & t(M)) + Pr(~M & t(M))]

and all of the values on the right are given against a background of information that does not include the fact that my friend testified to M. That is why the value for Pr(M & t(M)) is reasonably put so low. Given that background information, I agree that n = Pr(~M & t(M)) > Pr(M & t(M)) = m, and Pr(M/t(M)) is less than .5. You should not believe it. But that’s hardly the end of the story. What happens when I update my beliefs?

3. Pr1(M) = Pr(M/ t(M))

I update as in (3) only when I “learn for sure”, as the Bayesians say, that t(M) and “nothing stronger than” t(M).

But when I learn for sure that t(M) we can’t without begging the question just assume that I do not also learn that M. All of the evidence we have against Pr(M & t(M)) comes before the actual testimony. I cannot rely on that probability when updating after the testimony. So we do not know that Pr1(M) = Pr(M/t(M)). If I can’t learn that M occurred when a very reliable, sober, sincere, honorable etc., friend reports it to me, when can I learn it?

Jon posed two questions:

(i) Should you believe the report that the winning number is n?

(ii) Should you believe that L holds a ticket with that number?

I doubt either is nomologically possible. n has 10 to the billion significant digits, and I am confident we have no way to determine that n denotes a particular number of that size. We use numbers of that size to talk about classes…actually, to approximate classes, which is to say to approximate the number of significant digits…not to denote specific integers.

So, I cannot conceive of an uncertainty mechanism that generates draws on this lottery, or procedures to record and communicate the outcome, or a machine or person capable of understanding what integer ‘n’ denotes.

You must make a decision of whether you will allow these points to be abstracted out of the problem. If so, then you have a report of a very unlikely event. And surely the Science section reports very unlikely events all the time. And then the problem is to explain why we should do this, or under what conditions we should do this, and dispense with this setup.

But if the point is to take this example at face value, then we must deal directly with the incomprehensibility of numbers of this size, how much larger they are than any of our methods of measurement can even begin to handle. Talk of ‘reliability’ and probability assessments is predicated on the assumption that this series of measurement problems has been solved.

Greg,

The number n has just a billion digits, each digit ‘0’ to ‘9’; you seemed to be saying there were 10-to-the-billion digits. I agree that there are problems with the set-up (I discussed them in one of the early comments above), but there is no difficulty with printing the winning number in a newspaper as long as you are willing to use about 100,000 pages of newspaper (assuming 10,000 characters (not words) per large newspaper page)!
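Bryan’s page estimate is simple arithmetic, given his assumed capacity of 10,000 characters per large newspaper page:

```python
digits = 10**9            # the winning number has a billion digits
chars_per_page = 10_000   # assumed capacity of one large newspaper page
pages = digits // chars_per_page
print(pages)  # 100000 pages of newspaper
```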

Mike,

I’m still not seeing my way to accepting what you seem to be saying. You wrote

If I can’t learn that M occurred when a very reliable, sober, sincere, honorable etc., friend reports it to me, when can I learn it?

I have tried to describe circumstances under which one MIGHT learn M, that a billion “Helter Skelter” hippos had materialized in Times Square. But I continue to be mystified as to why you think that you could learn that fact from testimony along with information about the person’s reliability, sincerity, etc. You wrote

But when I learn for sure that t(M) we can’t without begging the question just assume that I do not also learn that M.

However, I’ve taken into account the fact that we know that t(M) and that the testimony is coming from your most trustworthy, sincere, intelligent, sober, etc. friend. Your friend Jan says that a billion “Helter Skelter” hippos materialized in Times Square. At this point I can conclude any of the following, where I put in the good testimony facts:

1. Jan is right, and a billion “Helter Skelter” hippos materialized in Times Square.

2. There are no such hippos and Jan is trying to fool me for some reason, even though she is generally honorable, reliable, sober, not at all prone to practical jokes, and seems right now to be entirely serious.

3. There are no such hippos, Jan really believes her story, but Jan has been fooled by someone or some group of people, even though she is very intelligent, not easily fooled, reliable, wise, and sober.

4. There are no such hippos and Jan has gone nuts, even though up until now she has always been completely sane.

I don’t see ANY reason for choosing (1) over any of the others. I don’t see ANY reason for thinking that Pr(M/ t(M) & the testifier has always been extremely trustworthy, wise, etc. and seems perfectly sincere right now) is anything other than absurdly small. I definitely know that in real life I would initially choose (2) as the most likely possibility. Wouldn’t you? You seem to be arguing otherwise.

I see that Mike’s October 28th, 2006 at 2:09 pm comment is numbered 20,000. Does that mean that CD has had 20,000 comments thus far?!

I see. So we are talking about 10 to power of 10, not 10 to the power of 1 billion.

No. The simple lottery: each ticket has a six-digit number, 000,000 to 999,999. The odds of it winning are one in 10-to-the-6. Our lottery: each ticket has a billion-digit number. The odds of it winning are 1 in 10-to-the-billion.

AhhHHHhh. Ok. I can model that. And I mistyped: I meant 10 to the 9th, not 10 billion.

…and I am confident that there are methods for confirming the report the stochastic model generates, and methods for confirming L’s ticket is the winning ticket. So, to (i) and (ii): Sure. Why not?

Greg,

We could confirm whether LLL won, then we’d know. But the original question was whether we can know when all we have is the Times report, not the Times report plus some additional evidence.

My thinking is this: first I have to understand how to model this problem, which I think I do now, and whether there is a method to generate a report with any confidence. (And there is one.) Then, seeing how to do this, I assume that a reputable paper will want to know how the people running the lottery go about controlling error when determining the winner. (There would be a good story if the organizers blew it.) I’m not sure I would be confident reading the number myself, but I would be confident reading that L won, just as I’d be confident reading that there was a winning ticket.

And I would assume, too, that such a lottery would issue alpha-numeric keys to each ticket buyer, to compress the information necessary to identify individual tickets, so that individuals could reasonably check for themselves.

To me this looks like an engineering problem, not a philosophy problem.

Hi Greg,

This is an epistemological problem. You read the following in the NYT:

(a) LLL won the lottery. (b) LLL bought just one ticket. (c) The odds of LLL’s ticket winning were 10-to-the-billion to 1. (d) It was a fair lottery. (e) Just 100,000 tickets were sold.

We’re assuming that (b)-(e) are true; we make no assumption about whether they are known or whether (a) is true. The main initial questions are these:

1. For someone with no special knowledge (e.g. so omit LLL, the lottery officials, etc.) who appreciates the odds, is it reasonable to believe the report, thereby coming to believe that (a)-(e) are true?

2. For someone with no special knowledge (e.g. so omit LLL, the lottery officials, etc.) who appreciates the odds, if not all of (a)-(e) are true, can she know, merely on basis of her knowledge of probabilities, that not all of (a)-(e) are true?

I suspect the answers are ‘no’ and ‘yes’. Of course, these initial questions raise further questions, the ones brought up in the comments.

No, it is an engineering problem. Nobody makes a probability assessment without conditioning also on background assumptions, which you can pin down in your description. There exists a way for a lottery like this to run, and for the Times to accurately report the result.

Don’t conflate error probabilities with low priors.

Bryan,

You are unrelenting! You say,

I don’t see ANY reason for thinking that Pr(M/ t(M) & the testifier has always been extremely trustworthy, wise, etc. and seems perfectly sincere right now) is anything other than absurdly small.

I think there is a point we agree on. Prior to the report of my friend the value of Pr(M & t(M)) is extremely low, infinitesimal if you like. And you can use that information in obvious Bayesian ways to reach the conclusion that Pr(M/t(M)) is extremely low. Hey, there’s two things we agree on.

But what you seem to insist on is that the evidence I have after my friend reports to me what happened does not significantly affect the value of Pr(M/t(M)). Now once again: I am in complete agreement that antecedently (i.e. prior to the testimony) I would expect the probability of the conjunction of that testimony t(M) and M to be very low. But, now that he has testified, how do you know that his testimony did not significantly affect the value of Pr(M/t(M))? You’re still relying on your antecedent probabilities, aren’t you? C’mon, admit it. That is where the question is begged. You refuse to move beyond the antecedent probability for Pr(M & t(M)) and add the evidence of his testimony to what we know. You might say “I’ve added his testimony, and it wasn’t good”. But how on earth do you know that? All you know is that you didn’t expect it to be good on evidence that is now outdated; it is evidence from before his testimony. This is a bit like having heaps of evidence that Smith is a poor speaker and insisting, post speech, that it really couldn’t have gone very well, since that is the only outcome consistent with our antecedent expectations.

Bryan (42), I have a couple of questions now.

First: in Jon’s statement of the problem, the newspaper prints what it says is the winning number. In your description you have omitted that part. It seems relevant to me.

Second, do you think it’s relevant that there were only 100,000 tickets purchased? It doesn’t seem like Lucky Louis would be more likely to win were there more tickets sold; why then would it be relevant?

As to the proper epistemological question, which I haven’t really been thinking about, I’ll bet there’s a difference here between whether we can know who the winner is based on the testimony and whether we are willing to say that we know who the winner is. That’s my hunch.

Folks,

Can anybody help me see what (if anything, other than vulnerability to a familiar charge of epistemic circularity) is wrong with the argument below? (I assume that epistemic circularity is beside the point here.) Thanks!

1. I see that the NYT reports that LLL has won the Gazillion Lottery.

2. Much of what I know I’ve known through testimony from NYT reports.

3. The NYT has been extremely (though fallibly) reliable in the past when reporting about extremely unlikely events. (Every time it reported on some extremely unlikely event, not a shred of counterevidence ever emerged.)

4. This is the most unlikely event in the history of NYT reporting. (It’s extremely unlikely that there is a winner for a lottery of that size with so few tickets sold, even more unlikely than the other fantastically unlikely events that the NYT has reliably reported on, like the winning numbers of thousands of ordinary lotteries.)

5. But, for anything I may believe on the basis of my (fallibly) reliable perceptual and inferential capabilities, I should still think that the combined reliability of NYT reporters far outweighs my own. (I have every reason to think that their fact-checking is much more reliable than my own, since their combined perceptual and inferential capabilities must be more reliable than my own. They have all the most trustworthy experts at their command, etc.)

6. If I were to witness an absolutely fantastic event, I would (normally) trust my eyes. (Much of what I think I know *is* fantastically improbable, that is, *objectively* highly unlikely. For instance, for each individual part of the computer I see in front of me, the odds of its being combined with all of the other parts that make up this particular piece of machinery must be infinitesimal (on reasonable assumptions about how the parts are produced and assembled). And yet, I have every reason to think I know that it has been combined with the other parts. My entitlement comes from the exercise of my reliable perceptual and inferential capabilities (or from my beliefs about their reliability, or from my justified beliefs about their reliability; I don’t presently care). LLL’s win is only objectively improbable. Insofar as I can know anything that is objectively improbable, I certainly can know that LLL has won the Gazillion Lottery.)

7. So, LLL has won the Gazillion Lottery.

I think we have to ask ourselves what we would ACTUALLY do if our most trustworthy, wise, sincere, etc. friend approached us with her story of the billion “Helter Skelter” hippos. There’s no frickin’ way I’d believe her. I’d be an idiot to do so. I’d immediately think she is trying to fool me for a joke–even if she had never tried to do so before. I think that’s obvious. And if I then convinced myself that she was being sincere in her report, the next thing I’d think is that either she’s ill or someone has really fooled her. Actually, I’d think she’s ill. I have a hard time believing that anyone is disagreeing with those claims.

When it comes to the NYT reporting the hippo story, things are only slightly different. What will you ACTUALLY do upon reading the report? Answer: you’ll think it’s a gag paper, not the NYT. Or, if you find out it’s the real NYT, you’ll conclude that either some really embarrassing mistake has been made (e.g., a gag report was accidentally printed as legit) or the report is a joke of some kind. Or, if you find out that no such mistake has been made and the report isn’t a joke, then you’ll conclude that the NYT reporter is a gullible soul who has been fooled; or perhaps he’s ill and somehow got his deranged report published anyway; or perhaps he is about to be fired and he’s exacting some revenge. Why on earth would anyone conclude that the report is true?? I submit that in any actual situation none of us would accept the billion hippos story on the basis of testimony from either the NYT or our best friend.

Before your trustworthy, wise, sincere friend approaches you with her billion “Helter Skelter” hippos story you pass the time by asking yourself strange questions about the odds that a billion hippos could suddenly materialize in Times Square all singing “Helter Skelter”. You will say:

A. The probability that the hippo story is true, is infinitesimal.

You will also say:

B. The probability that the hippo story is true and someone testifies that it’s true, is infinitesimal.

I am also going to say this:

C. The probability that the hippo story is true GIVEN that someone has testified that it’s true, is infinitesimal–although a “bigger” infinitesimal than the first two infinitesimals.

And I’ll say both of these as well:

D. The probability that the hippo story is true GIVEN that someone testifies that it’s true and that person is my best, most trustworthy, wise, upstanding, sincere friend who appears perfectly sane and sincere and who is not prone to practical jokes, is infinitesimal although a “bigger” infinitesimal than the first three infinitesimals.

E. The probability that the testifier is ill, being humorous, has made a printing error, is being vindictive, etc. GIVEN that she testified that the hippo story is true and that person is my best, most trustworthy, wise, upstanding, sincere friend who appears perfectly sane and sincere and who is not prone to practical jokes, is just under one.

I don’t like relying heavily on possible world claims in epistemology, but here’s a rough way to think of why I accept (D) & (E):

Take 100,000 possible worlds in which someone testifies that the billion “Helter Skelter” hippos story is true and that person is my best, most trustworthy, wise, upstanding, sincere friend who appears perfectly sane and sincere and who is not prone to practical jokes. In 99,999 of them, the hippo story is false and the correct explanation of why the testifier said what she said appeals to things like jokes, illnesses, printing errors, etc.

Now my wonderful friend Jan really does come up to me and spouts the hippo story. I can’t imagine that that would change anything! Now I know that the actual world is among those 100,000 possible worlds. This is very interesting, as before I heard the testimony I would have put the odds that Jan would ever testify to a story like the hippo one to be vanishingly small. Even so, I’d be a fool to think that she’s right.
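Bryan’s possible-worlds tally amounts to reading the conditional probability straight off a frequency; the 100,000-world figure is his illustrative stipulation, not a computed value. A minimal sketch of that reading:

```python
from fractions import Fraction

# Bryan's stipulated frequencies: among 100,000 possible worlds in which
# the trusted friend testifies to the hippo story, it is true in just one.
testimony_worlds = 100_000
true_story_worlds = 1

# His (D): the story given the testimony is still a vanishing probability.
p_story_given_testimony = Fraction(true_story_worlds, testimony_worlds)
# His (E): some alternative explanation (joke, illness, error) is just under one.
p_other_explanation = 1 - p_story_given_testimony

print(p_story_given_testimony, p_other_explanation)  # 1/100000 99999/100000
```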

I agree that the hippo story is different in lots of ways from the gazillion lottery story. But the probabilities are comparable in the sense that I’ll accept the analogues of (A)-(E) and I’d be a fool to conclude the report that LLL won is true. All the same alternative explanations–regarding humor, revenge, illness, coercion, printing errors, etc.–are still in force and each is much more likely as an explanation of why the NYT lottery reporter is saying what he’s saying (compared to the explanation that goes ‘He is saying this because he has witnessed the drawing, or has somehow else learned that it’s true’).

So what the hell am I missing now? Does anyone REALLY want to claim that in the scenarios described they would accept the hippo story as true? That they wouldn’t opt for one of the other explanations? And the same for the lottery case?

NB I want everyone to keep in mind that I know nothing about probability theory. I was good in mathematics, I even got an MA in physics, but I’ve always been hopeless at probability theory and have never studied any formal epistemology. So, take everything I say about it with heaps of salt.

Bryan, I do think you’re basically right in [48].

On my office door, I have a newspaper clipping from The Australian, a highly reliable newspaper, the ‘newspaper of record’ in Australia. The article says that pi has been shown to be a rational number. ‘Scientists’ calculated it with a supercomputer, and it turns out to terminate after some large and impressive-looking number of digits.

I knew this report was incorrect when I read it. The article is not a joke (the reporters and editors were hoodwinked). The story of Lucky Louie can be similar; it depends on the details.

Thanks Jamie. It’s nice when real life creeps in.

I think we have to ask ourselves what we would ACTUALLY do if our most trustworthy, wise, sincere, etc. friend approached us with her story of the billion “Helter Skelter” hippos. There’s no frickin’ way I’d believe her.

Fine with me. But I doubt the methodology is helpful. I mean, unless it is a rule for you, exceptionlessly observed, that what you would do is just what you should do. It isn’t for me, so what I would actually do doesn’t show much about what I ought to do.

D. The probability that the hippo story is true GIVEN that someone testifies that it’s true and that person is my best, most trustworthy, wise, upstanding, sincere friend who appears perfectly sane and sincere and who is not prone to practical jokes, is infinitesimal, although a “bigger” infinitesimal than the first three infinitesimals.

I see this as question-begging. But we’ve been here before, I believe. You offer on the one hand that the testimony is from a sincere person, not prone to practical jokes, etc., but add that probably the testifier is ill, being humorous, etc. But that just begs the question about the evidential value of her testimony.

I think one obstacle to reaching some agreement has to do with the Hippos example itself. When I consider it, it seems nothing less than impossible. Suppose we imagine instead oceans upon oceans filled with trillions and trillions of ping pong balls (make the number and oceans as large as you like). And suppose that these oceans have been sampled for millions and millions of years in various places, depths, etc., and without exception no one has found a red ball. Every sample has been white. Again, make it as improbable as you like (short of impossible) to find a red ball.

Now suppose your sober, sincere, honorable, intelligent friend who is not prone to practical jokes, tells you in a sober, sincere, honorable, non-joking manner that she went down to the ocean, selected a ball, examined it under different lighting conditions, and it was red. It’s after all, not impossible to find a red ball, it is just extremely, extremely, improbable (as improbable as you like). I can’t see it as anything but question-begging not to take that testimony as weighty evidence that she actually found a red ball. I mean long before you observe it yourself.

Mike, maybe this will help.

People make mistakes when they identify the colors of ping pong balls. When someone examines a ping pong ball under different lighting conditions, and is really interested in knowing what color it is, a mistake is very unlikely. But it isn’t impossible.

Now let’s say that the chance of ping pong ball color misidentification under such circumstances is &epsilon, just to give it a name. To be very specific, the chance that a person like your friend will misidentify a non-red ball as red is &epsilon. (That’s not the chance that a particular testimonial is incorrect, of course — that would be the converse conditional probability.)

Can’t we rig the ping pong ball sampling space so that the chance of your friend finding a red ball is &epsilon/1000 ? You say we can make it as improbable as we like, so I suppose we can do that.

If we can, then when your friend tells you she’s found a red ball, the chance that she really has (the posterior chance, conditionalizing on her testimony) is only 1/1000. That follows from the rest. So it isn’t question-begging.

Grrrr. That “&epsilon” looked right in the Preview but doesn’t work in the actual comment. As you can guess, it’s supposed to look like a lowercase epsilon. I’ll assume, out of laziness, that it’s comprehensible.

A question: Just because a certain number, n, is ‘absurdly large’ does it follow that an event that has probability of 1/n is absurd? Isn’t there a difference between the former and the latter use of ‘absurd’ here?

I mean, why is it that one would reject the report outright in the hippo case? I’d say that’s largely because the event that is being reported, i.e., ‘Helter Skelter’ hippos materializing out of thin air, is ABSURD. It doesn’t make sense, and presumably it is in principle unintelligible (That is, if the event is a quantum accident, I don’t think it makes sense to ask questions like: Why was it a million hippos that materialized, rather than a million rhinoceroses? Why was it ‘Helter Skelter’ that they sang, rather than ‘The Ballad Of John & Yoko’? Such questions wouldn’t make sense because, assuming that aleatory probability cannot be reduced to epistemic probability, the only answer that could be given would be something like ‘That’s just what happened’.) And I agree that one would be a fool to believe merely on somebody’s say-so something that’s absurd and in principle unintelligible.

But I’m not sure that it’s proper to regard the event of LLL winning the lottery as ABSURD. Perhaps this is what as a matter of fact one intuitively does, but it doesn’t follow that one’s justified in doing so. Perhaps the apparent absurdity of LLL winning the lottery is really due to one’s inability to clearly conceive of the unearthly number in question. Or perhaps one’s intuition is such that one starts to confuse improbability and absurdity whenever the event in question is ‘absurdly improbable’ or has ‘absurdly low’ probability (perhaps this is linguistic evidence that the two concepts are closely intertwined?). So maybe the intuition (if one has it at all) that LLL’s winning is absurd, or (equivalently) that one would be a fool to believe it, is really illegitimate and should be trumped. Just because a certain number, n, is ‘absurdly large’ doesn’t mean that an event that has probability of 1/n is absurd.

When someone examines a ping pong ball under different lighting conditions, and is really interested in knowing what color it is, a mistake is very unlikely. But it isn’t impossible.

Take my own case after the ball has been selected and I’m observing it. If I am observing the ball under different conditions and it looks clearly and unambiguously red, and I am rested, sober, etc., then I think the chances that I am right are close to certain about that particular ball. The same in your own case. And I’ve no doubt that your betting behavior on the color of that ball would reflect that fact. And that is consistent with holding that there is a small chance that I am mistaken.

So far, so good!

Now add this information: the error rate (misidentifying a non-red ball as red) in this situation is epsilon. (By this, again, I mean the chance that it looks red given that it isn’t.)

Then add this information: the prior probability of finding a red ball is epsilon/1000.

Update on that information. How certain are you now that the ball is red?

The chances that it looks red but isn’t, I’m assuming, are based on the near-red, vaguely red, indeterminately red, etc. non-red balls in the pool, or on the near-red, vaguely red, indeterminately red experiences of non-red balls in the pool. But as far as I can see, a ball that is disposed to produce the sorts of clear, unambiguously red experiences I am having, under the conditions I am having them, is red. So I agree that there are red balls in the pool I’d be less certain of. And so I do have a chance of mistaking a non-red ball for red. That’s compatible with my claim that I do not have a chance of mistaking a non-red ball for THIS PARTICULAR unambiguously and clearly red ball under the normal and (even somewhat idealizing) conditions specified. Certainly the chance of error in observation we are discussing is consistent with no chance of error in certain particular cases.

Huh. So you mean your example to be one in which the chance of error on this occasion is zero. You hadn’t made that clear (when you said “close to certain” in [54] I thought that meant a non-zero chance of being mistaken).

In the newspaper/lottery example, the chance of error is not zero. So I don’t see how your example illuminates the newspaper/lottery example.

Let X be a random variable taking values V = {0,1,2,3,4,5,6,7,8,9}. Define a sequence of assignments for X, (X_1, …, X_10^9), such that each X_i is assigned a value from V at random. Run this algorithm once. Name the output sequence n. n is your winning ticket.

The next step is to confirm whether a particular ticket matches n. This can be treated as a measurement problem: for any match, run the matching algorithm m times. If the ticket and n match on all m trials, you essentially are betting that this is the winning ticket against drawing n on m independent trials, which can be made as absurdly improbable as you want by making m as large as you want. This is what grounds the claim that L’s ticket wins the lottery. We can do the same trick to settle (i) by writing n to m outputs and comparing the m outputs, which would give us confidence that the machine did not err in writing the winning ticket. It is easy to conceive of a lottery that would go beyond our technical abilities to control error, but this example doesn’t appear to be one.

The Times report will have error rates orders of magnitude higher than the events it is reporting here. And this is completely irrelevant, since it can easily run a correction. In fact it will be much easier to run a correction in this case than in most of the other errors it commits. There is absolutely no epistemological problem here. On the contrary, this would be one of the most rock-solid facts the Times could report on.

Well, let’s see. I tried to explain how I could mistake a white ball for red. I agree that I can do that, and said so. I said that that is consistent with there being particular observations concerning which I could not be mistaken. But I didn’t have to make the probability of not being mistaken 1 in the case I presented. I could have made it less than 1, and the point would still stand. It might be the only ball in the giant bin that I could be nearly certain about, whereas I am much less certain about many other indeterminately red/white balls. The chances that I mistake a white ball for a red one would still be N, but the chances that I mistake a white ball for THAT particular red ball would be almost nil. I frankly doubt that anyone under the specified circumstances could mistake a clearly white ball for a clearly red one. Speaking of unrealistic. Not unless we start running some sort of skeptical hypothesis.

What the example illustrates is a case where the antecedent probability of being correct in discovering a red ball is extremely low but the posterior probability is very high. Much of the discussion in the previous cases seemed to me to beg the question in this way: i.e., because there was so much evidence against choosing a red ball and so little for it (the report of a single person) the evidence against it must outweigh the evidence for it. Or, in the NYT case, the assumption that the case under consideration was not one in which, unlike many other more or less similar cases, it was unlikely that NYT was wrong.
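Greg’s verification scheme from a few comments back can be sketched at toy scale. This is only an illustration: the function names are mine, the ticket is 20 digits rather than a billion, and the idealization that the m comparison trials can err independently is taken over from his description.

```python
import random

DIGITS = "0123456789"

def draw_winning_number(length):
    # One run of the algorithm: each X_i is drawn uniformly from V = {0,...,9}.
    return "".join(random.choice(DIGITS) for _ in range(length))

def matches(ticket, n):
    # The matching algorithm: a digit-by-digit comparison.
    return ticket == n

def confirm_match(ticket, n, m):
    # Re-run the matching algorithm m times. On the idealization that each
    # trial errs independently, m confirmations drive the probability of a
    # spurious match as low as we like.
    return all(matches(ticket, n) for _ in range(m))

n = draw_winning_number(20)
print(confirm_match(n, n, 5))   # the winning ticket itself confirms: True
```

The same repetition trick covers the write-out step: produce m copies of n and check them against one another before declaring the drawing recorded correctly.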

Ah, good. So let’s go back to that case, shall we? Your chance of being right about this particular ball, given the evidence that you have, is less than 1. Then we can call it 1-epsilon.

Now I can repeat the rest of my point very simply.

Add this information:

The prior probability of finding a red ball is epsilon/1000. Update on that information. How certain are you now that the ball is red?

Hi again!

Greg,

When you wrote ‘This is absolutely not an epistemological problem’ what you wrote was probably true. The problem is this: you’re not focusing on the problem that the rest of us are focusing on. We’re not focusing on anything like this:

Can one determine whether a given ticket has the same number as the winning number? And could the NYT accurately report whether or not someone had the winning number?

That does look like a technological problem (although see my final thought below). Instead, our problems arise from this initial question:

If someone such as your most trustworthy, sincere, etc. friend or a NYT reporter told you that (a) LLL has won the gazillion lottery, (b) the lottery was fair, (c) LLL had just one ticket, (d) the odds of a single ticket winning are a gazillion to 1, and (e) there were just 100,000 tickets sold, what attitudes are epistemically reasonable to take with regard to that testimony?

For what it’s worth, I’m saying that for someone who has any appreciation of the odds of the gazillion lottery, the INITIAL attitude (after one has reflected on the fact that the testifier is your most trustworthy, sincere, etc. friend who is showing no IMMEDIATE signs of illness, tomfoolery, etc.) has to be disbelief coupled with belief in something like the conjunction of ‘He is trying to put me on’, ‘He is very ill’, ‘He is lying in order to be mean to me’, etc. More specifically, I hold that:

1. At this point one has to assign an extremely low probability to ‘He is right’ and a very high probability to ‘He is wrong and he is either joking, ill, being mean, he has been the victim of a practical joke, etc’.

(Keep in mind that ‘He is wrong’ doesn’t mean ‘LLL didn’t have the winning number’. It means ‘Not all of (a)-(e) are true’.)

Now assume that I try VERY hard to find out why he is making this bizarre report to me (although I limit myself to investigating the testifier alone, instead of looking for outside evidence). I can’t find any specific evidence of illness; I can’t find any specific evidence of joking, I can’t find any specific evidence of meanness; I can’t find any specific evidence of his being the victim of a practical joke; etc. I hold that:

2. At this point one still has to assign an extremely low probability to ‘He is right’ and a very high probability to ‘He is wrong and I must have overlooked something, although for the life of me I don’t see what it is’—although the probabilities are less extreme.

When I say that the probabilities are less extreme I’m not saying that they’re about the same but one’s just a bit bigger than the other. I’m happy with the thesis that after my thorough investigation I should increase my estimation of the probability that the testifier is right by a factor of a trillion. But a trillion multiplied by next to nothing is still next to nothing (remember how big a gazillion is).

I take it that Mike disagrees with either 1 or 2. I suppose it’s 2. Regarding 1, only a gullible person would accept the testifier’s report without bothering to investigate further! It seems to me that 1 is true for the hippo case as well. The testifier’s long history of sincerity, trustworthiness, etc. coupled with his displaying no IMMEDIATE signs of deception, etc. doesn’t raise the probability of ‘He is right about the hippos’ to anything significant. Only after painstaking investigation of the testifier is there any way I could reasonably accept what he says (but I still see no reason to doubt 2).

If I had god-like epistemological powers but just hadn’t bothered to see for myself if LLL’s ticket is the winning one, and I was able to have utter god-like Cartesian certainty that my friend the testifier isn’t joking, he isn’t fooled, etc., then I might (I don’t know) be completely reasonable in assigning a very high probability to ‘He is right that LLL won’. But of course we’re leaving gods out of consideration! And in any case, I don’t see how god-like investigation of the testifier could rule out the obviously relevant possibility that the testifier is the victim of a practical joke.
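Bryan’s “a trillion multiplied by next to nothing” arithmetic can be checked in log space, since numbers like one-in-a-gazillion underflow ordinary floats. The specific figures (odds of 10^(10^9) to 1, a trillion-fold boost from investigation) are the ones from his comment:

```python
# Work with base-10 logarithms: 10**(-10**9) is far below float range.
log10_prior = -10**9   # probability the testifier is right, before investigation
log10_boost = 12       # a trillion-fold increase from the painstaking investigation

log10_posterior = log10_prior + log10_boost
print(log10_posterior)  # -999999988: still next to nothing
```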

Mike and Jamie,

The ping pong case is very different from either the lottery case or the hippo case, at least based on the way you seem to be developing the ping pong case: you seem to have omitted the testimony aspect completely. That opens a new can of worms. But I did examine the non-testimony case above: you’re the sole lottery official, you’re a genius when it comes to computation and mathematics, you’ve taken every precaution you can think of against fraud. And then you see that LLL’s ticket is the winning one. Let’s assume it really is the winning one, through magnificent coincidence. At this point I would conclude that LLL, or someone working on his behalf, has somehow messed with the lottery, so it wasn’t a fair drawing, and I have just entirely missed the deception. That’s the claim akin to 1 above. Then I go through the painstaking investigation to uncover even the slightest specific evidence of deception, fraud, etc. I come up empty handed. What then?

Jamie,

I guess I’m not sure. If I had another ball in front of me, I would not be in the epistemic state I am in (whether it is certainty or near certainty). In the case I describe the fact is that I have ball B directly in front of me, and concerning B, by hypothesis, I have almost certain evidence that B is red. I report on the basis of my near certain evidence that B is red. Two cases: (i) Perhaps I do not know that the ball in front of me is providing me with the near conclusive evidence that it is red. I find that hard to believe, but it does not matter. To deny that it is providing me with conclusive evidence that it is red is to assume some form of internalism about justification. I don’t think that is an assumption of the case. (ii) Ball B in front of me is clearly and evidently red and I perceive this. Do I sometimes make mistakes in this evidential state? Yes, but there are just a few non-definitely-red balls of the trillions of balls in the giant urn with which I might confuse this one.

Bryan,

I am ignoring the other problems (rhinos, miracles, ping-pong balls) because they are red herrings. I would believe both reports. The reason I would believe them would be because the lottery result could be very easily confirmed.

The methods that a reputable newspaper uses to confirm a news story are usually more rigorous than any of my friends’ methods for telling news. (Although positions are reversed for me in some areas of science and mathematics.) And the original problem was whether I believed what was written in a good newspaper about a winning lottery ticket, not what a friend told me.

Note that the error rate for the news report is not lower than the error rate for confirming the lottery results. The reason to focus on the functioning of the lottery is to show that this difference in error rate, and it is an orders-of-magnitude difference that we’re considering (i.e., the newspaper is much, much worse), is not a problem. We can treat the result and the naming of L as the winner as facts, or as practically certain as you’d like to make them.

Then the problem simply reduces to the Times reporting a name and a number correctly, which it does all the time. Sometimes it messes up. But this case is no different from other well-sourced things it reports. And it is extremely easy to correct, much easier than most of the things it reports as true.

One problem in this discussion is that people are trying to pack all of this into their probability functions over the belief states about tickets, L’s winning, balls, rhinos. And you cannot do what I am doing with this first-order model. If you insist on being a Bayesian, you’ll need to introduce second-order probabilities to function like my error probabilities do, which are over distributions, not over states. This makes sense from a Bayesian statistics point of view, although I have no idea whether it makes sense from a Bayesian epistemology point of view. And if it doesn’t, so much the worse for Bayesian epistemology.

Greg,

I agree that it is very easy to confirm the lottery result. And it would be very easy for the NYT to get that information and publish it. What is not easy to confirm is that the lottery wasn’t rigged or altered or hacked or otherwise illegitimately interfered with. I still think it would be foolish to believe the NYT report that LLL chose the winning number AND it was a fair lottery (the conjunction is crucial!), and I have been unable to see any reason in this comment thread for believing that conjunction solely on the basis of the NYT report. (Clearly, the supposition has to be that the NYT is saying that the lottery is fair, otherwise the questions Jon raised are silly.) LLL went to a store and, with the help of the store’s computer, chose a billion-digit number as his ticket (or he chose all the numbers himself, which means that he chose a few and then specified a pattern for the remainder, as there are so many to choose and he isn’t going to live forever). Later, the official lottery computer chose a one-billion-digit number as the winning number. LLL’s number was electronically fed to the lottery officials, who checked it and discovered a perfect match of the sequence of one billion digits. We stipulate that everything is completely fair.

Now you have a choice as to what to conclude from the NYT report: either we have the magnificent coincidence or the lottery is interfered with (or something else). There are people involved, not just computers. People can be extremely clever in deceiving others. The NYT reporter could have been right there at lottery headquarters when the officials had the winning number generated—but it was a sham. The code that actually produced THAT number was not the code that the quality controllers from the independent lottery monitoring group examined, etc. It is much, much more probable that the lottery is very cleverly tampered with—or the NYT reporter is a liar exacting revenge for his pending dismissal, or the reporter wrote a gag report for his friend LLL and it accidentally got printed in the NYT—than LLL happened to choose the exact same string of one billion digits as the official lottery computer. It seems to me that you’re not considering the other obvious explanations of why the NYT might report that LLL has won the fair gazillion lottery. Why you would just accept the one explanation—it was a fair lottery and LLL chose the winning number—over any of the other, obviously relevant and in my judgment much more probable, explanations is something I have yet to understand at all! What if the NYT report was on April 1st? Surely then you would hesitate to accept the report! That shows that one must pay attention to other explanations; so why not consider the tampered-with, revenge, accidentally-printed-gag, etc. explanations? How am I misunderstanding you?

What if things went this way?:

In this new lottery each ticket has just one million digits, not a billion. There are 1000 completely separate drawings of this lottery. In each case, LLL buys one ticket. He chooses the winning number for all 1000 drawings. The odds of that are 1 in 10-to-the-billion, just like in the original lottery setup (is that right?). You read in the NYT that LLL has picked the winning number in all 1000 drawings of this fair lottery. What’s reasonable: concluding that the lottery is rigged or that the lottery is always fair and he really chose the winning numbers 1000 times in a row? I can’t even imagine my just believing the NYT report.

Are these cases importantly different? I don’t know; I’m just asking.
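The parenthetical “(is that right?)” checks out: a million-digit ticket wins with probability 10^(-10^6), and 1000 independent wins multiply to 10^(-10^9), the same odds as the original gazillion lottery. In base-10 logarithms:

```python
log10_one_drawing = -10**6                        # one million-digit ticket winning
log10_thousand_wins = 1000 * log10_one_drawing    # 1000 independent drawings multiply
log10_original_lottery = -10**9                   # the original billion-digit lottery

print(log10_thousand_wins == log10_original_lottery)  # True: the odds are the same
```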

Greg,

I think you, Mike and I see it essentially the same way. I’m tempted to agree that the problem is not epistemological. But I also think that it may be a familiar contextualist intuition in probabilistic guise: the more objectively improbable an event is supposed to be, the more reliable we expect the reporter to be. The appeal to the Gazillion Lottery aims at placing the problem about rational credibility far beyond anyone’s reliability. It fails to excite us pretty much for the reasons that you point out above (#64): just another lottery report by the guys we’ve always turned to for lottery reports (for good reason). Bryan then tries to “correct” the situation by *lowering* the reporter’s reliability (not the NYT, just a reliable friend). We cry foul. In sum: maybe an epistemological problem (contextualism), but one which has lost its teeth (skepticism).

How does that fare?

Claudio,

I’m not lowering the NYT reporter’s reliability at all. I assume the NYT reporter’s reliability is extremely high whether the lottery number is a billion digits or a hundred or six. In fact, we can set up the situations so that the reporter is MORE reliable in the gazillion lottery than in the six-digit lottery—more reliable in the task of finding the official winning number and more reliable in the task of finding out if the lottery is fair or not. The same holds for the lottery officials: they can take more precautions against various kinds of fraud in the gazillion lottery case compared with the six-digit case. For each lottery, when I read the NYT reporter saying that LLL won the fair lottery I have a choice in what to believe:

1. It was a fair lottery and LLL won it, just as the reporter said.

2. The lottery wasn’t fair and the NYT was fooled.

3. The article is an accidentally printed gag, and LLL didn’t win the fair lottery.

4. etc.

In the real-life case, with just six digits, 1 is much more probable than any of the others—so much so that none of 2-4 even cross my mind. In the gazillion lottery case, 1 is much less probable than any of the others.

Maybe that’s fair, Claudio. I think the interesting thing, from an epistemological point of view, is that this little model I offer of the original lottery does tell us how (i) the very, very remote event of a winning draw on this lottery can be confirmed and verified, and how (ii) that procedure is, and should be, divorced from the error rate of the communication channel reporting that outcome. People seemed to be running these two things together, thinking that the communication line (the reporter, friend, perception system, etc.) needed to have a lower probability of error than the chance of type 1 error for the event being reported on. And nobody is going to have perception systems, friends, lovers, or newspapers this “reliable”. Fortunately this is not necessary in this case. And we seem to be in agreement on that now.

I think we’re home, Bryan. Most of the security questions you raise can be addressed…unless we are dealing with an electronic voting machine company, it seems. Once we agree that we can confirm (i) that there is a winning ticket, and (ii) the name of the ticket holder, then what’s the problem? I’m happy to believe what I read about such an event in the Times. And I’m happy to revise my belief should the Times have another Jayson Blair on the lottery ticket desk, or to find that L’s brother-in-law wrote the code for the lottery machine and has disappeared to Dubai. I’m a fallibilist; I’m comfortable revising my beliefs on new evidence.

Greg,

I don’t think we’re home yet. If evidence comes to the fore that Diebold made all the computers, LLL is a big Republican donor, etc., then of course we don’t just accept the NYT reporter’s claim that the lottery is fair and LLL chose the winning number. But that’s never been the issue; you’re misinterpreting me. I tried to make this clear in my comment 62 to you. I’m saying that even without the SPECIFIC evidence of tampering, or an accidentally printed gag, etc., upon reading the NYT report one would be a fool to accept the report as true. Again: even in the absence of specific evidence of tampering, etc., it is much more likely that tampering, etc. occurred than that the lottery was fair and LLL just happened to choose the same billion-long sequence of digits as the official lottery computer. Lotteries are sometimes tampered with; essays are sometimes accidentally printed; hoaxes are sometimes pulled off; employees sometimes successfully exact revenge. Before one reads the NYT report one should, I think, judge that each one of these possibilities for the gazillion lottery is extremely unlikely although not impossible. But now, after I’ve read the report and I pool all the relevant evidence before me, I have to conclude that the odds that one of those odd things happened are much higher than the odds that LLL won the fair gazillion lottery.

Greg,

You wrote:

Once we agree that we can confirm (i) that there is a winning ticket, and (ii) the name of the ticket holder, then what’s the problem?

The problem is that upon adding up the evidence (after reading the NYT report) it’s much more likely that the lottery was tampered with–AND there is a winning ticket AND LLL had the winning ticket–than the lottery was fair and LLL had the winning ticket.

Claudio,

I thought the long base rate was the interesting feature. How do pieces of evidence with different quality (one, the base rate, that’s ‘purely statistical’, the other ‘testimonial’) interact when they butt up against one another?

This is a huge question in law. Laurence Tribe’s first paper, the one that launched his career, is about just this, the quality of statistical evidence in law. “Trial By Mathematics”, it’s called. And Judy Thomson has a couple of interesting papers about it, too.

—–

Mike,

Call the evidence that you have of the ball’s color, ‘E’. We are calling your chance of getting this evidence when the ball is not, in fact, red, ‘epsilon’.

We’re stipulating that the prior probability of finding a red ball is epsilon/1000.

We need to know the prior probability of getting the evidence, E. This is extremely small, since it can only happen when the evidence is accurate (which happens only in epsilon/1000 of cases) or when it is misleading (which happens only in epsilon of cases).

pr(E | R) = 1 – epsilon

pr(R | E) = pr(E | R) * pr(R) / pr(E)

= {(1 – epsilon) * epsilon/1000} / {epsilon + epsilon/1000}

And that’s approximately 1/1000.
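The computation above can be checked numerically; exact rational arithmetic avoids any rounding quibbles. The particular value of epsilon below is an arbitrary small choice, and the denominator is computed in full rather than with the approximation in the text:

```python
from fractions import Fraction

eps = Fraction(1, 10**6)         # an arbitrary small epsilon
prior_red = eps / 1000           # pr(R): the rigged sampling space
p_E_given_red = 1 - eps          # pr(E | R)
p_E_given_not_red = eps          # pr(E | not-R): misleading evidence

# Total probability of getting the evidence E, then Bayes' theorem:
p_E = p_E_given_red * prior_red + p_E_given_not_red * (1 - prior_red)
posterior = p_E_given_red * prior_red / p_E

print(float(posterior))          # about 0.000999: roughly 1/1000, as claimed
```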

So it looks to me like you should be pretty nearly certain that the ball is not red under these circumstances. Short diagnosis: the extremely unfavorable base rate swamps your evidence. For fixed strength of evidence, we can always find a base rate unfavorable enough to swamp it.

Hi Bryan,

I think we are going in circles. You’ve granted enough to get an answer to Jon’s original pair of questions, modulo the correction about how to set it up. I’m happy leaving it at that.

Also Jamie: you can get ε by closing ‘&epsilon’ with ‘;’ For some reason unclosed commands appear correctly in preview, but not in the post.

Oh, thanκs. (I just tried it with a kappa in that ‘thanκs’.)

Along the same lines, here’s a tip for Bryan: you can set off long quotations with the ‘blockquote’ command. Enclose it in angle brackets, then close the quote with ‘/blockquote’ also in angle brackets.

Hi Greg,

You’re right that we’re getting closer, but we’re still not home. For the sake of argument I’ll grant that we should believe that the number printed in the NYT is the winning number, as I don’t think that that’s a terribly interesting epistemological issue. But keep in mind that it’s going to be awfully hard to print the winning number, given that it’s a billion digits long and so there can’t be any misprints in over 100,000 pages of newspaper. I don’t know about you, but upon seeing the huge stack of newspaper I would wonder how on earth they managed to avoid all printing errors. Even if it was printed just on the web, and there was an explanation of the extreme measures that were used to ensure accuracy, I’d still hesitate to believe that there were no mistakes in the billion digits reported.
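The misprint worry is quantitatively serious. Assuming, purely for illustration, a per-character misprint rate of one in a million (which would be excellent accuracy for a newspaper), the chance of a billion digits all coming out right is vanishing:

```python
import math

per_digit_error = 1e-6   # hypothetical: one misprint per million printed characters
digits = 10**9           # a billion-digit winning number

# log10 of the probability that every single digit prints correctly:
log10_all_correct = digits * math.log10(1 - per_digit_error)
print(round(log10_all_correct))   # about -434: a chance of roughly 10**-434
```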

But I have never granted the claim that we should believe the NYT when it reports ‘LLL won the lottery’. After I read the report, several possibilities will come to mind:

1. It was a fair lottery and LLL won it, just as the NYT reporter said.

2. The lottery wasn’t fair and the NYT was fooled about that issue, but LLL did win the unfair lottery.

3. The NYT article is an accidentally printed gag, and LLL didn’t win the lottery, fair or not.

4. The NYT article is purposely fraudulent (so there’s no accident).

5. This isn’t the NYT I’m reading but a gag paper meant to look like it (or someone has hacked the NYT website, say).

6. etc.

I have maintained that 1 can be dismissed as much, much less likely than the others. Only a fool would accept 1 at this point. But that doesn’t mean that we should plump for 2, which includes the claim that LLL did indeed win. Even without specific evidence for 3, 4, 5, etc., I would, if I had any sense, need to investigate further in order to conclude 2 over 3, 4, and the rest. Frankly, I have a hard time believing that the NYT reporter would be so ignorant as to just report that LLL won and leave the reader with the impression that the lottery was fair (let’s assume the NYT reporter is known to have some mathematical ability). So, I’d suspect that 2 is false and one of the remaining options is more plausible.

That’s where the interesting epistemological issues are to be found.

Jaime, thanks. Maybe I’m not tracking you entirely. Here’s what I want to claim. Tell me whether these are inconsistent.

1. I cannot mistake a white ball for a clearly red ball.

2. I can mistake an indefinitely red ball for a clearly red ball.

3. Nearly all of the balls in the bin are white.

4. Some of the balls in the bin are indefinitely red.

5. I have selected a clearly red ball.

6. I know, but I am not certain, that the ball I have selected is red.

I can’t see how I could mistake a white ball for a clearly red ball under the specified circumstances. Hence, (1). I guess we agree on that, but I don’t know. Nearly all of the balls in the bin are white, as stipulated in the hypothesis. I have some (but not much) reservation about assigning probability 1 to the proposition that the ball observed under all of the specified conditions is red. So I stipulate in (4) that some of the balls in the bin are indefinitely red. I can see how someone could mistake (at least) some indefinitely red balls for a clearly red one. Given a sufficiently small number of indefinitely red balls in the bin, I think all of (1)-(6) are consistent. No?

Mike, if the chance that the ball is white, given your evidence, is 0, then your example is critically different from the newspaper case. In the newspaper case, the chance of error is not zero. If you allow that the chance that the ball is white, given your evidence, is greater than zero, then we can just call that ‘ε’ and my point is off and running.

Just by the way, if (1) you cannot mistake a white ball for a clearly red ball, and (5) you have selected a clearly red ball, then why are you not certain that you have a red ball?

I am trying to stop…honestly…and will muster the will-power to quit…but:

This is not necessary. You’ve limited the number of tickets to 100,000, so we only need an alpha-numeric key to individuate those 100,000 tickets. Or a key system to individuate however many tickets you expect to sell. You’ve only got 6 billion potential customers, and only a fraction of that group is in any economic position to consider paying (say) a dollar for a ticket. And there are very efficient ways to compress individual identification keys for 2 or 3 billion tickets. I mentioned this in [42]. Don’t get hung up on the syntax.

If you grant that we can determine that L holds (or purchased) the winning lottery ticket, and that this fact can be verified (which is what the checking algorithm does), then you are home. You in effect divorce the base rate for the ticket winning from the error rate of reporting the outcome, because the checking algorithm swamps the original base rate with an astronomically small error probability to confirm the ID of the winning ticket. That makes this case different from the ping-pong ball case being discussed here in parallel.

So, to go around the track once more, your likelihoods for your position on (1) in [74] are not taking into account what you’ve granted about how we can know about the winning ticket being L’s. (Or being purchased by L, if you prefer.)

If you decide to retract your endorsement of this premise, you need then to switch threads and argue with Mike and Jamie about ping pong balls. If you stick with it, as you should, then you are home. And if you are going wobbly on this point, [59] tells you why you should not view this case like the perception of ping-pong balls case.

Jaime, you write,

“Just by the way, if (1) you cannot mistake a white ball for a clearly red ball, and (5) you have selected a clearly red ball, then why are you not certain that you have a red ball?”

That’s because of (2) and (4). Here’s the list:

1. I cannot mistake a white ball for a clearly red ball.

2. I can mistake an indefinitely red ball for a clearly red ball.

3. Nearly all of the balls in the bin are white.

4. Some of the balls in the bin are indefinitely red.

5. I have selected a clearly red ball.

6. I know, but I am not certain, that the ball I have selected is red.

Mike,

I don’t get it.

You are looking at a red ball. What is the chance of getting your current evidence, given that the ball is in fact white?

Is it zero, or is it positive?

Greg,

Look, you’re the one who wanted to focus on Jon’s original questions. But now you’re changing the first one or simply forgetting it! Jon’s first question was “First it reports the winning number”. As you put it yourself: “(i) Should you believe the report that the winning number is n?” It doesn’t matter, to THAT question, how many people bought tickets. Even if no one did, the NYT is going to have a helluva time reporting that the winning number is 88492483 … . It has to print the billion digits in order to report what the winning number is. What else is it going to do? Say something like ‘The winning number is the number that was chosen yesterday and that is a billion digits long’?

Actually, I’m wrong about that. If the winning number is just one billion ‘8’s for instance, then the NYT report could just say ‘Amazingly enough, the winning number is all ‘8’s!’ But for christsakes let’s assume the winning number isn’t like that!

And, no: when I claim that only an idiot would, upon merely reading the NYT report, accept 1 from comment 74, I am not forgetting that there could be a fantastically reliable way of determining both (a) that someone won and (b) that LLL won. You keep ignoring 3, 4, 5, etc. from comment 74. I think you are ignoring the fact that you’re reading about the lottery in the NYT; you aren’t running the show yourself. I have good reason to think that one of those possibilities is the real truth, and therefore I do not have sufficient reason to think that the billion-digit number the NYT printed is either LLL’s or the winning number (setting aside the difficulty of avoiding printing errors).

The second question you/Jon posed was this: “(ii) Should you believe that L holds a ticket with that number?” My answer is NO, you should not. First, if (ii) is really asking if LLL holds the winning ticket and the lottery is fair, then you should definitely, definitely not accept it. Second, if (ii) is asking if LLL holds the winning ticket and never mind whether the lottery was fair, then my options are 2-6, I see no reason to prefer 2 to the others, and only 2 has me accepting that LLL has the winning ticket.

I’ve already said why this case is not much like the ping pong ball case: our case has to do with testimony; at least as Mike and Jamie have been discussing it, the ping pong story omits testimony. The lottery story is a problem about testimony!

Let’s face it: philosophy is incredibly hard. I’d be surprised if I ever said anything right that’s philosophically interesting.

Jaime,

Zero. If the ball is white, then I won’t mistake it for a clearly and unambiguously red ball (assuming I’m looking at it under various lighting conditions, I’m not tired, etc.). But if the ball is indefinitely red, then I could mistake it for a clearly red ball. There is some small chance that I would have the experiences I am in fact having, given that I am looking at an indefinitely red ball rather than a definitely red one. But if there aren’t many indefinitely red balls in the bin, this won’t undermine the evidential value of my observation much.

Bryan, the newspaper case is unlike the ping pong ball case in some ways; that’s obvious. But the cases are alike in that if the prior chance of error is not zero, the base rate (that’s the extremely low chance of ‘picking a winner’) can be chosen so low that the chance of error swamps it. That’s true whether it’s testimony or perception that might be erroneous.
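The swamping point can be sketched with Bayes’ rule. The numbers here are illustrative placeholders (the real gazillion odds would underflow a float), and the assumption that the paper always reports a genuine win is part of the sketch:

```python
# P(Louie won | report), assuming the paper always reports a win when
# there is one, and falsely reports one at rate `error_rate`.
def posterior_win(prior_win, error_rate):
    p_report = prior_win + error_rate * (1 - prior_win)
    return prior_win / p_report

# A 1e-12 base rate is swamped by a 1e-4 chance of error:
print(posterior_win(1e-12, 1e-4))  # ~1e-8: the report is almost surely wrong
```

With an ordinary base rate the same error rate is harmless; it is only the astronomically small prior that flips the verdict.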

Way back in [46], I asked whether you thought it was relevant that only 100,000 tickets are sold in the lottery. Do you? This is bugging me. After all, Lucky Louie’s (prior) chances of winning neither rise nor fall when other people buy tickets. Why should the posterior probability be in any way affected?

Mike,

Okay, fine. (It seems to me that in this case you should be certain that the ball in your hand is red, contra your (6), but I don’t mean to dwell on that.)

In that case, the example is critically different from the newspaper example, where there is a positive real chance of getting the evidence in the newspaper even on the hypothesis that Lucky Louie did not win. Zero chances are impervious to conditionalization.

Jamie,

I do think it’s relevant how many tickets were sold. I guess Jon does too, as he’s the one who set the 100,000 tickets assumption. I can’t say that my reasons for thinking it’s relevant match his, as I don’t know his reasons.

I think someone in the early comments expressed the idea that if all the tickets are bought, then of course it’s no surprise that someone chose the winning number. But of course that couldn’t happen if it’s one ticket per person (a gazillion people?), and if people can buy significant percentages of the total number of tickets then the problems the story poses (which rely on the fact that the odds of some person choosing the winning ticket are about a gazillion to one) go away. So, that’s one way in which it matters how many tickets are sold. I know, it’s not an interesting reason. I discussed some of this stuff in comment 10.

Someone also noted, I think, this variant: suppose that instead of 100,000 tickets being sold, just one ticket was sold. LLL was the only buyer! How does that change things? On the face of it, that seems less likely than: 100,000 people got one ticket each, and LLL got the winner. But that’s prima facie only, for me anyway.

It was Yu Guo, in comment 18. He poses some great cases, and I don’t know what to think about them!

Bryan

I think I have some sympathy with the intuition. But if Lucky Louie told you he bought a ticket, and you said, “Gosh, for your sake I hope billions of other people bought tickets too,” that would surely display a confusion on your part. So the event, “Louie’s ticket wins”, cannot be more likely given that many people bought tickets than it is given that only Louie bought a ticket.

This is still bothering me.

Jamie,

It bugs me too. Here’s a start:

We want to compare the probabilities of the following three possibilities, doing the calculation at the same time based on the same evidence/background assumptions:

a. LLL buys one ticket, wins, no one else bought a ticket.

b. LLL buys one ticket, wins, 99,999 people also bought one ticket each.

c. LLL buys one ticket, wins, Bill Gates buys 99,999 tickets.

For some reason, my philosophy radar is saying to me ‘b is most likely, then a, then c’. But I’m deeply suspicious of that radar, so I refuse to go along with it.

Someone who knows probability theory, or perhaps more relevantly the psychology of people who are puzzling over probability problems, should be able to tell me either why my radar is saying that or why I am unwilling to accept it.

Jaime,

“Okay, fine. (It seems to me that in this case you should be certain that the ball in your hand is red, contra your (6), but I don’t mean to dwell on that.)”

I wonder why. After all, I can be mistaken. The ball I selected might be indefinitely red rather than definitely red. So there is some small chance of error. There is just no chance of error relative to the white items in the bin.

Yeesh, just noticed that this thread could go to 100 posts.

It’s a long thread, so maybe I’m missing something, but I take the upshot to be this: (1) Bryan thinks that since the probability (a) of an erroneous report (several possible explanations of error are given) is far greater than the probability (b) that anyone won (given 100,000 tickets sold in a 1/10^billion lottery), we should believe (a) over (b); (2) Others, particularly Gregory, disagree on the grounds that in general the probability of an accurate report is greater than the probability of an erroneous one, protesting that one can “divorce the base rate for the ticket winning from the error rate of reporting the outcome.”

I do think that it is useful to think about some other examples to try to get a grip on whether we do and should divorce rates. So, here’s mine. Right now I think that the odds of all parties laying down their arms in Iraq are quite small. The NYT’s error rate is certainly higher. However, if the NYT reported tomorrow that a truce had been reached I would evaluate against the error rate without regard to the prior improbability of it being true. That that’s what I would do seems initially to support Gregory, though maybe what I would do is not what I should do. I have to admit intuitive pull toward Bryan’s position.

Here’s a question for Bryan (and my intuitions). Suppose instead of the NYT the Gazillion Lottery was reported in the Improbable Events Quarterly. The IEQ only reports on events that have a base likelihood of occurring that is less than 1/10^million and has a known error rate of 1/10^4. It seems like when evaluating stories printed in the IEQ (including a report of LLL’s good fortune) I simply have to divorce the base rate for the event occurring from the error rate for the reporting. Right? Otherwise, you could never believe anything reported by a source you knew to be right 99.99% of the time. It doesn’t seem obviously wrong to think that, as with the IEQ, most of what the NYT prints has a likelihood of occurring that’s lower than the paper’s error rate.
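If one does not divorce the two rates, the naive Bayesian bookkeeping for the IEQ case looks like this. This is a sketch in log space (the raw numbers underflow a float), and treating the likelihood ratio as one over the error rate is an assumption of the sketch:

```python
# log10 posterior odds of "event happened" vs "report is an error",
# taking posterior odds = prior odds * likelihood ratio.
def log10_posterior_odds(log10_base_rate, log10_error_rate):
    return log10_base_rate - log10_error_rate

# IEQ: a base rate of 10^-1,000,000 against an error rate of 10^-4:
print(log10_posterior_odds(-1_000_000, -4))  # -999996: bet on "error"
```

Which is exactly the worry above: if that bookkeeping were right, you could never believe anything the IEQ prints, despite its 99.99% accuracy.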

Bryan,

Could you explain the intuitions that lead you to find b most probable, then a, then c?

a. LLL buys one ticket, wins, no one else bought a ticket.

b. LLL buys one ticket, wins, 99,999 people also bought one ticket each.

c. LLL buys one ticket, wins, Bill Gates buys 99,999 tickets.

Suppose we think of it this way. LLL buys his ticket on day one and we have three possible worlds thereafter.

World 1: on day two and so on no one buys a ticket.

World 2: on day two and so on 99,999 people buy tickets.

World 3: on day two and so on Bill Gates buys 99,999 tickets.

It doesn’t matter, does it, what happens on the days after he buys his ticket? In worlds (1)-(3) LLL has the same ticket and his chance of winning with that ticket seems the same. I’m not sure why it would go up in b.

Mike,

The certainty expressed in (6) is what we’re talking about.

6. I know, but I am not certain, that the ball I have selected is red.

So it’s obviously irrelevant that you could be mistaken about its being indefinitely rather than definitely red. You cannot be mistaken about its being red, you say. But you aren’t certain. (But I said I wasn’t going to dwell on that part!)

Bryan,

Do you really mean you want to know the relative probabilities of (a), (b), (c)? That would depend on how likely it is that Bill Gates buys tickets. I think you meant how likely it is that LLL wins given the other parts of (a), how likely it is that he wins given the other two parts of (b), and how likely it is given the other two parts of (c). And I assure you those conditional probabilities are the same!
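A trivial way to see that the conditional probabilities match, sketched with the six-digit version of the lottery that comes up later in the thread:

```python
from fractions import Fraction

# With a uniform draw over N numbers and LLL holding one fixed number,
# P(LLL wins) is 1/N whatever the other purchases look like, because
# other purchases never enter the draw distribution.
def p_lll_wins(n_numbers, other_tickets_sold):
    # `other_tickets_sold` is deliberately unused: who else buys, and
    # how many, cannot change the chance that LLL's number is drawn.
    return Fraction(1, n_numbers)

N = 10**6
case_a = p_lll_wins(N, 0)        # no one else bought
case_b = p_lll_wins(N, 99_999)   # 99,999 others, one ticket each
case_c = p_lll_wins(N, 99_999)   # Bill Gates bought 99,999
assert case_a == case_b == case_c
print(case_a)  # 1/1000000
```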

I’ll try to have something positive (rather than saying it bothers me) tomorrow.

Just to chime in on behalf of Bryan’s intuitions: whether they are true or false, they do seem like intuitions. That is, they feel right and I suspect they would continue to feel right even after being demonstrably falsified. For, in c, we would conclude that there is a substantial anti-Gates conspiracy going on. In b, we would NOT conclude that there is a substantial anti-non-LLL conspiracy going on. And we might, in a, conclude that there is a substantial pro-LLL conspiracy going on.

Jaime,

I didn’t intend for (6) to be read that way. Sorry about that. (6) is not supposed to express a certainty. I make the claim to know in (6) that I have selected a red ball, but I do not claim that the probability is 1 that I have a red ball. You can reduce that to ‘justifiably believe’, if knowing p entails that p has probability 1. Some decision theorists hold that we should reserve certainty for necessary truths. But this is not something I’m especially concerned about.

Mike, Jamie, Jeremy S, Jeremy F, etc.,

Look, it’s not fair asking me questions that I have no chance of replying to in an intelligent way. Virtually the ONLY epistemology I know is the relevant alternatives, contextualism, etc stuff. I started out as a philosopher of mind & language and was blown away when I read Dretske and DeRose on skepticism. Then I got very lucky in stumbling on a good idea that led to an epistemology book. All this epistemology stuff I do is a complete accident for goodness sakes. Ask Greco: he’ll tell you that I’m a generalist metaphysician all the way.

When I read Mike’s and Jamie’s posts 89 and 90 I was about to say “I should just forget about my intuitions regarding cases a, b, and c, since Mike and Jamie have revealed that my intuitions are rubbish’. Then Jeremy F’s comment popped up and I felt a little better.

If I had read in the NYT that case a actually happened, I’d shout ‘Fraud!’ Same for case c. So those NYT reports would make me conclude that the lottery isn’t fair. But I wouldn’t jump to that conclusion for case b. So is the right conclusion this: if I read the NYT report for either a or c but not b I would not believe the report if the report said ‘LLL won the fair lottery’. In that sense the report of b is “less likely” than the reports of a or c. Let’s presume, just to make our lives easier, that the NYT report does in fact say that the lottery was fair!

So that would be a way to justify my intuition that “cases a and c seem less likely than b”, although the intuition was a mere hunch and crudely expressed.

By the way, this isn’t the gazillion lottery anymore, right? I mean: that brings up issues that are independent of the issues brought up by cases a-c. Let’s say it’s a six-digit lottery with just one winning ticket and no guarantee of a winner (that is, there’s just one drawing).

Jeremy (Shipley),

The idea is that in this case we can *reduce* the error probability for the winning ticket and its match to L; indeed, we can make this probability as small as we like. This is the trick behind driving a wedge between the original base rate and the higher error probabilities for reporting the result. This trick isn’t open to all of the analogous cases under discussion, but it is open to the original case.

I spelled out how this lottery could run and the grounds for believing the two claims. I also stipulated that it would be reasonable to key the tickets [42], to rule out what I thought was an uninteresting stumbling block: namely, not being able to read the term *denoting* the winning number. Earlier [34] I thought we were stuck with this problem, and cited it as grounds for giving up on this altogether. But we simply don’t need the extra information that a billion digit number provides in order to denote the winning ticket. Even with 6 billion ticket holders, we are using only a tiny fraction of this space. It is much more efficient to key lottery numbers to individual ticket numbers. And this would be easy to do when we issue the ticket.

I still don’t see anything philosophically interesting in this example, except maybe the trick behind confirming the ticket number and its owner, and the impact that strategy has on the skeptical arguments running behind the other examples. The lottery example is slightly more interesting as a modeling problem, though.

Hi Bryan,

I am thinking about reading this report in the New York Times. I said why I built the model: it is one way a reputable organization would run such a lottery. And I would assume the New York Times, in its reporting of the result, would check to see that the lottery was reputable. (There would be a great story if they blew the setup.) So, I am not ignoring that the reports appear in the newspaper.

You’re very good at getting people to repeat themselves. Let’s agree to disagree, shall we?

Jeremy F must be right that the salient alternatives are relevant, although I don’t exactly understand how this is supposed to work in these cases.

When you play bridge, if you are dealt all of the spades you are almost certain it’s not a fair deal. If you’re asked why, you’re apt to say that the chance of getting all the spades in a fair bridge deal is so tiny as to be negligible, so the alternative, however antecedently unlikely, must be the truth. Now, every bridge hand is antecedently incredibly unlikely, so unlikely as to be written off, but one of these dull hands doesn’t make us suspect cheating. The difference in the two cases is that the alternatives are so different. It is very plausible that, if someone is going to stack the deck, you’ll get all the spades. It is utterly implausible that someone will stack the deck to give you exactly this dull hand.

It could very well be that something like this is at work when we think that if there were very few tickets sold and Louie won, it was probably a hoax, whereas if almost all the tickets were sold and Louie won, then it was probably Louie’s very lucky day. It could be, but I can’t see how.
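The bridge point in numbers, as a quick sketch:

```python
from math import comb

# Under a fair deal, every specific 13-card bridge hand is equally
# (and astronomically) unlikely.
hands = comb(52, 13)
print(hands)             # 635013559600 possible hands
p_any_hand = 1 / hands   # all-spades and a dull hand get the same value

# What differs is P(hand | stacked deck): stacking a deck to produce
# all spades is plausible; stacking it to produce one particular dull
# hand is not. The base rates alone don't distinguish the cases.
```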

Now I have the feeling that the answer to my own puzzlement is somewhere in one of Roger White’s papers.

Greg,

Yes, let’s agree on that, although quite frankly I am still unsure what your positions are on Jon’s two questions. I know that I have switched back and forth on various important points a few times in this comment thread, so even a careful reader of these comments would have a devil of a time figuring out what I think. Right now I’m happy with what I came up with in comment 80 as answers to Jon’s two questions.

My fear, voiced in 80, is that nowhere in your discussion do you note the relevance of 3-6 from comment 74. It’s precisely those that make me answer both of Jon’s questions negatively.

I’m not trying to win or prove you wrong or anything of that sort. I’m just trying to figure my way through a certain problem, and as Mike remarked I am unrelenting. In a way, I think you might be right that there is no real philosophical problem here of any lasting interest. But isn’t all this discussed in the recent literature on Hume on miracles? Peter Millican knows Hume as well as anyone, and he’s a great philosopher (much better than I am), and he has some great stuff on this. I should read it!

I’ll try to justify Bryan’s intuitions. It looks as though the NYT is reporting very much the same thing about LLL in both cases (i.e., both the good sales case and the bad sales case): LLL wins the lottery. But it seems to me (1) that these reports are not really ABOUT LLL at all, and (2) that the two reports say very different things, one more implausible than the other. Roughly, I think one report is saying something like ‘Someone is such that she has bought a ticket and no one else has bought a ticket and she wins the lottery (and she has the name ‘LLL’)’, and the other is saying ‘Everyone has bought a ticket and someone is such that she wins (and she has the name ‘LLL’)’. Of course the latter is a lot more plausible.

So let’s suppose the range of quantification is a set of 100,000 people, of which LLL is a member. Suppose for the sake of argument that no one is allowed to buy more than one ticket. I’ll use the predicate ‘Ticketize’ to mean the same as ‘has bought one ticket’, so ‘Ticketize(x)’ is short for ‘x has bought one ticket’, and use the predicate ‘Win’ to mean the same as ‘wins the lottery’. Let’s stipulate that the lottery is fair and has the following rules:

∀x(Win(x) -> Ticketize(x)) (No one can win without ticketizing).

∀x(Win(x) -> ∀y(~(y=x) -> ~Win(y))) (There can’t be more than one winner).

(~∃xWin(x)) -> ∃x~Ticketize(x) (If no one wins, at least one person didn’t ticketize).

The problem can now be stated as follows:

(1) The NYT reported that Win(LLL) and Ticketize(LLL).

(2) In case A, no one else ticketized.

(3) In case B, everyone else ticketized.

(4) In both cases, the probability of Win(LLL) is the same, i.e., 1/100000

(5) Yet one seems to have the intuition that one should give less credence to A-report than to B-report.

I’d reject (1). I’d say that to the ordinary reader who’s ignorant of LLL, the NYT report is not really ABOUT LLL at all. When you doubt the B-report, you’re not wondering whether it is LLL who is the lucky winner (unless you are a close friend of LLL and you think ‘No way! LLL’s been grounded by her mom for years and couldn’t have escaped to buy a lottery ticket.’). Rather, you are doubting that ANYONE could be so lucky as to win a lottery when no one else has bought a ticket. Similarly, when you agree with the A-report, you are not agreeing with it because you know LLL to be a characteristically lucky person. You agree with it because it’s very plausible that a lottery with extremely good sales will produce SOME winner.

So if we eliminate all references to LLL (though we don’t have to eliminate talk of ‘LLL’; see the end of this comment), the A-report becomes simply this:

(6) ∃x (Ticketize(x) & ∀y(~(y=x) -> ~Ticketize(y)) & Win(x)) (Someone is such that she ticketized and no one else ticketized and she wins the lottery)

And the B-report is saying this:

(7) ∀x(Ticketize(x)) & ∃x(Win(x)) (Everyone ticketized and someone wins)

It’s clear that the NYT is reporting very different things in case A and B. Indeed, the substantial information of B-report is merely ‘Everyone ticketized’, because ‘someone wins’ is entailed by ‘Everyone ticketized’ combined with the 3rd rule stated above (‘If no one wins, at least one person didn’t ticketize’). So I’d say the report is primarily about the sales of the lottery. By contrast, A-report is saying that some person both wins the lottery and wins it while nobody else has even bought a ticket. The report is about the existence of a very lucky winner (though it also discloses the lottery’s bad sales). So perhaps our intuition is not wrong after all in attributing more credibility to B-report than to A-report, because the two reports are about different things—the one about the good sales of a lottery and the other about the existence of an extremely lucky lottery winner. Surely the latter is more implausible than the former.
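For what it’s worth, the entailment appealed to here (everyone ticketized, plus the third rule, gives someone wins) checks out by brute force over a small domain; a sketch:

```python
from itertools import product

# Brute-force check over a 3-person domain: in every model where
# everyone ticketized and rule 3 holds (~Ex Win(x) -> Ex ~Ticketize(x)),
# someone wins.
for ticketize, win in product(product([False, True], repeat=3), repeat=2):
    rule3 = any(win) or not all(ticketize)
    if all(ticketize) and rule3:
        assert any(win)  # the entailment never fails
print("no countermodel among the 64 candidates")
```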

(Of course, you can eliminate reference to LLL while still retaining ‘LLL’. You can say, for example, that the B-report is really:

Everyone ticketized and someone is such that she wins and she is called ‘LLL’.

But if you are going to do that, you’ll have to do it for A-report as well, and arguably this will make the probability of A-report and the B-report drop by the same amount. It’d be still true that B-report is more probable than A-report.)

Just to correct a mistake: in that paragraph that starts with ‘I’d reject (1)’, I should’ve written ‘When you doubt the A-report’, instead of ‘When you doubt the B-report’, and should’ve written ‘when you agree with the B-report’ instead of ‘when you agree with the A-report’.

Yu Guo,

That may be a good explanation for some important intuition in this thread. But I can’t see how that explains Bryan’s intuitions in the original (Gazillion) case. If I’m not mistaken, in the Gazillion case, it was the size of the lottery alone that fueled Bryan’s intuitions. I think you are saying that what’s surprising in case A is that the lottery sold so poorly (only one ticket). If it had sold well (case B), we’d be confronted with the yawn-inducing case of a popular lottery’s having a winner. Maybe this captures the cases perspicuously:

Case A: The lottery sold only one ticket (and LLL, the ticket buyer, won)

Case B: The lottery sold well (and LLL had the winning ticket)

The news in A tends to focus on the extremely lucky winner. LLL’s luck is compounded by the fact that he alone found that lottery attractive, while ordinary lottery buyers somehow avoided it. So, the focus is on LLL. There’s gotta be something special about that guy! By contrast, the news in B is unexciting in all respects: LLL is just your average lottery customer.

Do I get it? If I do, we have an additional problem: When we move to the Gazillion case (which is where Bryan’s interesting intuitions are) it becomes *less* surprising that it sold poorly. People tend to get discouraged by the gazillion odds!

Jamie,

Yes, yes, yes to the second paragraph of your #96! That’s what my computer parts example in #47 was trying to call attention to. We’re ordinarily surrounded by Gazillion-type events. But the vast majority of them have no practical value to us. (Jeremy F’s #91 emphasized our concern with making sure lotteries are not rigged.) Why isn’t that contextualist intuition all that there is to the Gazillion Lottery problem? The technical expression of that intuition is Greg’s claim that we should “divorce the base rate for the ticket winning from the error rate of reporting the outcome.” He thought that was home for the Gazillion case, and so did I. (Having a way of reporting the winning number in less than thousands of newspaper pages is icing on the cake.)

Bryan,

Your apologies in #93 are unnecessary. If I’m not mistaken, on a good day, an excellent epistemologist barely recognizes the shapes of shadows. And I’m one of those who think that epistemology is a beacon in this darkness we call “philosophy”. (But now I must guard against misunderstanding and say this to Rorty and like-minded quitters: don’t count on me!)

Claudio,

I don’t see why we’re ‘home’ just with that thought.

It’s now clear to me that it is indeed critical that the percentage of tickets purchased was tiny.

That is what the difference between a ‘shocking coincidence’ and a ‘boring coincidence’ shows. I might explicate that later.

It now seems to me that Bryan is correct. It doesn’t matter how good the error-checking mechanism is at the newspaper. We shouldn’t believe the report in the Gazillion lottery. Briefly, the reason is this. When we read the report, we know that something has occurred that had an antecedently tiny probability. Either the guy actually did win the Gazillion lottery, or else there was a reporting error, or else there’s a hoax. Now go ahead and make the chance of the middle one (that there was a reporting error) as small as you want, make it negligible. The last possibility, that there’s a hoax, is going to be much, much more probable, antecedently, than the first possibility (that the guy actually won). So with no further evidence, we have to believe the last possibility, that the lottery isn’t fair — it’s a hoax.
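The three-way comparison can be sketched with made-up priors; all three numbers below are illustrative placeholders, not claims about the actual chances:

```python
# Each hypothesis, if true, would produce the report we read, so the
# posterior is just the normalized prior.
priors = {
    "genuine win": 1e-30,      # stand-in for the gazillion base rate
    "reporting error": 1e-12,  # made "as small as you want"
    "hoax": 1e-6,              # rare, but nothing like 1e-30
}
total = sum(priors.values())
posterior = {h: p / total for h, p in priors.items()}
print(max(posterior, key=posterior.get))  # hoax
```

Shrinking the reporting-error prior further changes nothing: hoax still dwarfs a genuine win.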

Jamie & Claudio,

The error checking algorithm is at the lottery authority, not the newspaper. The newspaper just reports this result as it would any other mundane item, like a baseball score. The newspaper isn’t presumed to do anything out of the ordinary, except, in the course of responsible reporting, investigate the authority’s method to confirm that the results indeed happened the way the authority announced. And we are all agreed (I think) that this is a very simple thing to do. Thus, it is very easy to get to the facts of the matter that are being reported. This is a critical assumption, but it is what a reputable paper would do. And it gets you home.

Remember how we confirm the report: we bet there is a match with m correct matches, against the probability of drawing the same sequence at random m times. And you can drive that error rate as low (and the confidence as high) as you like.

…that is, how the lottery authority confirms the claims: they bet there is a match…blah, blah.
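The confirmation bet is just independent re-draws; a sketch of how the error rate falls with m (both N and m here are illustrative choices, not fixed by the thread):

```python
import math

# Chance that m independent uniform draws over n_outcomes all happen to
# match the recorded winning sequence by luck alone: (1/n_outcomes)**m.
def log10_match_by_chance(n_outcomes, m):
    return -m * math.log10(n_outcomes)

# Five independent confirmations in a modest 10^6-number lottery:
print(log10_match_by_chance(10**6, 5))  # -30.0, i.e. one in 10^30
```

Raising m drives the chance of a lucky (or hoaxed) match down as far as you like, which is the wedge between base rate and confirmed result.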

I should type slower. The difference on priors for hoax and rare event are one thing, but the error correction algorithm is designed to address the chance of hoax, too. So we aren’t conditioning on the same events in your example.

But, Jamie, aren’t you taking us back to square one? Now we need an explanation of why the NYT’s reliable hoax-uncovering procedures, which have served us so well for thousands of ordinary lotteries (with millions of ticket buyers involved), have suddenly failed in the Gazillion case. Is it super-hoax?!!!! Why super-hoax and not super-luck?

I like this example, Jamie. I was a bit too quick with it.

Let’s see. Here is what I think I am doing: I’m pushing the error correction off onto the lottery people, so the question then is why should my prior on hoax be lower than the prior on event. So, the answer would be that we are not conditioning on hoax/event simpliciter, but each conditioned on reading it in the Times. Then, once we take into account their fact checking procedures and that they usually get names and lottery numbers right (oh, and we can compress ticket ID keys to 3 to 5 digits, by the way, with an alpha-numeric base of 36 characters), we can get home.

That is, then you get the assessment for hoax to drop below the event, because it is against the confirmed event rather than the event simpliciter. I think that’s right.

That was a good point. (Whew!)

Bah. So, perhaps the switch is between event simpliciter and confirmed event. That’s the pivot point.

This is good, since it is starting to take the shape of clashes in statistics between Bayesians and classical statisticians.

A Bayesian will complain, because ‘event’ and ‘confirmed event’ are different states in his algebra. And trot out a boring dutch book argument. An N-P guy will say “So what?” And a contemporary “unification” guy might look to explain N-P practices with second-order probabilities, where the confirmation probability is coming from a different distribution than the event probability.

Hmm. I think that’s what is going on here.

Send out a request to some random professor at your university. Ask him/her to list, in the order in which they appear on his office shelves, the book titles in his office. He responds with an ordered list of 3,000 books (s/he has a big office). Now what was the prior probability that it would be THESE EXACT BOOK TITLES s/he’d list, in THIS PARTICULAR ORDER? Answer: don’t know exactly, but it’d be SUPER, DUPER LOW. (Keep in mind, this was a random professor, so you didn’t even know what subject s/he taught.) The number of different orders of titles that might’ve been sent back is VERY, VERY BIG. Like the Gazillion Lottery, the base rate here is completely crazy. But I can imagine believing, on the basis of the professor’s testimony alone, that these are the books in this professor’s office, and that they appear in his office in this order. The crazy base rate doesn’t prevent me from believing the professor’s testimony, fallible as this professor presumably is. Are there disanalogies between this case and the Gazillion Lottery case? There definitely are. But it does show that the mere fact that we have a freaky base rate doesn’t all by itself show that we shouldn’t believe in fallible testimony. Sometimes the error rate of the testifier is significantly higher than the prior probability of the fact to which s/he bore testimony, and yet we should still believe his/her testimony. I conclude that Greg et al. are on to something. (My previous comments to the contrary were off base.)

Claudio, I don’t get it. Is your point that the newspaper has extremely reliable hoax-checking procedures? I was assuming not. I figure, if the Lottery commission is corrupted and has cheated, the newspaper won’t catch that.

But look, suppose the prior probability of a long-shot winner is x and the prior for an undetected hoax is y. Then the posterior probability for a long-shot winner, given that it’s either a long-shot winner or a hoax, is x/(x+y). So if, as I’d expect, y is much bigger than x, then we’d be nuts to believe that there’s been a real winner.

Dylan, that’s a great example. If the Gazillion Lottery sells every ticket, and there’s nothing special about Lucky Louie, then your reasoning would apply there too. But in the case we’re offered, it doesn’t sell every ticket.
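The x/(x+y) formula in numbers, with magnitudes made up purely for illustration (neither x nor y here is a value anyone in the thread has committed to):

```python
def posterior_winner(x: float, y: float) -> float:
    """Posterior for a genuine long-shot winner, given that either a
    genuine win or an undetected hoax occurred: x / (x + y)."""
    return x / (x + y)

x = 1e-12  # illustrative prior for a genuine long-shot winner
y = 1e-6   # illustrative (much larger) prior for an undetected hoax
print(posterior_winner(x, y))  # about 1e-6: disbelieve the win
```

Whenever y dwarfs x, the posterior for a genuine win is negligible, whatever the absolute sizes.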

Here’s the point. First, just a little tidying up: I’d have thought that there was a large chance that the professor would make a mistake or two on his list. So let’s add that it’s very important and he checks the list carefully, and so forth. His prior chance of reporting a certain ordering, given that it’s the correct ordering, is .99, let’s say.

So now, Professor Random sent you a list, call it ‘L’. What was the prior for getting L? Incredibly small, say ε. Does this mean we should doubt that L is the true list of books on his shelf? No. Because if the true list is not L, then what’s the chance of his reporting that it’s L? About ε!

So, one of two unlikely things has happened. Either the true list really is L, with chance ε, or else both he made a mistake and he just happened to report L incorrectly, with chance .01 * ε. So we should believe that the true list is L.
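The two-branch bookkeeping here can be checked directly. The .99 reliability and the ε-sized chance of mis-reporting exactly L are the figures from the example; the only addition is the arithmetic (and a stand-in value for ε, which cancels anyway):

```python
eps = 1e-100         # stand-in for the tiny prior of any particular ordering
reliability = 0.99   # chance the professor reports the true ordering correctly

# Branch 1: the true list is L and he reports it faithfully.
p_true_list = eps * reliability
# Branch 2: the true list is not L, he errs (chance .01), and the
# erroneous report just happens to be L anyway: roughly .01 * eps.
p_lucky_error = (1 - reliability) * eps

# Posterior that the true list is L, given the report of L.
posterior = p_true_list / (p_true_list + p_lucky_error)
print(posterior)  # about 0.99: trust the testimony despite the freaky base rate
```

Since ε appears in both branches, it cancels: the crazy base rate plays no role in the final comparison.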

Similar reasoning works if the Gazillion lottery sells all of its tickets, but not in the version at hand.

y here is lower than x, given the design of the error correction algorithm: y = x^m, where m is the number of passes through the algorithm.


Greg, if the lottery is a hoax, then they aren’t actually using the algorithm!

(Condorcet’s Theorem doesn’t work in Florida, either.)

This is getting good! Consider:

easyJet is a low-cost air company in Europe whose online booking system issues a 7 digit alpha-numeric ticket number. When I book a flight online, I am given this ticket number, which I show to board the plane. With 36 characters, this would give their booking system the capacity to issue 36^7, or about 7.8 x 10^10, distinct tickets. Assume these are issued randomly.

I am booking a flight from Lisbon to Geneva, which will be my first trip to this city. I have a very low prior of getting a ticket ID composed of 7 identical characters, e.g., CCC CCCC, or 888 8888. In fact, it is the sum of the chances of each of these 36 sequences being drawn at random: 36 x (1/36^7), which is on the order of 4.6 x 10^-10.
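Spelling out that arithmetic (7 characters, base 36, with 36 all-same-character strings):

```python
n_tickets = 36 ** 7   # all 7-character alphanumeric ticket numbers
n_same_char = 36      # AAAAAAA, BBBBBBB, ..., 9999999
prior_same_char = n_same_char / n_tickets
print(n_tickets)        # 78364164096
print(prior_same_char)  # about 4.6e-10
```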

Nevertheless, suppose I book a flight online and my ticket reads RRR RRRR. Bayesians are telling me that I shouldn’t bother going to the airport. (For I’d be “nuts”.) Or, that if I do go to the airport, I should prepare for a hoax. Or if I get on the plane, I should believe that it is much more likely to not land in Geneva. Or if I get off the plane, see ‘Welcome to Geneva’, then I should maintain that I am really not in Geneva. And when I am asked by my host whether I like Geneva, I should reply that I have never been to Geneva. And when I return to Lisbon, I shouldn’t believe that I have ever been to Geneva.

I prefer classical statistics and Geneva.

Greg,

If you are trying to mock my arguments with your remarks in comment 114, you are doing a poor job of it.

Why do you think the example fails as a reply to Jamie’s argument in the second paragraph of [101]?

My conjecture is that there is an interesting clash in the attributes of our models; so, the idea behind the easyJet example was to try to pinpoint this clash in whatever comments it generates.

The question is this: my easyJet ticket number falls into a class that I happen to have a prior for, which is orders of magnitude lower than my prior for a hoax.

So I’m generalizing the argument in [101]: why isn’t this a recipe for skepticism?

Greg,

What is the prior probability of your getting a faulty or ‘hoax’ number with seven of the same digits, from the easyJet on-line booking system? For some reason, you failed to say anything about that probability in your set-up.

The Bayesian view is that whether you should believe you’ve been scammed, or are the victim of an error, depends on that prior probability. The classical statistics view is that the prior is completely irrelevant.

I think the classical view is not merely mistaken, but obviously mistaken. Priors matter; base rates matter; it’s irrational to ignore them.

Bryan, Dylan, Greg, Jamie…,

As the anti-Bayesian point takes shape, we have to make progress on the philosophical front. Bayesianism is the teeth of the Gazillion fallacy. But there must be a rotten intuition driving the bite.

I’ve offered (from the top of my head) two contextualist hypotheses. First (#66), there was this one:

(1) The more objectively improbable an event is supposed to be, the more reliable the reporter is expected to be.

It looks unsound as a principle: My eyes may be equally trustworthy, and may reasonably be thus taken to be, regardless of objective probability (whatever the preferred analysis of the concept). (I assume that one can be one’s own reporter. Is that objectionable? It doesn’t seem to be, if we’re looking at the rationality of a given belief about reliability.)

Then (#100), I tried something like this one:

(2) The more practical importance an event is supposed to have (for a given agent or group), the more reliable the reporter is expected to be.

Again, seemingly unsound as a principle: When I double check my marking of exams, the double-checking is always the same, regardless of whether this is the exam that will flunk them, and that doesn’t seem irresponsible.

Now, I’d like to know: (a) whether these contextualist assumptions are basic (or derivable from more basic assumptions); (b) whether they really are conceptually independent (as they seem to be); and, of course, (c) whether either is operative in the Gazillion fallacy.

(Needless to say, heaps of charity is expected from spectators of this high-wire act! It’s what you get at breakneck speed, folks!)

I skipped over Jamie’s reply. (I was excited about easyJet!) Jamie: your y in [109] is for undetected hoax. Here perhaps the subjectivism is getting in our way. By describing the algorithm, I am giving a method for driving down the objective probability of y to as low as one likes. That is, we can exploit the difference between x and x^m to ensure (i) that the tickets were generated randomly, (ii) that the output ticket is the winning ticket, and (iii) that a purported matching ticket indeed matches.

Now, in reporting the result, the Times will endorse the claim if the editors are satisfied that the system is transparent enough for them to see that (i)-(iii) are satisfied. And the Times can report this as solid fact.

So when I, a Times reader, read the results, I am not simply considering the low prior of the event and the higher rate for the Times being hoaxed. (Otherwise, bye-bye Geneva, and I’d stop reading most of the Science section.) Instead, I am considering the conditions under which a reputable paper could report a low-probability event of this kind. Given that methods exist for a problem of this kind, and that they are pretty easy to implement in a transparent manner, I accept the reports in the Times on the grounds that the Times can demand evidence that the error-checking was done correctly as a condition for printing the result.

Notice that this is what is behind the news stories about electronic voting machines. In this case, for some companies, there isn’t a sufficiently transparent system for cross-validating results; also, even for companies that do have good security, there is evidence that some election officials are not following the proper procedure.

Finally, I think I see too many dissimilarities between the lottery and Condorcet’s results to take your point. So, I’m not clear on your remark.

Hi Jamie,

I did omit the prior for a hoax ticket. And I should be more careful specifying each of the hoax events, since I consider a number of them. Each is presumed to be much more likely than the draw.

As for N-P versus Bayesian statistics: if we *have* statistical information about a parameter’s distribution, we should use that. So, it is not the case that “priors” are irrelevant on a classical line. But we don’t always have this information, and we shouldn’t make up numbers out of thin air. (You know this drill…I’m just writing it for the sake of the thread.)

I should also be very quick to add that I have a lot more sympathy for Bayesian statistics than I do for Bayesian epistemology.

Hi Claudio,

I’m not sure I can speak to your question other than to say that I think (b) yes, (c ) no, and that I am not sure about (a).

Ah, but one thought about your (2), which is getting a lot of press now: this principle should be modified to represent a ratio of risk to reward rather than a straight correlation between practical import and improved epistemic position. The idea is that our practical considerations affect what beliefs we are willing to act on to the extent that the risk-to-reward ratio of the actions we are considering is within some bounds. But this should be another thread…actually, it should be a paper.

A quick PS to #118:

When I mentioned the rationality of beliefs about reliability, I ignored the fact that some of those beliefs cannot be rational, because they can only be reached through epistemically circular reasoning. (Or maybe there is a sound Lehrer-style rationale for rationality in the problematic cases.) I thought I could safely ignore that issue here.

Greg,

So you get a ticket number, RRRRRRR. Oh, what the hell, let’s say it’s 7777777. The prior probability of getting that number is 1/36^7, or about 1.3 x 10^-11. Let’s call this small number ‘ε’.

To simplify, let’s suppose there is only one ‘hoax’ scenario; if you like, let it be the disjunction of whatever non-negligible hoax scenarios there are. As you say, this scenario is much more likely (antecedently) than the draw. To fix ideas, suppose it is one million times more likely: ε x 1,000,000.

Now (as Dylan’s comment forcefully shows) we do need one more probability. We need an estimate of how likely it is that 7777777 would be produced by the hoax scenario. I’m pretty sure that you chose the repeated-digits outcome precisely because it seems quite plausible or likely that a hoax would produce such a number. There are several possible reasons: hoaxes are supposed to be funny and the repeated digits strike us as funny; one ‘hoax’ possibility is not literally a hoax at all but some quirky equipment failure and one reasonably likely way of failing is just producing a long string of digits; probably some others, too. To proceed, I need an estimate. We could choose 1/36 (as you intimate) if we suppose that all ‘hoaxes’ produce some repeated character string, or we could choose a smaller value, say 1/1000.

I’ll continue if/when you confirm (!) that this is consistent with how you intended the easyJet example. I’m sure you can anticipate how the sequel is going to look.

p.s. The Condorcet reference was a throw-away, really. My point was that the mathematics of the jury theorem is irrelevant for an election in whose integrity we have no confidence. I don’t particularly want to pursue that thread — my idea was, like Bryan’s I assume, that the chance of ‘human error’ in the Times reporter transmitting the information, the Times reporter being hoodwinked into believing that the fair process really did take place, the Times reporter himself being corrupt, and so on, was plainly going to be very large compared with the infinitesimal chances in play. But maybe that’s wrong, and I have no stake in it, so I’ll leave it to Bryan to prosecute as he sees fit.

Ah excellent! I had actually thought of the class of single-character tickets, but let’s stick with 7777777.

So, now my beliefs about my trip to Geneva hang on the estimate I give to the likelihood of suffering a hoax given that I have a report of having ticket 777 7777.

Why, then, are you objecting to hanging your belief about the lottery on the estimate I am giving you of suffering a lottery ticket hoax given that you read of the lottery results in the New York Times?

Do we at least agree that the two cases line up now?

Thanks, Greg.

1. At first blush, I can’t see why a risk-to-reward ratio should affect judgments of *reliability*. Decision-theoretic considerations will, of course, be relevant to whether I should *act* on beliefs which are the outcome of less-than-ideally-reliable methods. I really don’t know what to do with your suggestion. But I should keep chewing on it.

2. I’m trying to get a firm grip on where your negative answer to (c ) comes from. Are you saying that the Gazillion fallacy is the product of Bayesian epistemology alone, not of any *misapplication* of its principles (whatever the ultimate merits of those principles may be)?

I said,

Greg says,

No, I meant the converse conditional probability from the one you’re talking about. If ‘H’ is ‘one of the hoax scenarios occurs’ and ‘T’ is ‘I got ticket 7777777’, then I’m asking for pr(T|H) and you’re talking about pr(H|T).

Of course, ultimately it’s pr(H|T) that I want, but I’m going to get it from pr(T|H) plus the other stuff.
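The calculation Jamie is setting up can be sketched with the numbers already on the table: his ε and the stipulated million-to-one hoax prior, plus his lower 1/1000 estimate for pr(T|H). The Bayes bookkeeping is the only thing added, and notice that ε cancels out of the leading terms, so the verdict does not hinge on the exact size of the ticket space:

```python
eps = 1 / 36**7            # prior for any particular ticket number
p_hoax = 1_000_000 * eps   # stipulated: hoax a million times likelier than the draw
p_t_given_hoax = 1 / 1000  # estimated chance a hoax yields exactly 7777777
p_t_given_fair = eps       # a fair booking yields 7777777 with chance eps

# pr(H|T) via Bayes: pr(T|H)pr(H) / [pr(T|H)pr(H) + pr(T|fair)pr(fair)]
num = p_t_given_hoax * p_hoax
den = num + p_t_given_fair * (1 - p_hoax)
p_hoax_given_t = num / den
print(p_hoax_given_t)  # about 0.999: on these inputs, bet on the hoax
```

Roughly, the numerator is 1000·ε against a fair-draw term of about ε, so pr(H|T) ≈ 1000/1001 on these (stipulated) inputs.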

No, but I think we’re about to!

Claudio,

I think my point is relevant to the Gazillion case. Back in comment #82 Jamie remarked that although we seem to have the intuition that the good sales case is more credible than the poor sales case, we also know that LLL’s chances of winning do not differ in the two cases. I took him to be implying that there is some sort of tension here, and I argued there isn’t. What I was saying is that the NYT report is not de re, that what it really says in all cases (including the Gazillion case) is something of the form ‘SOMEONE (as opposed to LLL) has, in a fair and normal way, won a 1/x lottery as one among y players’, and that the probability of THIS statement being true does depend on the value of y (when x is fixed), even though, as implied by Jamie’s remark, the probability of the quite different statement ‘LLL has, in a fair and normal way, won a 1/x lottery as one among y players’ being true does NOT depend on the value of y (when x is fixed).

Now, I think the relevance of my point to the Gazillion case is that if we cease giving the NYT report the de re reading and interpret it instead as making the different, quantified statement ‘SOMEONE (as opposed to LLL) has, in a fair and normal way, won a 1/x lottery as one among y players’, then we can easily see that the credibility of reports of this kind is merely a function of the value of y/x, and so the predicate ‘credible’ as it is used here is vague: if we alter the value of y/x, we can get cases where the report is clearly credible, clearly incredible, and there are of course borderline cases. So my answer to Bryan’s initial question of why it is that one would go as far as explicitly denying the report in the Gazillion case instead of just withholding belief, is the same as the answer to the question of why one would go as far as explicitly denying that Jerry (who has a million hairs) is bald while withholding belief as to whether Jimmy (who has only a thousand hairs) is bald. So, for example, we can compare the following cases:

[Case A]

The NYT writes: ‘Someone has, in a fair and normal way, won a 1/one GAZILLION lottery as one among 100,000 players.’

[Case B]

The NYT writes: ‘Someone has, in a fair and normal way, won a 1/one million fair lottery as one among 0.8 million players.’

[Case C]

The NYT writes: ‘Someone has, in a fair and normal way, won a 1/one million fair lottery as one among only SIX players.’

It’s clear that the intuitive credibility of the NYT report does differ across the three cases: Other things being equal, one would probably reject A-report outright (as I think at least Bryan would do), accept B-report (as Bryan seemed to suggest in #93), and (perhaps) withhold belief in C-report (I think Bryan agreed to something like this in comment #19). Moreover—and this is important—not only the INTUITIVE credibility, but the GENUINE credibility (i.e. the probability that the report is true) depends on the proportion of tickets sold as well. This is because here the reports do not make reference to a particular individual at all. True, the probability that a particular individual, say LLL, wins a 1/x lottery depends solely on the value of x, and not at all on the number of lottery players. But surely the probability that SOMEONE wins a 1/x lottery does depend on the number of players.
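Yu Guo’s closing claim checks out numerically. The 1 - (1 - 1/x)^y formula below is just the standard independent-draws calculation (not something stated in the thread), and the Gazillion odds are replaced by a float-sized stand-in of 1/10^300:

```python
import math

def p_someone_wins(x: float, y: int) -> float:
    """Chance that at least one of y independent tickets wins a 1/x
    lottery: 1 - (1 - 1/x)^y, computed stably for tiny 1/x."""
    return -math.expm1(y * math.log1p(-1 / x))

# Case B: 1/one-million lottery, 0.8 million players -> quite credible
print(p_someone_wins(1e6, 800_000))    # about 0.55
# Case C: 1/one-million lottery, six players -> incredible
print(p_someone_wins(1e6, 6))          # about 6e-6
# Case A stand-in: 1/10^300 odds, 100,000 players -> astronomically incredible
print(p_someone_wins(1e300, 100_000))  # about 1e-295
```

So the probability that SOMEONE wins really is governed by the players-to-odds ratio, as the comparison of Cases A-C suggests.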

I want to take a stab at a summary.

There are at least three methodological approaches that have been raised in this discussion, and they all clash with one another.

(i) With my (orthodox) Bayesian hat on, I am interested in the probability of a hypothesis H (e.g., ticket number n wins/ticket n is not a hoax) given data e (testimony from the Times/vague ideas about pranks, however each is parameterized), which I think I can get by the method of inverse probability. I’ve just interrupted Jamie’s windup for this punch.

(ii) With my Fisher hat on, I am interested in rejecting the no-effect hypotheses (H_o: ticket number n is produced by chance, L’s ticket matches n by chance) of my matching algorithm, full-stop, and I will insist that you cannot do this with the method of inverse probability. So, I am calculating the value of Pr(e|H_o), where H_o is the hypothesis of no effect. In the lottery example, H_o is the hypothesis that my error correction method generates m matches of n by chance, which most recently has been named x^m. I set a p value as high as I like such that observing m matches puts me in the rejection region for H_o. I will cite the evidential strength of my algorithm with its super-low p value as my grounds for accepting that the ticket reports are legitimate, and accepting them full-stop. I am exploiting discrepancies in the data of the matching algorithm to rule out hoax. The Times, assumed to be in a position to confirm these methods, reports this outcome.

In sum, for Fisher, the p-value counts as evidence against the null: the smaller the p value, the greater the weight of the evidence. There is no degree of belief; you simply believe it.
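A sketch of the Fisher-style rule in (ii). The compressed key space and the number of observed matches are hypothetical stand-ins; (1/N)^m is just the x^m quantity under the chance hypothesis:

```python
N = 36 ** 4       # hypothetical compressed ticket-ID space
alpha = 1e-10     # significance threshold, set as stringently as we like

def p_value(m: int, n_keys: int = N) -> float:
    """Pr(e | H_o): the chance of m independent random draws all
    matching the announced key, if chance alone were at work."""
    return (1 / n_keys) ** m

m_observed = 2
p = p_value(m_observed)
reject_null = p < alpha  # in the rejection region: reject chance/hoax full-stop
print(p, reject_null)
```

On this picture there is no posterior anywhere: the super-low p value is itself the grounds for accepting the report, full-stop.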

(iii) With my Neyman and Pearson hat on, I set up two competing hypotheses, Fisher’s null (H_o) and an alternative hypothesis (H_A). (This is an important difference to Fisher. You cannot assume that the Fisherian significance test simply treats the complement of the null as H_A.). Random sampling is a key assumption here, which seems okay for my matching algorithm since I can repeat this experiment and treat it like a quality control problem. So, the α’s and β’s are literally long-run frequencies of Type I and Type II error, respectively. I am not sure how to get those into an orthodox Bayes framework. (And to compare this with Fisher: Because he doesn’t have an alternative hypothesis, he won’t have the notion of a Type II error. It doesn’t make sense to speak of the power of a significance test, although he does talk about sensitiveness of an experiment. But set this aside.)

These sampling and frequency assumptions are more problematic for other parts of this example in an orthodox Bayes view, particularly the cases where there are error probabilities imagined (and needed for calculation) but where there is no fixed population with a sampling mechanism. So, it isn’t clear what the ε’s in Jamie’s model actually are denoting in the sense that they seem to be composed of two incomparable values: subjective degrees of belief, and long-run frequencies.

In sum: The method driving my algorithm example was (ii), which obviously clashes with (i). I think this is one example from a long list of reasons to be skeptical of Bayesian epistemology, rather than embrace Bayesian skepticism. But I jumped back in the discussion because I was interested in seeing if I could do something with (iii) in order to get a closer fit with (i). The idea was to see if the disagreement could be forced *into* an N-P model, where we could then point to the disagreement as a parameter. (I say this in hindsight; it wasn’t clear to me what I was doing until this morning.)

Now I am less sure of this N-P strategy for the following reasons. I am less sure about the move from Fisher to N-P, because p values are not error probabilities, and distinctions between the two seem to be entering into this; and I am less sure about the move from N-P to orthodox Bayesianism, because there are two types of probabilities under the ε’s, one of which is continuous, but the orthodox view, being a measure for degrees of belief, only has finite additivity.

Greg,

Thanks, that is a helpful summary.

I don’t actually see the “clash” (between your (i) and (ii)) in this case. As I said earlier, the difference here amounts to the Heterodox (= non-Bayesian) methods’ simply ignoring the base rate. And as you noted, here the base rate is not a known statistical sample but rather a credence; doesn’t matter, it’s still irrational to ignore it.

As to ε: it’s a credence, of course. It is related to long-term relative frequencies in the way that de Finetti showed. I don’t see why we need countable additivity for these examples, but in any case I doubt that there are any problems with countably adding credences unless there are (countably many) zeros involved. That seems to me to be a rather exotic worry in the present context.

So, can you let me know what you think is the prior probability for a hoax scenario’s producing the 7777777 ticket?

Yu Guo,

Ingenious, but I don’t see how the sentence could really be expressing that proposition. First of all, if Louie himself reads the sentence in the NYT, he will be pretty damned excited; if instead he were to read that someone won the lottery, he would be much less excited. Second, if Luckless Laura reads the sentence, she’ll be disappointed, whereas she’d be happy if she read that someone won. I’m sure it’s easy to come up with other tests, and I predict that the hypothesis that the sentence that includes Louie’s name really expresses the general proposition that someone has won the lottery will fail all tests.

The report may not be de re. I don’t dispute that point. But I’m quite sure that it isn’t merely the report that someone has won.

Hi Jamie,

There are several clashes. One is that Fisher will accept the rejection of the no effect hypothesis, full stop. The p-value is epistemic, but the resulting belief state is not a degree of belief pegged to that value. Also, rejecting the null doesn’t entail the acceptance of an alternative. Another is that to get (i) and (ii) to line up, you would have to interpret his p-value as ε, and that wouldn’t be correct. (This in effect is to interpret the p-value as an error probability, which is an equivocation I was guilty of earlier.)

I think what I was trying to do is to re-package the idea with error probabilities, a no effect hypothesis paired with the alternative that it was a genuine ticket, and then argue that there are actually two different tests that we are running together: there are the error rates for confirming the report with the matching algorithm, and there is another error rate for the newspaper, where we dismiss these elaborate hoax scenarios as relevant alternative hypotheses to the Times story being accurate. This was the rough target I was shooting for. But I think we’ll get stuck on the points I just mentioned.

I’ll address the rationality constraints in the next bit:

Honestly? I haven’t the faintest idea. I had originally thought I would play with that parameter, or make a point about how we don’t have any idea what this is and yet it drives the example. If I were forced to give a credal probability, I would assign this [0,1]. (I was swallowing my objections to point-valued probabilities for the sake of this discussion.) Note that there is a substantive assumption to orthodox Bayesianism, namely that imprecise credal probabilities indicate an elicitation problem rather than imprecision in the values of the model. There is no way within an orthodox Bayesian model to express this distinction. Hacking calls this the Dogma of Precision.

Which brings us to table thumping about irrationality. As we’ve seen in this discussion, there are serious consequences turning on what interpretation of probability you adopt, and how you use the tools of probability to model particular problems. The most compelling irrationality arguments are launched from within Bayesian constraints, not outside of them. Yet often those constraints are what is at issue. We know enough now, I think, to look into post-Bayesian models. My favorite is the theory of imprecise probabilities, which might appeal to your subjectivist leanings: take de Finetti’s theory of linear previsions, allow buy and sell prices on gambles to be distinct (so there is not necessarily a fair price), and you have the underlying idea. Mathematically it is very elegant, since linear previsions fall out as a special case. And you get to see a lot more of what is going on in the theory with this extra expressive capacity.

Correction to [128]: p values should be ‘low’; replace ‘high’ with ‘low’ throughout. High significance corresponds to a low p-value.

Greg,

I’m happy with ranges instead of point values, so maybe we aren’t really very far apart on that question. Things don’t change much with that relaxation of the Orthodoxy; for instance game theory remains pretty well untouched.

But if the range for the probability of getting 7777777 in a hoax scenario is as broad as [0, 1], then I say it is completely indeterminate what the probability is for a hoax scenario given all our evidence. In effect, there is then no answer at all to the question we’re interested in. This is possible, but very dull. I also think it’s unrealistic in the extreme.

‘If someone is going to play a trick on you when you purchase an easyJet ticket, then for sure they’ll give you ticket 7777777.’

Nah, that’s grossly implausible. What about,

‘Someone who’s going to play a trick on you when you purchase an easyJet ticket has no chance of giving you ticket 7777777.’

That’s somewhat more plausible, if by ‘no chance’ we mean ‘tiny chance’, but it’s not what I’d think. As I said, I’m happy with a range, but I’d put it at maybe 1/100 to 1/10,000. And you know what comes next….

Hi Jamie,

You’re eliciting, which is okay, but I really don’t know. And when I don’t know anything about a parameter, I represent my ignorance by [0,1]. (There is no gamble I would take on the position.) I thought (intuitively maybe?) that I would float this parameter to get the example to go in different directions. But once I saw that you were going to use a straight model, and no up-the-sleeve Bayes factors that I’d have to learn, I thought I’d try making the point about the clashes between the systems. I was looking for extra hooks to hang the comparison on, I think.

You can get interval-valued probabilities by assuming convex sets of precise probability functions, and under those conditions, yes, the orthodox view *can* be extended. You run into some philosophical problems motivating this move, however, particularly if probability is interpreted to be one’s degree of belief. (Is it then degrees of a belief?) Switching to statistics, going this way is what is involved in making a sensitivity analysis. So, you look for cases in which those assumptions are not reasonable to make.

However, the theory of imprecise probability is broader than this. It is a more expressive framework than interval-valued probabilities.

The really interesting thing, I think, is that statistics is a lovely mess. The orthodox Bayesian view offers hope for solving all our problems, if only we could get our problems into the correct form. And when we start trying to weaken the framework to accommodate the actual form our problems take, Bayesianism becomes as ad hoc as the classical methods it is purported to replace. The neat thing is that it isn’t the same mess.

Greg,

I must say that I find it absolutely incredible that your credence for a hoax producing 7777777 as a ticket number is best represented by the [0, 1] interval. This means, for instance, that you don’t think it’s more likely than 1B6L8E9. And if asked which is more likely, that the next hoax ticket will be 7777777, or that it will be some other character string, you’d just have to shrug. I don’t say it’s impossible or necessarily irrational to have such unresolved credences, but I do find it incredible.

Even the Heterodox have to believe we can make some, many judgments of more probable than, in the sense of credence, without having statistical information to back it up, don’t they? (Don’t you?) Otherwise probability and degree of belief will be completely unconnected, and that would make probabilities and statistics just about useless for helping us decide what to believe.

(Man this is fun!)

That is exactly right. I don’t believe it is more likely; I don’t believe it is less likely. And this is exactly the reasonable position to take given my evidence. I know nothing about how easyJet hoaxes go down, and neither do you. Each of us would have to make up a number, a groundless number, our baseless hunch, to get the calculation to go. If the model grinds to a halt on [0,1], it is because it *should* grind to a halt: we haven’t any evidence on which to base this credal probability, so we should not pay any attention to whatever constraints the model spits out from a bogus value for this parameter.

The second part of your question depends upon the view. The Savage axioms won’t be satisfied by the axioms of imprecise probability (except in the limit case where the lower previsions and upper previsions are identical), but we can (often) make comparative judgments within IP. The axioms give bounds on the natural extensions of our (imprecise) credal probabilities. They just allow noncomparable judgments to enter the mix.
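[Ed.: the lower/upper prevision idea can be illustrated with a toy computation. The credal set below, representing total ignorance about the hoax parameter, is invented purely for illustration.]

```python
# Toy illustration of lower/upper previsions: the expectation of a gamble
# is bounded by taking min/max over the extreme points of a credal set.
# The distributions and payoffs here are invented for illustration.

gamble = {"hoax": 1.0, "no_hoax": 0.0}  # indicator gamble on "hoax"

# Extreme points of a (convex) credal set over {hoax, no_hoax}.
# Total ignorance about the hoax parameter: P(hoax) ranges over [0, 1].
credal_set = [
    {"hoax": 0.0, "no_hoax": 1.0},
    {"hoax": 1.0, "no_hoax": 0.0},
]

def expectation(p, g):
    return sum(p[w] * g[w] for w in g)

lower = min(expectation(p, gamble) for p in credal_set)
upper = max(expectation(p, gamble) for p in credal_set)
print(lower, upper)  # 0.0 1.0 -- the vacuous [0, 1] interval
```

With no evidence at all, the bounds stay vacuous, which is exactly the "[0,1]" position defended above; narrowing the credal set narrows the interval.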

If I put my Kyburg hat on, which I am very fond of, I’d say that you’re right about there being no connection between probability and degrees of belief because there aren’t any such things as degrees of belief. On this view, we’d beef-up Fisher’s story in my (ii) and ride the thing until the wheels fall off. And it goes surprisingly far, actually. So it doesn’t follow that divorcing probability from degree of belief means you’ve made statistics irrelevant to either reasonable belief fixation or to decision. But it does mean that you cannot (and on this view, should not) view a probability model as a model for an agent’s belief states. This discussion could then go off into psychologism, and the relationship between formal languages and reasoning…I mention this only to, perhaps, tie together other discussions we’ve had which have touched on the relationship(s) between a formal language and some non-formal domain it is imagined to represent.

Ah, which reminds me: Olá Claudio! Your (1) in [125] I will grant; I think you’re right. Hopefully I’ll get a chance to work on this idea. The short of it is that the model maintains a strict conceptual distinction between practical concerns and epistemic position, but explains how our change in circumstances can change our demands on evidence. I want to then run it against the airport and train examples, and argue that there are coincidental pragmatic artifacts to the decision-theoretic apparatus Fantl and McGrath use and linguistic data that Cohen and contextualists use to make pragmatic features appear to encroach. It doesn’t fit in here. And it isn’t finished. (Any wonder why?!) (2) If you follow the N-P line, then you could get pragmatic encroachment and (c) would probably hold up. These are the guys that coined “inductive behavior”, after all, and claimed that “inductive inference” was a nonsense notion. They treat it as a decision problem. However, Fisher, our hero, would have no truck with this. We set the p-value as low as we want on *epistemic* grounds, not practical grounds. So, I don’t see contextual features seeping into the workings of the model.

To make a judgment about easyJet hoaxes, you have to have specific, statistical information about easyJet hoaxes? Come on, that’s ridiculous. You and I each have plenty of information about hoaxes, we know what kinds of jokes people like to make, we know plenty about what kinds of outputs erratic programs can generate, we know an enormous amount about the psychology of the everyday.

Who is more likely to be elected President of the United States: Barack Obama or Nancy Reagan or Barney Frank or me? No, sorry, I have no idea.

Let’s go back to my [96]. You’re playing bridge, and when you pick up your cards you find you’ve been dealt all of the spades. Do you think it’s a hoax?

I do. But my belief is based on, and entails, a vague but non-trivial judgment about the likelihood of a hoaxer deciding that giving me all of the spades would be a good hoax. What do *you* think?

Ah. So, how is it relevant? Suppose we find, using statistical methods, that a certain event has a .999999999 probability of occurring tomorrow. Why should we think it’s going to occur? Why should we behave as if it’s going to occur? Why should we bet on its occurrence if offered even odds?

I am trying to catch up here, since I just finished grading a stack of papers. Can someone verify that I’ve got the easyJet problem right?

You book a flight. They give you a confirmation number. It’s alpha-numeric, so there are 36 possible characters for each digit. The confirmation number is seven digits long. So you expect something that ‘looks random’, like 34ID35U. But then it tells you your number is 7777777. And that raises a few questions:

A. Should you believe that you have successfully booked a flight?

B. Should you believe that your confirmation number is 7777777?

C. If that indeed is your confirmation number, and you believe it is, do you know that that’s your confirmation number?

Is that right? I doubt it. But I’ll assume it is until Greg or Jamie tell me I got it wrong.
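[Ed.: a quick sanity check of the numbers in the setup above, assuming a uniform, independent draw over 36 characters in each of the 7 positions.]

```python
# Size of the confirmation-number space: 36 characters (A-Z, 0-9)
# in each of 7 positions, assuming uniform, independent draws.
n = 36 ** 7
print(n)      # 78364164096 possible strings
print(1 / n)  # chance of any one fixed string, e.g. 7777777 or 34ID35U
```

Note that 7777777 and 34ID35U are exactly equiprobable under this model; the asymmetry discussed below comes entirely from the hoax/glitch hypotheses, not from the random draw.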

For what it’s worth (ahem), I would suspect that I have successfully booked a flight (and I would act under that assumption) but the computer or whatever screwed up in giving me the confirmation number. I’d assume that there was a computer glitch and that’s why it spit ‘7777777’ back at me. I would be a bit worried that I had any confirmation number at all. The notion of a hoax wouldn’t cross my mind.

Since I’m suggesting that one—a person who sees how odd it is to get a number with a pattern that is so familiar and easy to identify—shouldn’t believe that one’s confirmation number is 7777777, you’d think I’d answer C negatively, but I’m not sure about that. Suppose you had the true belief, checked in at the airport, and, when the person waiting on you (pretend that you don’t use the check-in machines) asked if you knew your confirmation number, you said ‘yes’ and gave him or her the right number. Then, well, maybe you were right when you said ‘yes’. Maybe being blameworthy in having a belief doesn’t prevent that belief from being knowledge. It depends on how great knowledge is supposed to be.

Here’s WHY I think the answer to B is ‘no’: I have two explanations before my mind after I get my number.

1. It’s the right number.

2. There was a computer glitch and it’s not the right number.

I’d say that 2 is more likely than 1. So I wouldn’t choose 1. I might not choose 2 either, as I might think there is some other possibility I’m overlooking.

The reason 1 is so unlikely is that it is unlikely to get a number that has a pattern that is so easy to identify and so familiar. I know that that’s really vague. But isn’t it true anyway?

I take it, though, that that’s not what you epistemological techno-geeks are discussing 😉

What if the confirmation number was ‘HELLOPA’, or ‘HEYDUDE’ or ‘1234567’? Surely then we’d not believe the number? Or, we’d believe the number but not think it was uniquely identifying?

Thanks a gazillion, Yu Guo! I think I can nail it now — with a lot more than a little help from my friends! (As a bonus, in view of the latest erudite discussion between Greg and Jamie, I can feel like a Frank Capra character played by Jimmy Stewart — say, “Mr. Common Sense Goes to Probability Town” — the hillbilly underdog who pulls an amazing trick, saves Western civilization and rescues the damsel in distress!)

Jon asked two questions:

(a) Can you rationally believe that LLL cleanly won the Gazillion Lottery, as reported by the NYT?

and

(b) Can you rationally believe that the Gazillion Lottery has a winner, as reported by the NYT, if it sold only a *tiny* number of tickets?

These questions lead to conflicting perspectives on *the* fundamental question about the lottery: Super-hoax or superluck?

I think we have a satisfactory answer to (a): Yes, absolutely! The reasoning that opposes this answer is the Gazillion Fallacy: take the gazillion-to-one odds for each equiprobable ticket, notice how much larger the probability of NYT failure is and conditionalize to derive a low credence for LLL’s being the honest winner. As noted (by Greg, Dylan, myself, and others), this is a recipe for an uninteresting form of skepticism.

The tougher question is (b). It baffled Jamie (who pressed it), Yu Guo, me and others. Why is there an intuition that LLL’s honest win is much less credible if he is, say, the sole ticket buyer? The odds are the same as before and NYT’s reliability hasn’t changed. (Yu Guo further notices that our relevant intuitions have the penumbral characteristics of familiar vagueness situations. This doesn’t *explain* anything, I don’t think, but it does help establish the problem at the intuitive level.)

My answer is hinted at in my previous attempt (#100) to understand the relevance of Yu Guo’s proposal. There, fishing for a reason to think that (b) might have an interesting answer given that (a) already had a seemingly satisfactory one, I wrote:

“[In the sole buyer case] LLL’s luck is compounded by the fact that he alone found that lottery attractive, while ordinary lottery buyers somehow avoided it. So, the focus is on LLL. There’s gotta be something special about that guy!”

Jamie then (#101) reacted as follows:

“It’s now clear to me that it is indeed critical that the percentage of tickets purchased was tiny. That is what the difference between a ‘shocking coincidence’ and a ‘boring coincidence’ shows…It now seems to me that Bryan is correct. It doesn’t matter how good the error-checking mechanism is at the newspaper. We shouldn’t believe the report in the Gazillion lottery. Briefly, the reason is this. When we read the report, we know that something has occurred that had an antecedently tiny probability. Either the guy actually did win the Gazillion lottery, or else there was a reporting error, or else there’s a hoax. Now go ahead and make the chance of the middle one (that there was a reporting error) as small as you want, make it negligible. The last possibility, that there’s a hoax, is going to be much, much more probable, antecedently, than the first possibility (that the guy actually won). So with no further evidence, we have to believe the last possibility, that the lottery isn’t fair — it’s a hoax.”
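[Ed.: the three-way comparison in the quoted reasoning can be put in rough Bayesian form. Every number below is invented purely for illustration; the point is only the ordering of the posteriors.]

```python
# Rough Bayesian form of the three-way comparison: win vs. error vs. hoax.
# All priors and likelihoods here are invented for illustration only.
priors = {
    "win":   1e-30,  # stand-in for the gazillion-to-one chance of a clean win
    "error": 1e-9,   # reporting error, made "negligible" as in the quote
    "hoax":  1e-6,   # prior for a rigged lottery / hoax
}
# Likelihood of the observed report under each hypothesis (also invented):
# each hypothesis, if true, would produce roughly this report.
likelihoods = {"win": 1.0, "error": 1.0, "hoax": 1.0}

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}

best = max(posterior, key=posterior.get)
print(best)  # hoax -- it dominates even with a tiny hoax prior
```

However small the hoax and error priors are made, the hoax posterior swamps the win posterior so long as the win prior is astronomically smaller, which is exactly the argument being quoted.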

Maybe I’m wrong in thinking that his renewed conviction (in #101) had something to do with my remark (in #100) about LLL’s luck being “compounded” in the sole buyer case. Be that as it may, I expressed dissatisfaction with the reason he offered (“Briefly, the reason is…”) because it struck me as belated opposition to our answer to (a) which didn’t seem to add anything new. But maybe, like me at that point, Jamie couldn’t clearly distinguish (a) from (b) — or, better yet, didn’t know what to do with it. Now, under pressure from Yu Guo, we can.

There is no mathematical solution for the puzzlement induced by (b), nothing as sharply identifiable as a Gazillion Fallacy. The problem there – which makes the hoax hypothesis tempting – is simply that, if LLL is the sole buyer, he is much, much more lucky than if he were just the very lucky winner of a Gazillion Lottery that sold normally (suppose the lottery was a commercial smash on Earth and also on neighboring galaxies). He is not just ordinarily lucky (we need the oxymoron). He is *peerlessly* lucky! He won a lottery that normal lottery winners didn’t even bother buying! (Notice how, according to Yu Guo’s admonition, LLL starts to “blend in”, as it were, the more buyers the lottery attracts.) Now, if there is an intuition (as noted by Jon and Bryan) leading to a Gazillion Fallacy (though they may reject this analysis), there must be an intuition leading to the conclusion that nobody can be as lucky as LLL in the sole buyer case. But, once again, for the rational believer, there’s no trumping the NYT’s reliability a priori, without a shred of evidence for believing in a hoax. The hoax intuition in (b) has the same intellectually disreputable source as the intuition in (a). In fact, it’s the same GF intuition – with the added complexity that we can’t measure how much more lucky LLL is in the sole buyer case. (All we have is the same gazillion-to-one odds.) His luck defies mathematical understanding.

How does my Jimmy Stewart character fare in Probability Town?

(I haven’t reconsidered this post in light of the last four or five posts. I wanna give you my unadulterated Jimmy-style analysis.)

A quick note to Jamie:

If you want to be constrained by *meaningful* probability assessments, then yes. This is very nice, I think, because it can serve as a pivot point between Bayesian epistemology and Bayesian statistics. A statistician could say, well, okay, I don’t have data on this, so my model shouldn’t offer constraints. End of story. But this option isn’t open to the Bayesian epistemologist, so far as I understand the varieties of Bayesian epistemology. This isn’t a knock-down argument, but it is a very crisp way to showcase the difference between these two applications of Bayesian methods. Bayesian statistics, I maintain, is much more reasonable than Bayesian epistemology. The statistical variety views the interpretation of the calculus (as it should) as a type of model, a tool, and not a piece of mathematical psychology. A statistician can snap out of this make-believe world of degrees of belief; philosophers, on the other hand, seem to actually believe in their degrees of belief. (!)

I’m afraid I’ll have to catch up with the rest another time. This is a very stimulating thread. Hope to return soon.

As I reread my #140, I notice I haven’t given you any useful description of the Gazillion Fallacy after all. But I think there is a good one in Mike’s posts #20 and #33. And I would have been more careful with my description if I had paid more attention to his.

Except for the botched job in describing the fallacy, I don’t know what else is wrong with my story in #140.

Jamie,

At #130, you wrote:

Here’s a proposal (already hinted at in #98) to handle the problem you raise: we can replace talk of LLL with talk of ‘LLL’. That is, the NYT can be interpreted as reporting the following:

SOMEONE has, in a fair and normal way, won a 1/x lottery as one among y players and is named ‘LLL’.

And if I’m correct, the probability of this statement being true still varies with the value of y. Whereas, as you mentioned, the probability of the quite different (but misleadingly similar) statement:

LLL has, in a fair and normal way, won a 1/x lottery as one among y players

being true does not vary with the value of y at all.

I think this proposal can handle at least the two counterexamples you raised. LLL will surely be excited to learn that the winner is named ‘LLL’, and Luckless Laura will surely be disappointed to learn that the winner’s name is not ‘Luckless Laura’.

And there’s independent reason to accept this proposal. Suppose we speak not of LLL (which is quite a strange name), but rather of Jack Smith. What then would the NYT report *mean* to the winner, Jack Smith, if he reads the report? Would he interpret the report as saying that *he* won the lottery? Probably not, because he knows that there are hundreds of thousands of people out there who bear the same name. So I think the report can properly be taken to mean only that someone won a 1/x lottery as one among y players and is named ‘Jack Smith’. The case of ‘LLL’ is special only because the name is so unique that, to LLL, knowing that the name of the winner is ‘LLL’ is sufficient for knowing that he himself won.

Indirect proof that there is a Gazillion Fallacy:

Assume (for RAA) that there is no GF. Now, assume (for CP) that there is a Lottery Paradox (due to Kyburg). W.r.t. the LP, we know that its crucial probabilistic assumption exploits some version of what we may call “the Lottery Intuition in Epistemology” (“the LIE”, for short). Although there may be controversy as to how the LIE is best expressed, it must be something in the neighborhood of the following conditional: If S may reasonably believe that it is highly objectively probable that p, then S may reasonably believe that p. The epistemic worth of the LIE is itself a matter of controversy (I’m not getting into that here) and it may depend on how the LIE is formulated. Brush the controversy aside: There is a LIE and there is a paradox of the lottery (and both are important, and the former is at the heart of the latter). This is uncontroversial. And, if there is a LIE, it’s nowhere more powerful than when we consider the mother of all lotteries, the Gazillion Lottery (GL). Now, notice that, in order to deliver LP, we must assume that we *know* that the lottery has a winner. But, in the case of GL, some of us are tempted to conclude that there can be no winner (that it is more reasonable to believe that there is no winner; it must be a hoax or an error). This is absurd: the larger the lottery, the more suitable it is for LP (the stronger the LIE will be). So, any reasoning leading to the exclusion of GL as a lottery for LP must be fallacious. It’s the Gazillion Fallacy. So, if there is LP, there is GF. But, now, we can use our anti-GF assumption to derive the falsehood that there is no LP. Therefore, there is a Gazillion Fallacy.

Jamie:

There is a lot here to comment on in [137].

(1) It is true that I have information about hoaxes, but none of it is relevant to determining the value of the parameter in question.

(2) The 2008 presidential election: I have some evidence about elections that is relevant to the US 2008 election. The question is what kind of model to use to represent the salient features of my evidence bearing on the question you pose. On IP, I can follow an elicitation procedure that you are familiar with on expectations. The difference is that my buy rate does not necessarily equal 1 minus my sell rate. But I am not sure that this model is called for to answer your question: you are asking about what I believe, not what I plan to do.

My evidence is this: there are two features of serious modern presidential bids that are sufficient to sort out the candidates you mention. The first is a political and financial infrastructure for organizing and financing a campaign, and the second is the retail appeal of the candidate across the electoral map. Only Obama is sufficiently strong on both features.

How to model this? Propositional logic and resolution come to mind. Where do probabilities come in? I would think in observing (and hearing testimony) about what caused campaigns to fail.

This depends on what I know about the dealer. This isn’t a hedge; it goes to the heart of the matter. In the easyJet and card-dealing examples, we are trying to assess what we know (and what we do not know) about the uncertainty mechanism delivering our confirmation number, our hand. In the card case, dealers shuffle a deck of cards to (usually, poorly) approximate a random draw without replacement. And we know that there are sleight-of-hand artists who have mastered the illusion of performing a random shuffle when in fact they return the deck to the precise order they started with, or are able to keep track of a single card, depending upon the trick. So, is your dealer Clyde? A farmer and no-nonsense friend since childhood? Or is it his son Clem, who doesn’t like you, and has Penn and Teller posters on his bedroom wall?

I don’t deny that there are cases in which we can assign credal probabilities. I deny that we can always do this. And I think that a good model of uncertainty should be able to distinguish between the uncertainty of our evidence and our evidence of uncertainty. Orthodox Bayesianism does not do this.

Ugh, that’s a pretty long answer. Can I plug my and Bill Harper’s edited collection that is coming out soon? I would need to lay down Evidential Probability before beginning to sharpen your first question (about thinking the event will occur) into one that the theory can either (a) answer, or (b) rule as malformed. The next question is fairly easy to address once we have the machinery in place. The third question is another long story, and is one that isn’t as clear to me yet, so it would be even *longer*.

Bryan,

The main idea behind the easyJet example was to start at a crazy endpoint and work backward. But the crazy scenario was similar to the lottery example in that the prior for hoax was higher than the prior for drawing that type of confirmation number. The idea then was to see where the most reasonable place was to pin down our disagreement(s).

Ah! Your [139] raises the remote possibility of easyJet insulting its clients. In many languages! (This is Europe, after all.) How then to avoid the problem of generating 7-letter words or phrases from *any* language?

The answer is: take the vowels out of the character set.
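[Ed.: if vowels really are dropped (an assumption; easyJet’s actual character set isn’t specified here), the space of numbers shrinks only modestly.]

```python
import string

# Drop the vowels from a 36-character alphanumeric set: a sketch of the
# fix suggested above (easyJet's real character set is not specified here).
full = string.ascii_uppercase + string.digits       # 36 characters
no_vowels = [c for c in full if c not in "AEIOU"]   # 31 characters remain

print(len(no_vowels) ** 7)  # 27512614111 strings still available
```

So the fix costs about two-thirds of the original 78-billion-string space while blocking pronounceable words, a cheap trade for a booking system.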

PS to 144: I thought I’d spare you the redundancy and omit the crucial (and obvious) assumption that you have no reason — other than the LIE itself — to think that there can be no winner, that the report is either a hoax or an error. That has been repeated ad nauseam in this thread.

Hi.

I think I’ll never catch up now. I’ll give a few brief comments. As far as I’m concerned, at this point everyone is excused from answering just by feeling tired of the whole thing.

Bryan,

I thought everything in [138] was right except for this:

No. Its being so familiar can’t make it especially unlikely. To see this, consider a promotional event: you get a free ticket worth $100 if its exact alphanumeric sequence is drawn in a genuinely random drawing. Would you rather have X46M9E2 than 7777777?

(I say no, of course. They are equally likely, equally ‘good’ tickets to have in the promotion.)

Back to the easyJet example: when you do get the really salient, familiar patterned sequence as an allegedly randomly assigned number on-line, you smell a hoax (or a glitch). I think you’re quite right to be particularly concerned, maybe phone the company to make sure — something you otherwise wouldn’t bother doing — things like that. My view: this is the right thing to do, because the chance of a very salient, familiar pattern given hoax is relatively high. (The chance of X46M9E2 given hoax is minuscule, at least as small as, and presumably smaller than, the chance of getting that sequence in the non-hoaxy way.)

In non-quantitative (and I hope non-geeky) terms, the idea is that in the case of the familiar pattern, there is a pretty good alternative explanation to “It was just random”, whereas in the dull, random-looking sequence case, there is no such decent alternative explanation, so you have to stick with the hard-to-believe “It was just random”. (This is my “Shocked Bridge Player” explanation — is that analogy clear?)
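[Ed.: the “pretty good alternative explanation” point is just a likelihood comparison. The hoax likelihoods below are invented for illustration; only P(sequence | random) is fixed by the uniform model.]

```python
# Likelihood-ratio sketch of the "alternative explanation" point above.
# P(sequence | random) is fixed by the uniform model; the hoax
# likelihoods are invented for illustration.
p_random = 1 / 36 ** 7            # any fixed 7-character string

p_7777777_given_hoax = 0.05       # hoaxers favor salient patterns (assumed)
p_x46m9e2_given_hoax = p_random   # no reason for a hoaxer to pick this one

# For 7777777, "hoax" explains the data far better than "random":
print(p_7777777_given_hoax / p_random)   # an enormous likelihood ratio
# For X46M9E2, the two explanations tie, so "it was just random" stands:
print(p_x46m9e2_given_hoax / p_random)   # 1.0
```

Whatever priors one attaches to hoaxes, a likelihood ratio in the billions for the patterned sequence is what makes the alternative explanation “pretty good” there and worthless for the random-looking one.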

Greg,

“(1) It is true that I have information about hoaxes, but none of it is relevant to determining the value of the parameter in question.”

Of course it’s relevant. We know that people sometimes pull off hoaxes, we know that they’re often tricksters and like jokes, we know that 7777777 is a better joke than MF68U13. Those things are quite relevant.

Claudio,

No, that’s not true. He is equally lucky in each case. He has a one in gazillion chance of winning in each case.

Try this with a very small lottery – say it’s an in-house raffle with one hundred tickets and Louie buys just one ticket. That makes the intuitions more reliable, and it’s clear in that case that Louie is exactly as lucky if he buys a ticket, nobody else tries, and Louie wins, as if all one hundred tickets are sold, Louie buys exactly one, and he wins.

Jamie,

Have you read the whole paragraph in #140? Please, indulge me and take another look, will you? I was working with the hypothesis that there is a substantial intuition in this thread that the report is *less* credible if LLL was the sole buyer. If it weren’t for that intuition, there’d be nothing left to explain (just about). The irony is that you’ve been credited with pressing the point! Do I miss it?

Claudio,

Yes, the report is less credible if LLL was the sole buyer. That is to be explained, right?

But he isn’t luckier in that case. That’s why we want an explanation.

From [149]:

We do?!

This is the Bayesianism talking. I don’t have a degree of belief for seeing 7777777 given a hoax, and neither should you. Neither of us has grounds for assessing this probability.

Greg,

I’m perfectly content to leave it at this:

If our ordinary experience of the world gives us no reason at all for thinking that giving someone a ‘hoax’ ticket number of 7777777 is more likely than giving someone a ‘hoax’ ticket number of MF68U13, and furthermore no reason to think that it’s less than .99999 likely to happen, then you’re right. Otherwise, I’m right.

…with this modification: neither of us knows enough about the specific uncertainty mechanism at hand, which is the easyJet booking system. My ordinary experiences of the world give me no reason I can use to assess the probabilities necessary to effect this inverse inference.

Jamie,

Here’s our old divide again: super-hoax or superluck? I don’t think we have to dwell on definitions of “luck”. There’s this extraordinary event (the sole buyer wins the Gazillion Lottery). Three alternative explanations have been considered: error, super-hoax, superluck. But one might add a theory of paranormal powers to that list. Now the choice becomes: believe any of those or suspend judgment? Given the elements of the case, the testifier’s reliability is the clincher for my choice: call it “superluck”. No other alternative seems rational in the circumstances. You’ll be either involved with a Gazillion Fallacy or a theory of paranormal powers or unwarranted skepticism. Until I see what’s wrong with my #140 and #144, I’m stuck. Not tired; just stuck!

Sorry, Claudio, I don’t understand what you’re saying in [156].

I told you what was wrong with [140], didn’t I? The winner is no luckier if he was the only buyer than if he was one of trillions of buyers. I thought this would be obvious once pointed out, and I’m not *sure* you mean to be disagreeing.

So we want an explanation of why the report ‘feels’ more credible in the case in which there are many buyers. I think we agree on that. (Maybe by calling him ‘luckier’ you mean to be saying no more than that it ‘feels’ incredible if he was the only buyer? In that case I agree with you.)

My explanation is the one that parallels the Case of the Shocked Bridge Player. It’s Bayesian. I think you don’t like it, but I can’t tell for sure. I don’t know what explanation you like for why the report feels more credible in the case in which there are many buyers.

I will admit off the bat that I haven’t yet had the time to read ALL the comments on this problem, but I write just to say that the case of the report of an improbable lottery winner is somewhat closely analogous to a real-world problem that has been on my mind for some 30 years: the problem of whether the New Testament is a reliable document. I am an ex-Christian, but I continue to read in Christian apologetics, which delves into questions of the reliability of historical reports of unusual happenings all the time. Now, as you no doubt know, the reliability of the NT has been endlessly debated, and as a result there is an incredible diversity of opinion about whether we can trust the history found in the New Testament. In my judgment of the debate, the arguments all come down to matters of probability, matters much like the case of the reports of a lottery winner under inspection.

Concerning the NT, we have reports that many extremely unlikely events happened quite a long time ago, though we cannot quite be sure how extreme the unlikelihood of each of these events is. (This line of thought immediately runs into the difficult issue of how to assess the probability of near-impossible events.) Can we believe the NT’s reports of the actuality of these events, or of any one of these events, say the most extreme of them all, the resurrection of this man named Jesus of Nazareth? This real-world argument, which is an extremely important matter if it has any chance of being true (any chance at all?) and a big waste of time and energy if it is false or almost surely false, hinges on probabilities much like those under discussion. It is interesting to contemplate, as well, how the discussion might take on a very different feel when we try to assess the lottery case in the light of an analogy to the issue of New Testament reliability (rather than the various abstract test cases offered throughout this discussion). Suddenly, the discussion can become rather strongly charged with emotion, as is often the case in debates in religious apologetics. Thanks for listening.

Ben,

Thanks for your comments on this.

I would stress that the issue is not simply judgments of probability, but also the *interpretation* of probability. The core debate that I participated in here, which Jamie, Bryan, Claudio, Dylan, and others were gracious enough to act out with me, concerned how to interpret probability. There is no arguing over the theorems of the Kolmogorov axioms, but there is plenty of controversy over what measure theory has to do with uncertainty, and with reasoning under conditions of uncertainty. Plenty of controversy, in other words, over how to apply probability.

I think you are right to point out that many serious questions are similar in form to the topics we are discussing here. Most of us have ‘real world’ topics that motivate us when working and arguing about toy examples like those above. But our interests, combined with the complexity of life’s problems, drive us to look at toy examples that share the same structure as the problems which interest us. This bit of abstraction removes most (all?) of our emotional attachments, and allows us to isolate tricky features of problems that we might otherwise have missed. The hope is, then, that we can learn from these exercises and apply them in the real world.

Greg: I didn’t mean to imply in any way that there is anything cheap or philosophically wrong or silly about looking at “toy” examples, as you appear to have thought I was implying. In fact, I believe exactly the opposite: that it is almost always wise to step away from our real-world problems by looking at test-case examples and analogies, mostly because we often need to get some distance from the emotionally charged nature of many of the real-world issues that we wish to solve or get answers to. I was just pointing out a real-world case of some urgent importance that might be of some interest to those involved in this discussion. On a side but related issue, I’m also interested in the fact that all real-world issues are of urgency only to SOME human beings and certainly not to all — which is related to William James’s ideas about what notions are “live options,” those ideas that attract one person and not another quite so strongly (and often not at all), as discussed in “The Will to Believe”. On yet another separate but related issue, it’s also highly interesting to me how our epistemic decision-making changes when the consequences of our decisions become real, when it is a real decision that people are trying to make, which, I suppose, is also a Jamesian issue. Thanks for the comment. I’m new to this blog, or whatever it might be called.

Thanks, Jamie.

If we must disagree, let’s make sure we’re not *confused* about what we’re disagreeing over — if confusion is avoidable. (This is one of the fundamental principles of my humble existence.) I think confusion *is* avoidable here, and your post #157 makes me see I can do better.

Consider these two points:

1. To my mind, your Bayesian response to the Gazillion Lottery case, developed in posts #109, #113 and #129 (those are the main ones, as far as I can see), hasn’t gotten past these two obstacles: (a) Greg’s anti-Bayesian case in posts #114, #119, #131 and #136 (other interesting posts by the two of you are largely redundant as regards the main contention); (b) your Bayesian explanation, even if successful, yields no answer to question (b) in my #140. There’s no Bayesian explanation of why the report “feels” less credible in the sole buyer case: the Bayesian answer to questions (a) and (b) is the same.

2. The United Federation of Planets is broke. Their hope for financial rescue lies in the newly created Gazillion Lottery. The lottery is open to all and only the trillions of highly evolved humanoids currently residing in the galaxy. The drawing will be held at the space station Babylon 5. Now, we have these two scenarios:

(i) The lottery is the talk of the galaxy. Almost all tickets were sold. And the winner is LLL, of Peoria, Illinois.

(ii) The lottery is a complete commercial failure. (The experts speculate that, after centuries of exposure to post-Kyburg epistemology, the LIE (see #144) is now permanently hard-wired in the cognitive apparatus of highly evolved humanoids and they are finally letting their actions be guided by it.) Only one ticket was sold. On the day of the drawing, lottery officials and reporters gathered at Babylon 5 to go through the motions. To everyone’s utter amazement, there was a winner.

We then have the problem: Is there anything special about the winner, LLL, in (ii)? Conspiratorial theories abound, but not a shred of evidence is forthcoming. One thing is for sure: LLL is *different* from his fellow humanoids. In (i), we call him “lucky”, but he seems completely ordinary. “Lucky” is just what we call an ordinary lottery winner; “lucky” is what you are when you beat the odds, no mystery implied. In (ii), he is just as “lucky”, but he’s also that extra something that gave us the eye-popping experience. I’ve called him “superlucky”. If we have no reason to suspect a hoax or clairvoyance — not a shred of evidence of any of that — why not call it a case of “superluck”? “Superluck” is just a word for (measurable) luck plus whatever it was that made the guy buy his ticket. Again: he is *not ordinary*. Ordinary humanoids didn’t buy tickets. But thinking that he is not ordinary is perfectly coherent with thinking that he cleanly won the lottery. Granted, we now have a new problem — the mystery about LLL. But here’s the important point: We cannot solve that problem for free! Reasoning from the odds of what is, in this particular case, a groundless conspiratorial theory is no alternative for the rational believer. We don’t even have enough on which to hang a suspension of judgment.

Maybe the problem here is, at bottom, the one Harman got from Kripke (or some variant of it): Is believing that LLL is the winner consistent with looking into the possibility that he is a hoaxer?

Claudio,

I don’t follow (2) at all. The last few sentences of the penultimate paragraph (“But here’s the important point…”) I find completely mysterious. Maybe someone else can answer your questions?

As to (1),

The Bayesian answers to (a) and (b) are the same? Maybe, if you mean the answer to both is “no”!

More important, there is indeed a Bayesian explanation of why the report feels less credible in the sole buyer case. Maybe nobody’s given it yet — but why would you think that there is no answer? (I was under the impression that someone had given the Bayesian explanation, maybe myself, but the thread has become so long that it’s a chore to sort through it to find out.)

Jamie,

Please, disregard the sentence

“But here’s the important point: We cannot solve that problem for free!”

I neither like it nor need it. (We don’t get much chance for editing here, as you know.) Any better now?

(I should have written “conspiracy theory”, not “conspiratorial”. Sorry!)

I don’t generally reject newspaper accounts based on judgments of the probability that they are true. In fact I don’t think I ever do this. I judge newspaper accounts based on their internal consistency and their consistency with things I already take to be fact.

So I do not reject the newspaper’s account. Really, for newspapers to fulfill their function, we can’t usually judge them on a probabilistic basis. Improbable events are exactly the sort of thing we expect _would_ and _should_ be recounted in them.

But my points here are less about your point than they are about the usefulness of the notion of “newspaper” in the illustration you give.

-Kris

Do any of you believe the results of this lottery reported today in the New York Times?!

Greg, I think the example nicely illustrates the core point that the ex ante statistical improbability of an outcome, without more, just isn’t a defeater for the ex post belief that the outcome has in fact occurred. Here the relevant information is (a) the error rate in whatever method is used to verify the outcome of the lottery and (b) the error rate in the newspaper’s reporting of lottery results.

I don’t think those are relevant. The reason is that the football-score lottery result is no more likely given an error (of production or reporting) than given a correct production and report. Now if the hoax rate is non-negligible, that would be relevant.

Is the ‘hoax-rate’ non-negligible? Is the ‘fraud-rate’ non-negligible? Do you, dear Bayesians, believe that 4239 was the fair, winning lottery number of the Ohio lottery Saturday last?

I do.

Is the ‘hoax-rate’ non-negligible? Is the ‘fraud-rate’ non-negligible?

I’m sure they’re negligible. What do you think?

Do you, dear Bayesians, believe that 4239 was the fair, winning lottery number of the Ohio lottery Saturday last?

I do, and I explained why. I’m sure you understood the explanation, so you must already have known the answer to your question.

I’m also pretty sure you understand the difference between the lottery reported in the Times and Jon Kvanvig’s case.

Still a cute story, though.

Lottery ball drawing frauds are not unknown. (I assume the Ohio lottery Pick 4 is done with a ball machine; I haven’t confirmed this assumption.) One fraud method is to use weighted balls. I remember reading about the rigging of a pick 3 game in one of the US state lotto ball drawings many years ago, but a quick spin on the web failed to turn up evidence to confirm this memory. Anyway, …

I do think there is a negligible chance of saying something that hasn’t already been said here about why there is no fundamental difference between the Ohio lottery example and Kvanvig’s first of two examples.

Jamie, in your post # 167 I think you are focusing on the wrong issue (or, anyway, a different issue than the one I’m focusing on). If you are trying to decide whether to believe a newspaper report that the result of the lottery was X, the pertinent thing to focus on is the probability that the report is correct. This is very different from the probability that the particular reported result would occur. The error rates in question are highly relevant to determining the probability of the correctness of the report. By contrast, obviously, they are not relevant to determining the probability that the result reported would occur. By the same token, the mere fact that “the football score lottery result is no more likely given an error (of production or reporting) than given a correct production and report” does not establish that these error rates are irrelevant.

Hi Stephen,

The lottery result is the football score; call this result F. The lottery procedure could contain an error, E. We have some evidence, F, and we want to know pr(E | F).

But if the error is not a hoax but some kind of malfunction, then pr(F | E) = pr(F). (It is no more likely that a malfunction would produce that salient outcome than that a perfectly functioning lottery would.) And this in turn means that pr(E | F) = pr(E). That’s just a mathematical fact.

Of course, if the error rate is very high, then you’ll think an error in this case is very likely, but that has nothing at all to do with the evidence; it has nothing to do with the oddity of the football score result.
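The arithmetic here is easy to check. Below is a minimal Python sketch of the point; the function name, the 0.01 error rate, and the 10^-4 chance of the football-score result are all made-up illustrations, not figures from the thread:

```python
# Bayes' theorem: pr(E|F) = pr(F|E) * pr(E) / pr(F).
# If a malfunction is no more likely to produce the salient result F
# than a working lottery is (pr(F|E) = pr(F|~E)), observing F leaves
# the probability of an error unchanged.

def posterior(prior, p_F_given_h, p_F_given_not_h):
    """pr(hypothesis | F) by Bayes' theorem."""
    p_F = p_F_given_h * prior + p_F_given_not_h * (1 - prior)
    return p_F_given_h * prior / p_F

prior_error = 0.01   # hypothetical error rate of the lottery procedure
p_F = 1e-4           # chance of the football-score result, error or not

# Malfunction: F is equally likely either way, so the posterior
# equals the prior (0.01, up to rounding).
print(posterior(prior_error, p_F, p_F))

# Hoax: a hoaxer is far more likely to produce the salient result,
# so the hoax hypothesis gains probability from the evidence F.
print(posterior(prior_error, 0.5, p_F))
```

The contrast between the two calls is the contrast between E and H: the evidence only moves a hypothesis whose likelihood for F differs from the background chance of F.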

Now try it using H (the hoax) instead of E. Since pr(F | H) is higher than pr(F), H gains probability from the presence of the evidence, F.

Jamie, granting what you say here for the sake of argument, still I think you are focusing on a different issue. There are two questions in play:

(1) What is the probability that the result F would occur?

(2) What is the probability that the report that F occurred is correct?

These are very different questions. Question (2) concerns the reliability of the reporting mechanism, not the statistical probability of the result reported. This is a little tricky, because the content of the report is that a certain result F occurred. Still, the probability of the correctness of the report and the probability of the occurrence of the result reported are not the same thing. For example, to assess the probability of the correctness of the report, you have to know how often the newspaper screws up the lottery results. But you don’t need to know this to assess the probability of the occurrence of the reported result.

Likewise, if I have a visual belief that a certain event A has occurred, the probability that this belief is true is a function of the reliability of my vision. You have to distinguish this from the probability of the occurrence of the particular event that is the subject of my belief.

A somewhat analogous distinction is helpful for making sense of situations in which a person who is extremely reliable in a particular domain (e.g., Frege in mathematics) nevertheless believes something self-contradictory (e.g., the axiom of abstraction). The belief, qua belief, might have a very high probability of correctness, but the probability of the proposition believed is zero.

Hi Stephen,

Your (1) and (2) are different questions, indeed. Your (2) is this probability, I think: pr(E | F). That’s the probability of an error event given that we have the evidence that the football score was reported as the outcome of the lottery.

Do we agree about that?

If so, then plug in the rest of my argument above (in [172]).

By the way, why is it so difficult to type text into the Certain Doubts comment box? Or am I the only one having that problem? I’m resorting to writing the comments in another application and cutting and pasting them in. But if I then decide to edit even a little, it’s just agonizing.

Jamie, if we interpret F as I thought you did in #172, then F is “The result of the lottery is F.” In that case, we might interpret R as “The newspaper reports that the result of the lottery is F.” Then question (2) asks for the value of Pr(F/R). If E is “The newspaper errs in reporting the lottery result,” then Pr(E/R)=Pr(-F/R). In general, we should not expect Pr(E/R) to equal Pr(E).

Okay, I think I created terminological confusion, but now I’m on board with your terminology.

We need a letter for the report the newspaper actually made – a particular one. So I’ll use R* for that one. Then

Pr(R*|E) = Pr(R*)

Agreed?

Hi Jamie, it is tempting to think that these probabilities are equal, but they are not. One may suppose that if the newspaper errs, there’s no particular reason why it would make one erroneous report instead of another. Thus, in erring, the newspaper would select at random from all the possible reports there are, and the probability that the newspaper would select a particular report R* (to use your notation) would simply be the antecedent probability of R*. The mistake here is that if the newspaper errs, then (depending on the type of lottery) one or more possible reports are obviously excluded, namely, the correct report(s). Thus, the newspaper cannot select its erroneous report from among all the possible reports there are, and one cannot conclude that Pr(R*/E)=Pr(R*).

In any event, all of this is tangential to the core point that one must distinguish between a probability attaching to the report and a probability attaching to the event reported. And if one is trying to decide whether to accept the report, the former probability, not the latter, is the one to focus on.

Stephen,

A fair point. Okay, so let’s talk about the possibility that there is a ‘malfunction’, M, instead of an error. When a report is a malfunction, the signal of the actual lottery outcome is lost and the newspaper can then get the report right only by an amazing coincidence. Now pr(R*|M) = Pr(R*). Right?

It isn’t tangential. For if pr(R*/M) = Pr(R*), then pr(M|R*) = pr(M).

Jamie, my apologies for being so late in responding. I agree that, under suitable assumptions, Pr(R*/M)=Pr(R*). But I don’t see the relevance. The core question (or, at any rate, what I regard as the core question) concerns the value of Pr(F/R*). This value may be very high, despite the assumed equality of Pr(R*/M) and Pr(R*). Again, to my mind, the most important lesson of both the original scenario that spawned this whole thread and the football-score lottery is that the perceived reliability of the source of the report trumps the antecedent statistical improbability of the reported result. And this makes sense, because we’re really discussing two different things: the probability that the report is true versus the probability that the event reported would occur.

Oh, hi again.

Good, so you agree that pr(R*|M) = pr(R*). But you think this is irrelevant.

But I explained the relevance. When pr(R*|M) = pr(R*), then pr(M|R*) = pr(M). So the evidence (R*) is no evidence at all of a malfunction. Seeing the football score reported does not change the probability of a malfunction in any way.

By the way:

That’s not right; if you generally allow the reliability of the source to ‘trump’, you’re committing a well-known fallacy: ignoring the base rate.

Jamie, right, the report is not evidence of a malfunction any more than, say, a Geiger counter reading would be evidence that the Geiger counter is malfunctioning. The question is whether the report is evidence of the event reported. That is a question of the reliability of the reporting mechanism. The report can be extremely good evidence that the reported event has occurred, despite the equality of Pr(R*/M) and Pr(R*). Moreover, in response to your last point, the reliability of the reporting mechanism is not primarily a question of the “base rate” at which the reported event occurs.

Hi Stephen

Great! I’m happy. Because in Jon’s original example, the report is evidence of malfunction. This is the critical difference between the football lottery and Jon’s example. (Well, there are two, but that’s one.)

Uh.

I thought Jon’s example simply took it for granted that the report is evidence of the event reported; nobody has questioned that, anyway. My understanding is that the question is whether it could be enough evidence to overcome the spectacular base rate (the immense prior improbability of the event reported).

I don’t know what that means. If by ‘reliability’ you mean the probability of the report given the event, then the base rate is irrelevant. If you mean the probability of the event given the report, then in Jon’s example the base rate is crucial.

Hi Jamie,

Actually, under either interpretation of reliability that you identify at the end of your post, a high degree of reliability is compatible with any value for Pr(F). So if you know on independent grounds that the reliability of the reporting mechanism is very high (or you’re assuming that in the problem), then you don’t need to know the base rate at all. Now, perhaps your argument is that we can’t empirically verify the reliability of the reporting mechanism without knowing Pr(F). But I don’t think that is correct. Presumably we could estimate reliability by evaluating the accuracy of an appropriate sample of reports. (By the way, I think of reliability here as Pr(F/R*), but that’s somewhat against the grain.)

I disagree that the report in Jon’s original example is evidence of a malfunction in your sense. If you have an argument here, I’d be curious to see it.

Hi Stephen,

I don’t understand what you’re getting at.

You said “the reliability of a report is not primarily a question of the ‘base rate’ at which the reported event occurs.” I disagreed, for one interpretation of ‘reliability’. Now you are saying that if we already know the reliability, then we don’t have to know the base rate … to know … what?

Let E mean that the event in question occurred, and R mean that the news reported that the event occurred. Now suppose you know on independent grounds that the newspaper is reliable, meaning that pr(R|E) is high, and to keep it simple suppose pr(R|E) = pr(~R|~E). What you want to know is whether you should believe that E, given that R: pr(E|R). But you cannot determine this without determining pr(E), the base rate; furthermore, with pr(E) you can determine what you want to know.

Oh, okay. Well, when there is no malfunction, the chance of the report is 10^(-1,000,000,000). What is the chance of the report when there is a malfunction? The report is that Lucky Louie has won; there are relatively few reports possible given a malfunction, so I suppose the chance of the report given a malfunction is considerably higher than 10^(-1,000,000,000). I’m pretty sure this is what Jon intended (this is why it’s important that only 100,000 tickets were sold), but it’s up to the readers to decide what they think is plausible. So, if you think that in the story the chance of the report given a malfunction is 10^(-1,000,000,000), I won’t dispute it.
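The base-rate point can be put in a few lines of Python. This is a sketch only; the 0.999 reliability figure and the priors are hypothetical numbers chosen for illustration, not numbers from the thread:

```python
# pr(E|R) by Bayes' theorem, assuming the symmetric reliability
# pr(R|E) = pr(~R|~E) = reliability.

def pr_event_given_report(reliability, base_rate):
    true_pos = reliability * base_rate               # pr(R|E) * pr(E)
    false_pos = (1 - reliability) * (1 - base_rate)  # pr(R|~E) * pr(~E)
    return true_pos / (true_pos + false_pos)

r = 0.999  # a very reliable newspaper (hypothetical figure)

print(pr_event_given_report(r, 0.5))   # common event: posterior near 0.999
print(pr_event_given_report(r, 1e-3))  # rare event: posterior falls to about 0.5
print(pr_event_given_report(r, 1e-9))  # tiny prior: posterior around 1e-6
```

Holding the reliability fixed while the base rate shrinks drives pr(E|R) toward zero, which is why knowing pr(R|E) alone cannot settle whether to believe the report.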

Jamie (and Stephen),

There are at least two questions that we might want to distinguish here. Suppose that White believes that E based on the newspaper report R, and Black believes that E based on the fact that Pr(E|R) is high. The two questions are: “Is White justified?” and “Is Black justified?” I think it can be argued that White is justified, and that whether Black is justified depends on Pr(E) (and whether he knows it).

I didn’t read through the whole thread, but I thought that maybe that difference between White and Black might be part of what is going on here.

Hi Juan-

A quick question/remark: what interpretation are you giving to Pr(E|R)? On an orthodox reading, Pr is simply Black’s degree of belief that E given R. The fact then is a psychological fact about Black, not a fact per se about newspaper reports or the events they report on.

If Pr is thought to be objective in some sense, as I think it should be, then we can ask whether there is such a thing as a distribution for the event E given the report R: these are propositions, not propositional variables.

If we instead are treating E and R as variables for types of events and types of reports, respectively, then we are no longer working under the standard model. Then sorting out how to model this comes prior to the issue you raise here. Put another way, sorting out how to model Black’s and White’s evidence, and the probability assessments of Black’s evidence, will go a long way toward resolving the issues turning on the distinction you raise here.
