Of Elections and Lotteries

Synopsis: I wonder why, in light of some solid cases of lottery knowledge, people still doubt lottery knowledge.  I also suggest an X-phi research project that I thought would boom after 2004 but didn’t.

General motivational prolegomena: So I made it until 4:02 Central Daylight Time before obtaining testimony that Obama was re-elected.  FBF’s Jason Rogers and Jeremy Fantl deftly crushed my blissful ignorance.  Caring about politics much less than about knowledge, I had isolated myself from any and all reports about the election from the time polls opened until late this afternoon.  I made a public display of my ignorance on Facebook about this matter (as is my wont, true), and I assumed that many people were saying to themselves, “He knows darn good and well Obama has been re-elected, he’s just being provocative!”  I thought this because I thought that there was so much hatred of Romney that he didn’t have a serious chance.  It turns out that was an artifact of my having such a high proportion of friends in academia (it looks like he got 51-ish% of the popular vote).

But assume that things were as I took them to be.

Assume that polls had consistently indicated for months that Obama had a 30-point lead.  It would be very natural for a Romney supporter to say, “Man, we are not going to win this time around, damn.”  And it would be natural for a fellow supporter to respond, “I know, it really bums me out.”  The nondefectiveness of the first utterance gives friends of the knowledge norm of assertion a reason to think there is knowledge there.  And foes of it–like me!–will take the nondefectiveness of the reply as evidence that they knew that Obama would win.  Frequently through the day I heard voices (you know what I mean) saying, “Oh come off it, you know who won, sheesh.”  It never occurred to me to reply to the voices that this was not a case of knowledge.  I carefully avoided saying that I didn’t know who won for this very reason.  I did take myself to know, just not to have been told, and so not to have acquired *testimonial* knowledge.

Similar kind of case in the neighborhood.  Suppose a precinct in a very, very wealthy area occupied almost exclusively by movie stars comes back as expected: 88% Obama.  However, Romney frivolously sues for a recount and wins.  The head re-counter will naturally say to his staff: “Look, we all know how this is going to end, but the law requires us to do it.”

Objection: Ever heard of “Dewey Beats Truman”?
Reply: That error wasn’t based straightforwardly on polls.  If Wikipedia can be trusted here, then “The paper relied on its veteran Washington correspondent and political analyst Arthur Sears Henning, who had predicted the winner in four out of five presidential contests.”  And there may have been some wishful thinking involved, too, for Wiki also says the paper was “famously pro-Republican.”

Objection: It is just immediately obvious that that margin is not enough for knowledge.
Reply: The polling evidence isn’t supposed to be one poll, it is supposed to be a series of polls returning the same results.  It is essentially structurally similar to Vogel’s Heartbreaker case.
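The Heartbreaker-style compounding can be illustrated with a quick calculation (the numbers are hypothetical; a golfer’s actual ace odds will vary): each individual “no ace” claim is near-certain, yet the conjunction over enough golfers is no better than a coin flip.

```python
# Sketch (hypothetical numbers) of the "Heartbreaker" structure.
# Suppose each of n golfers has an independent 1-in-3000 chance of
# acing a given hole.  "No one aces it" is highly probable for small
# fields, but the proposition is structurally a lottery proposition.

def p_no_ace(n_golfers: int, p_ace: float = 1 / 3000) -> float:
    """Probability that none of n independent golfers aces the hole."""
    return (1 - p_ace) ** n_golfers

if __name__ == "__main__":
    for n in (1, 60, 3000):
        print(n, round(p_no_ace(n), 4))
```

For one golfer the probability of no ace is essentially 1; for a field of thousands it drops toward 1/e, even though no single golfer’s loss ever became less likely.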

This leads me to a clarification about my point here.  My reflective response to lottery propositions is the same as my first reaction: Of course we know losers are losers!  To think otherwise is to be either bad at math or in the grip of a theory.  When it is pointed out so carefully–a la Hawthorne 2004–that ordinary knowledge has a lottery structure, we should not for a moment (okay, not for more than a moment) become skeptical of ordinary knowledge; we should use that structural similarity to be skeptical of our skepticism about knowledge in explicit lotteries (especially since people are constantly bamboozled by numbers (#Kahneman&Tversky)).

But for some, their lottery skepticism runs pretty deep and they need other reasons to support lottery knowledge besides the fact that we take ourselves to know so many things that turn out to be, essentially, lottery propositions.  I think the election case is that kind of case, as is the Heartbreaker case, the retirement fund case, the matchbox case, and the mispronunciation case.

I just don’t understand why these cases aren’t considered decisive in favor of lottery knowledge.  One of the most interesting (if quite speculative) aspects of Hawthorne 2004 is the investigation of the vacillation of our intuitions regarding lottery knowledge.  People should look into this more.  It would be a great combo of epistemology and X-phi.  #ResearchProject  (Some of Jennifer Nagel’s stuff is relevant here, but has a bit different focus than what I have in mind.  I have in mind specific application of the cog-sci lit’s demonstration of how jacked up we are about thinking with numbers to an explanation of why some people’s (not mine, ever, at all) intuitions go all skeptic-y when they think of life as a lottery.)

Perhaps I can make an explicit argument.  Let’s see.

a. A significant number of people (not me!) are such that their intuitions about particular cases vacillate based on how the cases are described, in the following kind of way.  When cases are described in explicit lottery terms–“L-descriptions”–(say, as a first pass, low-value equiprobability cases) the subjects have skeptical intuitions, and when they are given ordinary descriptions–“O-descriptions”–(intuitive, no def, essentially non-lottery descriptions), they attribute knowledge (and ascribe “knowledge”; that’s how you can tell).

b. A single case C can have two versions Cl and Co when it is given an L-description and an O-description.

c. The skeptical intuitions generated by cases c1-cn given L-descriptions form class K.  Ordinary common-sense (knowledge-affirming (and, ordinarily, “knowledge”-ascribing)) intuitions generated by cases c1-cn given O-descriptions form class O.

Premise 1/Methodological Assumption: When intuitions vacillate about a case or a set of structurally similar cases, we should favor the intuitions which are more solid, if one class is more solid.

Premise 2: Class-O intuitions are more solid than Class-K intuitions.

Evidence for Premise 2: L-descriptions, unlike O-descriptions, involve known bamboozlers (numbers, large numbers, games of chance, risk, and math).

Lemma 1: We ought to favor (give more credence to) Class-O intuitions than to Class-K intuitions.  From 1, 2 and some obvious stuff.

Premise 3: If Lemma 1, then, ceteris paribus, we ought to extend ordinary confidence to lottery cases rather than extending lottery skepticism to ordinary cases.

Premise 4: Ceteris is paribus.

Conclusion: We ought to extend ordinary confidence to lottery cases rather than extending lottery skepticism to ordinary cases.

We have several of the right kind of cases in the literature already, and they are not hard to generate, so I’m pretty convinced of the conclusion.


Of Elections and Lotteries — 11 Comments

  1. If I understand you right, Trent, you are suggesting that you know in advance of the draw that a ticket in any sufficiently large fair lottery will lose, whenever it is in fact the case that this ticket will lose. You suggest that any intuition to the contrary is generated by the “known bamboozlers” of “numbers, large numbers, games of chance, risk, and math”. I’m not seeing how the bamboozling is supposed to happen — could you be a bit more specific about the mechanism here?

    I ask because I’d be really, really reluctant to put math in the epistemic doghouse. (And, fan of Nate Silver that I am, I don’t think either of us knew who would win the election yesterday until the polls closed, although I thought, and perhaps even knew, that Obama had roughly a 90.9% chance… .)

    Incidentally I’m not convinced that ordinary knowledge actually does have a lottery structure — I think that something is lost when we re-describe many ordinary judgments in lottery-type frameworks.

  2. Jennifer, thanks for your questions.

    1. Yes, I think that a sufficiently high (total) probability (relative to one’s evidence) that the ticket is a loser is a (logically) sufficient condition for propositional justification that that ticket is a loser.  And that when one properly bases their belief that that ticket is a loser on that evidence, that is a (logically) sufficient condition for having knowledge-level doxastic justification.  And that when these conditions hold and the belief is true, one knows that the ticket is a loser.

    2. I have not suggested the precise mechanism whereby this is occurring, that’s what I think would make a great research project at the intersection of epistemology and X-phi. But I take it that the lit on biases and heuristics supports the very general thesis that

    BH1 Subjects show an elevated risk of bamboozlement when thinking about probabilities.

    That’s still vague and a short inference from, not the direct result of, the tests. My admittedly casual but frequent and very interested reading of the B&H lit leads me to believe that there are lots of theses similar to BH1 such as

    BH2 …when thinking about games of chance.

    BH3 …thinking about large numbers.

    If my argument is valid then if my claim that the BHi are supported by the B&H literature is substantiated by detailed application, we’ve got a pretty good argument that the pervasive skepticism discussed in Hawthorne 2004 can be avoided without any fancy new linguisticsy stuff. I think that’s a good thing.

    I think the BHi are just a step above common sense in obviousness, and I don’t think the claims need much by way of particular empirical support for Premise 2 to be belief-worthy. But it would be really nice to nail it down solidly by connecting it to particular studies in the literature or to gather some new data to test the hypothesis more directly. Seems like a great research project to me.

    3. As to the issue of whether there are cases that admit of both O- and L-descriptions without (relevant) loss, I guess I just think John gives some good examples in K&L.  But it might not be as widespread as some have claimed.  My recollection is that he makes it seem quite broad.  I put that in my data, because it’s generally accepted and I wanted to focus on what follows from it, but it is interesting that you question it.

    • Hi Trent,

      You are not the only person to point to the biases and heuristics literature as a possible source of error in thinking about probabilities — Hawthorne does exactly that himself, claiming that work on the availability heuristic suggests that we are liable to overestimate probabilities of error on hearing them mentioned. I’ve argued in a paper in PQ that this is a misreading of the relevant empirical literature. The availability heuristic has the wrong activation and cancellation conditions, for starters. So I don’t think this is a good research project for xphi right now — I think that it’s a research project that has already been executed in the empirical literature on reasoning, and the existing empirical research tells pretty decisively against a Hawthorne-style explanation. Unless you have a better story about what it is in the biases and heuristics literature that is problematic, I’m not going there. I don’t think that body of literature gives us any *blanket* reasons to distrust our own mathematical or probabilistic reasoning (thank goodness). I think that our judgments about mathematics and risk can be perfectly accurate, that there is absolutely nothing wrong with my appreciation of the odds of my ticket’s losing a one-in-a-million fair lottery, for example.

      But I agree with you for reasons explained here that we do not need to resort to fancy semantics to get out of the problem, or to save Closure.

  3. Jennifer.

    1. I never claimed originality for this. It is too obvious to anyone familiar with the literature. Indeed, I noted that John suggests it. But I omitted reference to the availability heuristic precisely because I don’t think it’s the most promising version of the strategy.

    2. I don’t think the kind of strategy I sketch an outline of requires *blanket* reasons to distrust our own math or prob reasoning. I tried to be careful about two things:

    a. I said there was an “elevated risk” of error when math/numbers/chance came into the picture.

    b. I said that when two intuitions conflict, the fact in (a) can lead to a *favoring* of one vs. the other.  There is no need at all to refer to blanket distrust in such a strategy.  If I have two indicators, both highly reliable but one more so than the other, I go with the more reliable one.

    3. I don’t think we need any X-phi to ward off skeptical worries from structural similarity between ordinary knowledge and lottery situations because I’m perfectly happy being a hard-headed Moorean and saying that if there is such a parallel structure then I’m a far sight more confident in my ordinary knowledge attributions than in my lottery ones (or at least that’s what I’d say if I *had* skeptical intuitions about lotteries).

    4a. There are a lot of other parts of the B&H lit than the availability heuristic (though it’s a nice broad one and pretty flexible and applicable), and I haven’t seen it mined thoroughly to test some of the relevant skills.  For example, when I taught high school math, it was pretty common for syllabi resources to include stuff (kinda like this: http://serc.carleton.edu/quantskills/activities/UndBigNos.html and this: http://www.ucmp.berkeley.edu/education/lessons/billion/billion.html ) to try to get people to appreciate the largeness of large numbers.  It does not come naturally.  Whether it is the size of the national debt, the age of the universe, the size of the universe, the scale of the sub-atomic, what have you, educators must constantly overcome our inability to appreciate the vastness of numbers over relatively low thresholds.

    4b. The same goes for games of chance. If people were naturally good at them, there’d be no Vegas. And there are other related factors plausibly involved in lottery intuitions and they all come together to make me pretty sure of the following:

    TD1 Lottery intuitions are less reliable than ordinary intuitions

    where that is interpreted in such a way that the basis on which some people (not me!) feel apprehension about attributing knowledge in explicit lotteries is less secure than the basis upon which we judge that we know things in ordinary life under ordinary descriptions, even if those situations admit of L-descriptions due to relevantly similar structure.

    I wish I had the money to give you a huge grant to do this, because you rock at such things.  Otherwise, I repeat my recommendation to folks of what appears to me to be an interesting line of research.

  4. Just a question: given that Class-O intuitions are more solid (I’m not sure I understood what you mean by ‘solid’ though) than Class-K intuitions with respect to some case c, why exactly do we have to favour the first ones? Any prospect of presenting evidence in favor of the thesis that, if O intuitions are more solid than K intuitions about a case c then, probably, the judgment generated by an O intuition is true and the judgment generated by a K intuition is not?

  5. Hi, Trent. Great post! Just a quick note to say that Ori Friedman and I are in the midst of a series of studies on lotteries and knowledge and closure. Preliminary results are intriguing, and I will post a draft here when it is in good enough shape (might be a few months).

  6. Hi Trent,

    OK, I’m really glad you aren’t resurrecting the availability heuristic explanation!

    But I’m still not sure I like your reservations about our mathematical way of thinking about (say) future contingents. You suggest that people are poor at intuitively approximating the difference between a million and a billion (OK), and then suggest that there is something wrong with “the basis on which some people (not me!) feel apprehension about attributing knowledge in explicit lotteries”. I still don’t see how the weakness you identify can be pinpointed as what is giving rise to the apprehension.

    Here’s my take on the source of the apprehension: when we reason probabilistically about a prospect and come to the conclusion that its chance of occurring is (say) .99, it seems like a mistake in probabilistic reasoning to conclude categorically that it will happen. “There is a .99 chance of p, therefore p” really is bad reasoning, and we’re not wrong about it being bad reasoning, and we really wouldn’t have categorical knowledge of the outcome if we attained a belief in it by reasoning in that (actually bad) manner.

    When we think about things in the L-way, we reach the considered judgment that the relevant prospect is very likely (“this lottery ticket is very likely to lose”). As long as we are reasoning in the L-way we would be making a mistake to claim categorical knowledge. It may however in some cases be open to us to reason in other ways about the outcomes we are considering. So for many mundane claims about future events (“I’m going to have lunch with my friend Melanie today”) we do not reason probabilistically about the chance of the occurrence, compounding various risks, etc. We simply take as a default the most likely outcome (let’s call this the O-way of reasoning; psychologists might describe it as a heuristic way of thinking). In many cases the O-way of reasoning may be a perfectly good way of securing knowledge. And we shouldn’t rush to conclude that our positive intuitions about securing knowledge in the O-way really are in conflict with our negative intuitions about securing knowledge of the same propositions in the L-way. There is no conflict in saying that someone can know something through testimony right now, even though she can’t know it through perception; likewise, it may be possible for someone to know something through heuristic thinking that he can’t know by probabilistic reasoning.

    In short, I think we are right to feel apprehension about claiming to know in advance of the draw that lottery tickets will lose — this apprehension comes from a precisely accurate deliverance of our probabilistic reasoning (like the recognition that .99 does not equal 1). We would be wrong to think that this apprehension is a reason for skepticism about our ordinary ways of knowing.

    Class-O intuitions and class-K intuitions are (generally) both correct, and not really in conflict.

  7. Jennifer,

    “99% chance that p, therefore p” is only fallacious in monotonic* reasoning. In default logic it’s just fine. And I’m inclined to think–and partly due to B&H lit–that we are nonmonotonic reasoners who use default logic all the time.

    Brian, why am I not surprised… Please post something when you have it together.

    *I typo’d “nonmonotonic” because I was focusing on the next sentence, where default logic is a standard nonmonotonic logic. Thanks to Greg for catching this typo.

    • Trent,

      Default logic is a non-monotonic logic, and detachment rules of the kind you’re advocating are usually taken as examples of ‘nonmonotonic reasoning’.

      Detachment, as you put it here, however, is not just fine within default logic.  There is a missing component to the argument: the default condition: if 99/100 (of observable r’s) are p, and it is not provable that (r is) not p, then (r is) p.
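      That default condition can be sketched as a toy rule (my own formalization, not a full default-logic implementation; the names are illustrative):

```python
# Toy sketch of default detachment: believe p only when the
# statistical premise clears a threshold AND nothing else we know
# proves not-p (i.e., there is no defeater).

def detach(prob_p: float, defeaters: set, threshold: float = 0.99) -> bool:
    """Detach p iff p is probable enough and not-p is not provable."""
    return prob_p >= threshold and "not-p" not in defeaters

# "99% chance of p, therefore p" goes through only when the default
# condition is also satisfied:
print(detach(0.99, set()))        # no defeater: detach
print(detach(0.99, {"not-p"}))    # defeater present: withhold
```

      The second clause is the missing component: the rule is nonmonotonic because adding the defeater to what we know retracts the conclusion.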

      I think it would be more fruitful to look at extensive measurement examples rather than ‘explicit lotteries’. We routinely take ourselves to know (assert; act as if true) the height of our children, say, or the length of planks of wood; otherwise bookshelves wouldn’t get built, children wouldn’t be enrolled in school. The main difference between lotteries and measured quantities is that there isn’t a public announcement to look forward to that will resolve the true height of a child, the true length of a plank. I fail to see this as very telling about detachment rules; it is an accidental feature of this thought experiment.

  8. Greg, I take it we’re on the same side here, right?  Are you thinking there is a species-genus relationship here between lotteries and measurements?  Or are you suggesting they are or are not structurally similar?

    Is the suggestion that knowledge of an announcement throws people off in lottery cases? I said something like that in my dissertation.

    Consider: I ask you to estimate the height of the child within reasonable parameters (an inch or an inch and a half, say).  You would probably take yourself to reasonably know the height.  But then if I say that the exact height of the child will be posted tomorrow, this might make you hesitate to say you knew the height.  I attribute this in part to people being very risk-averse about being called out as wrong.

  9. In the case of the lottery, I take it that we are supposed to withhold claiming to know that the ticket is a loser because we’ll find out in fact whether our ticket will win or lose. I am fine with this advice, so far as it goes. My point instead is that it is an uninteresting case and the advice one might take from it about so-called ‘lottery propositions’ does not carry you very far.

    One doesn’t have to get fancy with extensive measurement, although the epistemology literature would be better-served by measurement than by toy lotteries. The old carpenter’s saying to ‘measure twice, cut once’ is a detachment rule. If we take ourselves to know extensive quantities of physical objects, such as a stick’s length, we come by them through some finite set of measurements. Our confidence that a stick so-measured is between .7532 and .7533 meters in length? Well, so long as no known errors occurred to confound our procedure (the default condition), such an undefeated procedure is such that it would be extremely unlikely that in fact the stick is either less than .7532 meters in length or greater than .7533. If one wished to print lottery tickets in some proportion to represent those odds, that would be an interesting lottery; however, we mere mortals are not privy to the outcome of that draw. Nevertheless, we detach!
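    That measurement procedure can be sketched as a toy simulation (my own construction; the numbers and tolerance are hypothetical): we come by the stick’s length through a finite set of noisy measurements and, absent any known confound, detach a categorical interval claim.

```python
# Toy model of "measure twice, cut once": average several noisy
# readings of a stick, then detach an interval claim conditional on
# the default condition (no known error in the procedure).
import random

def measure_stick(true_len: float, noise_sd: float, n: int, seed: int = 0) -> float:
    """Average n noisy measurements of a stick's length (meters)."""
    rng = random.Random(seed)
    return sum(rng.gauss(true_len, noise_sd) for _ in range(n)) / n

est = measure_stick(true_len=0.75325, noise_sd=0.0005, n=25)
margin = 0.0005  # hypothetical tolerance; would come from the instrument spec
no_known_error = True  # the default condition
if no_known_error:
    print(f"detached: length in [{est - margin:.4f}, {est + margin:.4f}] m")
```

    As with the imagined lottery over measurement outcomes, no announcement ever resolves the true length; we detach anyway.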
