What is a probability judgment?

I find the notion of outright belief quite puzzling.  I’m inclined to take it to be a (probably context-sensitive or interest-relative) notion concerning degree of belief sufficiently close to certainty, or “practical certainty”.

One reason to think there’s a purely epistemic notion of outright belief, one I often hear when I’m holding forth about the sufficiency of assigning probabilities, is that assigning a probability n to some proposition p entails holding the outright belief that the probability of p is n.  I don’t have a satisfying answer to this.  I am quite certain that I am never fully certain in a probability judgment, but I’m completely at a loss as to what to do about higher-order probabilities (other than to punt to psychological limitations and say something like “Well, as many orders as I can ascend, the probabilities don’t dwindle that much”).

Jeffrey gives a reply on p. 46 of _Probability and the Art of Judgment_, but it seems completely unconvincing to me.  One move is to go operationalistic to some degree or other.  Pure operationalism is a dead end, but it’s often hard to tell whether an approach, like Kaplan’s, is operationalistic or holistic.  Every time I try to state or understand or defend a holistic view of what it is to assign a probability or make a probability judgment, it sounds operationalistic.  This is one of the most frustrating problems for me.


What is a probability judgment? — 18 Comments

  1. I assume you’ve got some grip on credences. Why not say that outright belief is just a credence greater than .5? Then let full belief be a credence of 1. I’m assuming there is some obvious flaw with this suggestion, otherwise formal people wouldn’t be so puzzled about outright belief. So what’s the problem?

  2. Chris, many people balk at the threshold account because they fear the Lottery “Paradox”. One of the advantages of studying with Kyburg was losing any sense I ever had (which wasn’t much) that there was anything paradoxical about it. He always called it the Lottery *Argument*. That always sounded just right. When I learned it under the name “paradox” I always wondered how people thought conjunction introduction could just go on forever without problem, nor did I see much of a problem with having beliefs such that one knows they couldn’t be conjoined without error. It’d be better to be able to avoid it, but that’s life. So, just noting that this is a common objection to threshold views of belief, though it has nothing at all to do with my qualms.

    Here’s an objection by way of dilemma. It’s either > .5 or it’s utterly arbitrary. But .500000000000000000000000000000000000000000000000000000000000001 is > .5, yet, surely, we wouldn’t outright believe a proposition we thought that probable.

    I’m not sure I *do* have much of a grip on credences, though. I prefer the term “degree of certainty” for the epistemic attitude (which might best be described as a personal estimate of objective evidential probability on one’s evidence). The term “credence” I don’t like because it seems too tied to Savage-style personal probabilities defined in terms of preferences and betting behavior. That just seems too operationalistic.

    I think taking belief to be a psychological state of conviction or being convinced works in a very broad range of cases (and what degree of certainty leads to conviction could vary from person to person, from time to time, or from one type of proposition to another). The reason I can’t fully endorse this is that I *think* there are some things I believe that I wouldn’t say I was convinced of. It’s hard, but I do suspect I have some outright beliefs which are really quite weak.

    I keep an Excel spreadsheet to track my credence in theism–yes, I know–and 95% of the time it’s between .65 and .95. I definitely outright believe it even when it’s .65, but sometimes that number looks pretty low (though it’s basically 2:1, which sounds better). But if .65, then why not .6499? And if…and there goes the heap. It could be that there are states where there’s just no fact of the matter whether I believe it or not. It’s not like we can expect any natural phenomenon not to be vague.
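    To make the odds arithmetic above concrete, here is a small sketch (the function name `credence_to_odds` is mine, purely illustrative) of why a credence of .65 reads as “basically 2:1”:

```python
# Convert a credence (probability) into "odds for" -- the ratio gestured at
# when a credence of .65 is read as "basically 2:1".
def credence_to_odds(p):
    """Return the odds-for ratio p : (1 - p) as a single number."""
    return p / (1 - p)

print(round(credence_to_odds(0.65), 2))  # 1.86, i.e., roughly 2:1
print(round(credence_to_odds(0.95), 1))  # 19.0, i.e., 19:1
```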

    Now I don’t keep Excel spreadsheets for any other beliefs, but I’m pretty sure I have other beliefs–outright beliefs–whose strength varies quite a bit–maybe my belief that democracy is the best form of government or the belief that most people are basically good.

    Peter van Inwagen once mentioned that he holds something very much like what I’ve called the “conviction view” of belief (if I understood what he said properly (no pun intended)).

    But this doesn’t help me understand what probability assignments are!

  3. I like Levi’s take on Ramsey’s take on Keynes, which I take to be this: there are two ways to think about logic, logic as consistency maintenance and logic as truth preservation (i.e., consequence), and those two notions have probabilistic corollaries.

    The orthodox Bayesian view treats probability as a consistency maintenance mechanism, where any joint distribution is admissible as rational, then money-to-mouth behavioral arguments are added to constrain probability assignments. The idea is an ingenious piece of mathematical psychology. That’s Ramsey.

    The alternative, which is far less tidy, is to focus on what follows from what, which is a very different question; actually, very different questions. Here is one example: Suppose all votes in a precinct are counted and 30,000 out of 50,000 registered Independents voted for Boss Hog, and between 350 and 400 out of 500 municipal employees voted for Boss Hog. Roscoe is an independent municipal employee. Based on this statistical evidence, what is the probability that Roscoe voted for Boss Hog? Here the task is to reason with statements about probabilities, rather than to reason probabilistically about statements, to reach a conclusion about a probability assignment. Probability assignments here will be interval valued, and assignments may be incomparable. That was (part of) Keynes. (And Ramsey’s view was that this way ends in disaster; better to repackage this problem behaviorally, and treat these messy details as exogenous factors.)
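    The Boss Hog numbers can be worked out directly; here is a sketch (variable names mine) of why the evidence leaves Roscoe’s probability interval-valued, with the two reference classes in outright conflict:

```python
# Two reference classes for Roscoe, per the example above.
from fractions import Fraction

# Registered Independents: a point-valued frequency.
ind_lo = ind_hi = Fraction(30_000, 50_000)               # 3/5 = 0.6

# Municipal employees: only an interval is known.
emp_lo, emp_hi = Fraction(350, 500), Fraction(400, 500)  # [0.7, 0.8]

# The two assignments do not even overlap, so the statistical evidence
# settles no single point probability for Roscoe.
disjoint = ind_hi < emp_lo or emp_hi < ind_lo
print(float(ind_lo), (float(emp_lo), float(emp_hi)), disjoint)  # 0.6 (0.7, 0.8) True
```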

    So, as a first pass to answering your question, it might help to settle between these two options because it will clarify how you propose to put probability to work, which will in turn clarify the type of thing you are making judgments about.

  4. Hey Trent,

    There’s an argument that I posted a long time ago on my blog that is similar to one in Fantl and McGrath’s new book that I think causes trouble for the view that identifies out and out belief with some degree of belief above a threshold (but below certainty). Let ‘DB’ pick out the degreed mental state and ‘BB’ pick out the binary belief.

    Suppose Jill has strong evidence for believing p. It’s strong enough that she should be very confident that p and she is. It’s not so strong that she should be certain. She’s not certain. Her degree of belief reflects just how strong her evidence is.

    (i) If ~p, it doesn’t follow that Jill has made a mistake by being in DB no matter how high the degree of confidence is provided that it is sub-certainty. Indeed, we can tell the story in such a way that she’d be making a mistake by _not_ being in DB. I’d add that if her evidence is very strong and her degree of confidence is appropriate to the evidence, there’s just no sense in which she’s mistaken by being in that state.
    (ii) If ~p, it _does_ follow that Jill has made a mistake by being in BB. She can avoid the mistake only if she does not believe p.

    So, whatever ‘BB’ and ‘DB’ pick out, they are distinct states of mind.

  5. Hey Clayton,

    Maybe I’m just not familiar enough with degrees of belief stuff, but I don’t see the disanalogy between (i) and (ii). Both the DB and BB are non-mistaken insofar as they are appropriate responses to their evidence. Both are mistaken insofar as their contents are false. So each is appropriate in some respect and each is mistaken in another.

  6. Hey Chris,

    I don’t know if this intuition pump will work, but here goes.

    (a) There’s a lottery with 1,000,000 tickets and you have one ticket. You are very confident that you’ll lose but you win. (Congratulations.) While your degree of belief that you’ll lose was quite high, it wasn’t a mistake in any sense for you to be that way mentally.

    (b) You believe that the number of stars is even and have a high degree of confidence that the number of stars is even but the evidence you have is just the evidence we all have. I think there’s a perfectly good sense in which it’s a mistake for you to be so confident, to have a high degree of confidence, etc… But, the number of stars is even (say). There’s no sense in which the belief is mistaken. It’s correct. It’s not mistaken. Yes, it’s not justified, it’s unreasonably held, there might have been mistakes in the reasoning that led to it, but the belief that the number of stars is even is not mistaken provided that the number of stars is even. And that would be true even if you were provided with lots and lots of misleading evidence by those who think that the number is odd.

    Believe p when p is true but the evidence suggests otherwise, and the belief isn’t mistaken; but a high degree of confidence in p when the evidence indicates ~p is mistaken.

    • Clayton, there is a potential ambiguity about the unit of evaluation. Chisholm preferred to speak of “believings”, as do Sosa and Zagzebski. This drags in the etiology. So they’d say the belief, in the epistemologically relevant sense, has a true content but is still normatively defective.

  7. Hey Trent,

    I agree that the belief is normatively defective, but that’s in part because I think things can be normatively defective while still being correct and normatively defective without being mistaken.

    So one question, I suppose, is this. In the lottery example, if your degree of confidence is precisely as high as the evidence warrants but you win the lottery, is there _any_ sense in which you are mistaken by virtue of having the high degree of confidence?

    I’d say ‘No’. That’s either an indication that out and out belief requires a higher degree of confidence or that the state of mind picked out by belief talk is not the state of mind picked out by degree talk (the remaining argument is supposed to be a kind of LL (Leibniz’s Law) argument: there’s one mental state that is not mistaken in any sense in conditions C and there’s a mental state that is mistaken in _a_ sense in conditions C, so we have two mental states).

    I think we can run essentially the same argument in the stars case. The true belief is not mistaken in any sense. It’s normatively defective, as you say. The high degree of confidence might be mistaken in some sense and it might be normatively defective by virtue of being mistaken. (So, one mental state is normatively defective by virtue of being mistaken and one mental state is not normatively defective by virtue of being mistaken but it is nevertheless normatively defective. It turns out that we have two states of mind, by LL and a difference in the normative properties.)

  8. Hello Daughtry. You said: “to assign a probability n to some proposition p entails holding the outright belief that the probability of p is n”

    I may be picking nits, but it seems by using the word “assign” you are in a sense saying that a proposition has a certain feature, in this case a probability. So, when I assign the probability n to proposition p, I form the belief p* about proposition p. Thus:

    p* = p has the probability n.

    Now, since p* is a belief, I can: (i) hold the belief as true, (ii) hold the belief as false, (iii) withhold judgment on the belief. In both options (ii) and (iii) I don’t assign anything to p because I don’t predicate any feature of p. In option (i), I do assign something to p, namely that it has probability n.

    It seems to me that beliefs I hold as true are beliefs that are “outright beliefs” for me. Thus, whenever I assign a probability to a belief, I hold the outright belief that the probability of that belief is the probability that I assigned to it. I think my argument looks something like this:

    (P1) If I assign the probability n to proposition p, then I hold proposition p* as true where p* = p has the probability n.

    (P2) If I hold the proposition p* as true, then I hold the “outright belief” that the probability of p is n.

    (P3) I assign the probability n to p.

    (C) I hold the outright belief that the probability of p is n (1-3)
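    For what it’s worth, the argument above is a simple chain of modus ponens; a hypothetical formalization (the propositional placeholders `Assigns`, `HoldsTrue`, and `OutrightBelieves` are mine) checks in Lean:

```lean
-- (P1), (P2), (P3) ⊢ (C) by two applications of modus ponens.
variable (Assigns HoldsTrue OutrightBelieves : Prop)

example (P1 : Assigns → HoldsTrue)            -- (P1)
        (P2 : HoldsTrue → OutrightBelieves)   -- (P2)
        (P3 : Assigns)                        -- (P3)
        : OutrightBelieves :=                 -- (C)
  P2 (P1 P3)
```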

    Going by the rest of your post and the replies, I think I have misinterpreted the main point of the post and possibly even the part that I commented on. If this is the case I apologize. However, I do think I sufficiently responded to the proposition you put forth as it is written (although, I may not have).


  9. Hey Clayton,

    You ask, “In the lottery example, if your degree of confidence is precisely as high as the evidence warrants but you win the lottery, is there _any_ sense in which you are mistaken by virtue of having the high degree of confidence?” You say “no” I say “yes.” You were very confident that a falsehood was true. That sounds like a mistake to me.

  10. Hey John D,

    I think you are confusing Trent with a famous pop singer and former American Idol contestant. Or maybe you were just complimenting Trent’s singing abilities in a clever way.

  11. “You were very confident that a falsehood was true. That sounds like a mistake to me.”

    Suppose the degree of confidence is in line with the evidence but is below the threshold for belief. I suppose you could be _very_ confident and not believe (not have the binary belief). You’d be mistaken to be confident in line with the evidence because you were confident that a false proposition was true and you didn’t avoid making a mistake by refraining from believing. That seems weird to me. To avoid the mistake, you’d have to be less confident than the evidence warrants. But isn’t _that_ a kind of mistake? Is this a kind of epistemic dilemma? You make a mistake if your degree of confidence doesn’t match your evidence and a mistake if your degree of confidence does match your evidence.

  12. I suppose one could turn to metaphysical jujitsu, arguing for the possibility of being in one and the same belief state while allowing something else to wiggle to explain the difference between the two judgments about error, but perhaps that should be kept in one’s pocket until ‘mistake’ and ‘confidence’ and ‘belief’ are developed.

    That said, don’t most parties (with the exception of strict probabilists) grant the mechanics behind Clayton’s argument?

    And if strict probabilism is the main target, here is an argument: every probability judgment presupposes a possibility judgment, so not only are categorical judgments distinct from probability judgments, probability judgments presuppose categorical judgments.

  13. Well, I saw that all of you were referring to him as Trent, but his name shows up “Dougherty” to me, which my right brain translated as “Daughtry”. I’ve tried to suppress that side of my brain as much as I can, but my subconscious love for American Idol singers must get in the way every now and then. 🙂

  14. Greg, sorry for the delay in responding to this, I’m doing a summer teaching thing that’s pretty time-consuming.

    1. I don’t think that probability judgments presuppose any kind of possibility judgment. That Pr(P)=p might *commit* one to the truth of the proposition that P is possible, but that’s a logical fact, not a psychological fact. I think I’m interested in the psychological fact, but perhaps I’m mistaken.

    2. I think that when possibility judgments *are* made, they are usually epistemic, and I think epistemic possibility is a probabilistic notion. (See last two papers listed on http://sites.google.com/site/trentdougherty/).

    3. I think Jeffrey would just point out that we wouldn’t need to categorically believe P in order to assign non-zero prob to P, we’d only have to be sufficiently confident. I’m not certain of ANYTHING, thus, for any P, I’m not certain that P, yet for many P I assign a non-zero probability. I might even only need to be moderately convinced that P to sensibly assign a non-zero probability to P. Sadly, again, I have no idea how to handle higher-order probabilities.

  15. Hi Trent,

    I got sidetracked with the LaTeX thread and missed your comment.

    In order to define the measure ‘Pr’ you need an algebra of events, which is defined over an outcome space, i.e., a space of possibilities. There is no probability without first specifying your possibilities…even if the specification comes by picking this algebra over that one for your probability space. I don’t know what the psychology is in making such a choice, but it is categorical in nature, and it cannot be gotten at by a Bayes decision, on threat of regress.
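    A minimal sketch of the point (the construction is textbook-standard; the particular names are mine): the measure only exists once the possibilities and the event algebra over them are fixed.

```python
# First fix the outcome space (the possibilities), then an algebra of events
# over it; only then can a measure Pr be defined -- and only on that algebra.
from itertools import chain, combinations

omega = frozenset({"H", "T"})  # outcome space

def power_set(s):
    """Full event algebra over a finite outcome space."""
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

events = power_set(omega)
pr = {e: len(e) / len(omega) for e in events}  # uniform measure on the algebra

print(len(events), pr[frozenset()], pr[omega])  # 4 0.0 1.0
```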

  16. Hmm. I should log in to make comments; my typing is getting out of hand.

    The idea is that you cannot turn to second-order probabilities to model the decision of picking one outcome space over another on pain of regress.

  17. OK Greg, now I see your point. I thought you were saying that before I could judge “Pr(P) = p”, I had to judge “◊P”.

    Still, my reply is of the same kind. A judgment is a psychological event. There might be a probability space *denoted* or *implied* or *assumed* or *invoked* by probability judgments, but I think there is rarely a judgment about what the outcome space is.
