Quine and Degrees of Belief

I wonder if anyone has insight into why Quine never ran any of his epistemology off of degrees of belief (unless, of course, I’m just unaware of something in the Quinean corpus, which there is some chance of, since I know I haven’t looked at all of it). I don’t know the answer, but here’s a guess.

Suppose one thinks that there are both beliefs and degrees of belief, and that they are related by the following necessary equivalence:
S believes that p iff S’s confidence level regarding p exceeds some threshold T.

One can fill out the Lockean picture of the relationship between belief and degree of belief by accepting this equivalence and taking the direction of reduction to go from left to right. That is, the fundamental reality is degree of belief, with the derived reality that of belief.

How would one oppose this Lockean thesis? Well, obviously, one might think the equivalence claim is false, either by denying any relationship between beliefs and degrees of belief, or by doubting the existence of one of the two. Perhaps Richard Jeffrey is the most famous denier of belief, but it is hard to find an explanation by those who think there are no degrees of belief. I found one though: from P.M.S. Hacker, “Of the Ontology of Belief,” where he says,

But there are no degrees of belief, so belief cannot be a feeling. I cannot believe that p more than you do, although I may be more certain than you that p. I cannot believe that p just a little or very much, although I can be inclined a little or very much inclined to believe that p. Of course, one may strongly or firmly believe that p (though not “weakly” or “moderately”), but this does not indicate a degree of belief. It signifies the strength or firmness with which one cleaves to the belief one has. It is the ease or difficulty of shaking the belief in question, and not the belief itself, that has degrees. It makes sense to ask how convinced, doubtful, suspicious, confident, etc. someone is that p, but not to ask how belief-ful or how much one believes that p. It is the belief-related adjectives that do this work, not the noun “belief”. The evidence I have in favour of its being the case that p may increase, but my belief that p does not therefore increase, although my conviction, certainty or confidence that p will. (p. 5 of pdf file located here).

But what if one wishes not to abandon the equivalence? One might insist that no reduction in either direction is possible, but one might also try to reduce degrees of belief to belief itself. My suspicion is that Quine may have thought this (though, as you’ll see, I have very little evidence). Here’s why I suspect this. The only person to my knowledge who suggests a possible way of reducing in this direction, right-to-left, is Gil Harman. I don’t have the quote with me, but what I recall is that he says something like this: degree of belief is an epiphenomenon of how revision of belief occurs.

If we wax metaphorical a bit, we can put Harman’s idea into the language of the web of belief, where depth in the web signals greater resistance to revision in the face of recalcitrant experience (or other learning). We can use various measuring strategies (e.g., betting behaviors) to give a rough guide as to how to generate comparisons between various levels of resistance to revision. That’s how we get degrees of belief out of belief. Many of the things that Quine says about the web of belief are suggestive of such a view, so I wonder if Quine would have preferred the anti-Lockean explanation.

OK, there are questions to be answered here, but if we suppose that the story goes something like this, then we’d have an explanation of why Quine does epistemology through belief rather than degrees of belief. With other epistemologists, we might cite lack of familiarity with formal techniques and methods as the explanation, but I think it would be slightly bizarre to consider such an explanation for Quine!

Anyway, just an idea, and I’d love to hear other ideas as to why Quine doesn’t show any interest in degrees of belief.


Quine and Degrees of Belief — 17 Comments

  1. The entry for “Belief” in *Quiddities* shows succinctly that Quine was rather dismissive of both beliefs and degrees of belief:

    “But beliefs grade off (…) to where their dispositional content apart from lip service becomes tenuous to the vanishing point. What shared trait can have grouped all these extravagantly diverse states of mind, real or professed, under a single serviceable term, *belief*? None, I submit. They are grouped rather by a linguistic quirk, the adapter *that*, which can be prefixed thoughtlessly to any and every declarative sentence to produce a grammatically impeccable and hence presumably meaningful direct object for the term *believes*.”

  2. That’s a good point, Juan, though I’m not sure how the remark applies to degrees of belief. In any case, he also says this:
    “A belief, in the best and clearest case, is a bundle of dispositions. It may include the disposition to lip service, a disposition to accept a wager, and various dispositions to take precautions, or to book a passage, or to tidy up the front room, or the like, depending on what particular belief it may be.”

    This suggests that he doesn’t want to have to answer to ordinary language, as “beliefs grade off”, but that there’s a straightforward replacement notion that doesn’t grade off, and it will be reminiscent of the ordinary notion of belief, at least as found in the best and clearest cases.

    Moreover, worries about the concept of belief itself provide motive, as in Jeffrey, to replace belief with degree of belief.

    Perhaps, though, the idea is to replace mentalistic talk with dispositional language alone? That would be too bad. . .

  3. “Perhaps Richard Jeffrey is the most famous denier of belief, but it is hard to find an explanation by those who think there are no degrees of belief.”

    Here is one: What evidence delivers, nearly all of the time, are real-valued bounds on “degrees of belief”. The probability interval [l, u] associated with a statement does not specify a range of equally rational degrees of belief between l and u: the interval [l, u] itself is not a quantity, only l and u are quantities, which are then used to specify bounds. It makes no sense to say that the degree of belief is [l,u]; furthermore, no degree of belief within [l,u] is defensible on the evidence; thus there are (generally) no rational degrees of belief.

    Here is another: to construct a degree of belief about anything you first need to agree on what the probability space looks like, which means that you must have a firm belief (not a degree of belief!) about what is and what is not possible. Thus, degrees of belief cannot be treated as primitive.

    Here is a third: First-order probability is highly undecidable, far worse than first-order logic. (The validity problem for first-order probability logic with a single binary predicate is not even decidable relative to the full theory of real analysis.) Thus, it is clear that this notion “degree of belief” is at best a representative notion, not a primitive notion, and that it is part of an extremely limited representation language at that.

  4. (Am I right or am I right? Why was my comment deleted?)

    My deleted comment was about (1) degrees of belief and Quine, and (2) mental engineering, and its connections with other comments:

    Gregory Wheeler:
    “Here is another: …. is not possible.”

    This corresponds to my part about apperceptual objects.

    Gregory Wheeler:
    “The probability interval [l, u] associated with a statement”…

    This means that “I know l, and I know u, so between them we may find the thing that pertains to I and U, the thing that we want to establish our opinion on.”

    Here belief is about getting the first empirical thing on something, namely the identity of our thing in the context of l and u. So we know our thing, but when we modulate it to that specific range then the thing is just under scientific, rational modulation, and that modulation science offers us through formulas cannot contain our own personal inner belief. It contains cold numbers and formulas.

    I have not covered this point in my deleted comment, as I am at this point only interested in abstract mechanisms, and not in what they produce. This is in fact the main distinction between me, a non-professional mental engineer, and you all, professional philosophers, who not only do research inside the complete schema of human functionality, but entangle with the complicated problems of applying the results being produced by every little part of the whole mechanism.

    Gregory Wheeler:
    “Here is a third:”…

    This third says what I have said about the affective energy fields, that they cannot be something in themselves. The probability for knowing is a function of the empiric, material field, and while G. Wheeler speaks of representative vs. primitive notions, I speak of a noumenal realm that one has to be able to exist in, in order to be able to track the real noumenal objects in a specific instance / and provide the content of the belief for that particular instance.

    Again – why was my comment deleted? Can’t you people see that I’m right, that I’m naturally talented and that I need your help in order to get fit and get into the tracks of society?

  5. Hi Greg, I understand the first point, though maybe there is some mileage to be gotten from defining some notion of thick confidence that covers the range in question. Even so, I wonder how to make any attempt when your evidence is, e.g., that there is an urn with 100 marbles in it, and either it contains 10-20 red marbles or 70-80, with the rest black. I can see some hope for interval confidences, but these “gappy” cases seem worse to me.

    On the second point, I’m not sure why we shouldn’t treat the firm belief as just certainty. I’m afraid I don’t have such degrees of belief, but that’s a general problem about logical truths.

    On the third point, I’d like to read more. Can you point me to some literature? Thanks!

  6. Hi Jon,

    (i) We might say that either the proportion of red balls in the urn is between .1 and .2, or that it is between .7 and .8, full stop. The twist is if we wish to say something about a particular ball, call it A, that is in the urn. What is your degree of belief that A is red?

    If what you stated is all that we know about the urn, I would say that the probability that A is red is between .1 and .8. However, if I know additional things about the urn and the manner in which A is selected I can then sharpen this interval considerably. If, for example, it is reasonable for me to view the uncertainty mechanism for selecting A to be equivalent to considering the draw of A from one of two 100 ball urns, the first known to contain between 10-20 red balls, the second to contain between 70-80 red balls, and the probabilities of picking each urn to be p and p*, then I can conditionalize to sharpen my interval. And there are other mechanisms that I can appeal to as well.

    But the catch to this is whether or not the preconditions for these methods are satisfied, whether in fact we have p and p* for instance; whether max entropy is reasonable to apply in the objective bayes framework; and so on. Some of the heat in the philosophical debates on these matters reduces to whether or not an epistemic account ought to be built up from assuming that values for the necessary parameters are always available.
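    Greg’s interval-sharpening move in (i) can be put in a small numerical sketch. This is only an illustration: the mixture weight `p` (with `p_star = 1 - p`) is exactly the hypothetical parameter whose availability is at issue, and the value 0.5 below is invented for the example.

```python
# Two-urn example: a 100-ball urn contains either 10-20 or 70-80 red balls.
# With no information about how ball A is selected, the probability that
# A is red is only bounded by the loosest envelope of the two cases.
loose_interval = (0.10, 0.80)

# If the draw may be viewed as a mixture -- urn 1 (10-20 red) with
# probability p, urn 2 (70-80 red) with probability p_star = 1 - p --
# the interval sharpens to the mixture of the endpoint proportions.
def sharpened_interval(p):
    p_star = 1 - p
    lower = p * 0.10 + p_star * 0.70
    upper = p * 0.20 + p_star * 0.80
    return (lower, upper)

print(sharpened_interval(0.5))  # roughly (0.4, 0.5): far tighter than (0.1, 0.8)
```

    Without a value for `p`, of course, the sharpening never gets started, which is the point about the preconditions of these methods.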

    (ii) Generally it is not a good idea to identify certainty with probability 1, impossibility with 0. Consider a toy example involving a 6-sided die. It is natural to identify the outcome space with the six faces of the die. This gives probability 1 to the event of the tossed die showing either side 1, or side 2, or side 3, or side 4, or side 5, or side 6. But we shouldn’t say that we are certain that this die will show one of these 6 sides on a given toss: it might be tossed on a shag carpet, in which case the edges and corners should be included in the outcome space. Nor should we be certain that any six-sided die thrown on a flat surface will show one of the six sides: we might use a painted sugar cube as our die, in which case “total destruction” should be included in the outcome space as well.

    The point is that “degree of belief” makes sense only with respect to a probability space, and you need to have definite commitments, a.k.a. “beliefs” (or complete and definite preferences over a given set of outcomes, if you want to go at this qualitatively via Savage), before you can talk about rational degrees of anything. Thus, degrees of belief are a highly derivative notion. Or, to put the matter the other way around, if you insist upon treating degree of belief as a primitive, then degree of belief becomes a highly contrived notion.

    (iii) Work on probability proof theory: Fagin, Halpern and Megiddo (“A logic for reasoning about probabilities”, Information and Computation, 1990) provide a proof theory for the standard probability semantics for propositional variables, and deciding satisfiability is NP-complete. But there are significant obstacles to providing a proof theory for more expressive languages. Halpern (“An analysis of first-order logics of probability”, AI Journal, 1990) shows that the validity problem I mentioned is highly undecidable. The reason is that standard Kolmogorov probability is itself a higher-order function on sets, so any language that is expressive enough to afford probabilistic reasoning about probability statements will extend beyond the complexity of first-order reasoning about real numbers and natural numbers.

    Cheers, Greg

  7. Two hints from Ernie LePore. The first is to look at the index of *The Web of Belief*, under “belief: change of”. The second is from p. 429 of Quine’s Library of Living Philosophers volume, in his “Reply to Hilary Putnam”: Quine used to talk about “degrees of observationality” but later (crediting Bergstrom) replaced that with “degrees of revisability.”

    The latter suggests strongly a view like the one I attributed to Harman, that language concerning degrees is best understood in terms of belief revision rather than the other way around.

  8. Another argument, then, against this view, the view you attribute to Harman, is that belief change operations include expansion, i.e., the wholesale addition of a variable (proposition) at time t that was not present before t. But this type of operation completely changes the state space that the measure is defined on; at t-1 (and perhaps before) we have one probability space, whereas at t we have another.

    To map this into a concrete example, imagine that at t-1 we believed we were dealing with a die tossed upon a marble floor. But at t we learn that the die to be tossed is made of sugar.

  9. Greg, I don’t see this point. A defender of Harman would want to distinguish between the idea that beliefs come in degrees and the idea that the degrees so measured satisfy some probability function. The Harman idea has to allow that there are claims not yet in the web of belief (so that withholding, understood as a propositional attitude, doesn’t encompass every claim that isn’t in the class of things that one believes or believes the opposite). Initially, all we would do is provide an ordinal ranking of differences between openness to revision, and then figure out some way to map this onto the reals between 0 and 1. This is all just a matter of psychology. The epistemology comes in when we claim that the psychology has to satisfy certain requirements in order to be rational, and that’s where probability ought to appear, if at all. But maybe I’m not understanding your point?

  10. If I have a degree of belief of 0.1666… at t that the tossed die shows a 6 (which is an odd doxastic state), then that belief accords with some probability function defined on a probability structure. Let’s say that p(toss = die crumbles) is undefined in this particular structure; that is, there is no “die crumbles” event in the set of outcomes, hence no measurable event for “die crumbles” in the algebra of the probability space. This is where expansion enters: there are events to which the agent at t takes no attitude.

    We may revise, by expansion, to add the information that the die is made of sugar. That is, we may add this information without retracting any other belief. And yet this does force a change in the probability space if, as we will assume, “die crumbles” is now a relevant outcome. This then forces a change in the degree of belief that the die lands 6, assuming, as we will, that the sugar die is also a fair die.

    The point to notice is that there is then no (non-trivial) expansion operator on this view. Any expansion by non-trivial information necessitates constructing a new outcome space and adjusting the probabilities accordingly.
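    The sugar-die case can be sketched in a few lines (the 10% chance of crumbling is an invented figure, used only to show the shape of the problem): expanding the outcome space forces the whole measure to be rebuilt, including the old degree of belief that the die shows 6, even though no prior belief was retracted.

```python
# Before expansion: an ordinary fair die. The agent's degrees of belief
# are defined on exactly these six outcomes.
before = {face: 1/6 for face in range(1, 7)}

# Learning "the die is made of sugar" adds an outcome the old probability
# space simply lacks. Building the new space is a fresh modelling
# decision, not a conditionalization on the old space.
p_crumbles = 0.10  # illustrative value
after = {face: (1 - p_crumbles) / 6 for face in range(1, 7)}
after["crumbles"] = p_crumbles

print(before.get("crumbles"))     # None: the old space has no such event
print(before[6], "->", after[6])  # 1/6 drops to roughly 0.15
```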

    The probabilist asks us to imagine that we are able, or tells us that rationality dictates that we should aspire to be able, to anticipate how to calculate an update on our system of degrees of belief after learning that the die to be tossed is made of sugar.

    My point is that a relatively routine and uncomplicated belief revision operator, expansion, is transformed into a heavy-lifting updating operation, since non-trivial new information will change the state space the agent uses to define his degrees of belief. That agent, if rational, is imagined to have foreseen all possible revisions, and to have on hand the parameters necessary to update and recalculate given each possible contingency. I don’t think we do anything like this, nor do I think it is nomologically possible that we do anything like this. And if ought implies can, if our psychology should bound our epistemology, which the naturalists have told us that it should, then we shouldn’t do anything like what Harman proposes.

    That’s rough; but that’s the general shape of the idea.

  11. Oh, I see now, Greg, what you are worried about. I agree completely on this point. I think we need to distinguish the diachronic view from the synchronic one, and I was assuming that we construe the Harman account only in terms of a synchronic view. What to do with belief revision itself won’t be a matter for some formal rule (especially for Harman!); I think all he’d want to say about that would be in terms of inference to the best explanation. And I doubt any purely formal characterization of that rule can be found.

  12. What happens in the sugar die case is that a normally synchronic belief revision operation, expansion, turns into a (diachronic) update operation. My worry is that if this case is representative of the probabilistic proposal, then the normal distinction between static and dynamic operations collapses. If the distinction collapses, then any change in view becomes a diachronic operation. So if the case is representative, then we couldn’t restrict the view attributed to Harman to synchronic belief revision operations.

  13. Greg, we need to separate belief revision principles from synchronic facts about a mental state at a given time. Belief revision is a change from one total state to another, and all such changes can take time, and hence shouldn’t count as synchronic. I think we need to distinguish any probability proposal here, which will be a normative proposal, from a psychological point. Some people think there are beliefs, some people think there are degrees of belief, and some think both. We want an account of the relationship, if both exist. Such an account, even if couched in the language of belief revision, won’t in fact involve any principle of belief revision. Instead, it will address the question from the point of view of a particular time slice of the agent in question. The talk of being an epiphenomenon of belief revision is, on this idea, code for depth of embedding in the web of belief, where depth measures resistance to change (which of course is a purely present fact, not dependent on what happens at any other time).

  14. I think Greg’s point (if I understand it) is that one cannot separate the thesis that degree of belief is the primitive notion (in terms of which full belief is defined) from the (erroneous) thesis that inference is solely a matter of updating. That thesis stumbles on the fact that new possibilities (e.g., the die is made of sugar) sometimes have to be taken into account, altering the state space (a process radically different from updating). If degree of belief were the primitive notion, then, on Greg’s argument, every possibility would have to be included in the state space already and assigned some degree of belief.

  16. Yes, Steve, that’s precisely the idea. There is some terminological cross-pulling between “updating”, “revision”, “synchronic”, and “diachronic” that each of us (i.e., Jon, you, me) is tangled in, which might be obscuring the point. I would use the term “revision” as you (Steve) are using “updating”; updating and revision for me are diachronic in the sense that both happen through time (cf. Jon), but the “inference mechanism” that accommodates a new item of information can either be a logical or set-theoretic operation (thus, instantaneous; thus synchronic) or some reconfiguration of the state using operators that are not logical or set-theoretic but essentially state-change operators (thus, not immediate; thus diachronic).

    One reason for the cross-pulling is that “updating” in the way I am using it is closer to how that term is used in the belief revision/belief updating literature, whereas “updating” to a probabilist is (if orthodox) conditionalization, which is a logico-algebraic operation on a given probability structure.

    I don’t want to lean too heavily on this terminology, or this way of parsing time; I think the general idea/worry that you (Steve) describe survives translation into each of our ways of talking about this.

  17. Whoops! My internet connection broke during the last postings. Jon, if you wish, you can delete (2:49) and this one, and keep (3:02). At any rate, ignore (2:49).

Leave a Reply

Your email address will not be published. Required fields are marked *