Christensen’s De-pragmatized Dutch Book Argument

I got back yesterday from the conference on formal and traditional epistemology at Oklahoma, organized by Jim Hawthorne and Wayne Riggs. It was utterly fabulous! Except that I was really sick when I talked, and had to leave early to get home (with a fever of 102!). I have very little idea what I said, apart from one remarkable lapse: I forgot what my last argument was supposed to be! So whatever I actually said, what I wanted to say, I’ll write here.

It’s about David Christensen’s Dutch book argument (DBA) from his beautiful book Putting Logic in Its Place, and about the way subjectivists should understand the perspectival character of rationality. The book is easy to read and very entertaining, and the arguments are quite compelling, especially the ones about the import of the Preface Paradox for deductive closure principles about rationality. But the argument that I don’t think quite succeeds is the argument that probabilistic incoherence is a defect.

Here’s how the argument goes.

The argument is intended to sustain the following result:
Simple Agent Probabilism: If a simple agent’s degrees of belief violate the probability axioms, those degrees of belief are rationally defective.

A simple agent is one who values money and nothing else, and whose valuation of money is positive and linear. In this way, the value of each extra dollar is the same as that of any other dollar, no matter how much money the agent already has or lacks. The argument begins with the following principle:
Sanctioning: A simple agent’s degrees of belief sanction as fair monetary bets at odds matching his degrees of belief.

Christensen says,

“I take these as very plausible normative judgements: any agent who values money positively and linearly, and who cares about nothing else, should evaluate bets in this way.”

The way in question here is the way described in Sanctioning: if your degree of belief in p is 2/3 and you are offered a bet that will pay you $1 if p is true and cost you $2 if p is false, then, if you are a simple agent, you should regard this bet as fair. We can understand, then, the role of the appeal to the simplicity of the agent: it is a way of controlling for interference with the assessment of the rationality of degrees of belief by messiness concerning preferences. A person’s preferences might be inconsistent; they might be non-linear; they might violate transitivity; and so on. When we want to connect attitudes about the fairness of bets to the rationality of degrees of belief, we want to control for such insanity (if such we prefer to label it). Once we control for this by stipulating that the agent is simple, the hope is that we can read off conclusions about rational credences from information about attitudes toward fair bets.
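To make the arithmetic behind Sanctioning explicit, here is a minimal sketch (my illustration, not Christensen’s; the function name is invented) of how a simple agent evaluates a bet: the bet is fair just in case its expected monetary value, computed from the agent’s degree of belief, is zero.

```python
def expected_value(credence, win, loss):
    """Expected monetary value, for a simple agent, of a bet that
    pays `win` dollars if p is true and costs `loss` dollars if p is false."""
    return credence * win - (1 - credence) * loss

# The bet from the text: degree of belief 2/3, pays $1 if p, costs $2 if not-p.
print(expected_value(2/3, 1, 2))  # ~0.0 (up to floating point): fair by the agent's lights
```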

The remainder of the argument uses the following principles:
Bet Defectiveness: For a simple agent, a set of bets that is logically guaranteed to leave him monetarily worse off is rationally defective.
Belief Defectiveness: If a simple agent’s beliefs sanction as fair each of a set of bets, and that set of bets is rationally defective, then the agent’s beliefs are rationally defective.
Dutch Book Theorem: If an agent’s degrees of belief violate the probability axioms, then there is a set of monetary bets, at odds matching those degrees of belief, that will logically guarantee the agent’s monetary loss.

So, the idea of the argument is this. First, restrict discussion to simple agents, so that money, and a linear ordering on it, is all we need in order to measure the agent’s preferences. Fair bets for such an agent are ones at odds matching the agent’s degrees of belief. So, to have rational degrees of belief, a simple agent’s degrees of belief have to match his attitudes about fair bets. Unfair bets favor either the bookie or the agent. One class of bets that favors the bookie consists of those logically guaranteed to do so. These, we say, are rationally defective.

The trick of the argument is to connect this Bet Defectiveness principle with the principle that follows it, Belief Defectiveness. Here is the way Christensen argues. He first notes that Belief Defectiveness doesn’t follow from Bet Defectiveness. He considers cases of what I will call “messy preferences” to show that the second doesn’t follow from the first. In such cases, the value of a payoff for one bet may affect the value of the payoff for the next bet: I’ll value roast duck more, he says, if I don’t yet have one than if I do. But, of course, this possibility is ruled out by noting the qualification in the principles in question that the agents in question are assumed to be simple agents. So, the claim is, without messy preferences, Belief Defectiveness follows from Bet Defectiveness, since without value interference, bets that are individually acceptable will also be acceptable in combination. Once we get to Belief Defectiveness, the rest is just math! Add in the Dutch Book Theorem and you get Simple Agent Probabilism.
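To see the “just math” step concretely, here is a toy version (my own illustration) of the Dutch Book Theorem for one simple violation of the axioms: credences in p and in not-p that sum to more than 1. Each bet, taken alone, is at odds matching a credence and so is sanctioned as fair; together they guarantee a loss however the world turns out.

```python
def net_payoff(credence, stake, wins):
    """A bet at odds matching `credence`: the agent pays credence * stake
    up front and collects `stake` if the proposition bet on is true."""
    return (stake if wins else 0) - credence * stake

# Incoherent credences: cr(p) = 0.6 and cr(not-p) = 0.6, summing to 1.2.
cr_p, cr_not_p = 0.6, 0.6

for p_is_true in (True, False):
    total = net_payoff(cr_p, 1, p_is_true) + net_payoff(cr_not_p, 1, not p_is_true)
    print(p_is_true, round(total, 10))  # -0.2 either way: a guaranteed loss
```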
This argument assumes that messy preferences are the only things that can block the move from defects in bets to defects in beliefs, but they clearly are not. Here is where holism about rationality conflicts with probabilistic coherence requirements. Christensen claims that the bet described above is one that, given your degree of belief, you should regard as fair. We should ask, however, whether this is a prima facie or an ultima facie “should”. As an ultima facie claim, it won’t be true unless messy preferences are all that need to be controlled for in order to line up attitudes about fair bets with degrees of belief.

The point to note, however, is this: it is not only preferences that can be messy, but beliefs as well. Suppose I am a strongly puritanical Calvinist, believing wholeheartedly in the work ethic of such a view, and holding that divine anger will be visited on any who try to benefit in ways outside of honest toil. My degree of belief in p might be 2/3, but I won’t regard the bet in question as fair; in fact, I’ll regard every bet offered as an unfair bet. I will think any bet of any sort will work to my disadvantage. Or, again, suppose I’m convinced that bets are quite often finkish: that accepting them triggers a change in the prospects of winning. Then, no matter what bet we’re considering, I can’t regard it as a fair bet, because there aren’t any such bets: fairness of a bet, I also believe, requires that no party to the bet have a special advantage over the other, and, by my lights, that condition is never satisfied. Or, once more, suppose I can’t tell what my degree of belief is and have no idea on reflection what the chances of p are. You ask me whether a bet on p that costs $2 and pays $3 if p is true is a fair bet. I have no idea. Perhaps I’ll reflect further and come to some decision one way or another, but my reflection may lead to either answer, compatibly with my inaccessible degree of belief being 2/3.

In each of these cases, nothing said contravenes the assumption that the agent in question is a simple agent. What is messy here are not the preferences of the agent, but rather the beliefs of the agent. Moreover, in each case, there is no reason to view the additional features of the agent’s perspective on the world as out of order or inadmissible or illegitimate in a way that would allow one to ignore such complexities. In each case, the point of the example is to show that a proper understanding of the agent’s perspective on the world requires more, and perhaps less, than an account of some particular degree of belief the agent has. Rationality is perspectival in this sense, and it is a mistake to think that the degree of belief in question is all that needs to be noted in characterizing the perspective.

Once these points are noted, we can only accept a prima facie version of the Sanctioning principle:
PFS (Prima Facie Sanctioning): A simple agent’s degrees of belief prima facie sanction as fair, i.e. give one a defeasible reason for regarding as fair, monetary bets at odds matching his degrees of belief.
PFS, however, is not strong enough to sustain Simple Agent Probabilism, unless the defect of probabilistic incoherence is itself taken to be only a prima facie defect.


Comments


  1. Nice point, Jon. So Christensen’s DBA for probabilism needs to be reformed in a way that makes it an even weaker case in favor of probabilism than it already seemed to be.

    Fortunately, there is also a depragmatized representation theorem argument (an RTA). I’m not completely happy with how that goes in Christensen’s book either. I think a better way is to argue that an agent’s comparative confidence relation (i.e. the agent is “at least as confident that B as that C”) should (ideally) satisfy certain plausible axioms (e.g. transitivity) that capture our intuitions regarding how comparative confidence “should” work. Then show, with a representation theorem, that probabilistic credence (degree-of-belief) functions provide a useful and faithful representation of relative confidence relations. Then one can add that when this confidence relation is combined with a preference relation (that satisfies additional plausible axioms), there is a representing probability/utility function pair such that act X is preferred to act Y iff X has higher expected utility than Y. Done this way, the idea is that insofar as real agents have reason to try to emulate ideally coherent relative confidence and preference, they can use credences and utilities to help them do so.
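    For what it’s worth, the easy direction of this is trivial to verify: any comparative confidence relation read off from a probability function automatically satisfies axioms like transitivity; the representation theorem supplies the hard converse. Here is a toy sketch (entirely my own, with an invented three-world space, not the axiomatization from my paper):

    ```python
    from itertools import chain, combinations

    # A toy probability space: three worlds with stipulated probabilities.
    worlds = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

    def prob(event):
        """Probability of an event, represented as a collection of worlds."""
        return sum(worlds[w] for w in event)

    def geq(b, c):
        """'At least as confident that B as that C', induced by the probabilities."""
        return prob(b) >= prob(c)

    # All events: subsets of the set of worlds (as tuples of world names).
    events = list(chain.from_iterable(combinations(worlds, r) for r in range(4)))

    # Transitivity holds automatically, since >= on the reals is transitive.
    assert all(geq(a, c) for a in events for b in events for c in events
               if geq(a, b) and geq(b, c))
    print("transitivity verified for the induced comparative confidence relation")
    ```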

    But what reasons do real agents have for trying to emulate ideally coherent relative confidence and preference? One way to push the argument is to look at the individual axioms, and see if they seem to provide compelling restrictions on what confidence and preference should be like. Another tack is to observe that this approach at least gives us a “theory of decision” that has well-studied characteristics (many of which seem good), and ask for an alternative proposal that does as well.

    How satisfactory is this sort of argument for probabilism?

    Jim

  2. Hi Jim, I think the same argument I gave works against Christensen’s discussion of representation theorem arguments as well. The key move in his argument is similar to the one in yours, except that he has the normative connection between preferences and beliefs, while you have a normative claim about levels of confidence themselves. It’s the normativity here that gives me pause, but that is because I’m a subjectivist. Since you aren’t, you’ll find the normativity claim more plausible than I will, since I’ll think about “messy” perspectives from which the “should”s in question are simply false from the point of view of that perspective. Once one makes the subjective turn, I think you need to go all the way: the rules are, to put it in Kantian language, rules that one gives to oneself as a cognitive being. They are themselves part of, or dependent on, the perspective itself, rather than rules that are externally imposed on the perspective in order for it to be rational. That’s why I think the best place to start is to look first at what it is to be irrational from the point of view of the light one already has, and then describe how new information can, from that same perspective, require belief revision and different rules to follow. When things go well in this process, we may end up at the point where probabilistic coherence is required for rational credence or level of confidence, but there is no guarantee of this.

    So the difference with your RTA is, at bottom I think, a difference between a subjective view of rationality and a more objective one. Does that seem right to you?

  3. Jon, I’m not sure I understand what a subjective view of rationality is supposed to be like. So let me try this.

    1. Is it “fully subjectively rational” for an agent to be “more confident that C than that C”?

    2. Is it “fully subjectively rational” for an agent to be “more confident that (C and not C) than that C”?

    3. Is it “fully subjectively rational” for an agent to be “equally confident that (C and not C) as that (C or not C)”?

    Each of these is just the denial of one of the axioms for the comparative confidence relation. I wouldn’t call someone who violates the corresponding axioms “irrational”, but rather “less than ideally rational”. Is subjective rationality supposed to be even more forgiving than this about violating these rules? Or perhaps subjective rationality will depart from objective rationality only on some of the other rules that the comparative confidence relation is supposed to satisfy for “ideally rational” agents. If so, I’ll ask you about each of those in a later reply. But perhaps even the rules suggested above are too much for an account of subjective rationality. Or perhaps I’m just not getting your idea about the nature of subjective rationality.

    Please tell me more.

    Jim

  4. Hi Jim, sorry, my descriptions are very vague, aren’t they? 🙂 So the idea involved is a special kind of holism that I think a theory of rationality needs to honor. Maybe the easiest way to see some of its implications is to think about the requirement that debates in philosophy of logic need to honor the possibility that both parties are rational. So take the first axiom from your paper (not #1 above), the nontriviality axiom. It says that you’re never more confident in a counterexample to excluded middle than in the excluded middle claim itself. It’s hard to see that as compelling for every rational agent, especially those who think, e.g., that there are counterexamples to excluded middle in QM. Notice that this same worry will affect the minimality axiom as well.

    The general strategy here is to take each claim made and try to think of an overall belief (plus experience) system in which denying it would be rational from the point of view of that system. In addition, there is the Harman worry lurking for right equivalence and left equivalence: violations might exist unnoticed. This same point affects the 5’s and 6’s as well.

    Of course, some account of why idealization is worth doing can handle some of these problems. I’d rather see such arguments start from the way in which we always and everywhere defer to truth, logical or otherwise (that is, our probability for p, given that the truth about p is x, is x). (Of course, we’ll need to accommodate theorists who deny this deference principle, but supposing we exclude that issue from discussion, we can account for idealizations in the following way.) We can then specify the interaction between the fundamental theory of rationality and whatever class of truths one wishes to impute to the agent, logical or otherwise. Some of the truths will include the specific instances of logical and probabilistic relationships, and we don’t need any theory of rationality other than the fundamental one for that either. All we need is a special rule in the fundamental theory that prohibits rational belief in contradictions. If we treat idealizations in this way, we can reveal what their underlying structure really is: they just assume that the agent in question knows some stuff we don’t or needn’t. That doesn’t change the theory of rationality at all, though, so looked at this way, the idealizations shouldn’t appeal to norms other than those that apply to us.
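    In symbols (the formalization is mine, and T(p) is just a placeholder for “the truth about p,” the objectively correct value for p), the deference principle I have in mind is something like:

    ```latex
    Cr\bigl(p \mid T(p) = x\bigr) = x
    ```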

    OK, lots of vague stuff again. . .

  5. OK, I think I see. But let me press a bit. Will a subjective theory still have some “principles of rationality” that govern how confidence (and belief) and desire and action should come together for a rational agent? And will agents count as irrational, or less than fully rational, if they tend to violate those principles even in easy cases?

    Jim

  6. The most that can be said is this, I think: there is a theory of rationality that will be formulable in terms of some conditions necessary and sufficient for an attitude being rational. What’s worrisome even about this claim is that we need to account for the possibility of denying a philosophical theory of rationality just as much as we need to account for the possibility of denying any other claim. So if there is such a theory, it will have to contain within it the resources to explain how it itself could be reasonably denied. Maybe there’s such a theory, but I don’t know.

    But even if there is, it is another step to think that there are lower-level necessary truths that if violated entail irrationality. If we shoot for defeasible rules, I think we may be able to get those to come out necessarily true, but there’s a problem of formulating the defeater condition so that the rule isn’t trivial as a result.

    What I was hinting at near the end of my talk was to quit looking for necessarily true principles and just pay attention to the actual rules that we start with and modify in the course of further learning. Such rules will correspond to conditionals of some sort, but these conditionals won’t need to be true or necessarily true to function in a way that results in the product of following the rule being rational. So the rules one begins with are default rational and require modification when information is presented that kicks in some metarule that is also involved, in some sense, in the perspective in question and which takes the information as input and the modification or abandonment of the first rule as output. Then if you still follow the first rule, the output is irrational.

    So I hope to work out the details of this picture over the next few years, and then it may turn out to be clear enough to see whether it is even possible to articulate!

  7. “such a theory, it will have to contain within it the resources to explain how it itself could be reasonably denied.”
    I agree with this and think it is fundamental. However, it seems incompatible with any exceptionless law of non-contradiction. Is it not contradictory to assert (p & it is rational to believe not p)?

  8. Jonny, two thoughts. First, the sentence “p&RB~p” isn’t a contradiction, since it isn’t an instance of p&~p. So the more interesting question is about the status of the assertion. In 3rd-person contexts, there is no difficulty, as in: “Frege’s axioms yielded a contradiction even though it was rational for him to believe that they didn’t.” Prior to learning about the Russell paradox, of course. But 1st-person contexts would appear to be Moorean-paradox-like. To say that p is true but it is rational for me to believe ~p is as paradoxical as saying p is true but I don’t believe it. In both cases, the paradoxicality is probably pragmatic rather than semantic, so there’s no threat to any semantic principle from the paradoxical nature of such assertions.

  9. John, first I’d like to thank you for your very kind words about my book. Your worries about my Dutch Book argument are interesting–I had not thought of the problems you pose. I’ve thought about them a bit now, and have been meaning to comment for a while, but things have been very hectic. Let me try out a possible response or two.

    The Calvinist and the believer in “finkish” bets strike me as similar cases. In both, the agent’s beliefs amount to her taking the bet to have a different payoff structure than it has “officially” or “intrinsically”. One possible response to this sort of case is to specify that, for the purposes of the Dutch Book argument, the expected payoffs of the bets include not only those that are officially part of the bet, but all expected monetary consequences of taking the bet. This would not seem unmotivated to me. Another response is to restrict ourselves to considering “simple betting situations,” i.e., to stipulate away situations where the agent’s expected monetary payoffs diverge from those officially specified in the bet.

    (The second of those options would, I think, be analogous to restricting our attention to simple agents. The restriction to simple agents is to control for what you nicely term “messy preferences.” And I think I might be happy to make a parallel stipulation to control for messy beliefs. One might wonder whether this would vitiate the argument, because we’re not always in simple betting situations. I think I’d want to respond in the same way I respond to the parallel worry about simple agents.)

    The case of the agent who doesn’t know her own degrees of belief seems different to me. I’m not sure what to say, but I’m also not sure it presents a problem. I don’t envision agents evaluating bets by explicit second-order reflection on their credences. The beliefs are supposed to inform preferences for bets directly. (I haven’t thought this through, but the alternative sounds to me like it might lead to some sort of unpleasant regress.)

    David

  10. Hi, David, thanks for the very interesting reply. Just a quick thought here. When I think of bets being offered to me, I think of fairness in terms of chance. For example, if the bet is on UNC winning the national championship next year, a fair bet is one where the odds match the chance of their winning. In order to connect this idea with the usual account of fair bets from decision theory, we would need a deference-to-chance principle of the sort recently defended by Joyce. If such a principle is false, then I’m not sure how to defend the usual account of fair bets. For example, suppose I know that my credence that the Tar Heels will win is .2, but I think (I’m certain) the chance is .1 (on the assumption that the deference-to-chance principle is false). Should I be indifferent about taking or giving 4-to-1 odds, or about taking or giving 9-to-1 odds? Strikes me that the latter is the obvious answer.
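    To spell out the arithmetic (on the usual stake-to-win reading of odds, where probability q corresponds to fair odds of (1−q)/q to 1 against): credence .2 yields 4-to-1, chance .1 yields 9-to-1, and, evaluated by the chance of .1, staking $1 at each set of odds gives

    ```latex
    EV_{4\text{-to-}1} = .1(4) - .9(1) = -0.5, \qquad EV_{9\text{-to-}1} = .1(9) - .9(1) = 0,
    ```

    which is why indifference at 9-to-1 looks like the obvious answer.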

    I wondered about the infinite regress idea as well, but I think it can be avoided. Suppose I think chances are probabilities, and probabilities are degrees of belief. Then take the above example again, where my degree of belief is .2, but I’m certain it is .1 (since I’m certain the chance is .1). This couldn’t happen if credences were transparent, but they aren’t. This makes me suspicious that there is too much transparency being assumed in decision-theoretic accounts of rational action and rational belief. Do these worries make any sense to you?

  11. Hi Jon (and sorry for mistyping your name, as I just noticed I did in my previous post!)

    I’m not sure what your views on chance are. But if what you have in mind is objective chance of the sort involved in indeterministic laws (so if a fair coin has already been flipped, out of sight, the chance of its being heads is either 1 or 0, but not .5), then I think that fairness of bets should be defined independently of chance (as I think a bet on heads at even odds would be fair in the coin case).

    I do share your intuition about the case where my credence is different from what I take the objective chance to be. I’d be inclined to explain that intuition by saying that the reason that I should be indifferent to 9-to-1 odds is that (given my belief about the objective chances) I should have .1 credence in T.

    About transparency: like you, I don’t believe in transparency. I’d be inclined to say that what rationalizes my decisions and actions with respect to some issue T are my beliefs and desires about T and related matters, not my belief about my belief about T. Ceteris paribus, if an agent believes that it’s going to rain, then she should take an umbrella–and this holds irrespective of whether she believes that she believes that it will rain. She should be able to reason about whether to take an umbrella by thinking about whether it will rain, how inconvenient carrying the umbrella would be, etc. Even if she was devoid of second-order beliefs, her actions could be rationalized by her degrees of confidence and her values. So at the basic level, transparency is not required.

    I know that there are complications that arise when agents do engage in second-order reflection on their beliefs. And I don’t know how to sort them all out, especially in cases where the second-order reflection suggests that the first-order beliefs are irrational. But I hope that the heart of the basic credence-desire model of rational action doesn’t require the agent to go second-order. And I hope that we can eventually say something good about what happens when agents do reflect on their beliefs that doesn’t presume transparency.

  12. Good points, David. You’re exactly right that the notion of chance can’t be just the single-case probabilities of indeterministic systems. We need some notion of objective probability so that even deterministic systems don’t have probabilities of only zero or one for events in them (like the coin flip event). If I had a theory here, I’d be happy!

    I agree that we don’t want to require metacognition for rational belief. My concern is that the rationality of belief is a matter of an individual’s perspective, and so we shouldn’t treat two individuals as sharing the same perspective when they differ at the metalevel even when they are the same at the base level of beliefs, credences, and confidences.

    This point affects, I think, the account that has to be given of a fair bet. I defer to chance in the way you suggest, thinking my degree of belief is off-track if I learn it doesn’t line up with chance. But I also think that there are perspectives from which this wouldn’t be the right thing to think. If you’ve seen the Joyce piece from the Arist. Soc., it is really instructive in this regard, since when he defends deference to chance, he builds in a host of assumptions about the nature of chance and the information it encodes. So if one’s perspective includes denials of some of these assumptions, one can get a perspective from which deference to chance is inappropriate (I’m assuming that the denials are not themselves irrational, of course).
