Literature question: is there anything out there about epistemic constraints on, or an epistemic rationale for, the use of Jeffrey conditioning? Dick Jeffrey was, himself, a radical personalist, so he had little to say about the conditions under which it might be rational to update via his famous formula. But plenty of non-personalists are interested in questions about conditionalization. I’m familiar with, e.g., Kyburg, Thalos and Bacchus’s article “Against Conditionalization,” but I’m looking for something with a positive moral as well as purely critical literature. There are some interesting considerations that I would like to see taken into account; but why reinvent the wheel if it’s rolling around out there somewhere? Feel free to offer pointers, particularly to any articles on the rigidity of likelihoods in conditionalization.

Tim,

I recommend the following three papers on JC:

(1) Diaconis, Persi and Sandy L. Zabell (1982), “Updating Subjective Probability”, Journal of the American Statistical Association 77(380): 822–830.

(2) Field, Hartry (1978), “A Note on Jeffrey Conditionalization”, Philosophy of Science 45: 361–367.

(3) Carl Wagner’s paper: http://www.princeton.edu/~bayesway/Wagner.pdf

-B

Tim,

On Jeffrey conditionalization, you might take a look at my paper in the Journal of Philosophical Logic, Feb., 2004.

Also, you mentioned “rigidity of likelihoods” in Jeffrey conditionalization. It turns out that the likelihoods cannot be rigid, except in very special cases (though of course posterior probabilities are taken to be rigid in Jeffrey Cond.). I prove a theorem about the impossibility of keeping both likelihoods and posteriors rigid at the same time in an appendix to a paper that should be out soon in Mind (in April, I think).

A bird’s eye view:

Suppose a fair die is tossed on a normal surface in a normally lit room. Given that the die lands showing a face with a dot in the center (the faces 1, 3, and 5), the probability that the die lands showing an odd face is 1. Not a terribly exciting example, but it will do.

Jeffrey’s rule is motivated by the following variation on this experiment: Suppose the setup is the same except that the room is dimly lit: You observe that the die has landed, but you aren’t quite sure whether the die is showing a face with a dot in the center or not. However, standard conditioning on event A presumes that the actual world is in A, not that you aren’t quite sure that it is in A. So, what to do?

-If you (the agent) are able to assign a point probability to your uncertainty, that is, if it is reasonable for you to ascribe probability .8 (say) to ‘center dot’ and .2 to ‘no center dot’, then Jeffrey’s rule may be applied.

-However, if you want to express (i) that the probability of center dot is at least .8, or to express (the belief) (ii) that the probability of center dot is .8 and the probability of center dot is .3 (to represent overlapping sets: nobody says evidence has got to be consistent, after all; usually it isn’t), Jeffrey’s rule won’t be applicable in either case. Thus, Jeffrey’s rule only allows one to express a narrow class of uncertainty claims about evidence.

-Personalists aren’t completely out of luck, however: folk have turned to relative entropy approaches, as one example, in attempts to handle cases like this, seeing Jeffrey’s rule as a special case. The problem then is that constraints like (i) and (ii) typically yield a range of values, so the question becomes which one is best. This question takes the form of asking whether such-and-such proposed metric induces an order that allows us to evaluate the distance from our original measure to each member of the class of induced measures, and then considering various selection criteria based on some notion of minimal change. One might then ask whether this machinery corresponds to how we would or should expect ‘best’ to behave. Moreover, if you’re a traditional personalist, you’d like your machinery to spit out a unique value, which motivates various add-ons for yielding one. But perhaps you’re just left with the class induced by these constraints, full stop. The literature picks up around here.
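For what it’s worth, here is a minimal sketch of the point-probability case from the die example, where Jeffrey’s rule P'(H) = Σ_i P(H | E_i) · q_i applies directly. The function name `jeffrey_update` and the use of exact fractions are my own choices, not anything from the literature above:

```python
from fractions import Fraction

# Uniform prior over the six die faces
prior = {face: Fraction(1, 6) for face in range(1, 7)}

center_dot = frozenset({1, 3, 5})     # faces with a dot in the center
no_center_dot = frozenset({2, 4, 6})

def jeffrey_update(prior, partition_weights):
    """Jeffrey's rule: P'(w) = P(w | E_i) * q_i for w in cell E_i,
    where partition_weights maps each cell E_i to its new probability q_i."""
    posterior = {}
    for cell, q in partition_weights.items():
        p_cell = sum(prior[w] for w in cell)      # prior probability of E_i
        for w in cell:
            posterior[w] = prior[w] / p_cell * q  # condition on E_i, weight by q_i
    return posterior

# Dim room: you shift to probability .8 for 'center dot', .2 for 'no center dot'
post = jeffrey_update(prior, {center_dot: Fraction(4, 5),
                              no_center_dot: Fraction(1, 5)})

odd = {1, 3, 5}
print(sum(post[w] for w in odd))  # 4/5, since P(odd | center dot) = 1
```

Note that the rule demands a single point value q_i for each cell of the partition; the interval constraint (i) and the inconsistent pair (ii) simply cannot be fed into `partition_weights`, which is the narrowness complained about above.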