Let S be a set of beliefs and experiences that is the evidence on which we are going to apply defeasible reasoning. Then suppose we have propositions p1…pn with the following properties:

1. S |~ p1 (p1 is a defeasible consequence of S).

2. S U p1 |~ p2 (p2 is a defeasible consequence of the union of S and p1).

...

n. S U p1…pn-1 |~ pn.
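The chain can be sketched concretely. Here is a toy model of my own (not from the discussion above): read “E |~ p” as “p is true in at least 90% of the worlds compatible with everything in E”. The worlds, the propositions, and the threshold are all hypothetical choices, just to make the sequence of steps tangible.

```python
WORLDS = range(100)  # a finite toy space of possible worlds, 0..99

def defeasibly_implies(evidence, p, threshold=0.9):
    """evidence |~ p: p holds in at least `threshold` of the worlds
    compatible with every member of `evidence`."""
    compatible = [w for w in WORLDS if all(w in e for e in evidence)]
    return bool(compatible) and \
        sum(w in p for w in compatible) / len(compatible) >= threshold

S  = set(range(50))   # the evidence narrows things to worlds 0..49
p1 = set(range(46))   # true in 46 of those 50 worlds
p2 = set(range(42))   # true in 42 of the 46 worlds where S and p1 hold

print(defeasibly_implies([S], p1))      # True:  S |~ p1
print(defeasibly_implies([S, p1], p2))  # True:  S U {p1} |~ p2
```

Each link in the chain clears the threshold, which is all the numbered conditions above require.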

Question: what does it take for a sequence of belief revisions matching this one to result in a rational or justified belief that pn?

First answer, Williamson’s: it requires that, by step n in the process, one knows S U p1…pn-1. We’ve discussed this view before here, but it’s not very plausible.

Second answer, the empiricist view: where S is one’s basic empirical evidence (sensory experience, perhaps), pn is justifiedly believed just in case it is justified relative to S. Again, not very plausible, since sometimes our coming to believe something legitimately makes it part of our evidence.

Third answer, Chisholm’s: not in his written work that I can detect, but in conversation, he said something like the following. Where S is some initial set of appearance states, justification for pn requires a sequence of rational addings of p1…pn-1. If that occurs, however, the set S is no longer the set of appearance states that characterize the person in question. Instead, one will have added a number of other appearance states, including being appeared to p1-ly, being appeared to p2-ly, etc. (For those who insist on syntax making a property the appropriate value in the adverbial characterization, we could employ lambda conversion on p1 … pn.) Of course, this is not enough for justification for Chisholm, but let’s hold fixed satisfying the other constraints. The important point is that the class of evidence will not be exhausted by S U p1…pn for Chisholm.

Fourth answer, the defeasibilist’s: pn is justified if each of p1…pn is added without thereby undermining any of the information in S or any of the pi’s. The probabilist will say: but pn might be improbable given S U p1…pn-1.

There are lots of issues here, one important one being whether the logic of defeasible consequence can come apart from probability in this way (I think Jim Hawthorne’s posts argue they can’t, but I may be misunderstanding here.) Another one is why probability should be assumed to have the kind of power over justification that the objection assumes. Perhaps *known* probability does, but then the class of evidence would no longer be S U p1…pn.

The question that interests me the most, however, is whether the Chisholm position helps. That is, if we assume that the probabilist objection to the defeasibilist position is telling, does the problem disappear if we adopt Chisholm’s view?

If the Chisholm view is correct, and the evidence embodied in S is supplemented, e.g., by one’s being appeared to p1-ly, then is the latter appearance one’s evidence, the defeasible consequence relation between S and p1 becoming essentially irrelevant? Or does being justified in believing p1 here require not only that it be defeasibly supported by S but also that one be appeared to p1-ly?

It seems to me that if we could really use something like a lambda operator to convert any proposition into a state of being appeared to, then we could simply use such states to explain justified belief and wouldn’t need to appeal to defeasible reasoning at all. Unless perhaps one’s inferring p1 is precisely what puts one in the state of being appeared to p1-ly, so that just as there are visual and tactile appearances, there also are “inferential” appearances. Still, it strikes me at first blush as just a bit too liberal an account of appearance states.

Stephen, thanks for the comments. On the second one, lambda conversion intuitively takes one from propositions to properties and vice-versa; I only mentioned it because the canonical form for an appearance state for Chisholm is being appeared to F-ly, where F is a property. So if one balks at being appeared to p1-ly, we’d have to resort to lambda conversion to solve that problem.

On the other point, I’m not quite seeing the worry yet. The new evidence set would at least be S plus the new appearance, and a defeasible consequence of that set is, presumably, explicable in part by the defeasible consequences of S itself (ceteris paribus, of course, since the new appearance state could undermine some of the consequences of S itself).

Ted Poston sent me the following email:

“Plantinga discusses a Chisholmian view like this in WCD pp. 54-63. He locates the view in Chisholm’s self-profile (ed. Radu Bogdan). There Chisholm expresses the following view: Believing p is epistemically preferable to believing q =def Those of S’s purely psychological properties which do not include believing p and believing q are necessarily such that having those properties and believing p is intrinsically preferable to having those properties and believing q (quote from Plantinga, p. 55).”

Ted’s right that this view is close to what I attributed to Chisholm here, but it isn’t quite the same. The view I attributed has to do with belief revision, which has diachronic aspects to it, and Chisholm’s written work doesn’t address that issue, as far as I know.

Jon, if I could just offer one more comment, it seems to me that the problem is not that Pn might be improbable given S U P1 U . . . U Pn-1. Indeed, as you’ve described it, Pn presumably is highly probable given this evidence. The worry, as I see it, is that it seems difficult to have confidence in Pn if one is fairly confident that one or more of the members of S U P1 U . . . U Pn-1 is false. And presumably the larger this set is, the more confident one will be that it contains at least some falsehoods. It is this, I think, that is the source of the “drainage problem.”

It has always seemed to me that an appeal to something like “explanatory integration” (see, e.g., Haack’s book) might be helpful here. One has to look at the degree of explanatory integration of the entire package S U P1 U . . . U Pn, and not simply at defeasible consequence relations between earlier stages of inference and later ones.

Jon & Stephen,

The most widely studied nonmonotonic logics, system P (called the preferential consequence relations) and the stronger system R (called the rational consequence relations), both have a rule called CUT:

if S |~ p1 and S U {p1} |~ p2, then S |~ p2.

They also have AND: if S|~ p1 and S|~ p2, then S|~ (p1&p2).

In your example, iteration of these two rules results in S|~ (p1&p2&…&pn).
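The two rules can be spot-checked numerically. This is my own toy illustration, not a validity proof: it reads “S |~ p” as P(p | S) = 1 on one made-up distribution over three worlds (the simple conditional-probability reading, not the full Popper-function semantics Jim mentions below), and confirms that AND and CUT both go through on it.

```python
from fractions import Fraction

def cond_prob(a, b, weights):
    """P(a | b), with propositions modeled as sets of world names."""
    pb = sum(weights[w] for w in b)
    return sum(weights[w] for w in a & b) / pb

weights = {'w1': Fraction(1, 2), 'w2': Fraction(1, 4), 'w3': Fraction(1, 4)}
S  = {'w1', 'w2'}
p1 = {'w1', 'w2'}              # P(p1 | S) = 1
p2 = {'w1', 'w2', 'w3'}        # P(p2 | S & p1) = 1

# AND: from S |~ p1 and S |~ p2, infer S |~ (p1 & p2)
print(cond_prob(p1, S, weights) == 1)        # True
print(cond_prob(p2, S, weights) == 1)        # True
print(cond_prob(p1 & p2, S, weights) == 1)   # True
# CUT: from S |~ p1 and S U {p1} |~ p2, infer S |~ p2
print(cond_prob(p2, S & p1, weights) == 1)   # True
```

One instance only, of course; the general probabilistic validity of AND and CUT at probability 1 is the claim in the surrounding text.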

So one way to look at the question you raise is to ask whether system P (or the stronger system R) makes sense, and how we should understand these two rules of those systems.

AND is pretty clear. It says, “if based on S you strongly believe that p1 and based on S you strongly believe that p2, then based on S you (should) strongly believe the conjunction of p1 and p2.”

If you model these logics in terms of probability, then “strongly believe” means probability 1 (i.e. certainty). But probability 1 is defeasible in the probability theory (the Popper functions) that models these logics — i.e. certainty is defeasible.

CUT says this: “if based on S you strongly believe p1, and if based on p1 together with S you would strongly believe p2, then based on S (alone) you (should) strongly believe p2.”

Let me say again, if you model these logics in terms of probability, then “strongly believe” means probability 1 (i.e. certainty). And both AND and CUT are probabilistically valid. Notice that in systems P and R there is no “drainage”. Modeled probabilistically, the reason there is no drainage is that only probabilities less than 1 tend to “leak”.
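The contrast can be shown numerically (my own illustration; the step value 0.9 is arbitrary). If “strongly believe” is weakened to “probability at least 0.9”, the chain rule for conditional probability shows the conjunction’s probability draining step by step, while at probability 1 it never does:

```python
threshold = 0.9
for step in (0.9, 1.0):
    conj = 1.0  # P(p1 & ... & pn | S), built up via the chain rule
    for n in range(1, 5):
        conj *= step  # each factor is P(p_n | S & p1 & ... & p_{n-1})
        print(step, n, round(conj, 3), conj >= threshold)
# With step = 0.9, the chained probability is already 0.81 by n = 2
# and keeps falling; with step = 1.0 it stays at 1 -- no leak.
```

This is just the arithmetic behind “only probabilities less than 1 tend to leak”.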

Jim, thanks for the very useful comment. Here’s what I really want. I want a logic for defeasible consequence that allows me to add such consequences of what I presently believe and, ceteris paribus, continue doing so throughout my life to move from one perfectly rational set of beliefs to another. So when I add a belief, I want the addition to allow me to use this new belief as evidence for coming to believe other things. The ceteris paribus clause concerns having to give up some of my evidence on learning new things, but I’ll worry about that issue later!

So if there is a logic for defeasible consequence that can only be used with respect to certainties, then it won’t be what I’m looking for. So when I ask about Chisholm’s views, I’m assuming a perhaps non-existent logic, and then wondering whether whatever drainage problems there are can be responsibly avoided by adopting Chisholm’s suggestion. I’m not sure they can, but it would be nice to see exactly why.

Jon,

It sounds to me like you want a logic for which something like the following can occur through a sequence of inferences or updates:

1. you are justified in believing S

2. based on S (alone) you are not justified in believing p2

3. based on S (alone) you are justified in believing p1

4. so you are now justified in believing S&p1

5. based on S&p1 you are justified in believing p2

6. so you are now justified in believing S&p1&p2

Does that look like what you are after — i.e., a logic that can produce a sequence of steps of this kind? In particular, have I got 2 as compared to 5 right? If not, it might help me to better understand what you’re after if you can explain how this misses it.
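For what it’s worth, a threshold reading of “justified” reproduces the crucial pattern in steps 2, 3, and 5 above. This is a toy model I made up: six equiprobable worlds, and justification as conditional probability of at least 4/5.

```python
from fractions import Fraction

def justified(p, given, t=Fraction(4, 5)):
    """'Justified in believing p based on `given`': P(p | given) >= t,
    over six equiprobable worlds, so probabilities are count ratios."""
    return Fraction(len(p & given), len(given)) >= t

S  = set('abcdef')   # evidence compatible with all six worlds
p1 = set('abcde')    # P(p1 | S) = 5/6
p2 = set('abcd')     # P(p2 | S) = 4/6, but P(p2 | S & p1) = 4/5

print(justified(p2, S))        # False: step 2 -- not justified from S alone
print(justified(p1, S))        # True:  step 3
print(justified(p2, S & p1))   # True:  step 5 -- justified from S & p1
```

So a logic of this shape is at least consistent: p2 can be unreachable from S alone yet reachable once p1 has been added.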

Jim, yes that’s the idea. I think it’s the idea behind Pollock’s Oscar project? The only qualifications I’d put on the above are that S might include (or be limited to) appearance states, and they are not appropriate objects of justification. And, if we adopt the Chisholm suggestion, the class of appearance states is enlarged as one adds new beliefs.

Jon,

I think that how class S grows over time is a separate issue, and I think I understand how that should work. So let’s hold S fixed, and consider the beliefs gotten by inferences from S (i.e. from the S at some given time).

It seems to me that when we think about inference (including defeasible inference) we need to attend to a distinction between two conceptions, which I’ll call “static” and “dynamic”. To see what I’m getting at, let’s just consider normal deductive inference for a moment. The static conception is logical entailment. The dynamic conception is deduction or proof. Given a certain system for constructing proofs it may well be that to prove p2 from S I first need to prove p1. So in a sequence of inferences in a proof there may be a time when p2 is not yet proved (or provable, just yet), but p1 is. Then, after proving p1, it is “added to the proved theorems”, and I can now prove p2, and then “add it to the proved theorems” as well. However, from the point of view of the logical entailment relation (the static conception), p2 was a logical consequence of S all along (as was p1 as well), and there is no need to “get to p2 through first getting p1” with regard to the logical entailment relation itself.

It seems to me that the same distinction goes for a defeasible entailment relation ‘|~’. It may be that in a proof procedure for “justifiedly believed” we may need to prove p1 from S first, and then use S&p1 to prove p2 (i.e. we may not be able to prove p2 without first proving p1 and using it with S). But even so, the “static defeasible entailment relationship” ‘S|~ p2’ should hold if both of the “static defeasible entailment relationships” ‘S|~ p1’ and ‘S&p1|~ p2’ hold — even though we may not be able to derive p2 (in some specific deduction system for ‘|~’) without first deriving p1.

Maybe this is just a bit too fast, though. The static relation ‘B|~ A’ must represent something like “A is justifiedly believable based on B”. However, only after an agent has done the proof does A become “justifiedly believed” for her on the basis of B. So there is still an important doxastic distinction here. But, for the most part, studies of nonmonotonic logics have focused on the static relationship ‘|~’, rather than on the corresponding proof theory (though of course logicians do have systems through which they make the appropriate deductions). It seems to me that what you are calling for is more attention to the proof theory, because it is there that “justifiedly believable” turns into “justifiedly believed”. Whadoya think?
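The dynamic, proof-theoretic side can be caricatured in a few lines (a hypothetical rule base of my own; “fire a rule” stands in for “rationally add a belief”):

```python
# Each rule says: if you already believe all the premises, you may
# defeasibly add the conclusion. Note that p2 is only reachable after
# p1 has actually been added to the believed set.
rules = [
    ({'S'}, 'p1'),
    ({'S', 'p1'}, 'p2'),
]

def forward_chain(evidence, rules):
    """Fire rules until nothing new is added: 'justifiedly believable'
    becomes 'justifiedly believed' one derivation step at a time."""
    believed = set(evidence)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= believed and conclusion not in believed:
                believed.add(conclusion)
                changed = True
    return believed

print(sorted(forward_chain({'S'}, rules)))   # ['S', 'p1', 'p2']
```

The static relation would pronounce on S |~ p2 directly; the procedure has to pass through p1 first, which is exactly the doxastic distinction at issue.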

Jim, very nice explanation! Here’s what’s still bothering me, though. The probability of pn given S alone can be much lower than the probability of pn given S U p1…pn-1. That was Stephen’s original point. So either the logic in question allows one to get to pn or it doesn’t. If it doesn’t, then this “drainage” problem disappears. If it does, then the problem needs a solution. As I understood your previous comments, when we apply some of the best-known logics to actual belief, the appropriate idea is to treat them as applying to full belief. But that’s a severe limitation, and I’m interested in applications to epistemic situations of realistic believers. Does this make sense to you?

Jon,

Yes, it does make good sense. But I’ll need to think more about it. One question. It sounds like you accept that there are various grades of belief — at least two grades, anyway: full belief, and some weaker kind that is often appropriate to the epistemic situations of realistic agents. Are these two grades just different levels in a range of different possible belief strengths? If so, does the range consist of various grades or levels of belief strength that are connected by an ordering on belief strengths — perhaps by a “believes more strongly than” relation?

Jim, I don’t have settled opinions on this matter, but I’m inclined to favor two dimensions for assessing beliefs. One is degree of belief and the other strength of belief. Both would satisfy some ordering condition of the sort you mention. I posted on August 28 about the two dimensions, but don’t expect anything more than pointing and hoping…

Jon, I think you are right to distinguish between degree of belief and strength of belief. Keeping that distinction in mind, it seems to me that the problem arises from the fact that the following three intuitively plausible propositions can’t all be true:

(1) There is a relation of evidential support that holds between beliefs of at least a certain strength (or between appearance states and beliefs of at least a certain strength), and this relation is an important part of the explanation of why one is justified in believing all of the things one is;

(2) When a belief is evidentially supported by a body of evidence, it is highly probable given that evidence, but the probability in most cases falls short of 1;

(3) The relation of evidential support is transitive.

Jon, I take it that you and I agree that (3) is the one to reject. We agree that you can’t assess later stages of inference solely on the basis of the initial body of evidence. Your suggestion is that the relevant body of evidence expands at each stage, whereas I want to appeal to the coherence or degree of integration of all of the stages (and the initial body of evidence) taken together.

I think we could put our two views to a test. As I understand it, on your view, so long as the required relation of evidential support exists, later stages of inference are always permitted (because the body of evidence automatically expands with each stage). On my view, later stages are permitted only insofar as the degree of coherence or integration remains above a threshold level.

Stephen, the suggestion I attributed to Chisholm does expand the body of evidence at each stage, but it doesn’t hold that later stages of inference are always permitted. The body of evidence will include more things at later stages, on this suggestion, but it may also include less, if new experience conflicts with old.

Of the 3 principles you mention, I also question (2). There is a way to get it to come out true, if we take probability to mean epistemic probability, but then it won’t conform to the probability calculus, I think. My hope was to skirt these issues by suppressing any idea of connections between the probability calculus and the logic of defeasible consequence, but Jim has me worried that that may not be possible.

About the connection to Chisholm’s views, I take it that ‘|~’ for Chisholm will express the “rational confidence” relation. Given a set, S, of such and such psychological properties, p1 is rationally believed (or S |~ p1). A logic for rational confidence (RC-logic) seems to be different from the non-monotonic logics Jim mentions in an earlier post. In particular the RC-logic will not have the rule CUT. RC-logic should encompass both deductive and inductive reasoning, yet it will still have the defining feature of non-monotonicity. Here’s a particular case. Suppose p1 is x^2 + y^2 = z^2. And pn is Euler’s claim that Fermat’s last theorem holds for the case of n=3. (Fermat’s theorem is that there are no whole number solutions to the equation x^n + y^n = z^n for n greater than 2.) Now suppose by RC this holds: S |~ p1, but it’s not the case that S |~ pn. There’s a sequence of addings, however, that will get you from S union p1…pn-1 to pn. This shows–I think–that CUT doesn’t hold for RC. Moreover, it’s still non-monotonic because even though pn is a consequence of S union p1…pn-1, we can add r to the set and the consequence no longer holds. Let r = Andrew Wiles’ testimony that Euler’s proof contains a subtle mistake. What’s interesting–at least for a neophyte like me–is that the inferential relations that RC aims to capture are descriptive. It aims to capture inferences that are within the rational scope of normal folks. It seems that to make RC contain a normative element it should involve some idealizations–e.g., what a person would understand if they thought about their evidence for a moment or two. (I’m just starting to learn about the fascinating subject of non-monotonic logics, so this may be discussed in the literature.)

Ted, I’m not sure what the example is supposed to be here. Can you say what p1…pn are?

Jon, p1 is the Pythagorean theorem and pn is “there are no whole number solutions for x^n + y^n = z^n where n=3”. The p-sub-i’s are the various claims one proves along the way to pn: properties about whole number solutions to the Pythagorean theorem and properties about would-be whole number solutions to the other equation.

Ted, I’m still not seeing it. You must think that a defeasible consequence of the Pythagorean theorem is the first of several claims along the way to showing that there are no values for x, y, z such that x cubed plus y cubed equals z cubed. Do you think that because each of the p-sub-i’s will be logical truths? If that’s so, and the conclusion is as well, then the assumption that S |~ p1 but not S |~ pn is inconsistent. But how else are we supposed to defend the claim that the PTheorem defeasibly implies the first p-sub-i?

Jon, That’s right: each of the p-sub-i’s is a logical truth, but more importantly, at each stage they are simple enough consequences of what has come before to be within the rational purview of normal folks. The example shows, I think, that if the intended interpretation of ‘|~’ is rational belief (or even rationally believable) then it will yield a non-monotonic logic without the rule CUT. In the example above I claim that S |~ p1 but not S |~ pn, even though there’s a series of a priori addings from the initial claim that will get pn as a defeasible consequence from the union of S with all the p-sub-i’s. Another way to think about this, I think, is that the logic for rational belief will yield a modal structure without, at least, S5. Suppose S is the original world; the accessible worlds from S will not include every logical consequence of S but only those consequences that are simple enough for normal folks to “see”. Suppose one of those worlds is p1. This world will be the union of S and p1. From the S U {p1} world there will be some accessible worlds that are not accessible from S. Hence a counterexample to CUT. In addition this sort of structure shows that CUT will not hold even if the intended interpretation of ‘|~’ is rationally believable.
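Ted’s picture can be put in miniature. The sketch below is my own (the labels are hypothetical stand-ins, not real propositions): it reads “X |~ Y” as “Y is one easy inferential step from X”, and since reachability-in-one-step is not transitive, CUT fails on this reading.

```python
# X |~ Y iff Y is a single "easy" inferential step from X.
one_step = {
    'S':    {'p1'},
    'S&p1': {'p2'},   # p2 only becomes reachable once p1 is in hand
}

def rc(premises, conclusion):
    """Rational-confidence consequence: one easy step, no chaining."""
    return conclusion in one_step.get(premises, set())

print(rc('S', 'p1'))      # True:  S |~ p1
print(rc('S&p1', 'p2'))   # True:  S & p1 |~ p2
print(rc('S', 'p2'))      # False: CUT fails for this reading
```

This is just the accessibility picture from the comment above: worlds reachable from S U {p1} need not be reachable from S.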

Ted, I think you want a logic with a modal structure weaker than S4 actually, since transitivity is the problem here: I can infer from p1 to p2 and from p2 to p3, but not necessarily from p1 to p3, etc. I’ve had similar worries to yours, as you know, but I would express them as worries about psychological closure. I share Ted’s worry about CUT, though I would have just said that the problem is that psychological states aren’t closed under logical entailment, and I have similar problems with AND (if S|~ p1 and S|~ p2, then S|~ (p1&p2)): that is, I don’t think conjunction introduction always works for non-ideal agents (not to mention the Lottery Problems for AND!), both because of the possibility of justified inconsistent beliefs and because people often just don’t see that issues are related, etc.

My worries here are also related to some issues Jon has raised. I like anything formal, so I’m keenly interested in defeasible conditionals, but in the end I want to know *What is it for?* I know what *I* would like to use it for: judging the rationality of people’s beliefs. Like every other formalism, non-monotonic logic can be treated at very different levels of idealization. At one extreme it will be purely descriptive and CUT will clearly not apply. At the other extreme the assumption of logical omniscience will clearly make CUT appropriate. In the middle, we’ll want a moderate level of idealization describing a standard we think it reasonable to hold actual agents to, which will result in regulative ideals. Unrestricted CUT won’t belong in this logic, but weakened versions may.

My bottom line is that we need to get our intended interpretation settled fairly firmly when talking about these matters. It seems to me that by far the most helpful and illuminating comments have come when people were most clear about what they were doing philosophically. I think it would be interesting if everyone would say explicitly how they would fill in “A |~ B iff…” from a *psychological* point of view.

Trent,

With regard to spelling out an account of defeasible reasoning, I’m mainly a logician. I’m primarily interested in spelling out a logic for ‘|~’ that would be appropriate for ideal (logically omniscient) agents. The point of doing this, from the logician’s point of view, is this. Suppose we attempt to spell out a more psychologically realistic non-monotonic logic (a project that, I agree, needs to be done). We certainly wouldn’t fault realistic agents if they happened to be good enough reasoners to do better than required by our realistic theory. But what would count as “doing better”? Is there a standard against which we can measure the strengths and weaknesses of realistic reasoners? The point of a logically ideal account is to supply such a standard. The idea is not that real agents are “irrational” for not living up to the ideal standard. But we want to understand how and why they may fall short of the ideal standard (e.g. for reasons of computational complexity), and to distinguish those conditions from others which we may count as real psychological faults (like not paying a sufficient amount of attention). So, I think, we need the logician’s account of logically ideal defeasible reasoning as a kind of standard against which to measure (or at least compare) more psychologically realistic accounts.

Jim,

I’m interested in your thoughts on this matter, since it raises methodological issues that I find interesting to think about. How do you suppose we settle on an account of ideal defeasible reasoning? It seems that we don’t have a very clear pre-theoretic idea of this notion in place, unlike, say, logical consequence. The disagreements about logical consequence are, by contrast, rather focused and refined. Consider the logical pluralist as an example: the position springs from our familiar, pre-theoretic notion of validity. We don’t seem to have as well-constrained a notion w.r.t. defeasible reasoning.

I wonder if the notion of defeasible reasoning is rather more like the notion of preference. Preference is a rather complicated notion, like defeasibility, unless you’re an economist: in which case an agent’s preferences consist of a complete preordering of a set of outcomes by a reflexive and transitive relation over those elements (outcomes). For many applications, this idea works. But decision theory defines rational economic agency in terms of the notion of economic preference, rather than defining economic preference in terms of rational economic agency. But is economic preference all that preferences are? (Doubtful; preferences change, for one thing, often in light of new information, and it isn’t clear that the gap between this feature and idealized theories is a narrow one to span. Preference revision (treated decision theoretically) raises the question of whether an agent evaluates the alternative preference structures before changing (unlikely) or after (likely; hence, problematic).)

Returning to defeasible reasoning: what grounds your ideal notion of defeasible reasoning?

Greg,

My favorite pre-theoretic understanding of defeasible consequence is to read ‘B |~ A’ as saying, “among possible states of affairs in which B is true, A is usually true.” Here one might replace the word “usually” with “almost always”, or “very probably”, or “fairly likely”, or some other probability-like notion. I think there are closely related logics for each of these. And I think we have a good enough handle on probability to use it as a semantics for these logics, much as possible truth-values provide a semantics for logical consequence. One might take the probability functions in such a semantics to represent (possible) measures on possible worlds or possible states of affairs, or one might take them to represent idealized (ideally coherent) belief strengths.

What do you think of such an approach?

Jim,

I like very much the project of investigating the scope of probabilism, which I understand to involve two tasks: the first is to provide a probabilistic semantics for an epistemic notion or relation, while the second is to evaluate whether that model is faithful to the epistemic notion at hand.

My question addresses your thoughts on the second half of this exercise, namely fit. It would seem that positive arguments would be necessary here if one is proposing a logic as an ideal standard for defeasible reasoning.

Greg,

I take it that the point of developing a logic of defeasible consequence is to represent what an ideal agent (an agent not limited by computational ability or resources) should defeasibly believe based on other statements she accepts (and perhaps based on her nondoxastic states as well). (One might also develop a logic that better fits more realistic agents, with their computational and other cognitive limitations — but I think that project will be easier to carry out, and evaluate, after we get down the logic that would suit ideal agents.)

It seems to me that belief also clearly comes in strengths or degrees — and that it makes good sense to model ideal agents as believing some claims more strongly than others. And it is at least worth exploring a logic of defeasible belief simpliciter that makes contact with a logic of degrees of belief — one where simple belief is belief strength above a threshold. I think this gives us one handle on how belief should work, and I know of no other handle that ties so directly to our intuitions about belief, or that we understand as well. So a semantics that ties defeasible consequence to measures of belief strength in the form of probabilities (or, better yet, qualitative probabilities — where the basic notion is “believes B at least as strongly as C”) makes sense, not simply as an abstract model theory, but as a way of trying to explicate a logic of belief that is faithful to a central intuition about the nature of belief. I know of no better, more central, more intuitively plausible conception on which to base a semantic theory for defeasible consequence.

Do you know of a better, more intuitively compelling basis on which to build a logic of defeasible consequence? If so, please tell me about it.

Jim,

In addition to probabilistic models for non-mon conditionals, two other semantics spring to mind–preferential semantics and the semantics for epistemic conditionals. So, the question remains.

One might plausibly argue that the economic notion of preference is intuitively plausible and central. Does that mean that the essential nature of preference is choosing among alternatives? The example of revising preferences (above) suggests that it isn’t. If preference were essentially economic preference, we’d say that this example was conceptually confused or to be explained away by the psychological limitations of people. The example is selected so that these alternatives are not attractive.

I don’t know of a better, more intuitively compelling basis on which to build a logic of preference revision. However, I wouldn’t accept that the centrality and intuitiveness of the economic notion of preference were relevant to evaluating the fit between a decision theoretic modeling of preference change and actual preference change.

(Here I am granting that degrees of belief are intuitive, but I don’t think that they are at all–however useful it is at times to think of belief in these terms. Entrenchment? Confirmation? Justification? Reluctance to change a belief? These notions seem like epistemic concepts that come in degrees; but belief doesn’t seem to…and talking this way conflates these notions, which tends to drive traditional epistemologists nuts…but this is another discussion.)

Jim,

Very interesting points, I have a few questions and comments:

1. Idealizations

I think we are on exactly the same page wrt the use of idealized logics; I just wanted to know at which level we were *currently* working. I wonder if you’re familiar with our own Paul Weirich’s work on idealizations in decision theory; he’s got a new book on it.

http://www.missouri.edu/~weirichp/

My interests lie in doing the same thing for uncertain inference, trying to find that “sweet spot” where we can expect normal adult human agents to perform.

2. Matters Modal

You write: “My favorite pre-theoretic understanding of defeasible consequences is to read it as saying, ‘among possible states of affairs in which B is true, A is usually true.’”

I like this idea a *lot* and have tried to use it in my own account of objective prior probabilities in an objective Bayesianism, but I keep running into problems with the notion of *proportions* among possible worlds. I’d *like* to say that “A |~ B” reads “most A-worlds are B-worlds”, but it seems like all the classes will be infinite. There will be at least countably many A-worlds, with at least a denumerable infinity of both B-regions and ~B-regions. I’ve tried to get the worlds to “nest” so that I can speak of ratios in the same intuitive sense in which there are twice as many naturals as evens. I just don’t know the logic to get it done, but perhaps it’s already been done. Perhaps it’s even easy. I’d like to know!
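One partial answer to the proportion worry, for countable classes with enough structure, is natural (asymptotic) density: the limiting fraction of the first n candidates that fall in the class. A sketch of my own (and a limited one: the limit need not exist for every set, and density is only finitely additive, which is part of why the general problem stays hard):

```python
def density(indicator, n=100_000):
    """Approximate the natural density of {k : indicator(k)} in 1..n,
    i.e. the fraction of the first n positive integers in the set."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

print(density(lambda k: k % 2 == 0))   # 0.5:  the evens
print(density(lambda k: k % 4 == 0))   # 0.25: multiples of four
```

This vindicates the “twice as many naturals as evens” intuition for arithmetically nice sets, but it does not obviously extend to arbitrary infinite sets of worlds.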

3. Probability

You write: “I think we have a good enough handle on probability to use it as a semantics for these logics.” and “One might take the probability functions in such a semantics to represent (possible) measures on possible worlds”

So here’s something I’ve been wondering: if defeasible conditionals are supposed to account for uncertain inference and the semantics are probabilistic, why not just go Bayesian? What would you say to a Bayesian like me who doesn’t understand why I need anything other than the probability calculus for a logic of uncertain inference? I think the logic of defeasible conditionals looks *cool*, *really* cool. I still just don’t know what they’re *for*. Especially for someone who already has some grasp on how to use the probability calculus to handle uncertain inference.

Sorry for the long post, but this stuff fascinates me.

Trent

re: intuitiveness of degrees of belief

That’s the problem with intuitions, I suppose. I must say I find the notion very intuitive myself, but I don’t think degrees of belief are necessary for the project of a formal epistemology of uncertain inference. I’m not concerned with knowledge, so not that concerned with the justification it requires. I’m more interested in a broader notion of rationality which will map onto some counterpart of epistemic justification. So even if beliefs don’t come in degrees, we can still be interested in the rationality of the graduated attitudes that attend belief, like confidence. So suppose that belief is binary, all or nothing (which seems totally counterintuitive to me). *It’s the graduated properties attached to beliefs that do all the work.* From the fact that I believe that S is a liar, you may not infer (with much accuracy) my actions, even given a preference scale (except in special circumstances). However, if you know that I’m *highly confident* in that belief, you’ll be able to infer (quite accurately) that I’ll *call* S a liar, or what have you. Williamson tries to get knowledge to matter, but I find his thief example totally unconvincing. But it isn’t some binary notion of belief that motivates the thief either: it’s his *confidence* attached to the belief that there’s a diamond in the house that keeps him looking. So for those who, unlike myself, don’t find degrees of belief intuitive, I think the project of using probability theory or probabilistically semantized notions to analyze rationality is open to you.

Trent

Trent,

On your post #28, heading 2: I spent about a year thinking about that problem when I was in grad school ten years ago. The mathematics didn’t exist to solve it then (or at any rate, I wasn’t smart enough to come up with it) and I would be very surprised if it exists now. It really is a rather deep problem, isn’t it? After all, it seems so natural to think of conditional probabilities in terms of “degree of entailment” and to think of that in terms of proportions among sets of possible worlds. But I really think the problem of comparing “sizes” of infinite sets of possible worlds means that the appeal to possible worlds can’t be more than a useful heuristic here. But I would be very glad to be proved wrong.

On heading 3: I think Bayesianism doesn’t really capture our pretheoretic conception of evidential support (to the extent we have one) because, on the pretheoretic conception, the support relation relates full beliefs yet can’t be equated with a conditional probability of 1. But, of course, that might just mean that the pretheoretic conception marries incompatible features.

Just a follow-up thought on the problem of proportions among infinite sets of possible worlds. What makes it plausible to say, e.g., that the proportion of even numbers to natural numbers is 1/2 is that you can go through the natural numbers in a certain order, and every other one is even. But when it comes to possible worlds there really is no natural order in which to take them. Moreover, the cardinalities involved here must be at least nondenumerably infinite, which makes it even more difficult to use an ordering to define the relevant proportions.
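For what it’s worth, the 1/2 intuition does have a standard formalization: the *natural density* of a set of naturals, i.e. the limiting frequency of the set among the first n numbers. But it vindicates the order-dependence worry exactly, since the density changes when the very same numbers are enumerated in a different order. A minimal Python sketch (the function names are mine, purely illustrative):

```python
# "Natural density": the limiting frequency of a set among the first n
# items of an enumeration. It formalizes "half the naturals are even",
# but the answer depends on the order of enumeration.

def density(pred, ordering, n):
    """Fraction of the first n items of `ordering` satisfying `pred`."""
    return sum(pred(ordering(i)) for i in range(1, n + 1)) / n

def is_even(x):
    return x % 2 == 0

# Standard order 1, 2, 3, ...: the frequency of evens tends to 1/2.
def natural(i):
    return i

# Same numbers reordered "odd, odd, even, odd, odd, even, ...":
# the frequency of evens now tends to 1/3.
def two_odds_one_even(i):
    q, r = divmod(i - 1, 3)
    if r == 0:
        return 4 * q + 1      # 1, 5, 9, ...
    if r == 1:
        return 4 * q + 3      # 3, 7, 11, ...
    return 2 * (q + 1)        # 2, 4, 6, ...

print(density(is_even, natural, 100_000))           # 0.5
print(density(is_even, two_odds_one_even, 99_999))  # ~1/3
```

So a possible-worlds analogue would need not just the sets of worlds but a privileged enumeration (or measure) of them, which is the uniqueness problem all over again.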

Stephen,

1. Re: Matters Modal

I’m glad I’m not the only one who’s totally perplexed in the face of the “ratios of worlds” problem. I have two reasons for hope that it can be solved. First, there just *is* a sense in which there are twice as many naturals as odds, and some mathematician is (or has been?) smart enough to formalize the notion. Second, there is a sense in which some of the worlds will “include” others. I take worlds as maximal consistent conjunctive propositions. Plantinga introduces worldbooks to illuminate worlds, but I like worldbooks better than worlds. Since worldbooks are maximal, they won’t contain one another in the sense that the set of conjuncts of one will be a proper subset of the conjuncts of another. However, many of the stories will be of finite length, “to be continued.” Other worlds will continue the story and so “contain” worlds in that sense. There will also be many worlds many of whose conjuncts are subsets of the sets of conjuncts of other worlds. I have hope that whatever mechanism captures the idea set forth in my first consideration can capture the idea set forth (very sketchily) in my second consideration.

2. Re: full and partial belief

You write: “on the pretheoretic conception [of evidential support], the support relation relates full beliefs”

Again, *my* pretheoretic conception doesn’t relate full beliefs, not as far as I can tell. As long as I’ve thought about evidence (junior high, I think) I’ve thought of it as relating levels of confidence. The folk know that you can “believe something in your heart” on lesser evidence than it takes to assert something. The difference evidence makes (after a certain point) is not *whether* you believe, but *how* you believe: more or less firmly. I remember when I became a Christian (15 years old) thinking “I’ve got enough evidence to join a church, but do I have enough evidence to try to get others to do so?” So my pretheoretic notion of evidential support was graduated and had to do with pragmatic things which required not mere binary belief, but a certain level of confidence. “Do I evangelize?” was not a question about whether I had enough evidence to believe; it was about whether I had enough evidence to have a level of conviction sufficient to assert and argue. There are distinctions this picture glosses over, but this much seems clear: in my mind evidence related graduated properties, not binary ones.

I’m not sure where this fits in, but Locke and Hume sure thought of evidence as relating something that came in degrees. Their thoughts aren’t pre-theoretic in one sense, but they are pre-Bayes and represent, I think, a core notion of evidence.

Trent

Stephen,

I’m putting this in a separate post, because some philosophers completely discount such data. I think that’s a bad idea, since lexicography is a legitimate science, and as such philosophers should take its results as touchstones, especially when discussions of pre-theoretic notions come up. Pre-theoretic notions are evidenced by linguistic behavior in ordinary language. That is an empirical matter and is studied by the empirical science of lexicography.

The OED defines evidence this way: “The quality or condition of being evident; clearness, evidentness.”

That’s clearly a degree-theoretic notion. The earliest recorded uses are also graded.

1665 BOYLE Occas. Refl. V. iv. (1675) 310 Certain Truths, that have in them so much of native Light or Evidence..it cannot be hidden. 1677 HALE Prim. Orig. Man. I. ii. 63 They [our faculties] expand and evolve themselves into more distinction and evidence of themselves. 1721-1800 in BAILEY. 1882 MIVART Nat. & Th. (1885) 122 So evident that we require no grounds at all for believing them save the ground of their own very evidence.

So there’s some empirical evidence of what Anglophone humans’ pretheoretic notion of evidence is, for what it’s worth.

Trent

Greg,

It seems to me that belief does come in degrees — but perhaps you would prefer to call it “degrees of confidence”.

Can you explain why preferential semantics provides an intuitively compelling basis for a logic of defeasible consequence? Isn’t defeasible consequence supposed to be about warranted defeasible belief? How does a “preference” relation provide anything more than a mere abstract formal semantics? That is, I’m asking you the same question you asked me with regard to probabilistic semantics. How are preferential models faithful to the epistemic notion that defeasible consequence is supposed to capture? I tried to provide a brief account of why I think probabilistic semantics is faithful to the epistemic notions at issue. Can you do the same for preferential semantics, saying how it is supposed to tie in a plausible way to defeasible belief?

Presumably the preference relation isn’t about what the agent “prefers to believe”. Is it about belief at all? Or have we changed the subject? I have nothing against working out a logic for an ideal agent’s preferences — and perhaps that logic is defeasible, too. But that doesn’t help with the logic of defeasible belief. So can you tell me whether (and how) preferential semantics provides models faithful to the epistemic or doxastic notion that defeasible consequence is supposed to capture?

The same goes for the “semantics of epistemic conditionals” you mentioned. I’m not familiar with that. Perhaps you can explain it and tell us how it is faithful to the epistemic notion that defeasible consequence is supposed to capture.

And can you tell us why either of these semantic theories provides a more plausible basis for defeasible consequence as a logic of defeasible belief than does probabilistic semantics (as I described it above)?

Trent, I agree with you that we may learn a lot from Weirich’s work about how to, as you say, find the sweet spot where we can expect normal adult human agents to perform. But I’m not working at the level of adult human agents yet.

Trent and Stephen, one can define measures on infinite sets (even of uncountable cardinalities) of possible worlds — no problem. It’s just that such measures are not unique. Mathematicians define measures on uncountable sets all the time. But for any given uncountable set, there will be lots of different measures possible, and no one will necessarily be “more natural” than any other. When these measures are probabilities, they may be thought of as defining “proportions”, relative to the specified measure. Thus, I think it makes perfectly good sense to think of probability functions as measures on possible worlds (and to think of each sentence as representing a set of possible worlds). The Popper functions, which take conditional probability as primitive, seem especially well suited to be interpreted this way. But there are many distinct probability functions — many distinct ways of assigning a measure to possible worlds. So prior probabilities will not be “uniquely given”. Still, each Popper function may be thought of as a possible degree of entailment relation, relative to some way of assigning meanings to sentences and relative to some measure on worlds.
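The non-uniqueness point can be put in a toy model. Below are two measures on the same four hypothetical worlds (the numbers are invented for illustration); both satisfy the probability axioms, but they disagree about whether “most A-worlds are B-worlds”:

```python
# Two measures on the same four possible worlds. Both are legitimate
# probability functions, but they assign different "proportions" of
# A-worlds to B-worlds -- the measure is imposed, not read off the sets.

A = {"w1", "w2", "w3"}   # worlds where A holds
B = {"w1", "w2"}         # worlds where B holds

uniform = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
skewed  = {"w1": 0.05, "w2": 0.05, "w3": 0.80, "w4": 0.10}

def prob(measure, event):
    return sum(measure[w] for w in event)

def cond(measure, b, a):
    """P(B | A) relative to the given measure."""
    return prob(measure, a & b) / prob(measure, a)

print(cond(uniform, B, A))  # 2/3: most A-worlds are B-worlds
print(cond(skewed, B, A))   # ~1/9: same worlds, opposite verdict
```

The sets A and B are fixed; only the measure changes. That is the sense in which the “proportions” are relative to a specified measure rather than uniquely given.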

Similarly, there is no unique defeasible consequence relation. Formally the defeasible consequence relations are like probability functions (like Popper functions). There are many such consequence relations that satisfy the formal axioms. Each is relative to a way of assigning meanings to sentences and to a measure on the prominence of some possibilities (possible worlds) among all possibilities (possible worlds). The idea is somewhat analogous to Lewis’s account of counterfactuals. Lewis doesn’t think there is any single, “uniquely given” closeness measure, but that there are lots of different closeness measures among worlds, and that in appropriate contexts we may draw on an appropriate measure, and that counterfactuals are true or false relative to the measure appropriate to the context. The measures on worlds for defeasible consequence are not “closeness measures”. They are “measures of proportionality”. But they are not uniquely given by the sizes of the sets of worlds. They are “imposed” in much the way Lewis’s closeness measures are “imposed”.

Trent, with regard to your third question, on why not just stick with Bayesian (probabilistic) degrees of confidence (or degrees of belief), here are several shots at it.

1. For some artificial intelligence systems it might be reasonable to build in a defeasible reasoning component that doesn’t involve a complete Bayesian probability function. It may not be practical or feasible to build into the system a probability model that specifies a unique degree of belief for each proposition in the system’s vocabulary — and doing so might be overkill, given what the system is designed to do. Still, one might think that the logic of the defeasible consequence relation should be a logic that can in principle be modeled probabilistically, because that may be how one thinks the logic of belief should work in general.

2. Your question is a bit like the following question. Once one has a possible worlds semantics for modal logic, and if one thinks that the semantics is a proper rendering of the modalities, and not a mere model theory for it, why not just translate all modal talk into possible worlds talk in the object language, and then throw away the modal logic? This is in effect what David Lewis does. He says, here is how to translate modal talk into possible worlds talk — now let’s just talk possible worlds talk in a language for first-order logic, and drop the modalities. After all, the first-order language that directly quantifies over worlds completely captures the modal logic, and is more expressive. (I only intend this as an analogy — there may be good reason not to buy Lewis’s metaphysics, and so to want to maintain a language with modal operators and the corresponding modal logic.) I am quite sympathetic to this kind of move (or would be if I bought Lewis’s metaphysics). But even Lewis recognizes that we use modal talk pretty naturally (we tend to think in those terms), and aren’t likely to just throw it away. So we can either continually translate into the possible worlds representation, and do all of our logic there, or we can have a modal logic that is faithful to the deeper possible worlds analysis, and often just do logic at that higher level. I think that much the same applies to the logic of defeasible belief that has a deeper underlying probabilistic semantics.

3. It may be that for some (or many) purposes in epistemology the deeper probabilistic analysis is overkill, and a logic of belief that works like probability at a threshold may provide a more digestible analysis. In any case, belief talk is common, and we often think in terms of belief, so we may want to see what its logic should be like if it rests on the deeper conception of degrees of confidence (or on a “believes more strongly than” relation). I think that such a logic, with a probabilistic semantics, gives us some insights into the preface and lottery “paradoxes”, for example.
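The lottery case gives the flavor of how a threshold logic of belief interacts with the probabilistic semantics. A sketch with my own toy numbers: if full belief is modeled as probability above a fixed threshold, belief fails to be closed under conjunction, which is exactly the structure of the paradox.

```python
# Lottery sketch: belief as "probability above a threshold" in a fair
# 100-ticket lottery. Each "ticket i loses" clears the threshold, but
# the conjunction "every ticket loses" does not -- so threshold belief
# is not closed under conjunction.

n = 100
threshold = 0.95

p_each_loses = 1 - 1 / n                  # 0.99 for each ticket
believed_each = p_each_loses > threshold

p_all_lose = 0.0                          # some ticket must win
believed_all = p_all_lose > threshold

print(believed_each)  # True: each "ticket i loses" is believed
print(believed_all)   # False: their conjunction is not
```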

Jim, on your post #35, granted you can define measures on infinite sets, but the problem Trent and I are worried about just is the uniqueness problem. It’s hard to rest comfortably with the fact that there is no principled way to settle on a measure. It suggests that the possible worlds framework can only be of very limited usefulness in explicating our pretheoretic conception of evidential support. That being said, I do acknowledge that it is somewhat illuminating to say that if A is evidence for B then, generally speaking, B is true in most worlds in which A is true (“generally speaking,” because I think you probably have to take account of relevance and other factors; not everything is evidence for a necessary truth; it is possible to have evidence for a necessary falsehood, etc.).

Trent, on your post #33, it’s not so clear to me that the OED definition “the quality or condition of being evident; clearness, evidentness” is degree-theoretic. In fact, on a certain reading it isn’t. I do believe you’re right that our pretheoretic thinking recognizes degrees of belief, as well as full beliefs that are held more or less confidently than others. But I think it also allows one to infer a full belief in certain circumstances where the probability of that belief’s being true, given one’s evidence, is quite a bit less than 1.

Jim,

Very nice comments. It is indeed the *uniqueness* that is the key. The objective Bayesian wants to be able to say that the objective probabilities are just plain logical facts, logical relations between propositions just like deductive logical relations. So if there’s no uniqueness, there’s nothing (which is why I didn’t take non-unique measures as being relevant to the question). Your counterpart-theoretic comparison was very interesting and I’ll have to think about that.

The modal model example was very helpful. It’s ironic too because I’m known for wanting to dispense with operators and just quantify over worlds (which I don’t mind doing as an actualist, since, after all, they do exist even if they are lies). I just finished an article yesterday against the converse Barcan formula that depends on quantifying over worlds. The key point in your analogy, as I understand it, was that there is no real competition between the probability calculus and the sorts of non-monotonic logics you’re looking at, because the probability calculus generalizes the logic. It’s just that some particularizations are more usefully applied in certain situations than the general theory. Do I have that right? If so, that really helps a lot and is an apt analogy.

Trent

Jim (and Trent, Stephen: very nice discussion. I enjoy reading all of your threads very much. My comments here are directed to the thread with Jim.)

I am not committed to either semantics providing a better basis for defeasible consequence. Rather, I am very interested in how to evaluate whether a logic for defeasible consequence is a reasonable candidate for an ideal theory of defeasible consequence. I suppose one could simply call the notion so defined ‘defeasible consequence’. However, I suggested in (ln 24) that viewing such a theory as a theory of ideal defeasible consequence commits one to providing a motivation for adopting this semantics, in so far as we’re to think that studying the behavior of this ideal agent will give us prescriptive guidance or descriptive insight into defeasible reasoning. I’m sincerely interested in this question.

I understand our discussion to have taken the following form. I understood your initial reply to offer two reasons as motivation: (1) probabilistic semantics are the only game in town, and (2) the notion of degrees of belief and the exercise of constraining belief by the axioms of probability are intuitive and well understood, in so far as probability is a well understood theory. I offered the two semantics — preferential models for non-mon consequence, and the semantics for epistemic conditionals for non-mon conditionals — as examples directed against claim (1). I offered the analogy of preference revision to address (2) in the following sense: economic preference is an intuitive notion and well understood, and economic decision theory is a well understood theory. However, we (most of us) don’t view the theory as prescriptive without careful qualification. When the objects of choice are themselves preferences, decision theoretic treatments don’t seem to be appropriate. (Jon Doyle has written on this recently in Computational Intelligence 20(2), 2004, which is an expanded version of a Doyle and Rich Thomason piece appearing in AI Magazine, 1999.) The point is, we view this as a problem for the theory and not a problem of people being irrational.

I offer this outline because I worry that we run a risk of talking past one another. I understand you to be committed to the view that a logic for defeasible consequence will provide a theory for ideal agents making defeasible consequences. Hence, I take it that if there is a disagreement in states between such an agent drawing a defeasible consequence and an actual person drawing a defeasible consequence, given the same inputs, you would say that the reason was due to some failing of the person (perhaps a nomologically necessary failing) to correctly draw a defeasible inference. Is that accurate?

If so, why think this is the case? We don’t think preference revision is irrational. Or would you think that preference revision is irrational?

Greg,

You say, “I understand you to be committed to the view that a logic for defeasible consequence will provide a theory for ideal agents making defeasible consequences.” Yes, that’s the idea, I think. But I would put it somewhat differently. I would say instead, “I am committed to the view that an important project in the development of the logic for defeasible consequence is to provide a theory of defeasible consequence relations that is not constrained by the limited cognitive abilities (particularly, the limited logical and computational abilities) of actual agents.” To the extent that there are differences between your characterization and the one I just gave, I buy mine rather than yours.

You go on to say, “Hence, I take it that if there is a disagreement in states between such an agent drawing a defeasible consequence and an actual person drawing a defeasible consequence, given the same inputs, you would say that the reason was due to some failing of the person (perhaps a nomologically necessary failing) to correctly draw a defeasible inference. Is that accurate?” Yes, that is accurate. But I wouldn’t call sub-ideal reasoning “irrationality”. The “failing” is that the actual person would be an even better reasoner if she were able to make the ideal inferences. (You certainly wouldn’t say she would be a worse reasoner than she actually is, would you?) Perhaps it will make the point clearer to put it the other way around. What “failings of reasoning” would the “ideal reasoner” have if she failed to agree with the actual reasoner? Perhaps she would have failings, but they would be “failings of modeling realistic abilities”, not failures of reasoning.

In any case, I think we cannot do without the ideal model — i.e. we cannot give a sufficiently rich account of “the logic” if we try only to develop a “logic for realistic reasoners.” I think we will always need this “logically omniscient ideal” for the following reason. No matter how good your “logic of realistic reasoning” is at describing norms for real reasoners, there will always be some cognitive differences among real people. Although no real agent will be logically omniscient, some will be more “logically adept” than others. And these people will (or should) count as better reasoners for it. I see no plausible way to draw a firm line for “good enough reasoning” — i.e. I doubt that we can develop a “logic of real reasoning” that places us in a position to make the following claim (about that logic): “Reasoning that reaches the logical depth this ‘real logic’ articulates is as good as we can possibly want a real reasoner to be, and any actual reasoner who happens to possess an ability to reason more deeply than this just cannot count as any better at reasoning.” The logical ideal may be an unattainable standard for real reasoners, but it may nevertheless be a least upper bound on reasoning ability — an ideal limit that we cannot stop short of (in a non-arbitrary way) in our attempts to spell out a sufficiently rich logic to capture all of what can count as “good reasoning”.

Let me make it clear that I don’t intend this as some sort of knock-down argument that a “logic of realistic reasoning” is impossible. I’m just doubtful that any such logic, if we ever get one, can completely supersede the kind of normative role played by the stronger logic that is suitable to ideal agents (which I think we can develop). In any case, when you have such a “realistic logic of defeasible belief” to show me, then we’ll see!!!

Hi Jim,

Thanks for your reply, and the charity in reading my hastily composed post. I’ll try pressing the point once again. I don’t think that I am committed to ‘a logic of real belief’, since I’m not sure what that would be, or how to identify such a thing, which is what drives my question of asking how you do it. I am very sympathetic to the use of logics to model various types of arguments and processes, including non-monotonic arguments. (But I think that certain kinds of arguments, such as statistical arguments, give us firm enough input-output constraints to address this problem of fit that arises when considering candidate logics for this KR purpose.) I don’t have anything like an ideal agent in mind in carrying out this project. Nor do I need one.

In order to make the comparative judgment between the ideal reasoner and the actual reasoner, I would think one would need to establish that the model you propose is the model of an ideal defeasible reasoner. If the question is, given such-and-such a model of defeasible reasoning, where the two disagree, is the actual person mistaken, then, sure, you are correct: the person is mistaken; this follows trivially. But we can ask whether the theory is an appropriate one to apply to model defeasible reasoning too, and I am asking how you would defend the claim that your proposal is the correct standard for defeasible reasoning. Forgive me for pressing and apologies for my typing. I find the discussion very stimulating; thanks for your comments.

Greg,

If I’m understanding you, your question is *the big question* of the following sort: why consider functions that satisfy the axioms of probability theory to be an appropriate representation of ideal belief? People have given various kinds of answers to this question — e.g. “Dutch book avoidance” (e.g. Ramsey, de Finetti); “calibration” (e.g. van Fraassen, Joyce, and others); how well it fits the “belief component” of a theory of preference, belief, and decision (e.g. L.J. Savage and others). I think all of these give us some reason to think that probability is a good model of ideal degrees of confidence (or degrees of belief). But one of the things that I find most convincing is that there is a rather simple axiomatization of a binary relation that can be interpreted as “the agent is at least as confident in A as in B” (or “believes A at least as strongly as she believes B”), and this set of axioms looks to me intuitively like a very plausible set of constraints on ideal belief. I’m talking about a version of the axioms for so-called “qualitative probabilities”. Furthermore, it can be proved that any such relation (that satisfies these axioms) can be modeled uniquely by a standard quantitative probability function.
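To make the qualitative-probability idea concrete, here is a small Python sketch (my own construction, not Jim’s axiomatization): it takes the comparative confidence relation induced by an ordinary probability measure on a tiny algebra of events and verifies de Finetti-style axioms for qualitative probability: nontriviality, nonnegativity, totality, and the additivity condition. The representation theorems Jim alludes to run the hard direction, from axioms back to a probability function; this just checks the easy direction on a finite example.

```python
# Check de Finetti-style qualitative probability axioms for the
# comparative relation "at least as confident in A as in B", here
# induced by a fair measure on the 4 outcomes of two coin tosses.
from itertools import combinations, product

outcomes = list(product("HT", repeat=2))
measure = {o: 0.25 for o in outcomes}

# All 16 events (subsets of the outcome space).
events = [frozenset(c) for r in range(len(outcomes) + 1)
          for c in combinations(outcomes, r)]

def p(event):
    return sum(measure[o] for o in event)

def geq(a, b):  # "at least as confident in a as in b"
    return p(a) >= p(b)

full, empty = frozenset(outcomes), frozenset()

nontrivial = geq(full, empty) and not geq(empty, full)
nonnegative = all(geq(a, empty) for a in events)
total = all(geq(a, b) or geq(b, a) for a in events for b in events)
# Additivity: for C disjoint from both A and B,
# A >= B iff (A union C) >= (B union C).
additive = all(geq(a, b) == geq(a | c, b | c)
               for a in events for b in events for c in events
               if not (a & c) and not (b & c))

print(nontrivial, nonnegative, total, additive)  # True True True True
```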

When you use a logic to model statistical arguments, you may not employ ideal agents directly. But presumably you think that real agents’ beliefs should “conform to” such arguments as much as possible. (Otherwise, what is the epistemic import of the logic — or is it epistemically useless?) It seems to me that to the extent that you think this logic should inform belief, there is something very much like an ideal agent lurking in the background. Perhaps my point is most easily made in terms of standard deductive entailment. The semantics of deductive logic need not draw on agents of any kind. But as soon as you try to describe the epistemic import of the logic, it is hard to avoid saying things like: “whenever the premises are true, the conclusion has to be true — so an agent who is (ideally) logically coherent and believes the premises must not believe the conclusion to be false.” There may be important caveats, but you get the picture.

One might say, “fine — but the semantics for deductive logic doesn’t involve ideal agents, only the epistemological application does.” If that’s what’s bothering you, I can provide what seems to me to be a compelling account of qualitative probability (and of quantitative probability), defined on sentences, that interprets the relation (or the function) in terms of “weightiness measures” on sets of possible worlds. In this version the ideal agent only comes in afterward, as in the deductive case. An agent who is (ideally) coherent can only measure the weightiness of possibilities in certain ways (which are constrained by the previously mentioned intuitively plausible axioms on the weightiness of possibilities); and an (ideal) agent’s belief strengths (or confidence levels) should fit coherently with some such measures if her belief strengths are to match how weighty the possibilities are (or can be), given how possibility weightings must work (given the intuitively plausible axioms for such weightings).

Does that help?

Hi Jim,

Yes, exactly, it is *the big question*. We’re probably anticipating each other’s answers, but, still, I think it is important that we (royal, hopefully; you and I, worst case?) grapple with it. It seems that we (royal: epistemologists) have largely stopped addressing this question–perhaps because it is prone to frustration, or maybe because folk became bored with the topic and went off into their own circles to work. I think that it is a mistake for epistemologists to abandon this topic, however, since it is being confronted in other domains…particularly in AI. I’m aware of some misgivings in philosophy about AI, and AI has certainly invited some of this skepticism with its at times overly optimistic research programs. But, at bottom, it is addressing many of the core set of issues exercising epistemologists. And the field is bursting with activity. The computational stance (for lack of a better term) offers constraints to work within that address some of the components of the big question that various posts to this, and other threads, have addressed.

In (brief) reply to the role of ideal agents: I’m interested in uncertain reasoning, which in many (most?) cases arises where the idealization assumptions are difficult if not impossible to satisfy. That’s not to deny that, retrospectively, one might demonstrate how a piece of cogent uncertain reasoning conforms to the constraints of an ideal agent. But I’m not sure what guidance such an agent provides in these cases prior to reconstruction. It would seem that the epistemic force of this model would hinge on this question. This is far from knock down, of course.

Formally, the problem of defeasible consequence is a problem for applied logic, which, I’ve offered, has two components. The first is some pre-theoretic idea (those outside epistemology would call this some ‘philosophy’) about the notion to be modeled: what defeasible consequence is, what logical consequence is, what identity is (if you’re a type theorist, say), and so on. The other component, of course, is the formal system, which is a piece of mathematics: here we examine structures, namely sets on which relations and functions have been defined, and correspondences shown to hold between these structures. Then we see whether a given formal system is faithful to the pre-theoretic notion we are working with. In this respect, it is a descriptive exercise. Perhaps we learn about limits (no empty domains, some terms don’t get a type, whatever) and then incorporate those into the boundary conditions for applying this formal system: we say, modulo these conditions, that the system is faithful to such-and-such a notion and so, under these conditions, may be thought of as giving prescriptive advice.

I tend to think that we’re still at the descriptive stage of understanding defeasible consequence. There are epistemic benefits, however. By having this plurality of formal models of defeasible consequence, with their corresponding motivating examples, we’ll have a set of benchmark examples mapped into structures that we may then study formally. And that looks exciting, to me.

I stress the term ‘plurality’ since I do think we hurt ourselves, collectively, by focusing too much on our respective schools. It is this idea that motivates my pestering you with counter-examples (not all of which are successful, granted) or pressing the case in general for us to look outside of Bayesianism.

So the big, big question is whether (formal) epistemologists will let someone else eat their lunch.

Greg,

You write: “I’m interested in uncertain reasoning, … where the idealization assumptions are difficult if not impossible to satisfy. … one might demonstrate how a piece of cogent uncertain reasoning conforms to the constraints of an ideal agent. But, I’m not sure what guidance such an agent is in these cases prior to reconstruction. It would seem that the epistemic force of this model would hinge on this question.”

This is exactly the issue I’m interested in, Greg. I think it helps to think of the issue on analogy with deductive reasoning. Psychological states are not closed under entailment, but if I’m highly confident of P and see that P entails Q, I should feel uneasy if I don’t believe Q. This is because the laws of deduction are normative for cognitive agents whose goal is maximal cognitive excellence within reasonable cost limitations. That’s just another way of saying that it approximates a cognitive ideal. This can be represented anthropocentrically by ideal agents. We are not ideal agents, so we scale back rules of deduction to reasonable psychological limits. This is just what Paul Weirich is doing with decision theory in his recent _Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances_ (I really urge you to check it out; you can look at the TOC on Amazon). But like Jim says, we’ve got to have an ideal to scale back from first. I think it’s just a plain fact that learning (truly learning, i.e. internalizing) some logic helps you think better, or at least more clearly, in many cases (especially when you’re trying), i.e. provides *guidance* in what to believe. I think probability theory is even better at this. There have been at least three papers where I assigned intervals to the constituents of my argument only to find that I was probabilistically incoherent. After some reflection, my probabilities settled down more coherently (because, I would say, of certain standing dispositions being triggered by the reflection). What more practical guidance could one want? I have a fellow Bayesian with whom I’ve written some papers, and he’d say the same thing, I think.
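The kind of incoherence checking described here can be illustrated concretely. A minimal sketch (the propositions and numbers are mine, purely for illustration) of necessary coherence conditions on point probabilities for two propositions and their conjunction:

```python
# Necessary conditions from the probability axioms for P(A), P(B), P(A&B).
def coherent(p_a, p_b, p_ab):
    in_unit = all(0.0 <= p <= 1.0 for p in (p_a, p_b, p_ab))
    upper = p_ab <= min(p_a, p_b)      # A&B entails A, and entails B
    lower = p_ab >= p_a + p_b - 1.0    # from P(A or B) <= 1
    return in_unit and upper and lower

# Judging A and B each fairly likely while judging their conjunction
# very unlikely is already incoherent:
assert not coherent(0.8, 0.7, 0.2)   # the axioms force P(A&B) >= 0.5
assert coherent(0.8, 0.7, 0.6)
```

Discovering, by a check like this, that one’s assignments cannot all be probabilities is exactly the sort of practical guidance at issue.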

I’d prefer to say simply that I want a maximally accurate and comprehensive representational system (within reasonable cost limitations), and I don’t think probabilistically incoherent systems can be accurate; thus I desire coherence. But another, and fair, way to say that is that I’m trying to do my cognitive best, trying to achieve a cognitive ideal, trying to be more and more like an ideal cognitive agent (even though my realistic psychology lets me cut myself some slack: I don’t fret that I don’t believe the deductive consequences of all that I believe, or that, no doubt, I’ve still got lots of incoherence – I do what I can. Part of what I can do is reflect upon the coherence of my belief system, in particular how far it deviates from what I consider necessary conditions for ideal performance).

Trent

Hi Trent,

Thanks for pointing out Weirich’s work and for your comments. I’m mostly familiar with Levi’s project to ground belief change in decision-theoretic terms, and have thought about this topic in comparing Levi’s approach to Hans Rott’s (including work with Pagnucco) project of grounding belief change and non-monotonic logic in rational choice.

I don’t deny that Bayesian updating is useful. I also, being slightly more careful now, don’t deny that modeling ideal agents is useful–theoretically useful, even. Theoretically, I’m much more sympathetic to Levi’s project than Jim’s approach, because I think genuine inductive expansion of our beliefs is uncontestable–since I accept statistical evidence statements, if for no other reason–and so tend not to favor views that ask me to think that inductive expansion is contestable or to think that it is really just deduction at bottom. I’m inclined to think, in either case, that the topic has changed. This particular debate is an old but important one, deserving our continued attention. I suspect that particular people involved in this debate get bored with talking past one another and return to their respective, like-minded groups (some existing largely outside of philosophy proper, or outside of the `English-speaking world’, or both, it should be added), content to ignore each other. However, if my sociological conjecture is true, I think it is a mistake that costs us all dearly. The issues exercised by this clash continue to crop up in other places: I’m seeing more AI papers looking like epistemology papers, several of which grapple with pieces of this issue. They tend to come to it by comparing the behavior of models with what they wanted to model in the first place, which in matters epistemic is, or should be, the province of philosophy. These are the main themes I have wanted to sound in my posts.

I would dispute your characterization of the laws of deduction, since I think they in themselves tell us very little about how to `maximize cognitive excellence’, particularly with respect to `reasonable cost limitations’. Here I’m sympathetic to Gilbert Harman’s view that logic and epistemology have fundamentally distinct aims, although I don’t accept the view that the two are not related at all. I’ve argued this second point in joint work with Pereira appearing in JAL.

I’m denying, in this instance, that we’ve clearly in hand the appropriate ideal model to fall back on–denying, that is, the initial premise that starts Jim’s normative program. Citing that the probability measure is intuitively compelling is not sufficient grounds for regarding its structure (i.e., the mathematical structure of the measure) to be isomorphic to the structure of the notion you wish to model, which, in this case, is defeasible consequence.

Hence, I pressed Jim for a positive account for adopting this view of defeasible consequence. Jim mentions Dutch Book arguments. It would be interesting to see how a Dutch Book argument would go for adopting this view of defeasible consequence, and whether it would fare better or worse than Dutch Book arguments for strict Bayesianism. (On Dutch Books: a PSA panel in Vancouver consisting of Kyburg, Levi and Seidenfeld addressed replies to various versions of the argument a few years back. It is the most recent and comprehensive review that I am aware of, although I don’t know whether the panel papers were published by PSA.)
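For readers unfamiliar with the classic argument being referenced: here is a minimal sketch of a standard Dutch Book against incoherent credences (the agent and numbers are invented for illustration; this is the familiar Bayesian argument, not a version for defeasible consequence, which is precisely what remains to be worked out).

```python
# An agent with credences P(A) = 0.6 and P(not-A) = 0.6 (summing to 1.2)
# will pay her "fair" price for a $1 bet on each side; the bookie then
# profits no matter how A turns out.

def net_payoff(price, wins, stake=1.0):
    """Agent's net result on a bet bought at `price`, paying `stake` if it wins."""
    return (stake if wins else 0.0) - price

price_on_a, price_on_not_a = 0.6, 0.6  # her fair prices for each $1 bet

for a_is_true in (True, False):
    net = (net_payoff(price_on_a, a_is_true)
           + net_payoff(price_on_not_a, not a_is_true))
    # Either way, the agent is down $0.20: a guaranteed (sure-loss) book.
    assert abs(net + 0.2) < 1e-9
```

An analogous argument for a probabilistic account of defeasible consequence would need to say what the bets on defeasibly drawn conclusions are, and why retracting them under new evidence does not reopen the book.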

Does that address your comments? Forgive my delay, I was out of the office yesterday.