Closure for Defeasible Consequence?

In his Stanford entry on defeasible reasoning, Koons says the following:

In particular, a logical theory of defeasible consequence will have epistemological consequences. It is presumably true that an ideally rational thinker will have a set of beliefs that are closed under defeasible, as well as deductive, consequence.

I’m not sure either of the two claims is true, but I want to focus more on the claim about the defeasible consequence relation. (For deductive consequences, I think some closure principle is true, but even ideally rational thinkers do not engage in every deduction within their intellectual purview.)

So suppose we consider a set S of claims, where S* is the set derived by adding all the defeasible consequences of S. Then the claim would be that it is impossible for an ideally rational thinker to believe S rather than S*.

A first thing to note is that the relation in question can’t be the Chisholmian one here. As I understand his views, Chisholm’s concept of defeasible consequence is clarified by epistemic principles, principles whose antecedents specify items that prima facie support certain conclusions. To sustain the above claim, however, one will have to limit the defeasible consequence relation to what S supports, all elements of S considered. So if S contains p, and p prima facie supports q, but is defeated by further information d (contained in S), then q is not a defeasible consequence of S.

There is another point to note as well. The notion of a defeasible consequence, if it is to be of any epistemological use, needs to be able to hold between things other than beliefs. Being appeared to redly prima facie warrants believing that something is red, even when the person doesn’t believe that s/he is being appeared to redly. So something’s being red should be a defeasible consequence of the claim that one is appeared to redly (at least when no defeaters are present).

This point gives us reason to reject the claim that an ideally rational thinker will have a set of beliefs closed under the defeasible consequence relation. For the set of beliefs might defeasibly support q, but be undermined by some experience one is having which is not encoded within the belief system at all.


Closure for Defeasible Consequence? — 25 Comments

  1. The suggested closure principle also raises what might be called the “drainage problem.” Suppose your only reason for believing P is that it is probable given Q, that your only reason for believing Q is that it is probable given R, that your only reason for believing R is that it is probable given S, and so on back to some corpus of evidence E. It is quite possible, despite the intermediate links in the chain, that P is not probable given E. The support provided by the evidence E has, as it were, “drained away.”

    If the suggested closure principle is true then the relation of “defeasible consequence” is not at all like a conditional probability relation.
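    Stephen’s drainage worry is easy to see numerically. Here is a hypothetical illustration (the 0.9 link strength and the multiplicative model are my own simplifying assumptions, not anything from Koons’ entry):

```python
# Toy model of the "drainage problem": each link in the chain is individually
# probable (0.9), but the compounded support for the final conclusion given
# the original evidence E falls below an acceptance threshold as the chain grows.
THRESHOLD = 0.5
LINK_STRENGTH = 0.9  # assumed P(next claim | previous claim) at each link

prob = 1.0
for n in range(1, 11):
    prob *= LINK_STRENGTH  # assume support compounds multiplicatively
    note = "  <- below threshold" if prob < THRESHOLD else ""
    print(f"after {n} links: {prob:.3f}{note}")
```

    On this toy model the support drains below the 0.5 threshold at the seventh link, even though every individual link is as strong as 0.9.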

  2. Stephen, this is a good point and raises the question of whether a defeasible support relation appropriate for epistemological use could be a logical one. One condition usually assumed (which Koons discusses) for logical relations is cumulativity, and one condition on cumulativity is the CUT principle:
    if A is a subset of B, which is a subset of the defeasible consequences of A, then the defeasible consequences of B are a subset of the defeasible consequences of A.

    My initial reaction, though I’m not quite sure about this, is that if you like degrees of belief, then your epistemology should pay more attention to conditional probability than to the logic of defeasible consequence. But if you want an epistemology for belief simpliciter, maybe in the sequence you mention, by the time you get to S, your evidence isn’t just E anymore: it’s E plus all the other things you’ve added to your stock of beliefs on the basis of defeasible reasoning.

  3. Defeasible support relations are very, very cool–so hats off to both Rob (for writing the entry) and to Jon (for wanting to discuss it).

    Just to try to clarify some issues. First, it is an open question whether nondoxastic states can enter into justificatory and defeat relations. If they can (as Pollock has been arguing for a zillion years, plus or minus), then our defeasible consequence relations had better be able to handle that. But that’s OK. Nothing in the formal representation precludes pre-doxastic states from being “premises” to the left of |~. (Pollock’s system of defeasible argumentation has allowed this from the start.) So long as we are careful to mention the possibility of nondoxastic states being included as premises, I think no harm is done by pretending (perhaps counterfactually) that they are all beliefs.

    Second, I guess I am a little confused about Jon’s first point in the post. Suppose S is a set of “basic” states (maybe they’re basic beliefs, maybe they’re perceptual inputs a la Pollock). And suppose N is the set of norms of the agent, or epistemic principles if you like–the defeasible and nondefeasible principles in her epistemic repertoire. What would an ideally rational agent with that information believe? The natural thought is to say that she would believe the result of applying her norms to S iteratively. In the limit (if her norms are well-behaved), she reaches a steady state. That’s S*.

    Here is something we know about this process. Hold N fixed. Suppose S1 is included in S2. Then it will not in general be true that S1* is included in S2*. As far as I can see, that is both compatible with and perhaps even underlying the Chisholmian/early-Pollock idea of epistemic principles.

    I think I must be misunderstanding your worry, Jon.

  4. Glad to hear from you on this, Thony, and this time there’s no obvious joke you’re making that I’m missing (remember the disagreeing with officials post…).

    On the first point, I agree–nothing is affected substantively by allowing non-beliefs as premises. And then we could reformulate Rob’s point about ideally rational thinkers in terms of closure on the set of beliefs and experiences.

    On the second point, I don’t think anything hinges on what I said, but here it is. If we take an epistemic principle such as
    If p, then if no grounds for doubt, q;
    my first reaction is to say that this means that p is a defeasible reason to believe q, whether or not there are grounds for doubt. That is, if p provides defeasible support for q, it does so in every situation. But nothing turns on denying this point, as far as I can tell; it’s just an assumption that I was inclined to make initially.

  5. OK, I think I see the point. Just to be sure, let me add one more thing.

    Suppose p is a defeasible reason for q. So p provides defeasible support for q, as you say, in every situation. But what makes defeasible consequence both useful and hard is that this is no guarantee that an arbitrary set S of premises that includes p will also include q. And that’s just for the reason that the support q gets from p in every situation is defeasible and hence can get trumped. And that’s why S1 and S2 can be neatly ordered and have that not be true for S1* and S2*.

    So I think you can make your initial assumption (if I’m following you), and live with defeasible closure, too.
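    Thony’s point that S1 and S2 can be neatly ordered while S1* and S2* are not can be made concrete with a toy fixed-point operator. The rule set below (the stock bird/penguin example) is my own illustration, not a system from the entry:

```python
# Each defeasible rule: (premise, conclusion, set of defeaters).
RULES = [
    ("bird", "flies", {"penguin"}),  # birds defeasibly fly...
    ("penguin", "bird", set()),      # ...and penguins are birds, but don't fly
]

def closure(s):
    """Apply every undefeated rule whose premise is in the current set,
    iterating to a fixed point (the starred set)."""
    s = set(s)
    changed = True
    while changed:
        changed = False
        for premise, conclusion, defeaters in RULES:
            if premise in s and conclusion not in s and not (defeaters & s):
                s.add(conclusion)
                changed = True
    return s

S1 = {"bird"}
S2 = {"bird", "penguin"}
S1_star, S2_star = closure(S1), closure(S2)
print(sorted(S1_star))  # ['bird', 'flies']
print(sorted(S2_star))  # ['bird', 'penguin']
assert S1 <= S2 and not (S1_star <= S2_star)  # S1 in S2, yet S1* not in S2*
```

    S1 is included in S2, but the extra premise in S2 trumps the support for “flies,” so S1* is not included in S2*.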

  6. That’s a nice way to put it, Thony, to distinguish defeasible reasons from defeasible consequences. And yes, I do want some kind of closure for the defeasible consequence relation.

    This really is neat stuff; I didn’t know about all the logic work done on the relation. Been waking up thinking about it… so maybe more posts coming…

    In your first comment, is this right? S1 and S2 are initial states, with S1* and S2* the end states, respectively, after iteratively applying the norms N to S1 and S2 and their successors?

  7. Hi, Jon. Yep, that’s right: think of S1 and S2 as premises, or given information, or whatever, and the starred versions as the (in fancy logic talk) fixed-points of applying the rules in N. It’s always good to hear that notions of defeasible consequence are getting their share of late night attention.

  8. Jon and Thony, I apologize, but do you mind if I return to Jon’s earlier suggestion that in defeasible reasoning, the “evidence” at later stages includes the conclusions reached at earlier stages? I think something like this must be right, but in a sense it simply restates the problem. The question is this: granted, it’s epistemically appropriate to infer an (undefeated) defeasible consequence of other things you know/believe; but is it appropriate to turn around and reason on this new conclusion to reach other defeasible consequences? The answer simply isn’t clear.

    One aspect of the problem is that, intuitively, there must be some relationship (though certainly not one of identity) between defeasible consequence and objective probability, such that if p is a defeasible consequence of q, then (in the typical case) the objective probability of p given q is fairly high (but only typically, for reasons of the sort discussed by Plantinga and others). Now if “fairly high” is significantly less than 1, then it is by no means clear that it is appropriate to infer an undefeated defeasible consequence of previously inferred undefeated defeasible consequences.

    I note that Koons mentions in his article that preferential consequence relations (which satisfy cumulativity) essentially are equivalent to conditional probabilities that are infinitesimally close to 1. But, from the standpoint of epistemology, this likely is not the right relationship between defeasible consequence and objective probability.

    I don’t know what the drainage problem really shows, but I suspect it shows that linear models of defeasible reasoning are incorrect. (Though I confess at this point I’m not entirely sure what that means.)

  9. I might be confused about some of this discussion. Let xSy mean that x defeasibly supports y. What is Koons’ claim? Is it:

    1. If xSy and ySz then xSz

    ? If that’s the claim, then I’m not sure what bearing Jon’s following remark has:

    But if you want an epistemology for belief simpliciter, maybe in the sequence you mention, by the time you get to S, your evidence isn’t just E anymore: it’s E plus all the other things you’ve added to your stock of beliefs on the basis of defeasible reasoning.

    Initially, it seems like the point here is that even if eSp and pSq, it might be false that (e&p)Sq. But that doesn’t directly bear on claim (1), which would just say that eSq. Maybe the point is that the closure principle is actually something like:

    2. If xSy and ySz then (x&y)Sz

    I think both (1) and (2) are refuted by Klein’s “clever car thief” example. Suppose that a clever car thief is defined as a car thief who presents all the appearances of really owning the car he’s driving (giving observers overwhelming evidence that he owns it, even though he does not). Then

    3. “Jones is a clever car thief” supports “Jones gives overwhelming evidence of really owning the car he is driving.”
    4. “Jones gives overwhelming evidence of really owning the car he is driving” supports “Jones owns the car he is driving.”
    5. But it is not the case that “Jones is a clever car thief” supports “Jones owns the car he is driving.”
    6. It is also not the case that “Jones is a clever car thief and he gives overwhelming evidence of owning the car he is driving” supports “Jones owns the car he is driving.”

    This counter-example is very hard to resist, since (3) is true as long as entailment counts as a special case of support. (And if you want to reject that, because it’s not “defeasible,” you can surely modify the example so that clever car thieves merely give an extremely high probability of giving overwhelming evidence that they own the cars they drive.) (4) is true as long as the existence of overwhelming evidence of something counts as defeasible support, which seems tautological. (5) and (6) are true as long as a proposition doesn’t support something that it’s incompatible with.

    Or should the defeasible support closure principle be formulated in some other way? Perhaps the point was

    7. If (xSy and given x, ySz), then (x&y)Sz.

    The problem with this seems to be one of triviality, since I don’t know how to understand “ySz given x” other than as meaning (x&y)Sz. Or maybe the principle is

    8. If xSy and (x&y)Sz, then xSz.

    This doesn’t fall prey to the Clever Car Thief, and it seems perfectly acceptable.

  10. Perhaps we should make a distinction, to help de-muddy our waters. On the one hand, we have relations between propositions and propositions (or sentences and sentences; it doesn’t matter for us): “is a conclusive reason for” is one such relation, and “is a prima facie reason for” is another. And on the other: we have relations holding between a set of sentences and a sentence–classical entailment/consequence |= is one such relation, and defeasible consequence relations |~ represent another class (I want a double ‘~’ but don’t know how to do it in ascii).

    Of course, we want these two relations to hook up in the right way. For the deductive case it is easy: p is a conclusive reason for q iff {p} |= q. And since |= is monotone (if A |= p and A is a subset of B then B |= p), we are off and running. Things are trickier for defeasible reasons since |~ isn’t monotone. But that’s what makes defeasible reasoning epistemologically interesting.

    Closure properties, officially, are properties that sets have. And so they are about relations of our latter sort. Just to see it in action in the classical case: to say that a set of an agent’s beliefs K is closed under logical consequence is just to say that it is closed under |=: i.e., for any p we like, if K |= p then p is (already) in K. Or, to put it another way: let Cn(X) be the set of consequences according to |= of a set X. Then deductive closure comes to saying that an agent’s beliefs are a fixed point of Cn: Cn(K) = K. So the closure condition here is on the set of beliefs, and it reflects that we think that an ideally rational agent concludes all she can on the basis of “…is a conclusive reason for…”.

    Advocating closure for defeasible consequence is just the same: K is closed under defeasible consequence iff if K |~ p then p is already in K. Or, again: Let Inf(X) be the set of consequences of X according to |~. (The ‘Inf’ is meant to remind us that it is inference, broadly construed, that the agent is up to.) Then closure is just a fixed-point: K is closed under defeasible consequence iff Inf(K) = K. So the closure condition here is on the set of beliefs, and it reflects that we think that an ideally rational agent concludes all she can on the basis of “…is a defeasible reason for…”.

    But here we see a big difference between the deductive and the defeasible cases. In the deductive case, if p is a conclusive reason for q, and X includes p, then no matter what else X includes we know that X |= q. That’s because |= is monotonic. But |~ ain’t. So (letting p ~> q represent that p is a defeasible reason for q) from the fact that p ~> q and X includes p, we have no guarantee that X |~ q. That depends on: (i) the specific details of a particular proposal for |~; and (ii) what else is in X–it may contain a defeater for p ~> q (in which case X not-|~ q), it may contain a defeater for the defeater for p ~> q (in which case X |~ q), it may contain a defeater for the defeater for the defeater for p ~> q (in which case X not-|~ q), and so on.

    That’s the hard part of the problem–specifying just how defeasible reasons and defeaters interact: giving a story about, given a stock of defeasible reasons/epistemic norms, just what is and what is not a defeasible consequence of an arbitrary set X. That is just the problem of figuring out the logic of |~. (There are lots and lots of candidates on the table; Rob’s article sketches some of the most prominent/historically important.)
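    Thony’s point (ii), that X |~ q can flip back and forth as each further layer of defeat is added to X, can be sketched with a toy evaluation rule: an item is undefeated in X iff every attacker of it that X contains is itself defeated. (This is my own minimal sketch, loosely in the spirit of Pollock-style defeat, not a proposal for |~.)

```python
# "r" stands for the defeasible reason p ~> q; d1 defeats r, d2 defeats d1, etc.
ATTACKS = {"d1": "r", "d2": "d1", "d3": "d2"}

def undefeated(item, X):
    """item is undefeated in X iff no attacker of item present in X
    is itself undefeated in X (terminates: the attack chain is finite)."""
    attackers = [a for a, target in ATTACKS.items() if target == item and a in X]
    return all(not undefeated(a, X) for a in attackers)

# X contains p plus longer and longer prefixes of the defeater chain;
# X |~ q just in case the reason r survives.
for X in [{"p"}, {"p", "d1"}, {"p", "d1", "d2"}, {"p", "d1", "d2", "d3"}]:
    print(sorted(X), "|~ q" if undefeated("r", X) else "not-|~ q")
```

    The verdict alternates with each added defeater, exactly as in the chain described above.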

  11. The question as I see it is this: if you infer Q because it is an undefeated defeasible consequence of R, in what circumstances can you then turn around and infer P, which is a defeasible consequence of Q? As the number of “inferences upon inferences” increases, intuitively one seems to lose some confidence in the ultimate conclusion, even though each of the links in the chain appears undefeated. Perhaps one way of framing the question is to ask whether the length of the chain of inferences can itself be an undermining defeater of the ultimate conclusion.

    The clever car thief example does not appear to me to be a case in which “inference upon inference” explains why the ultimate conclusion is defeated (at least that is not how I would analyze that example).

  12. Stephen–what’s attractive about the logic of defeasible consequence is precisely that it avoids having to let information about conditional probabilities drive one in the direction you wish to go. If one is interested in belief simpliciter, there’s no obvious reason to hold that our body of evidence is not enlarged by reasoning in accord with the logic of defeasible consequence. If that means we end up at a point which is not likely to be true given our original evidence, that’s not any obvious problem. Perhaps it shows that one couldn’t have gone directly to that conclusion from the original body of evidence; that intermediary steps are needed. But, in the absence of some counterexample to the idea or some argument beyond pointing out how the conditional probabilities come out, I don’t see the problem here. What is true, I think, is that the logic of defeasible consequence won’t help when thinking in terms of degrees of belief. But when I want an epistemology for belief simpliciter, there’s no way to translate the epistemology of degrees of belief to help with that project. And look! That’s just what the logic of defeasible consequence is particularly suited for, when properly constructed…

  13. Jon, my point is not that “inference upon inference” is never permissible (it certainly is in many instances), but only that a good theory of defeasible reasoning has to explain how it works. There are times when we trust it and times when we don’t (for example, in certain legal settings). So a good theory has to model it accurately. There are other questions as well. For example, when it isn’t permitted, can that be explained within the framework of undercutting defeaters and rebutting defeaters or do we need some additional machinery? The whole thing seems to me to be a somewhat neglected, but very interesting, area.

  14. The traditional rule is that “the prior inferences must be established to the exclusion of any other reasonable theory rather than merely by a probability, in order that the last inference of the probability of the ultimate fact may be based thereon.” New York Life Ins. Co. v. McNeely, 79 P.2d 948 (1938). In other words, in order for the final step to be inferred by a preponderance of the evidence, the preceding stages must be established beyond a reasonable doubt.

    Now, that’s the traditional rule, and it isn’t universally followed. Nor would I necessarily advocate it as correct. But I do think an adequate theory of defeasible inference should have something to say in this area.

    Cohen has an interesting discussion of this in The Probable and the Provable.

  15. Stephen–
    One other observation about the drainage problem. Obviously, a defender of closure will reject that a belief is defeasibly justified iff it has a high probability, or a probability above some threshold. That’s ok. But here’s something I think is not okay to reject: a belief is defeasibly justified *only if* it has a probability above some threshold. For instance, surely every justified belief is more than 1% likely to be true.

    Now, the really serious drainage problem, as I see it, is that it looks like you can always get the probability of a belief at the end of a chain to fall below whatever threshold you pick, by having a sufficiently long chain.

    How could this be avoided? It seems that the only way to avoid this problem is to understand defeasible support in such a way that, when A defeasibly supports B, B is *not* thereby less probable than A.

    Well, here is a silly way to do this (not a plausible account of def. support, but perhaps illuminating as a mere existence proof): Suppose that

    “x defeasibly supports y” means “P(y|x) > P(y), and P(y|e) > .9”,

    where “P(y|e)” is the probability of y on your total evidence (including x). This stipulatively blocks the drainage problem, since the moment the probability of something in your chain of reasoning drops below .9, you no longer have “defeasible support”. And it’s not completely uninteresting–there is some connection with an intuitive notion of support (provided by the first clause: x does have to raise the Pr. of y to count as supporting it).
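    A quick numerical walk-through of the stipulation (the 0.97 decay per link and all the numbers are invented for illustration): the moment the total-evidence probability of a link’s conclusion dips below the .9 threshold, the relation simply stops holding, so a chain cannot drain past that point:

```python
def supports(p_y_given_x, p_y, p_y_given_e, threshold=0.9):
    """The stipulated relation: x defeasibly supports y iff P(y|x) > P(y)
    and P(y|e) > threshold, where e is one's total evidence."""
    return p_y_given_x > p_y and p_y_given_e > threshold

# Walk a chain whose total-evidence probability decays by 0.97 per link
# (an invented decay rate, just to see where the stipulation cuts things off).
p_total = 1.0
step = 0
while True:
    p_total *= 0.97
    step += 1
    if not supports(0.97, 0.5, p_total):
        break
print(f"support runs out at link {step}: P(y|e) = {p_total:.3f}")
# -> support runs out at link 4: P(y|e) = 0.885
```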

  16. I want to address two points in the exchange between Jon and Stephen, lines 11 to 13. First, conditional probability functions are not the only mechanism by which to provide a probabilistic understanding of defeasible support.

    (Ah, and it is important to keep in mind that a conditional probability function is not itself non-monotonic, since it is a probability function and probability functions are (weakly) monotonic; rather, the second place of a conditional probability function may be thought of as behaving non-monotonically, in the sense that the probability of an event (set) A conditioned on an event (set) B may either increase or decrease when B increases. Notice that the first position of this function does not share this behavior, however. I mention this only because there are many details we are glossing over in the discussion, but the way conditional probability functions ape non-monotonicity can be a key sticking point when setting out to provide the details.)

    Second, perhaps the relationship between support and consequence should be thought of in terms analogous to the relationship between truth and logical consequence: namely (the philosophy of logic aside), the material conditional is very useful in mathematics and remarkably well understood, despite its rather clumsy behavior when applied to most things outside its intended domain, which is mathematics. Perhaps, then, we shouldn’t be alarmed by the depletion of support under iterated application that Stephen is talking about, nor so quick to dismiss it, since, I conjecture, it would be difficult to do so without also ruling out all systems that feature a measure-theoretic modeling of support for propositions and a threshold-sensitive consequence relation. That support depletes does not mean that a long conjunction of propositions cannot be supported; it just means that the support for each of the conjuncts plus logic isn’t enough to guarantee that the conjunction shares the same degree of support. Likewise, p only if q and a false p doesn’t render q hopelessly indeterminate. A long conjunction might be supported, but you have to look at the support you have for that conjunction. The logic of support isn’t worthless, because there are conjunctions that are supported by the relatively high degree of support shared by each conjunct. Indeed, it may even guide us to investigate the support for below-threshold-for-acceptance conjunctions that are otherwise entailed by what is supported.

    The idea in short is that we don’t beat up on the material conditional for failing to tell us what to do when an antecedent is false; perhaps, then, we shouldn’t beat up on a formal modeling of epistemic support relation that puts a curb on arbitrary conjunctions of what is already supported.

  17. I think that Jon is ultimately correct, as he put it above (#12), that one’s body of evidence is enlarged by reasoning in accord with the logic of defeasible consequence, and that this is a key part of solving the drainage problem. But the problem is to understand what that means exactly, because on one level that’s really just a restatement of the problem–what is it that entitles you to enlarge your body of evidence in that way?

    Perhaps the problem can be posed like this: it is natural to assume that (undefeated) defeasible reasoning is truth conducive, meaning that, in the long run, you’re more likely to hold true beliefs if you infer (undefeated) defeasible consequences of things you already believe/know, than if you infer things that aren’t. This suggests (though only suggests) that there is some relationship between a belief’s being a defeasible consequence of others and its being probably true given those other beliefs (or, as Mike put it (#16), its probability being above some threshold, given those other beliefs). It’s true, this relationship might not hold, in which case one either would have to deny that defeasible reasoning is ultimately truth-conducive, or explain its truth-conducivity in some other way.

    On the other hand, we clearly do engage in “inference upon inference,” and while in some circumstances we recognize that the support that attaches at each stage in the reasoning sort of bleeds off as we progress to later stages, in other circumstances we don’t feel that it diminishes (or we just don’t worry about it).

    I think what may be going on here is that while there is some relationship between a belief’s being defeasibly supported and its being probably true, one can’t apply some simple multiplicative probability rule at each stage. Most likely (and this is just a hunch), in those cases where we permit stacking of inference upon inference, we’re responding to some intuitive sense that the ultimate conclusion and the augmented body of evidence that Jon recognizes together form a coherent whole that is more likely to be true as a package than is apparent if one looks at the inferential stages sequentially.

  18. Gregory,
    I failed to understand your point that conditional probability functions are monotone. Can you explain that again? I assume that monotonicity in this case would mean that P(A|B&C) must always be > or = P(A|B), because A is analogous to something that is being inferred, either from B, or from B&C.

  19. Hi Mike,

    Probability functions, as mathematical functions, are monotonic: if A is a subset of B, then Pr(A) is less than or equal to Pr(B). If A is a smaller part of the sample space than B, then the probability of A must be less than or equal to the probability of B. For this reason, conditional probability functions are likewise monotonic in the first position: if A is a subset of B, then Pr(A | X) is less than or equal to Pr(B | X), for any X with a positive measure. However, the conditional probability function is non-monotonic in the second position, since conditioning on a smaller part of the sample space may either raise or lower the conditional probability. This point is contrary to the assumption made in line 20, if I’ve read this post correctly.

    You can ape non-monotonicity by conditioning on different events in your sample space, of course, but this behavior isn’t attributed to the function itself but rather to how the probability mass is assigned to events in your sample space: it is possible to look at different locations in your sample space and for the probability of the event you’re interested in at those locations to increase, decrease or remain the same.

    But, the probability function itself is a monotonic function. Hence, probability is inferentially monotonic: from a superset of probability premises, you get the same conclusions or a superset of them.

    Does this answer your question?
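    A concrete finite sample space (a fair die, my own example) makes both halves of Greg’s point checkable:

```python
from fractions import Fraction

OMEGA = frozenset({1, 2, 3, 4, 5, 6})  # a fair die

def pr(event):
    return Fraction(len(event & OMEGA), len(OMEGA))

def cond(a, b):
    """Pr(a | b), for b with positive measure."""
    return Fraction(len(a & b & OMEGA), len(b & OMEGA))

A, B = {2}, {2, 4, 6}                       # A is a subset of B
assert pr(A) <= pr(B)                       # Pr is monotone over events
assert cond(A, {1, 2}) <= cond(B, {1, 2})   # monotone in the first position

# Non-monotone in the second position: conditioning on a smaller event
# can either raise or lower the conditional probability of A.
assert cond(A, OMEGA) < cond(A, {2, 4, 6})  # 1/6 < 1/3: raised
assert cond(A, {1, 3, 5}) < cond(A, OMEGA)  # 0 < 1/6: lowered
print("monotone in the first position, non-monotone in the second")
```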

  20. Thanks, Greg. I got all that up until the last statement: “probability is inferentially monotonic: from a superset of probability premises, you get the same conclusions or a superset of them.” Maybe I don’t know what you mean by “probability premises.” You mean statements that are explicitly assessments of probability? Or statements that are themselves probably true? Or something else?

    Either way, it seems like, from a superset of premises A, you could “get” (that is, be justified in believing) a proper subset of the conclusions that you could get from A. I say this because I assume that to be justified in believing a proposition B on the basis of (a set of propositions) A is, roughly, for B to have a high probability conditional on A (or maybe this is too simplistic, but at least it should be partly a function of that conditional probability, and going *below* some threshold level for that conditional probability should take away justification). Since conditionalizing on a proper superset of A (=a *subset* of the space of possibilities) can lower the probability of B (as we’ve agreed), it follows that B might be justified by A, but not justified by some proper superset of A. Right? So, unless I misunderstood some of your terms, you should say that from a superset of premises A, you (probabilistically) can get *either more or fewer* conclusions than from A.

  21. Hi Mike. Right, one can think of probability as a logical operator in the object language itself or as a metalinguistic operator applying to sentences or sets of sentences. The point holds either way, but let’s go with the latter. Suppose that the probability of the material conditional statement p –> q is m and the probability of the statement p is n. We may infer that the probability of q is between m and max(0, (m+n)-1), where 0 is included to make sure that the resulting value is a probability. The same point about monotonicity will hold if you want to just talk about the value being between m and (m+n)-1 and not worry about the result being a probability, however. The point is, add more probability premises and q is still between m and max(0, (m+n)-1). This holds for conditional probabilities as well, so long as we fix the point in the sample space that we’re conditioning on. That is, if a set of probability premises G entails that Pr(A|B) is [m,n] (i.e., if the probability of A conditioned on B is between m and n), then any superset of G entails that Pr(A|B) is [m,n]. Hence, probability is inferentially monotonic: you may add probability premises without losing any probability conclusions.
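    Greg’s bound can be spot-checked by brute force over random joint distributions on the four truth-value combinations of (p, q) (the sampling setup is mine, just for verification):

```python
import random

# If Pr(p -> q) = m (material conditional, i.e. Pr(~p or q)) and Pr(p) = n,
# then Pr(q) must lie between max(0, m + n - 1) and m. Check on random
# joint distributions over {TT, TF, FT, FF} for (p, q).
random.seed(0)
for _ in range(10_000):
    w = [random.random() for _ in range(4)]
    total = sum(w)
    p_tt, p_tf, p_ft, p_ff = (x / total for x in w)
    n = p_tt + p_tf              # Pr(p)
    m = p_tt + p_ft + p_ff       # Pr(p -> q)
    q = p_tt + p_ft              # Pr(q)
    assert max(0.0, m + n - 1) - 1e-12 <= q <= m + 1e-12
print("bound holds on 10,000 random joint distributions")
```

    (In fact the algebra is exact: m + n - 1 = Pr(p & q) <= Pr(q) <= Pr(~p or q) = m.)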

    I entirely agree with you that non-monotonicity seems to be a common property of everyday, pre-theoretic rational inference. In fact, it even seems to be a common property of inferential statistics which is an enterprise that explicitly applies probability to make reasonable inferences. There are several interesting philosophical questions that spring from these observations. One general question is whether there is any logical structure to this behavior at all or whether the apparent non-monotonic behavior of defeasible reasoning is, in the end, too content dependent to think of as a logical property, so too content dependent to think of as a property of a defeasible closure operation. (Hmm, in fact, I suppose someone could play around with these properties and argue that defeasible closure operations don’t exist….) But some, including me, think that there is some interesting logical structure to patterns of non-monotonic/defeasible reasoning–which is (partly) why I am holding out against the standard use of conditional probabilities to model defeasible reasoning.

  22. Gregory, I think you are right that our pre-theoretic conception of defeasible reasoning really can’t be captured using conditional probabilities. Still, I don’t think closure is a part of that conception either. It seems that our pre-theoretic thinking recognizes that some (undefeated) defeasible reasons are better than others and that, in consequence, a belief can be undefeated and yet not as well supported as other beliefs. Moreover, one naturally feels more reluctant to reason on the less well supported beliefs, even if they are supported to whatever degree is required to make them justified or rational. In that case, failure of closure seems inevitable.

    By the way, am I right in thinking that the inferential monotonicity of probability that you describe is not enough to guarantee closure (assuming one were to model defeasible reasoning along those lines, which I take it you would discourage)?

  23. Hi Stephen, point well taken. When I’m more careful, I usually just talk about arguments and formulas, and wonder whether there is any interesting logical structure to non-monotonic arguments that (non-monotonically) entail their conclusions. I think statistical arguments have enough structure to study non-monotonic consequence. They are constructed from sentences and events (which I understand better) and, if a working model were to emerge, would provide a basis for testing the effectiveness of that model–better than common sense reasoning, I think, which was the original motivation for non-mon logic. So I think of arguments as structures, or items we can reason about (or with) and think of non-monotonic arguments as candidates for structures that (may allow us to) isolate some of the relations (defeasibility; support) and properties (rational acceptance) that enter into the general discussion about defeasible reasoning qua cognitive faculty. The goal or idea or intuition or hope is that there is some common structure here that we can exploit, even if only to see clearly how it doesn’t at all behave like we think these properties should behave in our epistemic theories. Of course, I’m hoping that there will be a more positive payoff.

    I’m not sure I understood your last question about closure and inference. I don’t think classical probabilistic logic (which is what I’ve described) would be a very good candidate for articulating closure conditions. For one, if you start with point probabilities for your premises you typically get a range of possible values for your entailed conclusion rather than another point value. It gets tricky to work with if you interpret the measure as degrees of belief, since (logically) any of these values is consistent, which throws a wrench into thinking that probability provides rationality constraints for belief.

    However, I think that if you model support as ‘the probability of p is no less than n’ (or, equivalently, the probability of false acceptance is at most 1-n) and observe how the lower bound depletes within probabilistic logic, then the entailed lower value of the interval becomes a fixed lower bound of support for the entailed proposition. So, if you have a threshold point, T, you can manipulate supported propositions with your logic and compare how various logical consequences of what is supported stack up to T, your acceptance level. This gives some logical structure to ‘greater support for’ and ‘lesser support for’ relations. And it is formal: it holds regardless of the interpretation of the formulas.
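A toy illustration of the lower-bound depletion and the threshold comparison (the function name, chain lengths, and numbers are hypothetical, not the commenter’s):

```python
# Chain modus-ponens steps, each backed by a lower-bound premise. The
# entailed lower bound for the final proposition is updated by
# max(0, lb + m - 1) at each step -- the 'depletion' of support.

def chain_lower_bound(lower_bounds):
    """lower_bounds = [bound for p1, bound for p1 -> p2, bound for
    p2 -> p3, ...]; returns the entailed lower bound for the last
    proposition in the chain."""
    lb = lower_bounds[0]
    for m in lower_bounds[1:]:
        lb = max(0.0, lb + m - 1.0)
    return lb

T = 0.9  # acceptance threshold
short_chain = chain_lower_bound([0.99, 0.98])             # about 0.97
long_chain = chain_lower_bound([0.99, 0.98, 0.95, 0.94])  # about 0.86
# short_chain still clears T; long_chain's support has depleted below T.
```

This shows the structure the comment points to: every step is formally valid, yet the guaranteed support for downstream consequences shrinks, so some consequences of accepted propositions fall below the acceptance level.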

    Non-monotonicity is another matter. I’ve been toying for a few years with an extension of default logic that incorporates this support behavior that I’ve sketched above. The basic idea is that you can introduce a sentence, p, with some lower-bound of support, n, which means that, as far as we know (here: as far as what is contained in our theory), the probability that p is no less than n.

    The logic is non-monotonic since you can introduce p under these conditions but learn something (that is, add a formula to the theory) that defeats the application of the rule that introduced p at n. I don’t have anything that addresses defeat of a rule–something that says whether we should take the rule itself out of service for being too tough or too lenient. But, semantically, each of these rules is itself bounded by a lower-bound probability. So, a rule could be ‘false’ if, given what we know, it says that a consequent q non-monotonically follows with a lower-bound of no less than m when in fact it *is* less than *m*. I like this idea, conceptually, but I haven’t figured out how to exploit it.
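A very small sketch of the defeat behavior just described, in the style of the usual Tweety example (all names are hypothetical; this is not the author’s default-logic extension, just an illustration of how adding a formula can block a rule):

```python
# A default-style rule  prereq : (defeater absent) / (conclusion at bound n)
# fires only if its defeater is not in the theory. Adding the defeater
# later blocks the rule, so the set of consequences is non-monotonic.

def apply_rule(theory, prereq, defeater, conclusion, lower_bound):
    """Return a new theory with the conclusion added at its lower bound,
    unless the prerequisite is missing or the defeater is present."""
    if prereq in theory and defeater not in theory:
        return {**theory, conclusion: lower_bound}
    return dict(theory)

t1 = {"bird(tweety)": 1.0}
t2 = apply_rule(t1, "bird(tweety)", "penguin(tweety)", "flies(tweety)", 0.95)
# t2 contains "flies(tweety)" at lower bound 0.95; but with the defeater:
t3 = apply_rule({**t1, "penguin(tweety)": 1.0},
                "bird(tweety)", "penguin(tweety)", "flies(tweety)", 0.95)
# t3 does not contain "flies(tweety)" -- the added premise defeated the rule.
```

The monotone probabilistic machinery from earlier in the thread would then govern what each introduced sentence’s lower bound entails; the non-monotonicity lives entirely in whether the introduction rules fire.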

    Whoa. Way too long. The short of it is that I was being sloppy with the epistemology in an effort to be clear about the math.
