closure for justification and the order of inquiry

Suppose one adopts a closure principle for justification of the following sort:

CLOSURE: If S justifiably believes p and justifiably believes that p implies q, then S is justified in believing q.

Justifiably believing implies believing, in this formulation, but being justified in believing does not.

Suppose one is thinking of the lottery paradox, with n tickets. Suppose also that on your theory you are justified in believing that your ticket will lose. Your theory also endorses the principle that it is impossible to be justified in believing p and at the same time be justified in believing ~p. Finally, suppose you justifiably believe that some ticket will win.

Then consider the following conditional: if ticket #1 loses, then if ticket #2 loses, then . . . if ticket #n loses, then it is not the case that some ticket wins. Your ticket is ticket #1, and your evidence is probabilistic. So you come to believe, justifiably, the conditional minus its first antecedent, the one about ticket #1. You apply this line of reasoning n times and end up justified in believing that it is not the case that some ticket wins, which contradicts your justified belief that some ticket will win.

To save CLOSURE, one might say the following. Once you use your evidence to conclude that ticket #1 loses, the level of confirmation that any remaining ticket will lose is not (n-1)/n, but rather (n-2)/(n-1). And once you use this evidence to conclude that ticket #2 will lose, your evidence about the remaining tickets gives a confirmation level of (n-3)/(n-2). At some point, once m tickets have been concluded to lose, the value (n-m-1)/(n-m) is low enough that it fails to justify believing that the next ticket will lose, so the lottery example isn’t a counterexample to CLOSURE.
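To make the arithmetic concrete, here is a minimal sketch in Python; the number of tickets and the justification threshold are illustrative choices, not figures from the argument above.

    # A minimal sketch of the confirmation levels described above.
    # n and the threshold are illustrative choices.
    n = 1000          # number of tickets
    threshold = 0.99  # hypothetical level needed for justified belief

    # After concluding that tickets #1..#k lose, the confirmation that the
    # next ticket loses is (n - k - 1) / (n - k).
    k = 0
    while (n - k - 1) / (n - k) >= threshold:
        k += 1

    print(f"Tickets #1..#{k} can be concluded to lose in turn; "
          f"ticket #{k + 1} falls below the threshold.")

With these numbers the confirmation level starts at 999/1000 and falls as each ticket is concluded to lose, slipping below the hypothetical threshold at ticket #902.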

Note however that the order of inquiry here makes a difference to what you are justified in believing, if one takes this way out of the problem for CLOSURE. If the tickets are ordered in one way in the conditional with n embedded antecedents, you can come to be justified in believing that ticket #64 will lose. If the antecedents of the conditional are ordered in another way, by the time you get to the question of whether ticket #64 will lose, you’ve already gone far enough that the confirmation level doesn’t justify believing that ticket #64 will lose. So, if you like CLOSURE (or some close cousin of it), and if you like this response to the lottery counterexample to CLOSURE, then you must maintain that the order in which you apply a body of evidence to a set of propositions can make the difference between a claim being justified and its not being justified. But how could changing the order of the antecedents in the imagined conditional have this effect? After all, it’s not as if, prior to drawing the inferences, one was unaware of what one’s body of evidence shows.
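Continuing the hypothetical sketch above, whether a given ticket, say #64, ends up justified depends only on where it sits in the chosen ordering:

    # Continuing the sketch above: whether ticket #64 ends up justified
    # depends on its position in the ordering, not on the evidence itself.
    def cutoff(n, threshold):
        """How many tickets, taken in order, stay above the threshold."""
        k = 0
        while (n - k - 1) / (n - k) >= threshold:
            k += 1
        return k

    m = cutoff(1000, 0.99)                  # 901 with these numbers
    ordering_a = list(range(1, 1001))       # ticket #64 comes early
    ordering_b = list(range(1000, 0, -1))   # ticket #64 comes late

    print(64 in ordering_a[:m])   # True: justified on this ordering
    print(64 in ordering_b[:m])   # False: not justified on this ordering

On the first ordering ticket #64 is reached before the confirmation level drops below the threshold; on the second it is not. Nothing about the evidence itself has changed.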

So: can one get out of the lottery counterexample without embracing the unusual claim that the order of inference here makes all the difference as to whether a particular belief is justified?


Comments


  1. CLOSURE: If S justifiably believes p and justifiably believes that p implies q, then S is justified in believing q.

    Jon: Don’t you instead need some principle of multi-premise closure?

    That ticket #1 doesn’t win doesn’t by itself imply the first conclusion reached (the long conditional minus the first antecedent), but also needs the long conditional as a premise. That long conditional can be thought of as part of the set-up of the case, and so one might think that the premise that #1 doesn’t win does, for current purposes (i.e., given the set-up), entail the first conclusion reached. But after that first inference is drawn, aren’t we definitely into multi-premise closure to get us any further? For we then have to use the premise that #2 doesn’t win, along with the long-conditional-minus-the-first-antecedent to get to the next conclusion, and the l-c-m-t-f-a isn’t just part of the set-up of the case.

  2. Keith, maybe I’m missing something, but p is the claim that ticket #1 loses, and q is the long conditional minus its antecedent (which is p). The long conditional is the value for “p implies q.” By applying CLOSURE, we get justification for q.

    To apply CLOSURE again, I’m assuming that you come to believe q as a result of deducing it from p and the fact that p implies q. That should be good enough to yield that the belief that q is properly based, and so the justification for q together with proper basing of the belief that q yields that one justifiably believes q. I also assume that you use the same probabilistic assessment procedures used in coming to believe that ticket #1 loses to come to believe that ticket #2 loses. So if the probability is high enough, then you justifiably believe that ticket #2 loses as well. Then you are in a position to apply CLOSURE a second time, and so on.

  3. I see what you’re thinking now.

    I have always thought of closure principles as articulating ways knowledge/justification transmits over entailment, so I read your “implies” in your principle as meaning “entails.”

    In the second step, if this is going by single-premise closure, the single premise would be:

    P: ticket 2 loses,

    which one justifiably believes on probabilistic grounds, and the conclusion reached on its basis is:

    Q: If ticket 3 loses, then if ticket 4 loses, then…if ticket n loses, then it is not the case that some ticket wins.

    But then, I was thinking, this doesn’t fit your single-premise CLOSURE, for here P doesn’t entail Q. P can serve as a basis on which we may come to justifiably believe that Q, but only because we already justifiably believe that

    If P then Q: If ticket 2 loses, then if ticket 3 loses, then…if ticket n loses, then it is not the case that some ticket wins,

    which we presumably came to justifiably believe in the first step. But that P and something else we justifiably believe together entail Q does not mean that P entails Q. This seems a case where we justifiably believe that If P then Q though P doesn’t entail Q. That’s why I thought you needed to go to multi-premise closure.

  4. The reason I’m interested in CLOSURE is that, as you might have expected, it (or at least a close cousin of it) has been defended in print. And of course, you’re right that the entailment here is not between p and q, but in the simple modus ponens inference itself.

    If we change the principle from one appealing to implication between p and q to one appealing to entailment, the issue about order of inquiry won’t come up without a conjunction-introduction rule to use under the scope of the justification operator. If we had that, we could conjoin your values for P and “If P then Q”, which together entail Q, and then apply the entailment version of CLOSURE. But, then, the problem might be with the conjunction rule rather than the new closure claim, which is a multi-premise closure principle.

  5. Here is a suggestion: draw a sharp distinction between the probability of a conjunctive (disjunctive) event, and conjoined (disjoined) probabilistic events.

    I prefer to mark this distinction by in expressions and out expressions, respectively. An in expression is an ordered pair, (p, [l,u]), where p is a wff of a propositional language, and l and u designate reals in the unit interval marking ‘lower’ and ‘upper’ bounds. An out expression is composed of a collection of in expressions.

    The next question is, how are out expressions composed? In other words, how do you disjoin and conjoin in expressions? And how does closure fit into this? Take each question in turn.

    Suppose (T_1, [.999,.999]) and (T_2, [.999,.999]) are two in expressions. You may interpret each as saying that the probability that ticket i loses is .999, where i=1 or i=2, if you like. We know that there is the possibility of a depletion of probability mass by combining probabilistic events: the lottery paradox turns on this, and we are marking this point by noting that (T_1, [.999,.999]) *and* (T_2, [.999,.999]) does not entail (T_1 ^ T_2, [.999,.999]), where wedge (^) is boolean ‘and’ and *and* is yet to be defined. The proposal reduces to this: define a semantics for *and* (and also *or*) to compute the bounds of an in expression whose first coordinate is the boolean conjunction of the propositions appearing in the first coordinates of the *conjuncts* (*disjuncts*) of the out expression, and whose second coordinate, the probability interval, is calculated to contain the composite probabilistic event denoted by the out expression.

    (I sketch a semantics for the *and* and *or* connectives in a paper that I’d be happy to pass on. The key idea is the following observation: Pr(p^q) must lie in the interval [max(0, Pr(p) + Pr(q) - 1), min(Pr(p), Pr(q))]. We may tighten this interval (and indeed, compute precise values) if we are warranted in making explicit assumptions about the relationship between the events p and q, such as when they are (probabilistically) independent, mutually exclusive, or when one implies the other, among other relationships that might hold between them.)
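    A minimal sketch of this interval arithmetic in Python may help; representing an in expression as a (proposition, (lower, upper)) pair here is just an illustrative choice, not the notation of the paper mentioned.

        # A sketch of the conjunction bounds quoted above: with no assumptions
        # about how p and q are related, Pr(p ^ q) lies in
        # [max(0, Pr(p) + Pr(q) - 1), min(Pr(p), Pr(q))].
        def conjoin(e1, e2):
            (p, (l1, u1)), (q, (l2, u2)) = e1, e2
            lower = max(0.0, l1 + l2 - 1.0)
            upper = min(u1, u2)
            return (f"({p} ^ {q})", (lower, upper))

        t1 = ("T_1", (0.999, 0.999))
        t2 = ("T_2", (0.999, 0.999))
        print(conjoin(t1, t2))   # ('(T_1 ^ T_2)', (0.998..., 0.999))

    This reproduces the (T_1 ^ T_2, [.998,.999]) in expression used in the next paragraph.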

    The trick, then, is that once one has an in expression (e.g., (T_1 ^ T_2, [.998,.999])) whose bounds are guaranteed to offer greatest lower and least upper bounds on the probability of the corresponding composite event (e.g., the out expression (T_1, [.999,.999]) *and* (T_2, [.999,.999])), then one may take the deductive closure of the proposition in the first coordinate of the in expression.

    OK. So, if one has a threshold point in mind (say, .99), then closure works as follows. First, restrict the construction of in expressions from out expressions to just those that pass threshold. Then, close *each* in expression under logical consequence. The resulting system is sound: all outputs (if any) will be above threshold. Also, the order of the premises does not matter. However, the system is not precise: there may be composite probabilistic events that are in fact above threshold but for which the system generates no corresponding in expression. Also, you are going to have collections of in expressions (which, after all, correspond to out expressions) that you may be tempted to interpret incorrectly. But, so long as you keep the distinction in mind (and your system for interpreting these expressions), you should be able to keep things straight.
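    Here is a hypothetical sketch of that threshold step, using the same conjunction bounds as above and the same illustrative .999 tickets:

        # A sketch of the threshold step, with the conjunction bounds above.
        from functools import reduce

        def conjoin(e1, e2):
            (p, (l1, u1)), (q, (l2, u2)) = e1, e2
            return (f"({p} ^ {q})", (max(0.0, l1 + l2 - 1.0), min(u1, u2)))

        tickets = [(f"T_{i}", (0.999, 0.999)) for i in range(1, 1001)]
        threshold = 0.99

        _, (lower, _) = reduce(conjoin, tickets[:5])
        print(lower >= threshold)   # True: a 5-way conjunction still passes

        _, (lower, _) = reduce(conjoin, tickets)
        print(lower >= threshold)   # False: the full conjunction does not

    The 5-way conjunction passes threshold, so an in expression is constructed for it and may be closed under consequence; the conjunction of all the tickets does not pass, which blocks the inference the lottery paradox needs.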

    The analysis of the lottery falls out as follows: the mistake the lottery paradox invites us to make is to interpret a collection of accepted sentences, ((Ticket 1 loses, [.999,.999]) *and* … *and* (Ticket n loses, [.999,.999])), as acceptance of the conjunction of each proposition in the collection. The conversion of *and* to the corresponding boolean conjunction is controlled by the semantics for *and*, which accounts for the depletion of probability mass that is possible when combining probabilistic events.

    That’s the idea, in a nutshell.
