Defeaters and Probability — 15 Comments

  1. That’s hard to read. Does Pr(d) mean probably d? So Pr(d) amounts to Pr(d) > .5? Suppose so, and suppose:

    1. Y = 0 ≤ Pr(~p) ≤ .4
    2. X = (p -> q) & ~q
    3. d = q
    4. Pr(d) = .6

    If Pr(p -> q) = 1 (fair to assume), then X ensures that Y is true (that Pr(~p) = 0), and learning that d is true would defeat that relation. But learning that Pr(d) = .6 (that d is more probable than not) would not be a defeater. It would entail that Pr(~p) = .4. So X still makes Y reasonable given (4).

  2. How about this: I have some information (X) that justifies the claim that [I might win the lottery] (Y). “I probably will not win the lottery” is not a defeater for Y, but “I will not win the lottery” is. (Maybe.)

  3. Yes, if the claim itself is an epistemic possibility claim, then the probability of the denial of what’s possible won’t undermine the possibility claim. But the denial itself will. Very nice example.

    So, at a minimum, the principle will need to be restricted, perhaps to claims that aren’t epistemic modals…

  4. Mike A. had some loss of data in cyberspace, so here’s his counterexample (when special characters such as ‘less than’ or ‘less than or equal to’ are used, the html code must be entered instead of the symbol itself):

    1. Y = 0 ≤ Pr(~p) ≤ .4.
    2. X = (p –> q) & ~q

    So clearly X is good evidence for Y (X entails that Pr(~p) = 0).

    3. d = q (or, d states that q is certain)

    So learning that d (i.e., q) is true defeats X’s support for Y.

    4. Pr(d)= .6 (this entails that Pr(~q) = .4).

    Learning that (4) is true together with X entails that Pr(~p) = .4. So Pr(d) does not defeat X’s support for Y.

  5. Mike A’s example shows that the principle needs to be restricted further. Not only should epistemic modals be excluded, but so should probability claims.

    Maybe there are other kinds of counterexamples as well?

  6. I’m puzzled by Jon’s example. It’s not true that (p -> q) & ~q entails Pr(~p) = 0. First, (p -> q) & ~q entails ~p. So, if anything, (p -> q) & ~q is conclusive evidence for ~p. But it does not follow from this that (p -> q) & ~q entails that ~p has ANY probability. We need to be careful not to conflate

    (1) p entails q [hence Pr(q | p) = 1]
    (2) p entails Pr(q) = 1.

    (1) and (2) are different. Just let p = q. Then Pr(q | p) = 1. But p does not entail that p has probability 1 (it only implies that p is actually true). And, if p has probability less than 1, then it’s even false that p -> Pr(p) = 1.
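The (1)/(2) distinction can be checked mechanically on a toy probability space. A minimal sketch in Python (the two-world model and the .6/.4 weights are my own illustration, not anything from the thread):

```python
# Toy probability space: two worlds, w1 and w2, with illustrative weights.
worlds = {"w1": 0.6, "w2": 0.4}

p = {"w1"}   # the set of worlds where p holds
q = p        # let q = p, as in the example above

def pr(prop):
    """Unconditional probability: total weight of the worlds in prop."""
    return sum(w for name, w in worlds.items() if name in prop)

def pr_given(a, b):
    """Conditional probability Pr(a | b)."""
    return pr(a & b) / pr(b)

# (1) p entails q, so Pr(q | p) = 1 ...
print(pr_given(q, p))   # 1.0
# ... but p does NOT entail Pr(q) = 1: here Pr(q) is only 0.6.
print(pr(q))            # 0.6
```

The conditional probability is 1 because p entails q, but the unconditional Pr(q) just inherits whatever weight the model assigns, here .6, so p being true cannot by itself entail Pr(q) = 1.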

  7. I think Mike’s example works. Here’s another kind of example. You have information (E), which confirms a hypothesis (H), and you know (E) was generated by one of two sources. One of them (S1) is perfectly reliable; the other (S2) is unreliable, though not perfectly so. In this case, learning that the source was (S2) can defeat (E) as evidence for (H). But learning that the source was probably (S2) may not, depending on how the averaging of the likelihoods comes out.
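The averaging-of-likelihoods point can be made concrete with Bayes' theorem. A minimal sketch (the .5 prior, the source reliabilities, and the .6/.4 source split are all my own illustrative numbers, not from the comment):

```python
# Illustrative numbers only.
prior_H = 0.5

# S1 is perfectly reliable: it produces E exactly when H is true.
# S2 is unreliable but not perfectly so: it produces E half the time
# regardless of whether H is true, so it carries no evidential weight.
lik = {"S1": {"H": 1.0, "notH": 0.0},
       "S2": {"H": 0.5, "notH": 0.5}}

def posterior(pr_s1, pr_s2):
    """Pr(H | E), averaging the likelihoods over the two possible sources."""
    pE_H    = pr_s1 * lik["S1"]["H"]    + pr_s2 * lik["S2"]["H"]
    pE_notH = pr_s1 * lik["S1"]["notH"] + pr_s2 * lik["S2"]["notH"]
    return prior_H * pE_H / (prior_H * pE_H + (1 - prior_H) * pE_notH)

# Learning the source WAS S2 defeats E: the posterior falls back to the prior.
print(round(posterior(0.0, 1.0), 6))  # 0.5
# Learning the source was PROBABLY S2 (Pr = .6) does not: E still confirms H.
print(round(posterior(0.4, 0.6), 6))  # 0.7
```

With these numbers the averaged likelihoods still favor H (.7 vs .3), so "probably S2" leaves E confirmatory; with a less reliable S2 or a higher Pr(S2), the averaging could come out the other way.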

  8. Branden, the example I reported was Mike’s, cleaned up so that it would display properly. Do you think the claim that X is good evidence for Y is true? If X leaves the probability of ~p open, it’s not clear how it could be.

  9. Branden/Jon

    At least as I assign probabilities, any proposition that is true is assigned probability 1. I don’t know why I should not do so, especially in cases where I am betting on an outcome O and I know that O will occur: I’m certainly not going to hedge my bet in such a case. I do know that some others assign probability 1 to just those propositions that are necessarily true (in some sufficiently strong sense of necessary).

  10. Branden’s point is well-taken. Those are importantly different. The inference I was relying on assumed this,

    1. □(A -> B) -> (Pr(A) ≤ Pr(B))

    If A entails B, then the probability of A is less than or equal to the probability of B. I put A = ((p -> q) & ~q) and stipulated that Pr(p -> q) = 1 and Pr(~q) = .4. I concluded, given (1), that Pr(~p) is .4. But in fact Pr(~p) is at least .4.
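That "at least .4" is the right conclusion can be seen in a toy model. A minimal sketch (the three-world space and its weights are my own illustration): the weights satisfy Pr(p -> q) = 1 and Pr(~q) = .4, yet Pr(~p) comes out at .5, because ~p also holds in a world where q is true.

```python
# Toy model: worlds are (p, q) truth-value pairs with illustrative weights,
# chosen so that Pr(p -> q) = 1 and Pr(~q) = .4.
worlds = {(True, True): 0.5, (False, True): 0.1, (False, False): 0.4}

def pr(prop):
    """Probability of a proposition, given as a predicate on worlds."""
    return sum(w for world, w in worlds.items() if prop(world))

def implies(a, b):
    return (not a) or b

pr_p_implies_q = pr(lambda w: implies(w[0], w[1]))  # 1.0 in this model
pr_not_q       = pr(lambda w: not w[1])             # 0.4

# A = (p -> q) & ~q entails B = ~p, so by the monotonicity principle
# Pr(A) <= Pr(B): the entailment fixes only a lower bound on Pr(~p).
pr_A = pr(lambda w: implies(w[0], w[1]) and not w[1])  # 0.4
pr_B = pr(lambda w: not w[0])                          # 0.5

print(pr_A <= pr_B, pr_A, pr_B)  # True 0.4 0.5
```

Here Pr(~p) exceeds the .4 bound because ~p picks up extra weight from the (False, True) world, which is exactly why the equality conclusion was too strong.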

  11. Thanks for the clarifications, Mike. I tend to think regularity makes sense as a constraint on epistemic probability (assign probability 1 only to things such that it is not the case that they might be false). I think assigning probability 1 to all beliefs is too strong, since there are p’s such that I believe them, but I also think they might be false. I believe that my ticket will not win the lottery, but I also believe that it might win.

