A Puzzle for E=K from Incremental Confirmation via Mounting Evidence

This has probably been discussed somewhere, but I haven’t seen it.

1. Here's a pretty natural picture of learning from experience. A scientist S proposes some hypothesis H (which we will assume is true). She gathers evidence and it's favorable to H, so she's encouraged. She gathers some more evidence; it's also favorable, so she really thinks she's on to something. More inquiry, more evidence, more (properly based) rational credence. Yet more. Finally, at some point, S "fully believes" H, in that her evidence supports it very strongly (though not maximally) and her rational credence is very strong as well (or whatever you want belief to be). One can conceive of experiments that could falsify H, but they can't be done with current technology, and there is no particular reason to think they would go badly for H. S has come to know H on the basis of accumulating evidence (I stipulate that this is explanatory and experimental evidence, not merely statistical evidence, though some statistics might be involved).

Here’s what seems like a platitude in light of the naturalness of the picture above.

EP – One can come to learn some true P via accumulating evidence that P.

2. I’m highly confident that Timothy Williamson believes the following two claims.

A. “Some beliefs fall shorter of justification than others. In that respect we can grade beliefs by their probability on the subject’s evidence…”

B.  “a belief is fully justified if and only if it constitutes knowledge”

This makes it sound like he accepts the Natural Picture above.  But…

How do we answer the question: how probable must something be to be known? I'm not raising the common vagueness question here; I'm fine with borderline cases. (The problem below generalizes to any item of true belief, at any degree of justification, which becomes an item of knowledge.) Suppose the threshold is n/m. Now consider the case above. What happens the *moment* H's probability gets to n/m? It becomes an item of knowledge, right? But if E = K, then its probability must be 1. And if H becomes knowledge *in virtue of* its being n/m probable (modulo the other assumptions), and its being evidence is *constituted* by its being knowledge, it seems as if we have a proposition whose probability, at one and the same time, is both n/m *and* 1. What's going on? Call this the Dual Probability Problem (DPP).
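To make the tension explicit, here is one compact way to write it down (taking evidential probability, as Williamson does, to be probability conditional on one's total evidence E_t, and writing the supposed threshold as n/m):

\[
P_t(H \mid E_t) = \tfrac{n}{m}
\;\Longrightarrow\; S \text{ knows } H \text{ at } t
\;\Longrightarrow\; H \in E_t \ (\text{by } E = K)
\;\Longrightarrow\; P_t(H \mid E_t) = 1.
\]

With n/m < 1, the first and last links assign H two different probabilities at one and the same time.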

A related but different problem from the Dual Probability Problem (DPP) is the Quantum Leap Problem (QLP). The puzzle is that the graph of the probability of H over time will look very strange: it starts out low, rises at a smooth slope of 30-45 degrees, and then, BAM, it shoots straight up, perfectly vertically, to 100%. That just seems strange to me.
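Just to picture the QLP, here is a toy sketch (the smooth climb and the 0.95 threshold are invented for illustration, not anything Williamson is committed to) of what the trajectory looks like if crossing the knowledge threshold pushes evidential probability straight to 1:

    # Toy illustration of the Quantum Leap Problem: evidential probability
    # climbs smoothly as evidence accumulates, then jumps straight to 1 the
    # moment the (hypothetical) knowledge threshold n/m is crossed.

    THRESHOLD = 0.95  # stand-in for n/m; the exact value is not the issue

    def evidential_probability(t, became_known_at=None):
        """Smoothly rising credence, capped at 1 once H is known (E = K)."""
        if became_known_at is not None and t >= became_known_at:
            return 1.0                        # H is now part of E, so P(H | E) = 1
        return min(0.5 + 0.05 * t, 0.99)      # made-up smooth climb

    became_known_at = None
    for t in range(12):
        p = evidential_probability(t, became_known_at)
        if became_known_at is None and p >= THRESHOLD:
            became_known_at = t               # the moment the threshold is reached
            p = 1.0
        print(f"t={t:2d}  P(H | evidence) = {p:.2f}")

The printout climbs 0.50, 0.55, ..., 0.90 and then leaps to 1.00 in a single step; that vertical jump is the QLP.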

In what way can Williamson and other E=Kers (pronounced “eekers”) affirm EP?  What’s the story or surrogate?


Comments


  1. Right, this was the central problem I dealt with in my PQ paper “Why Williamson Should be a Sceptic.” There I thought the only plausible thing for the E=Ker to do would be to restrict fairly significantly the class of propositions that are known. Thus we succumb to a sort of skepticism. A perhaps less radical response would be to reject E=K for something like Clayton Littlejohn’s view, according to which E = non-inferential knowledge. CL’s view is in the spirit of Williamson’s, but avoids the problem. You could also go contextualist, and have your semantics deliver ‘S knows p iff p is part of S’s evidence’ as true in every context.

  2. Dylan, I thought Dodd 2007’s main point was: “Look, most of our beliefs have sub-unit probability, so they aren’t knowledge on Williamson’s view. Furthermore, [and I love this point,] we don’t have FMSs for very many propositions, in particular the future-oriented ones.” And then you go quantum about the present and it looks worse.

    I guess my problem is that I'm pretty skeptical about the future myself, don't like quantum arguments, and think Tim could bite a lot of those bullets. What a great paper that is, though, really a model in my opinion. I wish I had written that paper!

    Yet I think the issue I'm raising is a bit different from the one you raised. Your paper definitely casts a shadow on Wmsn's ability to honor EP, but the puzzles that are animating me here are more formal in a way. The puzzles that caused me to post are ones about oddities in the behavior of probability assignments.

    I suppose the reason I thought those puzzles were connected to EP was that the belief would become safe way too early in the process, especially if the person is overconfident in her theory, as scientists often are. And that thought led to the second post.

    Another way to think about the connection between the puzzles and EP is this.

    1. If W accommodates EP, then he’s stuck with those puzzles.
    2. Those puzzles are unacceptable.
    3. So W can’t accommodate EP.

    Then I want to see how to either make the puzzles go away or what surrogate there is for EP.

  3. Thanks for the compliment.

    I still think that the central issue in my paper is at least similar to the one you’re worried about, at least if I’m understanding you correctly.

    I argued basically: Look, there are all these beliefs we take ourselves to know that seem to have a small chance of being false. This is particularly obvious wrt our beliefs about the future because you can appeal to the objective probabilities in quantum mechanics to make it clear. But really the problem is general — it’s the case for any hypothesis h we’ve inferred ampliatively from the evidence. Now, TW could say that we acquire knowledge of h, and then h comes into our body of evidence. (Here EP comes in — this is what he needs to say if he accepts EP.) Then — once knowledge is acquired — h’s probability becomes 1 given K=E. But then he succumbs to what I called ‘Klein’s Problem’. h’s evidential probability goes from less than 1 to 1, just because we inferred that it’s true. But that’s bizarre.

    Isn't this problem at least something like what you're worried about? I focused on beliefs about the future, and you're focusing on scientific theories, but the issue for both of us is how evidential probability can go from less than 1 to 1, simply by our coming to know it. I think Clayton Littlejohn, early in his recent paper on CKAs, also focuses on this issue, and proposes his refinement of K=E as a way to get out of the problem. I do think his refinement is a good one if you want to avoid skepticism and you want to hang on to the spirit of K=E.

    I know Alexander Bird has written a lot on applying Williamsonian epistemology to the epistemology of science. You might check out some of his papers.

  4. Trent,

    You might want to have a look at this BJPS paper by Douven and Williamson, which assails threshold acceptance views. They argue that (modest) rationality constraints entail that, in your terms, n/m must equal 1, which effectively trivializes the threshold view.

    Their construction is clever, although one can challenge the basis for the rationality constraints they use to ground their generalization. (I do, for example, here.)

    Many moons have passed since I worked through Knowledge and its Limits, so whether this reference works for you or against you I could not say. But it would appear to importantly constrain the discussion.

  5. Hi Trent,

    Here is a crude, safety-inspired formal way to think about it (see also my comment on the first notion of graded safety on your next post). To simplify, suppose you are firmly convinced of the truth of hypothesis h all along.
    – At t1, your basis for believing h is b1. Among close worlds, there are a number of cases where you would have b1 as well; among those, some are cases in which h is true (correct b1 cases) and some are cases in which h is false (error b1 cases). Say that the probability of h is the ratio of correct b1 cases among b1 cases.
    – At t2, your evidence improves and your basis for believing h is b2. The probability of h has changed; if things go well, it gets higher. (It is bound to get higher if the new evidence just cuts off some error b1 cases.)
    – At some tx, you get to a basis bx such that all bx cases among close worlds are cases in which h is true. Your belief that h is now safe, and you know h. The probability of h given above has risen to 1.

    Roughly put, increasing evidence is a matter of progressively destroying sets of error possibilities until none are left, not of raising a probability until it crosses some magic threshold of knowledgeability.
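    Here is a rough sketch of this toy model in code (the particular worlds, bases, and numbers are invented purely for illustration; only the ratio idea above matters):

        # Rough sketch of the safety-style toy model above: the "probability" of h
        # at a stage is the ratio of correct cases to all close-world cases that
        # share your current basis. Better evidence prunes error cases; knowledge
        # arrives when no error case with the current basis is left among close worlds.

        # Invented close worlds: is h true there, and which bases are compatible with it?
        close_worlds = [
            {"h": True,  "bases": {"b1", "b2", "b3"}},
            {"h": True,  "bases": {"b1", "b2", "b3"}},
            {"h": True,  "bases": {"b1", "b2"}},
            {"h": False, "bases": {"b1", "b2"}},  # error case, eliminated only at b3
            {"h": False, "bases": {"b1"}},        # error case, eliminated at b2
        ]

        def ratio_for_basis(basis):
            cases = [w for w in close_worlds if basis in w["bases"]]
            correct = [w for w in cases if w["h"]]
            return len(correct) / len(cases)

        for basis in ("b1", "b2", "b3"):
            r = ratio_for_basis(basis)
            status = "safe: knowledge" if r == 1.0 else "not yet safe"
            print(f"{basis}: ratio of correct cases = {r:.2f}  ({status})")

    The printed ratios go 0.60, 0.75, 1.00: no threshold is crossed; the error cases are just eliminated one set at a time.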

  6. Dylan,

    Yes, thanks for the refresher. I read your 2007 paper in 2006; hard to believe it's been five years! So, yes, that is much more similar than I thought, and it sounds like it may even be the same problem. Do you have Clayton's fox example in mind? I suppose that is similar as well. Thanks for the Bird reference, too. I'm glad I'm not alone in thinking what you call the Klein problem is bizarre. Your paper should be read by all. So here's the link, y'all.

    http://st-andrews.academia.edu/DylanDodd/Papers

    Greg,

    Thanks for the reference, I was not aware of that paper, though the issue looks familiar. I go with Henry on threshold views. Your Studia Logica paper looks very interesting!

    Julien,

    This is quite interesting, just what I've come to expect from you. I have a few questions.

    1. Are the bi sets of evidence? It seems that in at least some cases they have to be, right?

    2. You write "At t1, your basis for believing h is b1. Among close worlds, there are a number of cases where you would have b1 as well." It seems that there's an implicit "and so" between those sentences. That seems to result in just the situation I described: evidence is doing the work; it's what determines the arrangement of modal space around this world. What else would it be?

    3. It seems like there will be infinitely many correct b1 cases and also infinitely many error b1 cases, no?

    4. Suppose the situation is this: at t1, b1 consists in having observed 100 swans, 90 of which were white, and at t2, b2 consists in having observed 1000 swans, 900 of which were white. How can the probability that the next swan is white be anything other than .9? But (i) there's no guarantee that we'll get this ratio out of the worlds, and (ii) we don't seem to need any modal stuff to get the right answer here. (See the toy contrast below.)

    5. Won't the modal stuff go wonky in cases where, in going from bn to bn+1, gathering the extra evidence makes me tired and more prone to fall asleep and miss the evidence, so that, but for a bit of epistemically harmless luck, I might have gotten a wrong result, though as it is I get the evidence successfully? Duncan says that the kind of luck involved in obtaining evidence doesn't threaten knowledge. But in this situation we've got something that makes my belief more modally fragile, yet I *do* have the further evidence, so the epistemic probability should go up.

    6. It seems much too strong to say that *all* close worlds with the same basis must be h worlds for me to know h!

    7. Let R be the radius of "close" in this case. Why R? Why not just a bit farther out, where there's one ~h world? I just don't see how modalists address such questions.
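    On question 4, here is a tiny invented contrast (the close-world numbers are completely made up; the point is only that nothing forces a ratio over close worlds to match the observed frequency):

        # Toy contrast for question 4: the frequency-based probability that the
        # next swan is white vs. a ratio over (invented) close worlds that share
        # the same basis b2.

        observed_white, observed_total = 900, 1000
        frequency_answer = observed_white / observed_total       # 0.90

        # Invented: among close worlds where I have the very same basis b2,
        # whether the *next* swan is white is settled by facts b2 does not fix.
        next_swan_white_in_close_b2_worlds = [True, True, True, False, False, False, True]
        modal_ratio = (sum(next_swan_white_in_close_b2_worlds)
                       / len(next_swan_white_in_close_b2_worlds))

        print(f"frequency answer:   {frequency_answer:.2f}")     # 0.90
        print(f"close-worlds ratio: {modal_ratio:.2f}")          # 0.57, not 0.90

    Nothing in the close-worlds picture by itself guarantees the .9 answer; the straight frequency calculation gets it directly.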

  7. Hi Trent,

    1-2: To simplify, let's not just take your basis for h but your basis for anything. That's the totality of your evidence, i.e. on the E=K view, the totality of what you know. Then we consider counterfactual cases (a) which are close, and (b) in which you know the same things as you know in actuality. Among them, the correct cases are ones in which h is true, and the error cases are ones in which it is false. In reply to 2: closeness is not supposed to be determined by the subject's evidence but by the kind of similarity relation that Lewis takes to be at play in counterfactuals, for instance.

    3: Yes; as I said in my comment on your next post, it's not trivial to assume that the ratio is well defined.

    4. Looks right. Note, though, that the toy model was not meant to give you evidential probability, but to show how increasing evidence gives you knowledge. (My own view is that to evaluate whether you know that h we can't only consider error cases involving h itself; with this modification the measure we'd get would violate the probability axioms, e.g. both p and -p can have a low "probability".)

    5. The idea is to get a case in which I get evidence for p but that makes my belief in p less safe? Certainly worth exploring, but we’d have to see the details.

    6. That's infallibilism for you. Whether it's sceptical depends on two things: (a) how close worlds are selected and (b) how bases are individuated. Re (a): you don't have to accept "if p has an objective chance of happening, there is a close p-world", for instance. (See Williamson's "Probability and Danger" and his reply to Hawthorne and Lasonen-Aarnio in Williamson on Knowledge.)

    7. NB: Recall that you said "I'm not raising the common vagueness question here, I'm fine with borderline cases": isn't that the common vagueness question? At any rate, here's how I take it that Williamson addresses it: (a) with the general epistemicist story. Where precisely the line falls is a matter of the overall pattern of use of "know". If our use had been slightly different, the line might have fallen slightly further away or closer. It's not a big deal that it falls exactly where it does, and we don't know where it does. (b) As with the generality problem: if you're looking for a conceptually reductive story that gives you a principled definition of R in physical-psychological terms, you're not going to find it. Rather, you work out closeness backwards from our intuitions about knowledge. You say: S knows in case A; in case B, S makes an error; so B is either not close to A or involves a different basis.

  8. Julien, you do a great job of showing how, in Williamson’s case at least, his views on various matters all come together to make it work. This is a plus on the side of systematicity but also could lead to a “domino effect” if one has doubts, as I do, about each of the parts. Thanks again.

  9. PS – One concern is that the literature moved past (i) the diagnosis of a certain class of proposed Gettier cases as being genuine cases (Fake Barn Country, some from Harman (Suzie?), and, I think, a newspaper one) and (ii) the general diagnosis of Gettierization as a luck problem, on to safety and then to ever more complex accounts of it.

    I think we needed more debate over whether those cases are genuine cases (I don't accept them, and I've met smart people who don't accept them; in fact, there seem to be lots of people who don't accept barn facade cases as genuine) and over whether Gettierization is really accidentality at its core, rather than, say, essential dependence on falsehood or something.

    Perhaps as the safety literature continues to approximate the post-Gettier literature, we can return to those earlier assumptions and examine them more closely.
