1. Here’s a pretty natural picture of learning from experience. A scientist S proposes some hypothesis H (which we will assume is true). She gathers evidence and it’s favorable to H, so she’s encouraged. She gathers some more evidence; it’s also favorable, so she really thinks she’s on to something. More inquiry, more evidence, more (properly based) rational credence. Yet more. Finally, at some point, S “fully believes” H, in that her evidence supports it very strongly (though not maximally) and her rational credence is very strong as well (or whatever you want belief to be). One can conceive of experiments that would falsify H, but those experiments can’t be done with current technology, and there is no particular reason to think they would go badly for H. S has come to know H on the basis of accumulating evidence (I stipulate that this is explanatory and experimental evidence, not merely statistical evidence, though some statistics might be involved).
Here’s what seems like a platitude in light of the naturalness of the picture above.
EP – One can come to learn some true P via accumulating evidence that P.
2. I’m highly confident that Timothy Williamson believes the following two claims.
A. “Some beliefs fall shorter of justification than others. In that respect we can grade beliefs by their probability on the subject’s evidence…”
B. “a belief is fully justified if and only if it constitutes knowledge”
This makes it sound like he accepts the Natural Picture above. But…
How do we answer the question: how probable must something be to be known? I’m not raising the common vagueness question here; I’m fine with borderline cases. The problem generalizes to any true belief, at any degree of justification, that becomes an item of knowledge. Suppose the threshold is n/m. Now consider the case above. What happens the *moment* H’s probability gets to n/m? H becomes an item of knowledge, right? But if E = K, then its probability must be 1. And if H becomes knowledge *in virtue of* its being n/m probable (modulo the other assumptions), and its being evidence is *constituted* by its being knowledge, it seems as if we have a proposition that, at one and the same time, has probability n/m *and* probability 1. What’s going on?
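The tension can be put schematically (my notation, not Williamson’s; Pr(· | E) is probability on the subject’s total evidence E, and n/m is the hypothetical knowledge threshold):

```latex
\Pr(H \mid E) = \tfrac{n}{m} < 1
  \quad \text{(the moment S's evidence reaches the threshold)} \\
H \text{ is known} \;\Rightarrow\; H \in E
  \quad \text{(by } E = K \text{)} \\
H \in E \;\Rightarrow\; \Pr(H \mid E) = 1 \\
\therefore\; \Pr(H \mid E) = \tfrac{n}{m}
  \;\text{ and }\; \Pr(H \mid E) = 1
```

The first line is what got H over the threshold; the last two follow from treating H itself as part of the evidence. Hence the apparent contradiction.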
A related but different problem from the Dual Probability Problem (DPP) is the Quantum Leap Problem (QLP). The puzzle is that the graph of the probability of H over time will look very strange. It starts out low, rises at a smooth slope of 30–45 degrees, and then BAM, it shoots straight up, perfectly vertically, to 100%. That just seems strange to me.
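The strange shape can be made vivid with a toy model. Everything here is hypothetical: the threshold n/m is arbitrarily set at 0.95, and the smooth rise in credence is just a straight line standing in for accumulating evidence.

```python
# Toy model of the Quantum Leap Problem: evidential probability of H
# rises smoothly as evidence accumulates, then -- on the E = K view --
# jumps discontinuously to 1 the moment H crosses the knowledge threshold.

THRESHOLD = 0.95  # hypothetical n/m at which H becomes known

def evidential_probability(t, total_steps=20):
    """Smoothly rising credence in H at time step t (toy numbers)."""
    return 0.5 + 0.5 * t / total_steps

trajectory = []
for t in range(21):
    p = evidential_probability(t)
    # Once H is known, H is part of the evidence (E = K), so Pr(H | E) = 1.
    trajectory.append(1.0 if p >= THRESHOLD else p)

print(trajectory)
```

The printed trajectory climbs in even increments and then snaps to 1.0 with no intermediate values, which is just the vertical segment of the graph described above.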
In what way can Williamson and other E=Kers (pronounced “eekers”) affirm EP? What’s the story or surrogate?