I am using Charity Anderson’s forthcoming Phil Studies paper on fallibilism and epistemic modality in my fallibilism seminar this semester, and we are discussing some of her work in progress in which she suggests that issues pertaining to the fallibility of faculties and methods are more basic than my evidential-probability approach to epistemic possibility and fallibilism, or at least that the evidential-probability approach captures only one dimension of fallibility. The paragraph below explains why I think the evidential-probability approach covers *all possible* kinds of fallibility, but in a way that need not deny an important role for other notions of fallibility.

There may be some way in which fallibility of *faculties* is most basic (I’m not a fan of “methods” talk). But the relevance/impact of their fallibility on epistemic agents is measured in epistemic probabilities. Sensory faculties are just really close-to-me instruments. Some radon tester’s reliability rating is just a cold, third-person fact if I don’t have a basement. It doesn’t impact my epistemic agency; it doesn’t show up on the screen. When I *use* that instrument (or a telescope or microscope or mass spectrometer or eyeball or ear or nose or whatever), its reliability becomes relevant to my epistemic perspective and is represented as an epistemic probability. So, faculties are “more basic” explanantia (plural of “explanans”) in the sense that they are a *source* of the epistemic probabilities. But my epistemic perspective is *constituted* by (phenomenal) evidence and best represented by epistemic probabilities.

I think I agree with a lot of what you say here. But there seems to me to be a problem with your thought that “the evidential probability approach covers *all possible* kinds of fallibility”. An old problem for defining fallibilism is that we want to allow for our fallibility about logic and math. Can you capture such fallibility via epistemic probabilities? Let P be some logical or mathematical truth that I believe or know. It doesn’t seem we can capture the fallibility of my justification for P with epistemic probabilities. Since it’s a logical/mathematical truth, it must have an epistemic probability of 1 (I’m assuming epistemic probabilities obey the axioms of the probability calculus).
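Spelled out, the reasoning behind that last step is just the standard possible-worlds derivation (nothing here beyond textbook Kolmogorov assumptions):

```latex
% Standard modeling assumption: a proposition is the set of possibilities
% (worlds) at which it is true, drawn from a space \Omega, and the
% probability function P obeys the Kolmogorov axioms.

% A logical/mathematical truth T holds at every possibility, so as a set
% T = \Omega, and normalization forces
P(T) = P(\Omega) = 1.

% Likewise, logically equivalent p and q hold at exactly the same
% possibilities, so they are the same set and must receive the same value:
p \equiv q \;\Longrightarrow\; \{w : w \models p\} = \{w : w \models q\}
           \;\Longrightarrow\; P(p) = P(q).
```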

Dylan, that’s surprising, but good to hear! In my 2011 Routledge Companion to Epistemology entry on Fallibilism, I have a section applying my epistemic-probability formulation to necessary truths. It involves two main moves, one not-so-controversial, the other not so not-so-controversial.

1. Epistemic probabilities are not mathematical probabilities. (I may not even need this thesis, since mathematical probability is defined over sets, and probability 1 attaches to Ω, which measures the whole space. Though that is somewhat analogous to a necessary truth, it is not obvious that it is a legit extension.)

2. The evidence for mathematical statements comes from intuitions and arguments. Intuitions are treated as a species of perception, and both perceptions and arguments fail to yield certainty. Evidential probability measures the weight of evidence, and there is no mathematical proposition for which the weight of the evidence yields certainty.*

The situation is much the same as with physical probabilities, probabilities of past events, or probabilities of determined future events. They all have some kind of probability that is “really” either 1 or 0, but it’s not epistemic probability.

____________________

*And maybe you could use some kind of surrogate proposition like “I am right that Goldbach’s conjecture is true”, which, arguably, is contingently true if true.

I’d have to see how you work everything out. I don’t know how to think about probabilities where necessary truths don’t have probability 1, or where logical equivalents don’t have the same probability.

Dylan, it’s not that uncommon to reject the necessary-truth thing. For a very long time statisticians have used workarounds like the latter one I mentioned.

I don’t need to deny the logical equivalence thing, though one might find oneself in the following kind of paradoxical situation. All three could seem true to a person:

1. p is clearly true

2. q is less clearly true

3. p is logically equiv to q

One hopes that reflection upon 3 would naturally lead a properly functioning mind to bring 1 and 2 in line. But one will not have perfect evidence for 3 for arbitrary p and q (though some cases will be really, really close; those will not be the problem cases).
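To make the triad vivid with some purely illustrative numbers (the credences below are made up for the example):

```latex
% Illustrative credences for the paradoxical triad:
c(p) = 0.99, \qquad c(q) = 0.70, \qquad
c(p \leftrightarrow q) \text{ high but } < 1.

% If the equivalence really holds, coherence demands c(p) = c(q),
% so something must give: lower c(p), raise c(q), or reduce confidence
% in the equivalence claim itself. Which revision is rational depends
% on where the evidence is strongest.
```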

Consider doing some advanced logic homework where you are supposed to say whether wffs are theorems or not. You prove, or so you think, that A is V and B is ~V. But then in the next section it asks whether A and B are equiv, and you prove, you think, that they are. Then you realize this can’t all be right. It would be absurd to just “decree” a set of coherent probability judgments. It’s more complicated than that. Even while looking stuff back over it will be very dynamic: the more convinced you are of the equiv proof, the more “pressure” you’ll feel to bring A and B together; and the more convinced you are of the V and ~V proofs, the less confident you will be (and the less you will have a *right* to be) in the equiv proof. I think life is full of things relevantly like this.

I agree with everything you say. Of course a properly functioning but non-logically-omniscient mind will sometimes be uncertain of logical truths, and have different credences in logically equivalent propositions. But I’m used to thinking that this can’t be captured with the probability calculus. I’ve taken this to show that the pc is limited in its epistemological usefulness.

But maybe the problem of logical omniscience can be overcome. I don’t know the literature on attempts to overcome it.