Been reading this morning about epistemic risk and the value of new information in Don Fallis’s work. See, e.g., “Attitudes Toward Epistemic Risk and the Value of Experiments,” available online.
I’m just beginning to get into this literature, so what I write below is a bleg for those who know the literature better. But here’s a quick excerpt of what caught my eye:
Performing an experiment is essentially like buying a lottery ticket that has epistemic prizes. Prior to performing the experiment, there is no guarantee that the epistemic benefits will outweigh the epistemic costs. Even so, we can still try to show that a scientist should always prefer this epistemic lottery to simply maintaining her current degree of confidence in the truth. As Maher puts it, we want “some argument to show that the chance of getting closer to the truth by gathering evidence outweighs the risk of getting further away from the truth.” [pp. 217-218]
The difficulty Fallis discusses is that of securing this result in either a categorical-belief model or a degree-of-belief model; he concludes that both fail.
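To fix ideas, here is a toy rendering of the "epistemic lottery" in the degree-of-belief model. The numbers and framing are my own illustration, not Fallis's or Maher's: a Bayesian agent measures distance from truth with the Brier (squared-error) score, and a single experiment can move her credence either toward or away from the truth, yet lowers her expected inaccuracy by her own prior lights.

```python
# Toy illustration (my own numbers, not Fallis's or Maher's model):
# the "epistemic lottery" for a Bayesian agent who scores distance
# from the truth with the Brier (squared-error) measure.

def brier(credence, truth):
    """Inaccuracy: squared distance of credence in H from H's truth value."""
    return (credence - truth) ** 2

prior = 0.7                        # credence that hypothesis H is true
lik_e_H, lik_e_notH = 0.8, 0.3     # P(evidence | H), P(evidence | not-H)

p_e = prior * lik_e_H + (1 - prior) * lik_e_notH
post_e = prior * lik_e_H / p_e                  # credence after observing e
post_note = prior * (1 - lik_e_H) / (1 - p_e)   # credence after observing not-e

def expected_inaccuracy(credence):
    """Expected Brier inaccuracy by the agent's own lights, truth unknown."""
    return credence * brier(credence, 1) + (1 - credence) * brier(credence, 0)

before = expected_inaccuracy(prior)
after = p_e * expected_inaccuracy(post_e) + (1 - p_e) * expected_inaccuracy(post_note)

print(f"expected inaccuracy before the experiment: {before:.4f}")
print(f"expected inaccuracy after the experiment:  {after:.4f}")

# The downside of the lottery is real: if H is in fact true, not-e still
# occurs with probability 0.2 and drags credence from 0.7 down to 0.4 --
# further from the truth.
print(f"misleading outcome moves credence {prior} -> {post_note:.1f}")
```

So with these numbers the experiment is a genuine gamble (one outcome moves the agent further from the truth), yet its expected inaccuracy is lower than the prior's. This is the kind of result Maher wants in general; Fallis's point is that securing it across models and across attitudes toward epistemic risk is harder than this single happy case suggests.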
I wonder, though, why one should have endorsed the quoted claim above in the first place.
The idea in the quote is that we want an explanation of why it is always rational from a purely intellectual point of view to seek additional evidence. I can think of one way to motivate such an idea: if the epistemic goal is completely unrestricted, to the effect that a purely intellectual being is motivated, for all p, to find out whether p, then gathering more evidence will also be an epistemic good, since it will at least get you more information about what the presently unknown evidence is. So you’ll have the chance to extend your range of true beliefs.
But that seems not to be the spirit of the quote. The focus is supposed to be on a particular issue: maybe a single proposition and its negation, or one given hypothesis and a set of exhaustive and exclusive alternatives to it. And the idea is that, when focused in this way, getting more evidence about the issue in question is always supposed to be valuable from a purely intellectual point of view.
Of course, from an all-things-considered point of view, further inquiry has costs, and so closure of inquiry, based on all factors, can clearly be rational. But, it seems to me, even when we abstract to purely intellectual goals, we should expect to get the same result. Simple examples seem to support this. Consider G.E. Moore’s belief (or degree of belief) that he has hands, and imagine offering him a consultation with an infallible oracle on this very issue. Moore won’t see any reason for taking you up on the offer. He has experienced, if I’m right, epistemically legitimate closure of inquiry on the question.
Call this last claim “the Datum”.
Now, one may try to explain away the Datum so as to retain the spirit of the quote above in various ways. Perhaps Moore’s degree of belief in the hands proposition is 1, and thus there is no chance in his mind of getting closer to the truth but only a risk of getting further away. We needn’t imagine Moore to be this dogmatic, however, to embrace the Datum.
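For what it’s worth, the dogmatic reading can be made vivid in the degree-of-belief model (again my own toy framing, not Fallis’s): if Moore’s credence in the hands proposition is 1, Bayesian conditioning cannot move it, whatever the likelihoods, so by his own lights the oracle offers no chance of getting closer to the truth.

```python
# A dogmatic Moore (my own toy framing): credence 1 in "I have hands".
# Under Bayesian conditioning, credence 1 is immovable, so by his own
# lights consulting the oracle has no epistemic upside.

def posterior(prior, lik_H, lik_notH):
    """Bayes' rule for credence in H after the evidence comes in."""
    p_e = prior * lik_H + (1 - prior) * lik_notH
    return prior * lik_H / p_e

prior = 1.0
for lik_H, lik_notH in [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8)]:
    print(posterior(prior, lik_H, lik_notH))   # 1.0 in every case
```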
Perhaps Moore thinks that even infallible testimony won’t put him in a better epistemic position than his current one. That’s possible, but again not necessary: Moore’s lack of interest in consultation with the oracle needn’t arise out of such an attitude.
Perhaps we would want to insist that Moore must have other pressing intellectual concerns to pass on consulting the oracle. So, perhaps there is no way to construct the case so that the costs of consultation are exactly zero (from a purely intellectual perspective), and this fact sets up a competition between investigating the hands proposition further and investigating other claims.
I doubt this will help. Imagine that Moore is non-dogmatic and equally convinced of everything: to every proposition p toward which he takes any positive attitude (i.e., a degree of belief in the (.5, 1) interval), he assigns the very same degree of belief. One of these propositions is the hands proposition. He has no interest in looking into that matter further, but for other propositions on the list, he has such an interest. Why would this be irrational on the basis of purely intellectual concerns? It would, to be sure, be an intellectual Buridan’s Ass type situation, but that’s no argument for irrationality here. It would, to be sure, be inexplicable in purely intellectual terms, but again, why think such inexplicability is an indicator of irrationality (rather than, say, another reason to reject the Principle of Sufficient Reason)?
Or imagine this. Take an everlasting version of Moore in the above position, so that he has all the time in the world to investigate things further, and subject to no needs, interests, purposes, or desires other than those involving purely intellectual concerns, as reflected in the (synchronic) epistemic goal of getting to the truth and avoiding error. Then ask him his preferences about which of his positive attitudes to investigate further, and in which order.
Though we might find him with no preferences about which propositions to investigate first, we might also find the opposite. He might have preferences about which to investigate first. And he might distinguish among those propositions whose further investigation leaves him indifferent, those he would like to investigate further, and those he is positively disposed against investigating further. Is there an argument to give here that such a possibility involves irrationality on his part? Granted, we won’t be able to explain these preferences, but preferences can be rational, one would think, without being explicable; and if they are rational, then the closure of inquiry on certain issues that they express is epistemically legitimate.
Perhaps, though, the idea is that there is something about occupying the role of a scientist that makes open-ended inquiry of the kind imagined the only rational option; that, if you are a scientist, you have an intellectual role-obligation that makes getting new empirical data worthwhile from a purely intellectual point of view. I have some attraction to that view, but not to the view that makes closure of inquiry always and everywhere illegitimate from a purely intellectual point of view (given the kinds of constraints in question). (I’d also think it’s a derivative obligation from the more fundamental role obligation that teachers have, regardless of whether the subject matter is science or something else (cognitively respectable!).)