By coincidence I was looking at a 2002 Analysis paper by Adam Morton in which he offers an epistemic version of Newcomb’s paradox. His idea is to imagine a scenario in which an agent stands better odds of acquiring true beliefs by being epistemically reckless than by being epistemically responsible. However, a problem with the example is that a correct assessment of epistemic risk depends on what precisely the agent knows about the experiment that Morton asks us to imagine; it appears that the example is compelling precisely to the extent to which the answer to this question is obscured.
The example is this:
An infallible predictor predicts your inferences in the following situation. You see a film of two coins being tossed. One of them -coin A- comes down heads ten times in a row, the other -coin B- is roughly even H/T. You have to come to a conclusion about the eleventh toss, which has occurred and whose result is hidden from you. The predictor manipulates your chances of making a true prediction as follows. He has a collection of coins; some are biased to varying degrees to H or T and some are fair. All are tossed ten times and the sequences filmed. If the predictor’s prediction is that you will believe that the eleventh toss of coin A is heads and that the eleventh toss of coin B is heads, then the sequence that you see is chosen so that coin A has a heads-always bias and coin B has a bias to heads although in the chosen sequence it came down roughly even H/T. If the predictor’s prediction is that you will believe just the eleventh toss of A is heads (and have no belief about that of B) then the sequence that you see is chosen so that both coin A and coin B are fair. As a result, if you are (infallibly!) predicted to make the more extravagant prediction then at least part of your prediction is almost certainly true (and the other part has a more than 50/50 chance). But if you are predicted to make the more modest prediction then that prediction has just a 50/50 chance of truth. (So the probability of a true belief in the first case is more than 0.5, and in the second case it is 0.5.) That can be taken as an argument for the 2-coin prediction. But on the other hand the coins have already been filmed, and whichever ones they are your belief will be reasonably safe if you predict just one coin and verging on risky if you predict both. (Adam Morton, 2002. “If you’re so smart why are you ignorant? Epistemic causal paradoxes”, Analysis, 62(2), p. 115. Emphasis mine).
A key issue in cases like this is to be clear about what the agent knows (or is reasonably assumed to know) about what Glenn Shafer calls the protocol and what Gert de Cooman and others call the uncertainty mechanism that is operating in the experiment. Specifying this information is crucial for getting reasonable results out of an uncertainty framework. So, one thing we should do when thinking about Morton’s example is press him to be clear about what the agent is presumed to know about the infallible predictor, and how he goes about making his predictions. That is, it is reasonable to poke at this example to pin down what we are supposed to assume that the agent knows about the uncertainty mechanism operating in this experiment.
My hunch is that Morton’s example doesn’t remain very compelling once we start poking. Consider the italicized sentence at the end of the quote. The evaluation of the 2-coin strategy as risky or not depends crucially on what the agent knows about the uncertainty mechanism. Under normal circumstances it is risky to adopt a 2-coin strategy, since infallible predictors don’t figure in our default background assumptions about coin-tossing experimental setups. (Or not in mine, at least.) But under the conditions described, it isn’t risky to adopt the 2-coin strategy. So, does our agent know that the uncertainty mechanism is sensitive to his reasoning about the case in the manner described for us? If so, this is knowledge about the experiment at hand and the otherwise risky strategy isn’t so risky after all. If he doesn’t know this peculiar detail about the experimental setup, then he’s reasonable (even if unfortunate) to assess the 2-coin strategy as risky, which would suggest that he commit only to a prediction about coin A.
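To see how the numbers work out for an agent who does know the mechanism, here is a minimal simulation of the two predicted cases. Morton leaves the degree of coin B’s heads bias unspecified, so the 0.75 figure below is an assumption for illustration; the qualitative comparison (well above 0.5 versus exactly 0.5) doesn’t depend on it.

```python
import random

random.seed(0)
N = 100_000
B_BIAS = 0.75  # hypothetical: Morton only says coin B is "biased to heads"

# Case 1: the predictor foresees the 2-coin belief, so coin A is
# heads-always and coin B is heads-biased. Both beliefs are "heads".
true_a = N  # coin A always lands heads, so that belief is always true
true_b = sum(random.random() < B_BIAS for _ in range(N))
p_two_coin = (true_a + true_b) / (2 * N)  # fraction of true beliefs

# Case 2: the predictor foresees the modest 1-coin belief, so both
# coins are fair. The single belief is "coin A lands heads".
p_one_coin = sum(random.random() < 0.5 for _ in range(N)) / N

print(f"2-coin strategy: {p_two_coin:.3f} of beliefs true")
print(f"1-coin strategy: {p_one_coin:.3f} of beliefs true")
```

The point of the sketch is just that the advantage of the extravagant strategy is entirely an artifact of the predictor’s selection of which film you see; an agent who knows this is not taking a risk at all, while an agent who doesn’t has no access to these conditional probabilities.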
It is true that in the unfortunate case his strategy is sub-optimal w.r.t. yielding true beliefs. But, so what? This is precisely the same condition we are all in w.r.t. Papa-God, overlooking his mischievous film-student son and his toying with mankind. Papa-God knows the deterministic outcome of all the events described; there is no uncertainty under his protocol.