Adam Morton on Epistemic Causal Paradoxes

By coincidence I was looking at a 2002 Analysis paper by Adam Morton in which he offers an epistemic version of Newcomb’s paradox. His idea is to imagine a scenario in which an agent stands better odds of acquiring true beliefs by being epistemically reckless than by being epistemically responsible. However, a problem with the example is that a correct assessment of epistemic risk depends on precisely what the agent knows about the experiment Morton asks us to imagine; the example appears compelling only to the extent that the answer to this question is obscured.

The example is this:

An infallible predictor predicts your inferences in the following situation. You see a film of two coins being tossed. One of them -coin A- comes down heads ten times in a row, the other -coin B- is roughly even H/T. You have to come to a conclusion about the eleventh toss, which has occurred and whose result is hidden from you. The predictor manipulates your chances of making a true prediction as follows. He has a collection of coins; some are biased to varying degrees to H or T and some are fair. All are tossed ten times and the sequences filmed. If the predictor’s prediction is that you will believe that the eleventh toss of coin A is heads and that the eleventh toss of coin B is heads, then the sequence that you see is chosen so that coin A has a heads-always bias and coin B has a bias to heads although in the chosen sequence it came down roughly even H/T. If the predictor’s prediction is that you will believe just the eleventh toss of A is heads (and have no belief about that of B) then the sequence that you see is chosen so that both coin A and coin B are fair. As a result, if you are (infallibly!) predicted to make the more extravagant prediction then at least part of your prediction is almost certainly true (and the other part has a more than 50/50 chance). But if you are predicted to make the more modest prediction then that prediction has just a 50/50 chance of truth. (So the probability of a true belief in the first case is more than 0.5, and in the second case it is 0.5.) That can be taken as an argument for the 2-coin prediction. But on the other hand the coins have already been filmed, and whichever ones they are your belief will be reasonably safe if you predict just one coin and verging on risky if you predict both. (Adam Morton, 2002. “If you’re so smart why are you ignorant? Epistemic causal paradoxes”, Analysis, 62(2), p. 115. Emphasis mine).
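To make the arithmetic at the end of the quoted passage concrete, here is a minimal simulation sketch (mine, not Morton’s; written in Python). It assumes a heads-always bias of 1.0 for coin A and an arbitrary heads bias of 0.75 for coin B in the two-coin branch, since Morton leaves coin B’s bias unspecified beyond being more than 50/50.

```python
import random

# A hedged sketch of the probabilities Morton cites, under assumed bias
# values: coin A's "heads-always" bias is taken as 1.0 and coin B's bias to
# heads as 0.75 (an arbitrary stand-in; Morton only says "more than 50/50").
TRIALS = 100_000

def eleventh_toss(p_heads):
    """Simulate the hidden eleventh toss of a coin with the given heads bias."""
    return "H" if random.random() < p_heads else "T"

# Branch 1: the predictor foresees the 2-coin ("extravagant") prediction,
# so coin A is heads-always and coin B is heads-biased.
a_true = sum(eleventh_toss(1.0) == "H" for _ in range(TRIALS)) / TRIALS
b_true = sum(eleventh_toss(0.75) == "H" for _ in range(TRIALS)) / TRIALS

# Branch 2: the predictor foresees the A-only ("modest") prediction,
# so both coins are fair.
a_only_true = sum(eleventh_toss(0.5) == "H" for _ in range(TRIALS)) / TRIALS

print(f"2-coin branch:  P(belief about A true) ~ {a_true:.2f}, "
      f"P(belief about B true) ~ {b_true:.2f}")
print(f"A-only branch:  P(belief about A true) ~ {a_only_true:.2f}")
```

On these assumed numbers the 2-coin branch yields a belief about A that is almost certainly true and a belief about B that is true about three times in four, while the A-only branch yields just a 50/50 chance, matching the figures Morton cites.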

A key issue in cases like this is to be clear about what the agent knows (or is reasonably assumed to know) about what Glenn Shafer calls the protocol and what Gert de Cooman and others call the uncertainty mechanism operating in the experiment. Specifying this information is crucial for getting reasonable results out of an uncertainty framework. So, one thing we should do when thinking about Morton’s example is press him to be clear about what the agent is presumed to know about the infallible predictor and how He goes about making His predictions. That is, it is reasonable to poke at this example to pin down what we are supposed to assume the agent knows about the uncertainty mechanism operating in this experiment.

My hunch is that Morton’s example doesn’t remain very compelling once we start poking. Consider the italicized sentence at the end of the quote. The evaluation of the 2-coin strategy as risky or not depends crucially on what the agent knows about the uncertainty mechanism. Under normal circumstances it is risky to adopt the 2-coin strategy, since infallible predictors don’t figure in our default background assumptions about coin-tossing experimental setups. (Or not in mine, at least.) But under the conditions described, adopting the 2-coin strategy isn’t risky. So, does our agent know that the uncertainty mechanism is sensitive to his reasoning about the case in the manner described for us? If so, this is knowledge about the experiment at hand, and the otherwise risky strategy isn’t so risky after all. If he doesn’t know this peculiar detail about the experimental setup, then he is reasonable (even if unfortunate) to assess the 2-coin strategy as risky, which would suggest that he commit to a belief about coin A only.

It is true that in the unfortunate case his strategy is sub-optimal w.r.t. yielding true beliefs. But, so what? This is precisely the condition we are all in w.r.t. Papa-God, overlooking his mischievous film-student son and his toying with mankind. Papa-God knows the deterministic outcome of all the events described; there is no uncertainty under his protocol.


Comments


  1. Greg, I haven’t read the article, but the case as described puzzles me. If the predictor is infallible, then the riskiest strategy is always the A-only selection. That is, the objective chance here, given the infallibility in question, favors predicting that both coins come up heads.

    So, when he talks of risk, he must mean something more subjective, right? I think that’s just your point? Namely, that once risk is thought of as involving the subject’s perspective, you have to be very careful about what the subject knows.

  2. Jon, yes, that’s the idea.

    And we might then ask how this god works, precisely. Does He arrange the films to favor the strategy the agent considers riskiest, no matter what conclusion the agent reaches? If so, and the agent knows this, then whatever the agent reasons out to be the risky strategy becomes, in effect, not risky; this suggests that the agent cannot meaningfully judge a strategy as risky or not, since that very judgment has a material effect on the outcome of the experiment.

    There are details to chase down, and maybe there is a position to chisel out here that makes the example work, but (1) I’m skeptical that it will work in the end and (2) I think we should be on guard against being asked to think about an example like this one with this type of information obscured or missing.

    The reason people’s intuitions (might) get sent off in different directions by this example isn’t a point about epistemology analogous to the divergence between causal and evidential decision strategies that Newcomb’s problem exercises, but rather an underspecification of the example and an equivocation over the term ‘risk’.

  3. I wonder if the example would work out better if the predictor weren’t described as being infallible. Why not simply describe him as having always been accurate in the past? Then you’ll get some of the cross-pulling characteristic of the clash between causal and evidential decision strategies, won’t you?

  4. We would still have to settle the question of what the agent knows about His, the predictor’s, track record. If it is like the day following the night for all nights past, the assessment of risk won’t necessarily be self-refuting, but there would be very strong grounds to expect it to switch given the agent’s assessment. As we weaken His track record by introducing some error into His history of predictions, and assuming again that the agent knows about this, wouldn’t that error rate simply be a bias on the experimental outcome for the agent to take into account when assessing risk? That is, the agent would know that his assessment has some probability of biasing the outcome to such-and-such a degree (a rough sketch of that arithmetic follows below). I still think the cross-pulling goes away when we clarify what position the agent is in, epistemically.
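A rough way to cash out that last point, assuming (my assumption, not the thread’s) that the predictor is right with probability p and that a missed prediction lands the agent in the fair-coin branch:

```python
# Hedged sketch of the track-record point in comment 4: if the predictor is
# right only with probability p (Morton's predictor is infallible; fallibility
# is my assumption), then an agent who makes the 2-coin prediction faces a
# mixture of the two film-selection branches rather than a guaranteed one.
def prob_a_belief_true(p, heads_always=1.0, fair=0.5):
    """P(the belief "coin A's eleventh toss is heads" is true), given that the
    agent predicts both coins: with probability p the predictor foresaw this
    (coin A heads-always); otherwise, modelled here by assumption as the
    fair-coin branch, coin A is fair."""
    return p * heads_always + (1 - p) * fair

for p in (1.0, 0.9, 0.75, 0.5):
    print(f"predictor accuracy {p:.2f}: P(A-belief true) = {prob_a_belief_true(p):.3f}")
```

On this reading the predictor’s error rate is just another parameter of the uncertainty mechanism for the agent to fold into his assessment of risk.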
