An Interesting Example

Here’s a lighthearted example for a holiday weekend. Some background: my son often helps his girlfriend babysit for a brother/sister pair who are 3 and 5. He told my wife a story about one of them being unable to make the ‘m’ sound, so instead of saying, “I want to see a movie,” what comes out is “I want to see a wovie.”

Carol (my wife) asks me, “which of S and C would you expect the story to be true of?” I answered, “the older one, because you wouldn’t be asking me if it were the younger.” Pissed her off because she thinks the obvious answer is the younger one. So I told her I was taking into account the additional evidence that she was asking me the question.

In reflecting on the example, I wonder if it would be irrational (or unjustified, if you prefer), after being asked, to still think that the answer is the younger one. To do so, you’d have to fail to take account of the fact that you are being asked the question. Moreover, it’s not as if you didn’t notice that you were being asked, and you also know that it is relevant to the answer in question. So it’s evidence regarding the correct answer, and it’s evidence you possess. But it’s hard to see how the view that the correct answer is “the younger” is irrational. If asked about the implications of being asked the question, the answer would likely be: “Oh, I didn’t think of that.” And that’s a legitimate excuse.
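The update I had in mind can be put in Bayesian terms. Here is a minimal sketch, with entirely hypothetical numbers chosen only for illustration: a prior favoring the younger child (since younger children more often have articulation trouble), revised by the evidence that the question was posed at all (questions tend to get asked when the answer is surprising).

```python
# A hypothetical Bayesian sketch of the reasoning above. All probabilities
# are invented for illustration; nothing in the story fixes their values.

def posterior_older(prior_older, p_ask_if_older, p_ask_if_younger):
    """P(story is about the older child | the question was asked),
    computed by Bayes' rule over the two hypotheses."""
    num = prior_older * p_ask_if_older
    den = num + (1 - prior_older) * p_ask_if_younger
    return num / den

# Before the question: the younger child is the 'obvious' answer.
prior_older = 0.2

# Evidence of being asked: the question is (assumed to be) much more
# likely to be posed when the answer would surprise, i.e., the older child.
p = posterior_older(prior_older, p_ask_if_older=0.9, p_ask_if_younger=0.1)
print(round(p, 3))  # prints 0.692
```

On these (made-up) numbers the posterior flips to favor the older child, which is just my answer in formal dress; someone who never thinks to condition on the asking simply keeps the prior.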

If both answers can be rational, evidentialists need to talk both about the nature of your evidence and what you make of it, in order to allow the rationality of both answers. The phrase “the totality of your evidence” must include both the evidence and what one makes of it. Counting the latter as part of the totality of one’s evidence strikes me as a bit of a stretch, but I won’t press that point here.

The alternative is to say that you are irrational if you answer “the younger”. That seems wrong, and the fact that my wife is still pissed off at me for ruining the story is evidence that it’s not irrational (I believe I heard the remark “why can’t you just be normal sometimes”)…


An Interesting Example — 8 Comments

  1. If there were something substantial riding on your guess (but such that your wife was neither involved in nor aware of the consequences, so that you didn’t think she had some ulterior motive to confound you with the question), then it seems clearly irrational not to take into account that the question is being asked, and asked in the way it is.

    As it is, with nothing substantive inclining you to take care in which answer you give, a) it is understandable that we don’t fault someone who guesses the younger, since they might not have thought it through, and b) the practical consideration of being cooperative with the set-up (which carries an expectation that you will take the ‘obvious’ guess) seems to explain the resistance to labeling someone irrational, even if, strictly speaking, they ought to have made the other guess.

  2. One thing I just noticed is the tense of the question, “which of S and C would you expect the story to be true of?”

    In that case, rather than asking for a prediction based on the evidence you have after she has asked the question, she could be seen as asking you to report what your prediction would have been prior to being asked. If so, the answer “the younger” would be perfectly fine, even if you now, after hearing the question, rationally believe that your wife is about to reveal that it is the older child.

  3. Lewis, on the second point, I think that’s right, though I’m not sure I reported the question accurately. She might have asked, “Whom do you think the story is about?”

    McGrath and Fantl would really like your answer in the first paragraph of your first comment! At least, to this extent: the standards for rationality change given the practical significance of the issue.

    Here’s why I don’t think that will work here. It’s one thing for practical significance to move you from just over a threshold to just under. But here, the very same body of evidence appears to make rational believing incompatibles, and that is much stronger than any contextualism or subject-sensitive invariantism on offer to this point.

    I take it, in the second paragraph of the first comment, that you’re inclined to explain the rationality in terms other than epistemic rationality. The idea, I take it, is that social cooperation outweighs the significance of exactly following your evidence. That might be correct, but I’m still hesitant. To get that result, my worry is that the bar for rationality is too high, so high that the excuse “I didn’t think of that” won’t ever be a good one; that to form an epistemically rational belief, a person needs to have thought of everything relevant that they know, and also noted the implications of what they think of at the time of belief formation. But this looks to me like a case where it is perfectly acceptable from an epistemic point of view not to have taken into account the implications of the fact that one has just been asked the question.

    None of this is meant to have (much) probative force against views that deny this point, though. I’m just a fan of more optionality in the theory of rationality than some others are, and I like cases that require at least a pause before dismissing them!

  4. I also like this case a lot. And I think that the fundamental issue, whether or not one weighed the fact that the question was being asked as evidence, is really interesting.

    As for the first point: I wasn’t suggesting that rationality is context-sensitive, or that those practical concerns would change what is or is not rational. I was hoping to suggest that the absence of substantive consequences for guessing wrong reduces the motivation of people evaluating the situation to label failures in rationality as such.

    The other suggestion, about cooperation, is that one’s response to the question isn’t always taken to be a sincere assertion of belief. For instance, if one responds with, “I have no idea,” one might be encouraged to just go ahead and guess anyway; even someone suspending judgment on the matter will still be encouraged to guess. If one recognizes a set-up in which the question “Which do you think it is?” carries the unspoken subtext “you’d expect it to be X, right?”, one might go along with the mutually recognized ‘script’ for the conversation, to be cooperative with the way the story is being told, and offer a non-assertive guess. That would be considered an appropriate response to the question, even if it is not what one believes to be the correct answer.

    If the response isn’t an assertion, then the question of rationality is about what action is rational to perform, saying “The younger” or saying “the older”, rather than about whether the guess is a rational one to believe correct. And then, the social-cooperative considerations can play a role without impacting our analysis of epistemic rationality.

    I think, though, that there are probably many cases similar to this one that could evade, or at least purport to evade, this type of analysis, and present the problem of the same evidence seemingly justifying either of two contradictory beliefs. My intuitions are less friendly to optionality, so my reaction to such cases would very likely be to search for alternative explanations rather than to permit the same evidence to justify either of two contradictory beliefs.

  5. I’m inclined to think that social-cooperative considerations raised by Lewis are a key to resolving the apparent contradiction.

    Jon asks: “I wonder if it would be irrational (or unjustified, if you prefer), after being asked, to still think that the answer is the younger one. To do so, you’d have to fail to take account of the fact that you are being asked the question.”

    I think this line is a bit strong, however. Consider cases of deception. Imagine that it is the younger child who has trouble with his m’s and that Jon’s wife, knowing he will reason precisely as he has, poses the question in the same manner. “Ah Ha!”, we can imagine her saying when Jon answers that it is the older child. “Got you!”

    Now, we might be tempted to wave off this case for the sake of simplicity: Assume agents are sincere. But it isn’t so clear how to pin down sincerity in this example, since the example raises the issue of common knowledge and its role in extracting information from questions in question-answer exchanges. For instance, it appears that Jon assumed that the question was thought of and posed to him because its answer was surprising. He bet on this being true with his reply. Yet this is distinct from the case where the answer is as expected and the question was posed to trick him.

    Deception is only an illustrative example; we might generalize: what an agent (Jon, let’s say) is doing in such cases is (1) interpreting the question, semantically; (2) drawing inferences about this question w.r.t. both what Jon knows and what is common knowledge between Jon and his wife (e.g., what Jon knows that his wife knows; what Jon knows that his wife knows that Jon knows), including the event of the question being posed to him by her; and (3) judging the risk of error he faces in carrying off either his interpretation or reasoning. Deception, then, is one source of error. A poor handle on the language or the facts at hand are other sources. And so on.

    A proposal: the rationality of contradictory assessments can be explained by finding a difference in the values of one or more of these parameters in the cases under comparison.

  6. Greg, excellent analysis! I think you are exactly right about how to explain away the contradictions, but there’s one part that bothers me here. Just as we should be hesitant to describe perceptual beliefs as inferential, I think we shouldn’t say that I drew inferences here from background information. And I know I didn’t make any judgements about the risk of error, though it is true to describe what happened in terms of assumptions I made.

    This talk of assumptions raises an issue that I’m going to post about…

  7. Pingback: Certain Doubts » Assumptions

  8. Hi Jon,

    I’m not sure how heavily I’m leaning on agents being aware of drawing inferences and probing for errors, but I take your point that I am not in a position to describe the first-person account of what other people are aware of doing in such cases.

    There is a methodological idea that is rattling in the background and it might be helpful to lay out. It won’t answer your concerns straight on, but it might point to a back door.

    There is an interesting literature on question-answering systems and multi-agent systems in the computational linguistics and applied logic literatures that bears on some of these issues, and I think each field does so in philosophically rewarding ways. These fields are large, but a good place to start on question answering is the Information Systems Processing group at Amsterdam (David Ahn is my go-to person), and the modal and temporal logic groups at Liverpool that include Michael Fisher and Wiebe van der Hoek are doing very good work that should be studied. For instance, Wiebe is doing very nice work with modal-temporal logics for reasoning with common knowledge.

    Some of this work is what I had in mind when I offered breaking up the analysis into those three steps.

    The reason that I mention applied logic and computational linguistics is that I think they are, or will be, a basis for another take on how to fit pragmatics into epistemology in general, or into rational belief fixation and change in particular. Linguistic data from natural language is important, and we certainly can *apply* logic (and probability) to do certain things in formal semantics; but logic itself is not applied model theory, which is something that desperately needs repeating. Logic is a theory of what follows from what. As such, there is a steep price to adding indices to a formal language to fit natural language data. In chasing after a formal language that will deliver an object you may then point to and say that you mean *that* when you assert such-and-such, you wind up with a convoluted beast that is mathematically worthless for inference.

    Why is this consideration important philosophically? I think a worry that some people have about pragmatic encroachment in epistemology is the unintended consequences that might arise from reconfiguring the conceptual building blocks to allow for pragmatic features. In assessing the force of the linguistic data on such questions, it might be helpful to look at work that addresses many of these issues but lacks the (computational) luxury of indulging the natural-language formal semanticists’ fetish of tossing more bells and whistles into the language. Folding knowledge of this work (and, more importantly, its aims, methods, and results) into the discussion of pragmatic encroachment would put us in a better position to sort out what the formal stakes are in these unfolding debates.
