Huemer and Foley on Swampman

Foley’s Swampman (Fs) arises out of the swamp, produced by a lightning strike, fully equipped with a vast array of true beliefs. Foley thinks Fs knows a lot more than we do, but many, probably most, have remained unconvinced. Mike made the following really interesting remark about Fs:

Foley-swampman (Fs) is interesting. So Fs correctly believes that he is a swampman who happens to have only true beliefs. You ask Fs: “Gee, Fs, it seems really unlikely that you would have so many true beliefs and no erroneous beliefs, given that you were formed by a lightning strike. What do you think accounts for that?” Fs says: “Nothing. It’s just a huge coincidence.” Now it seems to me that there really is something irrational about Fs, even though by hypothesis his answer to your question is correct. As a first pass: Fs holds a set of beliefs about where his first-order beliefs came from, on which it is extremely improbable that his first-order beliefs are true (and he acknowledges that); yet he still holds on to those first-order beliefs. That’s a kind of incoherence. (I call this meta-incoherence, because it is the meta-beliefs that fail to cohere with the first-order beliefs.)

Fs is clearly not a hidden variables theorist about his beliefs!

My interest, though, is in Mike’s meta-incoherence requirement. Here I want to introduce my cousin-in-law, Dan Kersten, who does vision research at the University of Minnesota. Dan’s work on vision involves what he terms a Bayesian model, and he is investigating the ways in which we construct, and the pitfalls involved in constructing, a 3-dimensional experience of the world from the sensory input we receive. Last time I talked with Dan, he was reporting that the probability of truth for our 3D beliefs rises as high as .25!

According to his model, of course. I suggested, as any good Moorean would, that maybe the model was wrong. That conversation went nowhere. I’m not completely sure why, but it’s not because Dan is not astute. It reminded me more of reading Unger’s defense of skepticism: there are a number of assumptions that lead to Dan’s model, and they seem so obvious to those who reflect on them that it would be hard to give them up just because the probabilities that result are too low to be comforting. This is the same phenomenon that makes Unger’s defense of skepticism powerful and disturbing: it’s not an appeal to infallibility that drives the argument, but rather other weaker claims that students (and even epistemologists) find rather compelling. After all, it would be rather strange to say that you know where you live, but you have no right to be sure, no right to feel certain about where you live. (I’m not claiming the argument is sound, though…)

So, consider poor Dan’s epistemic condition. He’s in precisely the same situation as Fs. But no one is inclined to think that Dan doesn’t know a lot about his 3D environment. Is Dan irrational in some way? I’m inclined to think not, though I’m also inclined to agree with Mike that there is a kind of incoherence here that is a negative factor of some sort, epistemically speaking.

Here’s a strange thought experiment about Dan, however. Let’s imagine Dan to have no cognitive defects or lacks whatsoever beyond those that might involve his particular area of research. Then imagine that his theory is correct, which, after all, might be the case. In such a case, is there something lacking epistemically about Dan? Is there some knowledge or understanding that we must suppose him to fail to possess? Is there some level of rationality or warrant or justification we must say he has not achieved? I’m not sure there is any such thing here. What we want, of course, is for our scientific investigation into cognition to generate well-confirmed theories on which our commonsense view of the world is highly likely to be true. But, as my mother-in-law loves to say, “Wish in one hand…”, and I’ll let you finish the saying on your own. So what if the truth of the matter is much less sanguine, epistemically speaking?

I find it hard to ascribe any epistemic fault or defect to Dan on these suppositions. The fault, if there be such, is in the world in which Dan finds himself: it is not an ideal world for cognitive beings such as we are. There is Mike’s meta-incoherence present, but it is precisely the kind of incoherence Dan ought to have (just as in lottery and preface cases, especially in the fallibility version of the preface, the appropriate response, cognitively speaking, is to be characterized by a kind of incoherence).

Maybe all we can say about such cases is that, in such a world, one will always have a reason for further inquiry to try to resolve the incoherence. Perhaps that’s the consequence of failures to confirm, and even disconfirmations of, hidden variables theories (which is just what I’m tempted to tell Dan: he’s failed to this point to uncover the real factors that separate our commonsense 3D picture of the world from other possible doxastic responses).


Comments


  1. Maybe this comment should go under the other post, I’m not sure. The thought is that swampman is trans-rational rather than irrational; Fs is installed with true beliefs. There are some important details that need to be fixed: in the other post, we’re told that he can answer questions, and some reasoning capacity is required to understand questions being put to him in order to answer them; hence he must know something, even if we agree that his epistemic abilities are importantly different from ours. Here we’re told that he simply has true beliefs; I do not understand what this means. On the Q-A description, however, Fs differs from us w.r.t. epistemic rationality precisely because he doesn’t need our epistemic faculties. He’s godlike in this respect. We are not.

    Mike’s reply to the Sally example is a good one; I think the example fails to do what I wanted it to do. However, the worry that I was trying to catch with it remains. The worry is this: Huemer’s analysis is concerned with cases of assigning probability based upon evidence, originally, although his reply to the Sally case is more personalist in character and describes the orthodox line, construing evidence as my evidence. This is what probabilistic coherence amounts to. But it is irrational for Fs to hold his beliefs, on the first story, because we are assuming some shared distribution with Fs about how unlikely it is for him to be so doxastically rich. It isn’t clear to me that Fs has the same distribution, nor that the question posed to him reveals it.

    I don’t know the grounds to support this assumption, other than orthodox ones about updating. And if you follow that line, then things get strange in how statistical arguments are treated. That’s the picture of the worry. So maybe my question is better put this way: why is it extremely improbable, given Fs’s beliefs about where his first-order beliefs came from, that those first-order beliefs are true? What probability are you appealing to here? This seems like a statistical probability, which needn’t adhere to coherence constraints at all, and not something by which Fs must govern his beliefs about his beliefs.

  2. Jon, I’m not sure exactly what Dan thinks. If he thinks that his model is exactly correct, I’d say he’s irrational. Obviously, our visual system gets 3D shapes right a lot more than a quarter of the time, and this would be highly improbable on his model.

    But if he only believes his model is approximately correct, he could be perfectly rational, because even though the model only gives a .25 likelihood for something that is in fact the case (that we correctly perceive a 3D shape — and it gives a much lower likelihood for the fact that we repeatedly correctly perceive 3D shapes), it might still be the case that a very similar theory would give the correct probabilities (i.e., close to 1).

    We seldom expect scientific theories to be more than approximately correct anyway.
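
    To put rough numbers on that “much lower likelihood” point (treating successive perceptions as independent trials with the model’s .25 success rate, which is my idealization here, not something attributed to Dan’s model):

    \[ P(n \text{ correct in a row}) = 0.25^{\,n}, \qquad 0.25^{10} \approx 9.5 \times 10^{-7}, \qquad 0.25^{20} \approx 9.1 \times 10^{-13}. \]

    Taken literally, then, the model makes ordinary runs of perceptual success look miraculous; believed only as an approximation, it doesn’t have that consequence.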

  3. Mike, I agree that our visual system is more reliable than Dan’s model allows, but I’m not seeing the inference here to irrationality, even if he believes that his model is correct. It can’t be the existence of meta-incoherence, since that is present in preface and fallibility paradoxes. Perhaps it is how exaggerated the meta-incoherence is (i.e., it is one thing to think that some of my beliefs are false, but quite another to think that most of them are). Is that it?

  4. I assume that in the preface and fallibility cases, the subject is probabilistically coherent (or not incoherent; he could lack precise degrees of belief). We discussed this in another thread, but I don’t think we understood each other. In those cases, I take it that the subject has a very high degree of belief in each of a large set of propositions, but a low degree of belief in their conjunction — that can be perfectly coherent. I didn’t understand how you’re seeing the cases.

    In the case of Dan, he has individual beliefs that clash with his theory. For example, his single belief that there’s a person in front of him, when you’re talking to him, is meta-incoherent with his belief that his visual system is only 25% reliable.

    That’s a simplification, since his belief that there’s a person in front of him could have further justification independent of whether his visual system has the correct 3D shape of you (e.g., there’s the sound of your voice, and there’s perhaps a 2-dimensional representation of you that enables recognition of you as a person even if the 3D shape is wrong). But you can presumably find a case in which he holds a belief (“that thing is such-and-such shape”) whose only justification would be his 3-dimensional visual experience.

    In a case of meta-incoherence, either the first-order or the meta-beliefs might be to blame. In this case, I would think Dan’s theory of the visual system is to blame; he should drop that belief to restore rationality. An interesting question is how one determines which belief is to blame. I don’t think it’s merely a matter of weighing the prima facie justifications of the two beliefs, since in some cases a meta-belief with a weaker prima facie justification defeats a first-order belief.

  5. For what are basically Foley reasons, I doubt that the inconsistency at the level of rational belief can always be explained away by appeal to consistency at the level of rational degrees of belief. At the very least, it won’t work for belief in mathematical claims that are false. That’s not to say, of course, that it won’t work in a number of other cases, including perhaps lottery and preface. I haven’t seen a solution here yet that I thought covered the full range of versions of either paradox, though I have no argument that there can’t be such a solution.

    There’s another complication in the case of Dan’s theory. It could be that the probabilities are as his model says: that given the physical features of our perceptual system and the nature of the objects we perceive, the probability of being right is low. But maybe there is further information to conditionalize on here.

  6. Well, if the physical probability of our visual system getting things right is only .25, then it’s a miracle that it keeps getting things right — and that’s too much of a coincidence to be believed. Recall Lewis’s “Principal Principle”: if you are certain that the objective probability of A is x, then you should assign degree of belief x to A given that information: Ps(A | Po(A) = x) = x (where Ps is a rational subjective probability, and Po is some objective probability).

    However, what you say makes me think that it could be that the physical probability of the visual system getting things right is .25 conditional on Dan’s theory, but it is .99 conditional on Dan’s theory plus some other physical conditions that, as a matter of fact, usually obtain when we look at things. Or in other words: maybe there are some common background facts that Dan didn’t take account of (and perhaps haven’t been discovered) when he calculated the .25.

    About mathematical beliefs: you’re right to point out that probabilistic coherence isn’t a rational requirement for all necessary truths, both because a false mathematical claim might have some degree of justification, and because a true mathematical claim might have less than full justification. But notice how nicely my metacoherence requirement deals with that. Say you form a mathematical belief M as a result of an attempted proof. Then your rational degree of belief in M should equal your estimate of the probability of a belief of yours being true given that it was formed by that method. So suppose you think your ‘proofs’ are only 80% reliable. Then, according to metacoherence, you should have an 80% degree of belief in M. That’s true regardless of whether M is actually true.
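
    In symbols, just a paraphrase of the requirement as stated, with μ for the belief-forming method, r for your estimated reliability of μ, and Cr for rational credence (my labels, nothing beyond the prose above):

    \[ Cr(M) \;=\; Cr\big(M \text{ is true} \mid M \text{ was formed by } \mu\big) \;=\; r. \]

    The 80%-reliable-proof case is just r = .8.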

  7. Mike, I feel a post coming on about Lewis’s Principal Principle! I’m glad you mentioned it; it will be interesting to see if there is a true principle in the neighborhood here.

  8. Greg was getting at something earlier which interests me, but it seems to have been bypassed in favor of this discussion of Dan’s experiments (which I skimmed, and so don’t quite follow). Greg’s point had to do with the oddity of applying a probabilistic argument to Fs’s beliefs. Remember what Mike said:

    “Fs holds a set of beliefs about where his first-order beliefs came from, on which it is extremely improbable that his first-order beliefs are true (and he acknowledges that); yet he still holds on to those first-order beliefs.”

    But maybe Fs wouldn’t acknowledge this in the way Mike thinks. For wouldn’t Fs hold both of the following? (1) It is extremely improbable, should this happen to anyone at all, that such a person’s beliefs would all turn out true; and (2) in my own case (the case of Fs), it is not at all improbable, for it has a probability of 1.

    If this is right, then Fs wouldn’t exhibit the alleged incoherence. Or am I missing something crucial?

  9. This was the worry, Matt. I was eager to complain about how evidential relations are handled in Bayesian frameworks and got the sequence of issues wrong for this example. Maybe these issues will come up later when we get an account for why Fs thinks this way.

    There is another piece to this, namely specifying what a swampman is. Here are some candidates:

    (i) Swamp-phonebook: this is an inert collection of ‘beliefs’ about facts. Structurally, it is a set of statements. And it doesn’t really do anything. This would be a nonrational creature; ascribing rationality would be a category mistake.

    (ii) Talking Swamp-phonebook: this is a swamp-phonebook with a speaker. We assume that the facts it broadcasts are selected stochastically. Also a nonrational creature.

    (iii) Swamp data-hub: Imagine all of the truths delivered by the oracles of various mythologies passing through the swamp data-hub on their way to answering the questions people pose to those oracles. This is more interesting than swamp-phonebook; there is non-stochastic structure here. If we were a large government agency, we might like to spy on this data hub to learn about what those people were asking. Still, Swamp data-hub is not the kind of thing we’d ascribe rationality to.

    (iv) Swamp Savant: Has an IQ of 25 and lots of true beliefs but has little idea what to do with them. On one reading, this seems like what people imagine Fs to be. A case can be made for swamp savant being the kind of thing that could be rational. It is also the kind of thing that is arguably irrational. I doubt that it is Huemer-irrational, however.

    (v) Swamp Press Secretary: Has an IQ of 250 and is sought after by government Vice Presidents (or former Presidents, if your politics run this way). This thing has all and only truths, tells all and only truths, and can systematically mislead anyone by telling all and only truths. He is hyper-rational. Huemer might score him irrational, however.

    (vi) Ordinary Swampman: This is a human-like thing in a swamp who is imagined to be cognitively like us except that, in so far as this is at all possible, he’s right about all of the things he believes and he believes a lot of non-trivial things. Questions: Does he learn new things? Can he understand us when we ask him things? If so, can he give *correct* answers to our questions? (If so, then he has some grasp of right and wrong answers, and so some rough idea of true and false even though he himself only believes true things.) Can we count on him telling us the truth? Can we count on him not misleading us with the truth? (What does he eat? Plants? Humans?)

    I would caution against waving away all these questions as irrelevant, even those that appear to be jokes. There is a lot of information packed into the answers to these questions that impacts the selection of the model.

  10. Matt,

    Why would Fs hold that the probability of his having all true beliefs is 1? Fs is supposed to have only true beliefs, and I don’t think that probability claim is true; so he wouldn’t believe it.

    My claim was that the probability of getting all true beliefs, given that one forms one’s beliefs by the method Fs used, is very low. Substituting different proper names into that in place of “one” doesn’t make any difference to the probability. Maybe you’re conditionalizing on more information than just the method Fs used. Or maybe you’re wanting to characterize “the method” in some very “thick” way such that the truth of the beliefs is entailed by a description of the method.

  11. Greg has complexified it for us, in a not unimportant way; but I want to reply to Mike.

    Clearly, the method by which Fs gained his beliefs was successful: he knows how he gained them, and he knows they are all true. And this is why I distinguished between (1) and (2) in my post. It seems to me that Fs knows two facts here, one of which has to do with himself, my earlier (2) (or something like it). And giving it a probability of 1 just has to do with this: the emergence of an Fs-like creature from a lightning-struck swamp happened once (presumably), and in Fs’s case, it worked epistemic wonders. So out of 1 chance there was 1 successful case; that’s all I mean by probability being 1.

    Mike may dispute (2)’s use of probability, but why shouldn’t probability be indexed to persons (particularly when we’re dealing with a phenom like Fs)?

    Or maybe a better approach would be this. Fs knows that the *prior* probability of coming to have all true beliefs in this way is VERY low. But Fs also knows that in his case, it worked out, however remarkable that is. What would be incoherent about Fs knowing those two things?

  12. Fs’s having all true beliefs would have a low probability if it is reasonable to assume that Fs is in the reference class of human beings w.r.t. belief formation.

    It would not have a low probability if Fs is placed in the reference class of swampmen, which I assume is the singleton set {Fs}.

    The reply, then, is that your claim about the method holds for humans, not swampmen. And because it is therefore sensitive to the selection of a reference class, the generality of the proposal is suspect.
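
    Spelled out very roughly, and only to fix ideas (the figures are mine, not anything Mike is committed to):

    \[ P(\text{all beliefs true} \mid \text{reference class: humans struck by lightning}) \approx 0, \qquad \frac{\#\{\text{successes in } \{Fs\}\}}{\#\{Fs\}} = \frac{1}{1} = 1. \]

    Which figure ought to govern Fs’s meta-beliefs depends entirely on which reference class is the relevant one.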

  13. A brief continuation on F-Swampman. Suppose that you offered Fs a conditional bet: “So, Fs, if another lightning strike occurs creating another swampman, what odds would you give that he will also have all true beliefs, just like you?” Will Fs say, “Why, I’d bet in favor of that proposition at any odds!”? Would Fs cheerfully bet his life for a dime on that proposition?

    If yes, then I think he’s irrational. If no, then he doesn’t think the probability of a swampman getting all true beliefs is 1.

    Greg and Matt, I think your mistake is deploying an overly simple version of the frequency interpretation of probability. Even frequentists don’t say that the probability of an event occurring in a given type of circumstance equals the actual frequency with which it has occurred, in a small number of trials. I believe the standard version of frequency theory is that the probability of E occurring in some circumstances equals the limit of the proportion of times E would occur in an unlimited sequence of trials.
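
    In symbols, roughly (with k_n for the number of times E occurs in the first n trials of the relevant kind of circumstance; the notation is my shorthand for the textbook statement):

    \[ P(E) \;=\; \lim_{n \to \infty} \frac{k_n}{n}, \]

    and a single actual trial (one lightning strike, one swampman) tells us next to nothing about that limit.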

  14. (i) Don’t the conditions for evaluating the conditional bet depend crucially on what a swampman is? If a swampman is a thing with these epistemic capabilities, then it would be reasonable for him to fully believe that the next swampman will be a swampman. So, yes. Now, betting one’s life on tautologies presupposes a particular model of full belief that we needn’t adopt. One could also tweak the details of the example to make it more reasonable for the swampman to suspend judgment and not take the bet, a state that I suspect you’d have a hard time representing in your model.

    (ii) The bit on frequentist theories is off the mark. Recall that there are also logical theories that appeal to known statistical frequencies. Here Matt and I might come apart, which is why I wanted to pin down more information about what a Swampman is, and what evidence is available to it, or to us about it. This will go a very long way toward settling the issue, but your model might get banged up in the process. But anyway, Matt’s right: the point can be made much more generally. So, maybe we can stay together for a while.

    To get the result that swampman is being unreasonable, you have to put him in a class where it is practically impossible to gain any epistemic advantage by lightning strike, like the class of human beings. There is no reason to put Swampman in this class. He was created by a lightning strike; he belongs to the class of Swampmen, which contains only himself.

    What is known about this class? A lot. He himself has all and only true beliefs. And if he is able to interact with us, I suspect he would be able to confirm that he has true beliefs. (Here again, details of the example matter.) If so, then that’s the end of the story. You don’t turn to statistical inference when you have the answer already in hand.
