A puzzle about mental representation and causation

I am curious what people think about a prima facie puzzle that I have been thinking about. It is more of a philosophy-of-mind puzzle than an epistemology puzzle. However, certain answers to it have important ramifications for epistemology. So, I think it appropriate for CD and expect CDers to have views on the matter. Here goes:

(1): For all mental states M: M represents M (itself) only if
the presence of M in optimal conditions causes the tokening of M (itself).

(2): There is at least one mental state, M, which represents M (itself).

(3): It is never the case that the presence of a mental state, M, causes the tokening of M (itself).

(1) looks like an instance of a causal constraint on mental representation. Many theories of mental representation are prima facie committed to such a constraint, e.g., information-theoretic accounts, certain versions of disjunctivism, etc. (2) is motivated by examples such as the thought THIS VERY THOUGHT IS INTERESTING (I use caps for mentioning thoughts and their components). (3) looks like an instance of the general principle that nothing is the cause of its own tokening.

However, if (3) is true, then the consequent of (1) is never true. But if so, then, by (1), the antecedent of (1) is never true. And the claim that the antecedent of (1) is never true contradicts (2). So, prima facie, the three theses are mutually inconsistent.
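In bare first-order terms, the argument can be regimented roughly as follows (the predicates Rep and Cause are shorthand introduced only for illustration; Cause(x, y) abbreviates ‘the presence of x in optimal conditions causes the tokening of y’):

\[
\begin{aligned}
\text{(1)}\quad & \forall M\,\bigl(\mathrm{Rep}(M,M)\rightarrow\mathrm{Cause}(M,M)\bigr)\\
\text{(2)}\quad & \exists M\,\mathrm{Rep}(M,M)\\
\text{(3)}\quad & \forall M\,\neg\,\mathrm{Cause}(M,M)
\end{aligned}
\]

From (2), Rep(m, m) holds for some mental state m; by (1), Cause(m, m); and this contradicts (3).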

I am interested in all sorts of responses. Is the puzzle well-formed? Is it interesting? What do people think of the premises? Does it rest on an equivocation? Could it be articulated in a better way, etc.? Oh… and my kingdom for a name! (So far ‘The No Causa Sui Puzzle’ is all I’ve got.)

Thanks in advance,
Mikkel


Comments


  1. I think this is an interesting puzzle. I don’t think it’s fundamentally similar to Russell’s Paradox (which would involve a mental state that represents all mental states that don’t represent themselves).

    A couple thoughts. First, I don’t understand why the phrase ‘in optimal conditions’ appears in 1. Second, I doubt that 3 is true. Beams in a bridge can mutually support one another, and thus indirectly cause themselves to stay in place, so why not think that mental states can do the same? But maybe you’ll say that 3 pertains only to causation that creates something, as opposed to merely helping to sustain it?

  2. John: I agree on the Russell Paradox (see below).

    Re (1): The reason why the ‘in optimal conditions’ qualification is included is to allow for misrepresentation. In this manner one can have a non-veridical mental state token of the type M that represents O even if no token of O caused it. However, according to (1), this is only possible in sub-optimal conditions. So, (1) would likely be too strong without the qualification. The phrase derives from Fodor; the idea from Dretske.

    Re (3): Yes — I use the ‘cause the tokening of’ locution for the reason you give. I can at least partly cause my continuing existence (by eating my vegetables etc.). Perhaps a mental state can cause its own continuing existence as well (it is harder to provide a clear case). But I can’t cause the tokening of myself in the sense of bringing myself into existence. According to (3), neither can mental states.

    Derek: I think the puzzle is importantly distinct from Russell’s Paradox. For a start, they are about different domains. Russell’s Paradox is about sets; the present puzzle is about representational mental states and causation. There is not an exact formal analogy either. As John Turri points out, the present puzzle does not involve or require the assumption that there is a mental state that represents all mental states that do not represent themselves. If so, the solution you mention has no clear analog in the present case.

    Perhaps you are right that the present puzzle is “something” like Russell’s Paradox, since both may fall under the broad category of self-referential paradoxes. I guess that this is all that you meant to say? But if you think the similarity is deeper and the solution strategy should be analogous, I’d like to hear more.

    Thanks! Mikkel

  3. Mikkel,

    Assuming that 1 itself is not too strong, deleting that qualification would not result in a condition that is too strong (as in: it would rule out too much). After all, 1 entails:

    (1*) For all mental states, M represents itself only if M’s presence causes the tokening of M.

    Also, it is not generally true that M is about X only if X causes M to come into existence. General thoughts are not like this. For instance, suppose the tallest person in the room is 6’5″ and has dark hair. I see this and think, “The tallest person in the room has dark hair.” Then a 7′ person, who also has dark hair, walks into the room. I see this and retain my general thought that the tallest person in the room has dark hair, but now my thought is about the 7-footer (he’s the one that now uniquely satisfies the relevant description), not the 6’5″ person (who previously uniquely satisfied the relevant description). Nevertheless, I say it’s the same thought throughout.

    So it’s not generally true that for M to be about X, X must cause M to come into existence. Why think it’s true in the special case where X=M?

    You might say that my example isn’t relevant precisely because it involves a general thought, whereas the example you had in mind earlier featured a demonstrative (‘this’) and so involved direct reference. In that case, I offer the following example. Suppose I think, “The most interesting thought had by someone in this room is interesting.” At first, I think this because Kripke’s in the room expressing his thoughts about direct reference. Then Kripke walks out of the room, and I’m left by myself. It might turn out that my thought about the most interesting thought uniquely satisfies the description ‘the most interesting thought had by someone in this room’. So my thought didn’t cause itself to come into existence, but it ends up being about itself.

    So I think 1 is false. Perhaps you can restrict it to directly referential thoughts, in which case you can keep your example about “this thought,” and dismiss as irrelevant general thoughts that end up being about themselves.

  4. The example “THIS VERY THOUGHT IS INTERESTING” made me think of the Liar Paradox, or

    “This statement is false.”

    The Liar Paradox is allegedly a form of Russell’s Paradox:
    http://fclass.vaniercollege.qc.ca/web/mathematics/real/russell.htm

    Also, a comment on set theory being a different domain: is it possible that mental states and causation could be mapped to ZFC set theory? If so, the mapping would be interesting in that existing ZFC theorems would be applicable to mental states, etc.

  5. John,

    Thanks for this. I am also inclined to regard the puzzle as a reductio of (1) but not of causally constrained accounts of mental representation generally. If so, the interest of the puzzle is that it compromises accounts committed to a causal constraint of the kind exemplified by (1). There are quite a few of those. I suspect that proponents of them will (have to) push other solutions. Moreover, reflection on the puzzle may provide some hints as to how to appropriately qualify causal accounts or, as you suggest, restrict the domain of mental states that they plausibly apply to.

    I don’t think that (1) entails (1*). But I can see why one might think so if the consequent is read as follows: ‘M is present in optimal conditions and M causes M (itself).’ If this were the right reading, then (1) would entail (1*): if the conjunction is the necessary condition, so is each conjunct. So, since (1*) is too strong, this is not a charitable reading of the consequent of (1). The idea is that a causal relationship between the representing state and the representatum is necessary *given* optimal conditions but not in all conditions. This would explain why the token M misrepresents O in cases in which conditions are non-optimal and an object token of a distinct type, O*, causes M. This is what the ‘optimal conditions’ qualification is intended to do. There is a question, then, of how to articulate the qualification so as not to suggest the conjunctive reading. (The qualification ‘the presence of O in optimal conditions’ takes wide scope, so to speak.) Tye uses the following subjunctive formulation:

    A mental state, M*, is S’s mental representation of an object, O, iff (if optimal conditions were to obtain, then M* would be tokened in S iff (O obtains and O causes M*)) (in Ten Problems of Consciousness).
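    To make the wide-scope reading explicit, Tye’s formulation can be regimented roughly as follows (the notation, including ‘□→’ for the counterfactual conditional and the predicate letters, is mine and is meant only to display where the brackets fall):

    \[
    M^{*}\ \text{is}\ S\text{'s representation of}\ O
    \;\leftrightarrow\;
    \Bigl(\mathrm{Opt}\mathbin{\Box\!\!\rightarrow}\bigl(\mathrm{Tok}_{S}(M^{*})\leftrightarrow(\mathrm{Obtains}(O)\wedge\mathrm{Cause}(O,M^{*}))\bigr)\Bigr)
    \]

    On this reading the causal clause sits entirely inside the consequent of the counterfactual, so nothing requires that O actually causes M* in sub-optimal conditions.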

    Your putative counter-examples are interesting and I’ll think some more about them. A question first, however: Do we agree that even if the restriction strategy you suggest is viable, the puzzle will still be generated? For you propose to restrict (1) to singular, directly referential mental representations. But, if so, (2) will still be motivated by the case at hand and (3) is general.

    (Entre nous, I doubt that even a restricted version of (1) is viable. For example, how may 5+7 = 12 be said to cause the corresponding singular, directly referential belief? What are optimal conditions in this case, etc.?)

    Derek: The puzzle shares the self-referential aspect with the Liar. But it differs in other important respects. For example, the Liar sentence has a negation in it. I do not mean to rule out that a mapping of mental representation and causation into an appropriate set-theoretic apparatus might be helpful. (But, as noted, there seem to be formal disanalogies between the present puzzle and familiar self-reference puzzles.) At any rate, in lieu of such a mapping, I prefer just to try to address the puzzle head-on. Maybe just a difference in our modus operandi?

  6. Hi, I’m not sure whether an equivocation can be ruled out, and I’m under the impression there might be a type/token mixup (which might explain the resonances to Russell). After all, what criterion separates two succeeding tokens of a mental state from one another?
    Also, maybe tokens of mental states aren’t representations of types of mental states, as suggested by (3); rather, it might be the case that some tokens of a certain mental state are caused by another, earlier token of the same kind.
    But what I’m not sure about is what you call a mental state at all: Is it a single notion, of which we might become aware (by having it represented again and in connection with a Kantian ‘ich denke’), or is it a total determination of a multi-variable mind-matrix (so we only have one token of a mental state at a time; it’s all ‘the mental’ there is)?
    A mental state of the latter kind wouldn’t represent anything, but could be classified by an external observer as e.g. “a state of shock and awe.” But the question of the causation of mental states (what causes his fear) would have to be answered externally, not in terms of other mental states.
    Only the former view (of notions or emotions) could assign token mental states as causes of mental states, regardless of what they might represent.

  7. I think that the appearance of a puzzle here depends on some sort of type-token confusion. It’s not clear to me that a principle like this even makes sense:

    (1′) For all token mental states T, T represents X only if the presence of X in optimal conditions would cause the tokening of T (not another token of the same type as T, but T itself).

    What would it be for this very token thought to be caused by other things in very different circumstances? I can’t make much sense of the idea. (It’s especially puzzling if we think that there are context-sensitive thoughts, i.e., thought types that have different contents in different contexts. Perhaps in ideal circumstances a token of the same context-sensitive type would have had a different content. Would it be the very same token? This seems controversial at best.)

    A better principle is something like this:
    (1”) For all mental state types T: tokens of T represent X only if the presence of X in optimal conditions would cause a token of T.

    (Fodor describes this kind of view by saying things like “cows cause COWs”; again, this seems much easier to make sense of than “this very cow would cause this very COW”.)

    If we think that demonstratives of the sort you’re considering (e.g., THIS THOUGHT) are context-sensitive, we might want a principle like this:

    (1”’) For all demonstrative mental state types D: D represents the salient object because in optimal conditions the salient object would cause a token of D.

    I don’t think that self-referential thoughts generate a puzzle for this sort of principle.

    (You might try to regenerate it by introducing a name for a thought: let A = A IS INTERESTING. But I still don’t think there will be a problem: it isn’t crazy to think that thinking a thought of type A under ideal conditions will cause another token of A, which is all that (1”) would require.)

  8. Hi Leif and Derek B.,

    Thanks for the helpful remarks! I am not sure whether the puzzle is generated by a simple type-token confusion. I take (1) to set forth a necessary condition for a state-token to be of a particular representational state type. Hence, the quantification over state tokens — is the worry that this “does not make sense”? At least, this way of stating the principle allows for the fact that a token-state, M, can be an O-representation (i.e., a mental state of a particular type) even though the token is not caused by an O. To exemplify: Normally, cows cause COWs and such causal patterns partly type-identify the relevant concept, COW. However, a fool’s cow may also cause a COW. So, S’s belief-token THAT COW IS CUTE caused by a fool’s cow may be type-identical to another belief-token THAT COW IS CUTE caused by a real cow. The reason why (1) is compatible with this approach is that it has the “in optimal observation conditions” qualification.

    Two comments:

    First, if such a weakened account would evade the puzzle, it would be fairly interesting. For (2) and (3) would thereby serve as constraints on causal theories of representation. Indeed, they would rule out some, more radical, causal accounts. For example, certain disjunctivist causal accounts according to which the mental state-type changes with a change in the causal ancestor would be compromised by the puzzle. I take it that many disjunctivists would have it that the two COW thoughts mentioned above are type-distinct. So, even if the puzzle is solved by resolving the type-token issue, it might still be of value insofar as it articulates a constraint on causal accounts.

    Second, I am inclined to think that the puzzle should not be regarded as a reductio of causal accounts generally, but only of radical versions of causal accounts. Maybe some type-token confusion underlies the puzzle. But I do think that some further work is required before we are out of the woods. As mentioned, (1) as it stands is compatible with a weaker causal account such as the one sketched above. For it allows that in sub-optimal conditions, a thought of the COW-type may be caused by a fool’s cow. So, (1) already sets forth a weakened causal constraint for a token-state to be of a particular type — maybe that makes for a type-token confusion but I don’t quite see how. Would this be clearer if the following general thesis were considered?

    (1*) For all mental states, M is a token of an O-representing state type only if
    the presence of an O token in optimal conditions causes the tokening of M.

    I think the puzzle would be generated by the instances of (1*) where O=M (i.e., where the representatum is the representing state-token itself). As far as I can see there is no simple type-token confusion here. However, I’d like to use a formulation that is close to what causal theorists say, and (1) is closer than (1*), although the latter may be a bit more explicit. Any thoughts?

    Derek’s last point about the demonstrative character of self-representational thoughts is very interesting. But I don’t quite see why the puzzle wouldn’t follow from Derek’s third principle together with (2) and (3). The idea is that the (demonstrative) self-representational thoughts that motivate (2) are themselves “the salient object.” Perhaps I’m missing something here? Btw., it may be that some self-representational thoughts lack a demonstrative component. A candidate is the general thought EVERY THOUGHT IS PHYSICALLY INSTANTIATED. So, it is not clear that a restriction-solution would be sufficiently general. But of course, one might say that causal accounts are just not plausible for universally quantified thoughts, self-referential or not.

    Thanks again for the comments. Do let me know if I miss the gist of them.
    Mikkel

  9. Hi Mikkel,

    I’m still a bit confused; maybe it will help to look closely at your (1*). First of all, I think that (1*) is obviously false unless it is amended into a subjunctive or counterfactual, like this:

    (1**) For all mental states, M is a token of an O-representing state type only if the presence of an O token in optimal conditions *would* cause the tokening of M.

    Now (1**) (as well as your original (1), I think) presupposes that it is possible to generate the very same token thought in entirely different conditions. But it is not at all obvious that this is possible. Consider: I’m drunk on a dark night, I’ve forgotten my glasses, and I have very confused background beliefs. I see a horse and think that it is a cow. Now, suppose conditions were optimal: it is bright day, I’m sober, I have my glasses, my background beliefs are straightened out. I see a cow. Suppose I think that it is a cow. Suppose this is a token of the same type that I thought on the dark night. Is it the very same token (as your principles would require)? I don’t really know how to think about this question (this is the sort of claim that I was worried did not make sense), but at best, the thought that it is strikes me as pretty implausible. That’s why I think that this sort of principle ought to run at the level of types, not tokens.

    (Another way to bring out the worry: there can be multiple tokens of the same type, but presumably it isn’t the case that in optimal conditions, they would all be tokened – that would just be redundant.)

  10. Hi Derek,
    You raise two helpful questions. First, should (1) be given a subjunctive formulation? Second, is it bad that (1) and variations of it allow for two type-identical state-tokens? I think the former is the most interesting question. So, let me address the latter first.

    I do not think that you provide the strongest case for two type-identical token representations. The reason why your case-pair is not a good candidate is that there are psychologically relevant differences between the two cases (drunkenness vs. soberness, confused vs. orderly background beliefs, glasses vs. glasses-deprivation). These differences may matter for state-individuation. I think they do. So, I think you are quite reasonable in judging the token states to be type-distinct. But none of the principles that we’ve discussed *entail* type-identity in the case you give. The principles set forth a necessary condition on being a state(token) of a particular type (e.g., a cow representation).

    That said, a stronger case, I think, for type-identical thoughts with distinct causal ancestors can be provided: Compare a veridical case where S’s COW-belief is caused by a cow and a non-veridical case in which S*’s COW-belief is caused by a fool’s cow (an indiscriminable look-alike). Assume that S and S* are individualistic twins. That is, we hold fixed S’s and S*’s background beliefs, phenomenal states and physiological states, as well as the non-relationally specified functional and causal roles of these states. (Some of these are parameters that vary in your case.) Moreover, we hold past patterns of causal relations to cows, cow-experts etc. fixed. So, crucially, the case is not an Earth/Twin Earth case! It is a veridical/non-veridical Earth case. Here I think it is extremely plausible that the agents do hold type-identical belief tokens. This is why S* should be ascribed a false belief. She applies the COW concept to a fool’s cow. If we run the case counterfactually, the same result ensues: Had the causal ancestor of S’s belief-token been a fool’s cow, S’s belief-token would have been of the same type – i.e., a (false) COW-belief. (1) gives part of the explanation why: The counterfactual observation conditions are sub-optimal.

    Now to the putative subjunctive formulation of (1): You are absolutely right that causal constraints are often stated subjunctively (cf., e.g., Tye’s formulation that I cite in Comment #6 above). I stated (1) in a simplified form for ease of exposition. The question is whether the issue matters for the purposes of generating a potentially interesting puzzle. Here is a first-stab attempt to state a version of the puzzle that incorporates a subjunctive conception of (1).

    (1a): For all mental states M: M represents M (itself) only if
    (if optimal conditions were to obtain, then the presence of M would cause the tokening of M (itself)).

    (2a): There is at least one mental state, M, which represents M (itself) in optimal conditions.

    (3a): It could not be the case in any conditions (hence not even in optimal conditions) that the presence of a mental state, M, causes the tokening of M (itself).
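    Roughly, and assuming a standard Lewis–Stalnaker reading of the counterfactual (the symbols below, including ‘□→’ and the predicate letters, are mine and introduced only for illustration), the clash would then run as follows:

    \[
    \begin{aligned}
    \text{(1a)}\quad & \forall M\,\bigl(\mathrm{Rep}(M,M)\rightarrow(\mathrm{Opt}\mathbin{\Box\!\!\rightarrow}\mathrm{Cause}(M,M))\bigr)\\
    \text{(2a)}\quad & \exists M\,\bigl(\mathrm{Rep}(M,M)\wedge\mathrm{Opt}\bigr)\\
    \text{(3a)}\quad & \neg\Diamond\,\exists M\,\mathrm{Cause}(M,M)
    \end{aligned}
    \]

    By (2a), pick m with Rep(m, m) and with optimal conditions obtaining; since Opt actually obtains, Opt is possible. By (1a), Opt □→ Cause(m, m). But on the standard semantics a counterfactual with a possible antecedent is true only if its consequent holds at the closest antecedent-worlds, and by (3a) Cause(m, m) holds at no world. Contradiction.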

    Perhaps this version is more interesting than the original version (or perhaps it is misguided in a different way : ). I was aiming for a version of (1) that did not hinge on any special conception of the causal constraint. So, I avoided the subjunctive formulation in favor of the simpler one. Perhaps this is a mistake. Let me know!

  11. I hope this comment is not too far removed from the precise question being examined. I wondered if it’s worth exploring the implications for the philosophy of mind of a recently published study (December 2008: http://www.medicalnewstoday.com/articles/133988.php) by Professor Beatrice de Gelder of the University of Tilburg in the Netherlands. It concerns the ability of the brain to process visual information in a person blinded by strokes on both sides of his brain, which left him unable to see and devoid of any activity in the brain regions that control vision. Taking the study at face value, it seems that it can be said that the mind of the individual in this case has no conscious visual ‘perception’ in the usual sense. The crude question, I suppose, is what implications this has for explanations of the relationship between mental states and the external events they are tokens of.
