Assumptions

In Greg’s comment here, he notes correctly that when we form beliefs, we often do so on the basis of things assumed. In the example there, my wife says something, and I respond in a way that shows that I assumed that she wasn’t trying to trick me and that she thought there was something surprising about the case.

My question here is just what an assumption is. I agree that I assumed these things in the exchange in question, but it is not true that I entertained these claims, nor that I drew inferences running through them.

One might think, then, that an assumption is a non-occurrent mental state akin to dispositional beliefs. That will imply, of course, that you can be in non-occurrent mental states like this without having first been in the relevant occurrent state. Pollock denies this for beliefs, and the best way to deny it is to distinguish between dispositional beliefs (which require prior occurrent beliefs with the same content) and dispositions to believe (which do not require prior occurrent beliefs). If we don’t want to try to settle that dispute, we might say an assumption is either a dispositional belief or a disposition to believe.

This view may be correct, but I’m not sure. In many cases where our assumptions are pointed out to us, we experience chagrin at the realization. Upon thinking about the particular propositional content in question, we do not embrace it. Instead, we reject it.

This point raises a problem for conditional analyses of dispositions, one that may be avoided by what C.B. Martin dubs “realism” about dispositions. In such a case, the very conditions for the triggering of the disposition cause it to be lost.

Even though the view that assumptions are dispositions can be retained in this way, I’m not sure we have an explanation of the chagrin in question. To experience chagrin in such an immediate fashion, it would seem that we need some disposition in place that runs contrary to the assumption in question. Suppose, for example, that you are strongly averse to racism, but respond on a given occasion in terms that you agree are accurately described in terms of assuming that people of a particular race are more dangerous. You experience chagrin upon having this fact pointed out to you.

Your chagrin depends upon your aversion to racism, and such aversion involves, I would expect, cognitive commitments. You believe, or are committed to, lots of claims, including the exact opposite of the assumption underlying your response. But in general we don’t want to try to make sense of ascribing both the belief that p and the belief that not-p to an individual (except in cases where modes of presentation explain away the absurdity in question). Nor do we want to say, I think, that some of your beliefs about the races somehow went out of existence during the period of your assumption. Nor will appeal to degree or strength of belief help, so long as we are still willing to countenance the reality of the distinction between beliefs and non-beliefs (even if the threshold varies by context).

So maybe we must think of assumptions as dispositional mental states, or dispositions toward mental states, that are not reducible to beliefs?


Comments


  1. Hi Jon; Happy New Year! There seems to be a trade-off in setting up this problem that might be worth probing. It seems that the introduction of a collection of distinct items (beliefs, dispositions to believe, or dispositions to have some other mental state) is motivated to get around the apparent problem of ascribing (A): both the belief that p and belief that not-p.

    But I wonder about this. First, by slicing up belief into thinner concepts from the start, we might miss some of the common features of general cases of question and answer situations. Second, one worry that might motivate an aversion to (A) is that it licenses irrationality. But this is a harder case to make than one might otherwise suppose, and turns on questionable assumptions about the fit between logic and rational belief. For instance, I think the algebraic properties of combining beliefs are not isomorphic to the properties of boolean combination of propositions.

    You might be able to get to the same point by slicing up the domain and then explaining it all at the level of a boolean algebra. Similarly, you can read the bit code of this sentence, if you prefer. But maybe some information that is important to us is lost by doing it this way, never mind the task of keeping straight what is going on.

    The thought is this: After working with a toy model with very crude elements (beliefs and the like, in a simple, monadic modal language) we might then be in a position to introduce these thinner notions in a gradual and (somewhat) controlled way. Our intuitions about what is needed would be guided by the framework we are working with and particular observations about the failure of the framework to fit the data. The suggestion then is that sticking to a framework that avoids (A), from the start, might not be the best first step.

  2. It’s not the irrationality of contradictory beliefs that is the issue, but rather the possibility. One can have contradictory beliefs by accessing the content under two different modes of presentation (“I am tired”, “JK is tired”, etc.). But if we fix the mode of presentation, I don’t see how one can believe both p and ~p at the same time. To do so, such beliefs would have to be completely severed from explanations of action, including the act of asserting what one takes to be true.

    Since I think inconsistent beliefs can be justified, I don’t think there is any special reason to deny that contradictory beliefs could be justified as well, if such were possible. So it seems to me that the key issue is possibility itself.

  3. Jon, you say,

    “One can have contradictory beliefs by accessing the content under two different modes of presentation (“I am tired”, “JK is tired”, etc.).”

    I am not sure these beliefs are contradictory (using my own case for simplicity):

    1. I believe M.A. is not tired.
    2. I believe I am tired.

    Can’t get a contradiction from (1) and (2) since substitutivity fails. So your beliefs are not inconsistent.

    But then,

    “But if we fix the mode of presentation, I don’t see how one can believe both p and ~p at the same time”

    This is much more interesting, since Bp and B~p might not (depending on your views about closure) entail B(p & ~p). So these beliefs needn’t entail believing anything inconsistent. It’s strange since, even under the assumption that (Bp & B~p) does not entail B(p & ~p), there remains some sense in which you *can’t* have the beliefs Bp and B~p.

  4. Mike, in the example using names, I’m assuming that the proposition expressed is a Russellian one, and that a Millian account of names is correct. So the content of each belief is a structured entity involving the property of being tired plus you.

    On the second issue, the beliefs are contradictory independent of closure issues, though as you note, depending on what one thinks about closure for belief, contradictory contents may or may not imply the existence of contradictory beliefs.

    The primary worry about contradictory beliefs requires assuming something that behaviorists will exult about: that there is a connection between action and belief. Needless to say, however, I don’t think you have to be a behaviorist to worry here. Functionalism of a very weak variety is strong enough to cause worries.

  5. Jon, I must be misunderstanding you on this.

    “On the second issue, the beliefs are contradictory independent of closure issues”.

    The contents of those beliefs are inconsistent, since
    the set {p, ~p} is not satisfiable. But the beliefs certainly seem consistent since {Bp, B~p} is satisfiable (or rather is satisfiable assuming the set is closed under some sufficiently weak logic for belief).
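    Mike’s distinction here can be checked mechanically. Below is a minimal brute-force sketch (the setup and names are my own, not anything from the thread) that treats Bp and B~p as independent sentence letters, which a doxastic logic weak enough to lack closure permits:

```python
from itertools import product

def satisfiable(clauses, atoms):
    """Return True iff some truth assignment to the atoms makes
    every clause in the set come out true."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(clause(v) for clause in clauses):
            return True
    return False

# The contents {p, ~p}: no assignment satisfies both,
# so the set of contents is unsatisfiable.
contents = [lambda v: v["p"], lambda v: not v["p"]]
print(satisfiable(contents, ["p"]))          # False

# The beliefs {Bp, B~p}: absent closure principles, the two
# belief sentences behave as independent atoms, and the set
# is satisfiable.
beliefs = [lambda v: v["Bp"], lambda v: v["B~p"]]
print(satisfiable(beliefs, ["Bp", "B~p"]))   # True
```

    The asymmetry is exactly the one in the comment: inconsistency lives at the level of contents, not at the level of the belief sentences themselves.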

  6. Mike, I didn’t mean anything substantive by my description, so I don’t think we disagree here. I say beliefs are inconsistent when we can derive a contradiction from their contents; contradictory when the contents are instances of p and ~p. So beliefs can be contradictory even when the person has no contradictory belief (i.e., an instance of believing p&~p).

  7. This strikes me as a case where it really helps to be a _practical_ (and not merely metaphysical) naturalist about the mind — not just adhering to some sort of naturalist ontology on the whole, but accepting that science, in this case cognitive psychology, tells us what some of the different denizens of the mental are. In this case, there are a number of different places where that vexing representing of ~p might be located, other than as a belief. To just name two off the top of my head: in modular cognitive systems, and in the prototypes associated (but not, I think, identified) with our concepts.

    Doing so eases both the epistemologist’s problem of assumptions and the behaviorist’s problem. (The former is that the contraries have to be real enough to generate chagrin; the latter, that we can still see how each contrary connects up with behavior.)

    How does it help with the epistemologist’s problem? By giving us, as noted, a stock of non-belief forms of representing. It may be hard to see how one could both believe p and believe ~p (though I would note & set aside cognitive dissonance as one mechanism to do so). But it’s not hard to see how we could believe p, but have ~p somehow represented in a different system.

    To answer the behaviorist’s problem, we have to say what behavioral evidence there might be for such behaviorally-active-but-non-belief representings. But fortunately, there is a wealth of such evidence. To use Jon’s original example, the prototype of THREAT that feeds into your affective systems may represent features of African Americans, and hence explain that unfortunate but well-documented tendency of even well-meaning white folks to lock the doors when they drive by persons of color. The prototypes are stored away in our fast, reflex-like, primarily unconscious layers of cognition, and that’s where we’ll find behavioral evidence for them. Beliefs tend to have less direct effect on such systems, but can drive conscious, deliberative activity (like assertion).

  8. Jonathan, this is very nice and helpful. It raises a hard question for evidentialists, however, if Greg is right (as I think he is) that assumptions play a role in belief formation, and hence by implication in a proper account of (doxastic) justification. One would have thought that evidentialists understand justification in terms of evidence possessed, where this latter idea is characterized in terms of the contents of belief and experience. But if assumptions play a role as well, then it is not as easy to see what an evidentialist needs to say here.

  9. My recommendation to evidentialists would be that they not focus too much on the formation of beliefs (which will, if they become serious naturalists (as everyone should, of course!), take them into unconscious & non-beliefy terrain rather quickly), but that they focus instead on the rational defense of their beliefs.

    That is, if S forms a belief p partly on the basis of the (non-belief) assumption that q, then basically what S needs is to have the epistemic resources such that _if q were challenged_, S’s evidence is sufficient to justify q. But those epistemic resources need not have played any role whatsoever in the production of the belief that p.

  10. Jonathan’s observations are very helpful and mesh quite well with what one sees on the computational side of several analogous issues. I get excited by some of the methodological issues in play here, so I’ll remark on that with a brief example.

    Distribution properties are very important for a formal language to enjoy, but it is not always clear whether those assumptions are appropriate to project onto the problem domain itself. Believing falsum (p and not-p) is a good point to zero out, I suspect. But believing p and believing ~p might not be such a bad thing for our toy language to allow. All we’re doing here is denying Chellas’ (C) axiom for classical modal logics, where X is a monadic modality (e.g., box):

    (C) (Xp and Xq) only if X(p and q)

    which corresponds to a class of non-normal modal logics that characterize non-monotonic inference. (See Horacio Arlo-Costa, Studia Logica 71(1), 2002.) And there are a variety of reasons to be interested in non-monotonic logics, reasons that extend outside the bounds of epistemology proper.
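    To make the failure of (C) concrete, here is a toy neighborhood-semantics (minimal model) sketch; the particular worlds and valuation are my own invention, chosen only to illustrate the point. Reading Xφ as holding at a world just in case φ’s truth set is in that world’s neighborhood, Xp and X~p can both hold while X(p and ~p) fails:

```python
# Worlds and valuation (illustrative only).
W = frozenset({0, 1, 2, 3})
p_set = frozenset({0, 1})          # worlds where p is true
not_p_set = frozenset(W - p_set)   # worlds where ~p is true

# Neighborhood function: N(0) contains the truth set of p and the
# truth set of ~p, but not their intersection (the empty set).
N = {0: {p_set, not_p_set}}

def X(world, truth_set):
    """Xφ holds at a world iff φ's truth set is in that
    world's neighborhood."""
    return truth_set in N.get(world, set())

print(X(0, p_set))               # True:  Xp holds at world 0
print(X(0, not_p_set))           # True:  X~p holds at world 0
print(X(0, p_set & not_p_set))   # False: X(p and ~p) fails, so (C) fails
```

    Since no closure condition forces the intersection of two neighborhoods into the neighborhood, the logic is non-normal, which is just what denying (C) requires.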

    One reason for chasing these things down as they run outside of philosophy proper is that one often sees remarkably similar problems cropping up in other domains, with practitioners there sometimes advancing the mathematics to isolate these features at a level of abstraction that affords us more understanding of what is going on.

    In reply to Jon’s question to evidentialists, I can imagine (without endorsing) a reply that would push the issue into the theory of accessibility. So an evidentialist might say that there is a mental action that the agent performs to ‘access’ the assumed information upon which to evaluate the epistemic status of the belief. Do you think this has legs?
