Wishful Thinking Problems for Reliabilism

Phenomenal conservatism and dogmatism get a bad rap for allowing some cases of wishful thinking to provide prima facie justification.  In this post, I argue that reliabilism has wishful thinking problems that are even worse.

Contrast direct and indirect wishful thinking.  The former occurs when one bases a belief that P directly on one’s desire (wish, lust, etc.) that P.  The latter occurs when the desire influences the belief in some other, more roundabout way.  For our purposes, we can assume all cases of indirect wishful thinking fit this pattern: a desire that P causally influences the way things seem, and then a belief that P is based directly on the seeming that is so influenced.  For example, I might base my belief that the nugget is gold on its seeming that the nugget is gold, where I have that seeming, not because of relevant expertise, but because I’m lusting for gold.  Lyons holds that, as a matter of fact, most wishful thinking is indirect in this way, which sounds plausible to me.  Seemings Internalism (SI) holds that seemings necessarily provide prima facie justification.  (SI is inclusive of dogmatism and phenomenal conservatism.)  Hence, SI seems committed to allowing indirect wishful thinking to provide prima facie justification.

Says Lyons: “For SI to bite the bullet here would be for it to hold that the epistemology of (typical) wishful thinking perfectly parallels the epistemology of (typical) perception: an agent has an appearance as of p, which prima facie justifies her in believing that p.  I myself would think that this bullet just can’t be bitten, that an epistemology that licences wishful thinking in this way simply can’t be taken seriously” (pg 299 of this paper).  Note that Lyons doesn’t claim that, given SI, typical cases of wishful thinking result in ultima facie justification.  Since it’s plausible that typical wishful thinkers have defeaters for their wishful thinking, the latter claim is dubious.

I agree with Lyons that it is counterintuitive to allow indirect wishful thinking to prima facie justify its content; however, reliabilists suffer from wishful thinking problems that are worse.  I admit it’s bad to allow typical cases of indirect wishful thinking to provide prima facie justification.  It’s worse, in my mind, to allow possible cases of direct wishful thinking to provide ultima facie justification.  Proponents of SI can say that it is impossible for desires, by themselves, to be an acceptable basis for beliefs.  Reliabilists can’t say that.  (I think the same holds for externalism more generally and possibly also for some versions of internalism.)

To see the basic point, start with a simple version of indicator reliabilism.  On such a view, a desire that P will justify a belief that P just in case that desire is a reliable indicator of P.  Epistemic angels can bring about the required reliability by organizing the world to ensure that desires (within a certain domain) regularly come to pass.  The modal profile of P can ensure that any basis of P, including a desire for P, is a trivially reliable indicator for P.  Since Goedel’s first incompleteness theorem is necessarily true, there’s never a case in which you will be led astray by believing the theorem on the basis of desiring or wishing it to be true.  The modal profile of certain contingent truths also can make desires trivially reliable indicators of P, but I won’t go into the details here.  More sophisticated indicator reliabilisms will give more subtle accounts of reliability, but give me an account and I’ll give you a possible case in which the view allows a belief to be ultima facie justified on the basis of a desire.
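The triviality point can be put in numbers.  Below is a toy sketch, not any reliabilist’s official account: reliability is modeled as the fraction of basis-occurrences in which P is true, so if P holds in every situation (as with a necessary truth), any basis, desire included, scores perfectly.

```python
# Toy model of indicator reliability: P(P is true | basis occurs).
# All numbers and the reliability measure are illustrative assumptions.

def indicator_reliability(situations):
    """situations: list of (basis_present, p_true) pairs."""
    with_basis = [p_true for basis, p_true in situations if basis]
    return sum(with_basis) / len(with_basis)

# Contingent P: desiring P indicates nothing special about P's truth.
contingent = [(True, True), (True, False), (False, True), (False, False)]

# Necessary P: P holds in every situation, so *any* basis,
# including a mere desire that P, is a perfect indicator.
necessary = [(True, True), (True, True), (False, True)]

print(indicator_reliability(contingent))  # 0.5
print(indicator_reliability(necessary))   # 1.0
```

The second case is the point about Goedel’s theorem: when P’s modal profile rules out falsity, the conditional reliability of any indicator of P is trivially maximal.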

Depending on what they say about process individuation, process reliabilists may approve of actual cases of direct wishful thinking.  A process can be wrong every time it takes a desire as an input, but still be reliable overall if desires rarely get used as inputs.  For example, I might use process R 100 times.  Its output might be mistaken both times it took a desire as an input, yet the process can count as very reliable because it was correct in all 98 other uses.  Suppose that desires do sometimes causally influence the way things seem (which Lyons accepts) and that this happens only a small number of times relative to the number of perceptual seemings we have (which seems plausible).  When desires do get involved, are those desires unusual inputs to the normal—and reliable—process of perception or do they always indicate that a different process is at work?  I don’t think reliabilists say enough about process individuation for us to say one way or another.  But if desires are unusual inputs to a normal reliable process, then process reliabilists are committed to allowing direct wishful thinking to provide ultima facie justification in the actual world.
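The arithmetic behind that example is worth making explicit.  Here is a minimal sketch, using the hypothetical counts from the example above and an overall-truth-ratio measure of reliability (an illustrative assumption):

```python
# Coarse-grained process individuation: one process R, evaluated by its
# overall truth ratio across all 100 uses (hypothetical numbers).

desire_uses, desire_correct = 2, 0    # wrong every time a desire was the input
other_uses, other_correct = 98, 98    # correct in every other use

overall_reliability = (desire_correct + other_correct) / (desire_uses + other_uses)
print(overall_reliability)  # 0.98: highly reliable overall,
                            # despite going 0-for-2 on desire inputs
```

On this coarse-grained individuation, the two desire-based beliefs inherit the 98% reliability of R as a whole, which is exactly the commitment at issue.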

So here’s a challenge, reliabilists.  Give me a proposed account of process individuation that doesn’t allow any actual wishful thinking to count as justified, and I’ll show you that there is a possible process that (i) takes at least one desire as an input and (ii) produces mostly true beliefs (or satisfies whatever account of reliability you put forward).

(I’ve assumed that the relevant sort of reliability is reliability in the world of the process being evaluated (what Lyons calls “in situ” reliability).  Alternative views hold that what matters is reliability in the actual world.  Such versions of reliabilism probably don’t have wishful thinking problems, unless their account of process individuation forces them to approve of some actual case of wishful thinking.  But they achieve this advantage by relying on the most implausible feature of their view, namely that merely possible processes, ones that don’t actually exist, can’t yield justified beliefs, because they aren’t reliable in the actual world.)


Comments


  1. Hi Chris,

    Suppose a reliabilist says that only cognitive processes can produce knowledge, and wishes have the wrong direction of fit to be an input to a cognitive process, regardless of whether the process in question is reliable. Cognitive processes and their inputs have mind-to-world direction of fit, but wishes and desires have world-to-mind direction of fit.

    This is consistent with wishes influencing the character of inputs that have the right direction of fit. Wishes just can’t be the inputs themselves.

  2. Hi John,

    Let me make sure I understand where you are going with your comment. I say reliabilism, as currently formulated, looks committed to approving of direct wishful thinking. One way to respond is to provide a non-ad hoc explanation for why desires can’t be inputs for cognitive processes. You are attempting to provide such an explanation. Is that what you are up to?

    In any event, I’m not sure I see why a cognitive process can’t take something with the wrong fit as an input. If a cognitive process took a desire as an input, that might make it a bad cognitive process. But I don’t see why taking a bad input should prevent a process from counting as cognitive.

    Suppose you are correct. You still allow that desires could cognitively penetrate experiences, and those experiences could be appropriate inputs. Then the reliabilist will still be committed to approving of indirect wishful thinking, which means it’s not clear that the reliabilist is any better off with respect to wishful thinking than the dogmatist is.

  3. Hi Chris,

    Yes, I was offering a principled explanation on behalf of the reliabilist.

    On the envisioned proposal, cognitive processes take as direct inputs only states with mind-to-world direction of fit. The quality of a cognitive process is determined by reliability. Call any mental process that produces a belief “doxastic.” Non-cognitive processes that are also doxastic processes, such as wishful thinking, are not bad cognitive processes. But they are still epistemically bad, in the sense that they can’t result in knowledge.

    The envisioned proposal is consistent with wishes influencing experience, which then serve as inputs to cognitive processes. In contrast to the Lyons quote, I see no reason why anyone, reliabilist or dogmatist or otherwise, should think that this is a problem in principle. Indeed, I suspect that it would be a serious problem to rule it out as improper, necessarily, in advance.

  4. I’m a proponent of dogmatism, and I still find it a bit counterintuitive to allow indirect wishful thinking to produce prima facie justification. I think it’s interesting and, as a dogmatist, nice that you don’t get the intuition that would be problematic for me.

    Regarding your proposal, I worry that it would put restrictions on reliabilism that reliabilists normally wouldn’t want. Most reliabilists want to leave it open that a process produces justification even if its inputs aren’t mental items or aren’t the sort of mental items that have a direction of fit to the world (e.g. if experiences don’t have content at all). I think your proposal rules out those possibilities and pushes the reliabilist toward an evidentialist point of view. I’m not necessarily saying that’s a bad thing; I’m just reporting what I take to be the implications of the view.

    Also, what is it for a process to have a mind-to-world fit? Is it just for the output to have a mind-to-world fit?

  5. Hi Chris,

    Maybe a detailed example would convince me otherwise, but it seems to me that cases where desires serve as inputs involve different processes than in cases where they don’t. (This relates to your previous post on this topic, too.) Suppose a perceptual process P takes retinal stimulation (and things like eye position and motion, etc.) as inputs and returns object identifications as outputs, and this process is highly reliable. If P* is just like P but also takes desires as inputs, I’d say that P* is a different process, and if on this occasion I’m using P*, and it’s unreliable, then I’m not prima facie justified.

    I don’t know about other reliabilists, but I wouldn’t want to take John’s suggested route, because I think there are cases where desires and other motivational states can actually contribute to justification, as I mention in the paper you’re citing. If being afraid of snakes makes me better at spotting snakes (not just more “powerful”, but more reliable), then that’s a factor our epistemology should take into account, so I wouldn’t want a blanket prohibition on such states figuring into cognitive processes. For this sort of reason, I don’t think that every influence of desires on perception/belief is a problem in principle. I take it that wishful thinking is one particular species of desire-influenced belief.

    Obviously, all this would be more convincing in the presence of detailed theory of process individuation. But even without it, it seems intuitively clear enough that the desire-mediated process is a different process than the other, more reliable one.

  6. Hi Jack,

    I take no stand on whether desires lead to different processes. I don’t really have clear intuitions about the sort of process individuation that would be required for epistemic evaluation–it’s just such a thorny topic. So my goal in this post and the previous one (http://el-prod.baylor.edu/certain_doubts/?p=3325) was just to indicate what I think the options are for the reliabilist and hopefully to indicate some reason for being pessimistic about its prospects for resolving the various issues.

    Based on our personal correspondence (which I may have misinterpreted), I’m a bit surprised by what you say in this comment. It sounds like you are allowing inputs and outputs to do more of the individuating work and that you are after narrower processes than I had previously assumed. I think your remarks in this comment at least gesture toward a very narrow way of individuating processes. As you know better than I do, there are all sorts of things going on in perception besides what’s the direct result of retinal stimulation. If all that stuff leads to a different process (rather than constituting additional inputs to the same process), individuation would be very narrow. Of course, there may be good reasons to treat desires differently than all that other stuff, but I imagine it will be tricky to find a good reason for making such a sharp difference.

    In any event, it’s not clear how what you say in this comment can help save the reliabilist. To whatever extent desires-as-inputs lead to different processes, it will be easy for epistemic angels, as stable features of the environment (across close possible worlds), to guarantee the reliability of those processes. Also, generally speaking, the narrower the processes, the easier it is for the modal profile of propositions to lead to trivial reliability of the relevant process.

    As far as your response to John goes, I’m not sure it’s decisive. Again, it all comes down to process individuation. Much of the good that desires can do has to do with priming, and it is at least open to the reliabilist to deny that the priming stuff gets built into the cognitive process that is evaluated for reliability.

  7. “Much of the good that desires can do has to do with priming, and it is at least open to the reliabilist to deny that the priming stuff gets built into the cognitive process that is evaluated for reliability.”

    I don’t think this option is available to the reliabilist. Fundamentally, the cognitive process attempts to recognize patterns, and JTB is one such pattern. Pattern recognition is skewed by evolution. Two terms, apophenia and patternicity (the tendency to find meaningful patterns in meaningless noise), describe innate cognitive constraints.

    “Why do people see faces in nature, interpret window stains as human figures, hear voices in random sounds generated by electronic devices or find conspiracies in the daily news? A proximate cause is the priming effect, in which our brain and senses are prepared to interpret stimuli according to an expected model.”

    SH: I think this includes wishful thinking and even magical thinking.

    …”evolutionary modeling demonstrates that whenever the cost of believing a false pattern is real is less than the cost of not believing a real pattern, natural selection will favor patternicity.”

    So I think natural selection has a bias in constructing our cognition, and this will impact any theories our cognition devises to recognize patterns -> JTBs -> reliabilism. I’m not sure that either Godelian Incompleteness or Tarski’s Undefinability of Truth captures this biased origin. Godel and Tarski apply to formalized processes, their resistance to being captured, and their inexhaustibility. I think this biased basis precludes a formal system from being devised, so that there is no prior structure for Godel Incompleteness or Tarski Undefinability to apply to or come to grips with.

  8. Hi Stephen,

    Thanks for your comment. As I understand it, you are not objecting to my original post but to my defense of Turri’s comment. I said the reliabilist might sensibly deny that the priming effect gets built into the process that gets evaluated for reliability. Are you disagreeing with that claim?

    If I understand you, and I’m not sure I do, your point is that priming makes a big difference to what one actually believes or perhaps what is perceptually represented to one. I agree, but it doesn’t follow that the reliabilist must say that the priming stuff gets built into the process to be evaluated for reliability. Visual conditions, such as lighting, make a big difference to what one ends up believing, but most reliabilists don’t want to include lighting conditions as a part of the relevant process. I also doubt that some of the information that you appeal to, such as hyper agency detection, is best cashed out in terms of priming.

  9. Hi Chris,

    Yes, I agreed with your interesting original post for the most part, with the exception of how you supported your argument within one paragraph.

    “Epistemic angels can bring about the required reliability by organizing the world to ensure that desires (within a certain domain) regularly come to pass. The modal profile of P can ensure that any basis of P, including a desire for P, is a trivially reliable indicator for P. Since Goedel’s first incompleteness theorem is necessarily true, there’s never a case in which you will be led astray by believing the theorem on the basis of desiring or wishing it to be true.”

    I haven’t encountered this before; I’ll call it a thought experiment. Godel’s Incompleteness Theorems require the conditions of a formal system. I think this Angel would need to be omniscient to reliably and consistently sort and organize beliefs (desires included within beliefs) into the correct domains. IOW, the Angel isn’t a hyper-computational device and so is still subject to a higher-level halting problem, as I understand your description of the Angel’s capabilities. So your example doesn’t fall under the purview of a formal system, and hence Godel Incompleteness doesn’t bear on it: the Angel’s powers include all three of completeness, consistency and soundness (meta-theoretic), and sufficiently rich formal systems can’t have all three, so Godel Incompleteness gets no traction. I think your Angel example transcends the scope of a formal system. Did you encounter your Angel example/usage in the literature?

    Yes, I disagree that “the reliabilist might sensibly deny that the priming effect gets built into the process that gets evaluated for reliability.” Of course this hinges on the word “sensibly”. I’m mostly a Physicalist, which I think supports a view close to Fallibilism.

    A primitive ancestor sees motion in the grasses of the savannah. Is it a predator or the wind? Most of our ancestors played it safe rather than sorry, and their genes dominate the gene pool, even though the results of their particular (lion or wind) decisions were probably not confirmed. Our thinking is optimized more for survival than for rational, objective judgment. But since our survival depends on responses to cause-and-effect events in the world, our cognition still approximates reality well enough to generate highly predictive physical theories. Natural selection does not perfectly optimize the gene pool; it’s probabilistic. The gene pool contains fixed benevolent, neutral and deleterious random mutations. Genes provide the instructions for building our brains, and so constrain how our minds perceive and prioritize, including large numbers of instinctual drives which act as judgment molders, curtailing the range of available choices.

    Our cognition is good enough, but it is not optimized for rational decision making and evaluation, so there is always going to be a bias due to the genes selected to blueprint the building of our brain structures. Fidelity or reliability to a standard will eventually fail due to randomness.
    I think this process undermines a sensible defense by a reliabilist, but I suppose it could be argued that evolutionary constraints are too primitive or too distant in time to be thought of as priming, or as a factor in reaching an action potential which triggers neural network firings.
    I don’t seem to be very capable of writing a perspicuous post.

    “The best reaction to a paradox is to invent a genuinely new and deep idea.” (Ian Hacking)

    Regards, Stephen

  10. “I said the reliabilist might sensibly deny that the priming effect gets built into the process that gets evaluated for reliability. Are you disagreeing with that claim?”

    The following quote appears to disagree with your claim. I think what I wrote in the last post agrees with the quote below, although my post is more specific in its examples and less technical in its terms.
    By an Angel or Demon example, I meant one that was also tied to invoking Godel’s Incompleteness Theorems in that context.

    Internalism and Externalism in Semantics and Epistemology
    By Sanford Goldberg

    f. A Refinement: Suitable Modulational Control
    “An important general point emerges from these considerations: cognitive agents like humans deploy various belief-forming processes in ways that are holistically integrated within the agent’s overall cognitive architecture. Very frequently, such processes are employed not in isolation, but rather under the modulating influence of various other or wider cognitive processes that are coupled to them and are poised to modulate them if and when certain forms of information become available to those wider processes. As we will put it, the given belief-forming process is under the modulational control of these associated processes. Such control can make for a selective application of the process or a selective inhibition, or can otherwise tailor its application to aspects of those local environments about which information is had—and thereby can enhance its reliability as so tailored. In principle, a whole host of different conditioning or modulating relations might be epistemically important. The wider processes might give rise to a narrower process — designing it or otherwise selecting or spawning it. They might trigger the conditioned process in ways that are fitting, or thought to be appropriate.”

    But I think they might generate distortions as well.

    Regards,
    Stephen

