Seeing Connections in Your Reasons

Suppose you engage in a long chain of reasoning: you believe that 1) p, 2) if p then q, 3) therefore q, 4) if q then r, 5) therefore r, 6) if r then s, and so on. You justifiably believe each of these premises and, on their basis, come to justifiably believe their conclusion: z.

The next day, at 8am, while you are eating breakfast, you gain a reason to give up belief in p. So, you give up belief in p.

At 10am, while you are eating second breakfast, it dawns on you that by giving up belief that p, you have lost a crucial reason for believing z. So, you withhold belief in z. We can stipulate that an ordinary human would not have noticed the connection between p and z very easily; it is natural that you would only see the connection a couple of hours later.

It is clear to me that you are not justified in believing z at 10am. It is not so clear to me what the justificatory status of your belief in z is between 8am and 10am. I’m pulled both ways, which inclines me to think that a disambiguation is needed. Perhaps you are justified (in the blamelessness sense of ‘justified’) in believing z, but not justified (in the evidential sense of ‘justified’) in believing z. However, I’m hesitant to claim that a disambiguation is needed.

Any thoughts?

(This post is inspired by a case given by Kevin McCain in his forthcoming book.)



Seeing Connections in Your Reasons — 13 Comments

  1. Most beliefs in memory won’t be based on anything. It is typical to have no idea what the original basis of a belief was. The lack of a basis while in memory should not count against memorial beliefs’ justification, on pain of massive skepticism about memorial beliefs. As a first pass, in the case you describe, you will be justified at 8am as long as the belief is stored, lacks a basis, and isn’t incoherent with your other beliefs. The incoherence condition is meant to rule out cases in which you have a stored belief that you can rationally believe z only if you believe p.

    • Chris,
      Thanks for the note.

      “Most beliefs in memory won’t be based on anything.”

      That’s unclear to me. In my above case, before 8am (while I sleep), my belief that p, my belief that p->q, and my belief that q are all unconscious and stored in memory, but it does seem that my belief that q is based on my beliefs that p and that p->q. Does that not seem right to you? I’d say that this structure is present for all of the beliefs up through z.

      I take it that this is the standard foundationalist way of looking at things. I’ll admit that I’m with you in thinking that this way of looking at things is wrong.

      And yes, I’ll specify that I don’t have the explicit, stored belief that I can rationally believe z only if I believe p.

        • If you specify that the basing relations obtain in memory, then I do think the person is unjustified. I think forgetting one’s basis and changing one’s mind about the basis have different epistemic effects.

          Yes, I think memory beliefs can be justified even if they currently lack a basis. Forgetting the basis doesn’t entail losing the justification. I’m attracted to a certain kind of preservationism. The crude idea is that until you get a new basis (memorial seeming), the degree of pro tanto justification you have for a dispositional belief is fixed by what you did consciously, e.g., base the belief on good/bad evidence. It matters not that you no longer have the basis. Once you have a memorial seeming (or base the belief on something else), the strength of the seeming “resets” the degree of pro tanto justification you have for the belief.

          • “If you specify that the basing relations obtain in memory, then I do think the person is unjustified. I think forgetting one’s basis and changing one’s mind about the basis have different epistemic effects.”

            Chris,
            What do you think about a case in which a demon directly tampers with your mind so that your unconscious belief that p is just deleted? In this case, you are not aware that your unconscious belief that p was deleted.

          • Andrew, I wasn’t able to reply to you, so I’m replying to myself. If we are assuming that the basing chains are present in memory, the demon’s deleting the premise beliefs will result in the conclusion belief being unjustified.

            I doubt there are many actual cases like this, however. I think almost all beliefs are non-inferentially formed, and few beliefs, if any, are based on such a long chain of reasoning.

  2. I think this is where the distinction between justification and blamelessness is helpful. I think the intuition that something is wrong between 8am and 10am is due to a loss of justification, but the hesitancy to assert this is due to the agent’s being epistemically blameless.

  3. Your post highlights why I think the name of this blog, Certain Doubts, is so clever. It can be read as dealing with _particular_ doubts. But I believe the most insightful way to interpret the name is this: outside of simple logical structures like plane geometry, where all the background influences (axioms) are specified, any complex conclusion relying on background assumptions about the nature of reality is almost certainly wrong. It is nearly certain that the completeness of our knowledge should be doubted. We saw lines of thought diverge when people attempted to project Euclid’s axioms onto the nature of reality.

    Connecting this to your post — Suppose we have three assumptions to base our conclusion upon, each with a 50% chance of being true. The probability of that conclusion being true is 1/2 * 1/2 * 1/2 = only a 1/8 chance. Closer to your example, suppose we have used eight assumptions to arrive at our conclusion, each assigned a high probability, .9, of being true. Then the chance of our conclusion being true is .9 * .9 * .9 * .9 * .9 * .9 * .9 * .9 = ~43%. Even with each of the eight assumptions assigned 90% certainty, the conclusion has less than an even chance of being true! Suppose we realize later that one of the assumptions doesn’t have a 90% chance of being true, but only 50%?! This reduces the truth likelihood of our conclusion to ~24%.

    In general, conclusions that rely on long chains of reasoning involving many assumptions, even with a hoped-for .999 chance of each being true, become doubtful as the chain grows. I’ve read a close discussion of this under Critical Thinking. I think the number one character flaw (probably genetic) leading bright people to err in their reasoning is that they don’t realize how much evidence needs to be collected from reality before a notion evolves into a tenable belief. Smart people usually gather too little evidence on a question before they decide they have enough information to arrive at an accurate assessment. They inaccurately transfer analysis from their area of expertise into areas which may be similar to their experience but not sufficiently congruent; the analogy isn’t apt enough. The devil is in the details. Relating to your post, there are revisions to one’s belief or conclusion that are likely to occur two hours later, four hours later, etc., as one becomes more and more aware of subtle influences trickling in from all relevant real factors. Unless one lacks humility. It’s similar to a Turing Test, where a judge who questions a source for 10 hours is more likely to decide correctly whether the responses are from a human or a program than a judge whose questioning session lasts only half an hour. When a new supercomputer is built, it’s tested for accuracy against another supercomputer; usually the test is computing millions of digits of Pi and checking for a discrepancy. Programmers test their programs.
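    The arithmetic in the comment above can be checked with a few lines of code. This is a minimal sketch, assuming the assumptions are probabilistically independent and that the conclusion is true only if every assumption holds; the function name `chain_probability` is my own, not anything from the post.

    ```python
    # Probability that a conclusion is true when it rests on several
    # independent assumptions, each with its own probability of being true.
    # Under these assumptions, the conclusion's probability is simply the
    # product of the assumptions' probabilities.
    from math import prod

    def chain_probability(assumption_probs):
        """Probability that all assumptions hold, given independence."""
        return prod(assumption_probs)

    # Three assumptions, each 50% likely: 1/2 * 1/2 * 1/2 = 1/8
    print(chain_probability([0.5] * 3))                    # 0.125

    # Eight assumptions, each 90% likely: 0.9 ** 8, about 43%
    print(round(chain_probability([0.9] * 8), 2))          # 0.43

    # Downgrade one of the eight from 0.9 to 0.5: about 24%
    print(round(chain_probability([0.9] * 7 + [0.5]), 2))  # 0.24
    ```

    The same calculation shows how slowly confidence decays with very reliable steps: even at .999 per assumption, a chain of a thousand steps leaves only about a 37% chance that the conclusion is true.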
