Justification and Explanation

Consider two cases. In both cases, we have two hypotheses that explain the data. For simplicity, think of the data as sense data. In one case, the hypotheses are a common sense hypothesis and a skeptical hypothesis. In the other, they are two rival scientific explanations.

If you’re inclined toward common sense epistemology, you’ll be inclined to think that in the first case, the common sense hypothesis is justified (or made rational) by the data; and if you’ve got any understanding of how science works, you’ll hold that the data don’t confirm either theory in the second case. So, what’s the difference?

Peter Markie and I were talking about this question the other day, and he’s inclined to answer as follows. In the first case, the data prima facie justify the common sense view, and in the second case, the data do not justify either. The reason we find the two cases puzzling is that we are inclined to think there is a connection between justification and explanation, and though there may be, there isn’t enough of a connection to read off the justificatory facts from the explanatory ones.

Is this a good answer?


Comments

  1. Jon,

    In the case of the skeptical hypothesis H1, it looks to me like it is dispreferred not because it explains the data any less well, but because H1 is more top-heavy than the commonsense hypothesis H2. You need considerably more assumptions in H1 than you need in H2 to explain the same phenomena, so (in most cases, and certainly in this one) the probability of hypothesis H1 will be lower than the probability of H2.
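
    To put the point semi-formally (a gloss of my own, on the simplifying assumption that a hypothesis can be factored into the component assumptions it needs): by the chain rule,

    $$P(A_1 \wedge \cdots \wedge A_m) = P(A_1)\,P(A_2 \mid A_1)\cdots P(A_m \mid A_1 \wedge \cdots \wedge A_{m-1}),$$

    and each factor is at most 1, so every further assumption a hypothesis requires can only lower, never raise, its probability. A top-heavy H1, needing more assumptions than H2 to cover the same data, starts at a probabilistic disadvantage.
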
    There is a nice analogy in explanations for why there is so much evil. Some philosophers (PvI comes to mind) offer an explanation that makes unabashed appeal to detailed theological doctrine. He gets a nice explanation, but only on the top-heavy hypothesis that all that doctrinal stuff is true. An explanation that does not make these sorts of assumptions is certainly to be preferred, even if the explanation is less good.

  2. If we are considering two rival scientific hypotheses, doesn’t it depend a lot on how outlandish a hypothesis is, vis-a-vis its competitor, whether we want to say “the evidence doesn’t confirm either one”? Many people claim to remember that aliens abducted them, and the hypothesis that aliens do in fact abduct them is one scientific (I guess) explanation of their memories, compatible with all the evidence. But I wouldn’t say that, between this hypothesis and a rival, more down-to-earth (pun intended) hypothesis, the evidence confirms neither one.

    On this way of looking at things, skeptical hypotheses are outlandish scientific hypotheses, not confirmed because of their outlandishness. After all, why is “I am a brain in a vat” NOT a scientific hypothesis?

  3. I’m trying to understand how the remarks about justification and explanation work in your question. I take it that the scientist (or scientifically-minded epistemologist) would see this as an argument against common sense epistemology, a kind of modus tollens making use of your “If you’re inclined toward common sense epistemology,” then…. After all, shouldn’t we be torn if there are two explanations? And shouldn’t that mean that we don’t have justification to believe one explanation over another?

    But it seems that by definition the common sense epistemologist prefers common sense explanations, whereas the scientist does not. What Jon says about H1 being more top-heavy than H2 can be seen as a way of introducing Occam’s-Razor-like concerns into the discussion, which a scientist might find appealing. And Heath’s point seems to be that even scientists make distinctions between good scientific explanations and bad ones. But this, I think, only opens the door to the question that the common sense epistemologist has to answer here: Why should we prefer common sense to science when it comes to what we are justified in believing?

  4. (1) I don’t see how the skeptical hypothesis is top-heavy. The BIV scenario only requires a brain and the apparatus for stimulating it. The common sense view requires all sorts of things — tables, horses, planets, etc. If anything, Occam’s Razor seems to support BIVH.

    (2) The Real World Hypothesis (RWH) has at least one advantage over BIVH: RWH is consistent with the contents of our observations. In other words, RWH affirms those things that perceptually appear to us to be the case. BIVH, while it may explain why we have the sensory experiences we do, is radically inconsistent with the appearances.

    Are BIVH and RWH equally (or at least comparably) good explanations of the data? That depends on what the data are:

    a. If the data are facts about the character of our experiences, then perhaps yes.
    b. But if the data are facts about the things we observe, then no — the BIVH is terrible.

  5. Mike H,

    You and I are inclined in precisely the same way here. I’m referring to your second point: RWH lines up with the contents of experience (or is, at least, consistent with them). The BIVH does not.

    What’s difficult for Markie, and for other “know-how” accounts of perceptual justification, is that they can’t give this explanation. For them, the story of justification traces to stable dispositions to form beliefs in the face of given inputs. It has nothing to do with evidential relations between contents (or states displaying the content in question). That’s why Markie’s response surprised me: it looks like the kind of explanation you or I can cite, but not one he can. Does this make sense?

  6. One suggestion (not original I think): the skeptical hypothesis isn’t confirmed by experience because it only makes accurate predictions by piggy-backing on the “real world” hypothesis. The skeptical hypothesis has the form “Suppose that X, Y and Z were the case so that your experiences are exactly as if they were caused by the real world (in the way you ordinarily take them to be).” But there is no way of knowing what actual predictions follow without first knowing what the real world hypothesis predicts. Intuitively, hypothesis B can’t compete with hypothesis A by saying “whatever A predicts, that’s my prediction too.”

  7. So, roughly speaking, the asymmetry between the two cases would be explained by saying that a hypothesis is not justified by data where it has an “equally good competitor.” But a competing hypothesis is not “equally good” merely by incorporating the predictions of its rival.
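
    Put in rough Bayesian terms (a gloss I’d hedge, since it treats the data as evidence E with well-defined likelihoods): if the skeptical hypothesis SK is built to issue exactly the predictions of the real-world hypothesis RW, then $P(E \mid SK) = P(E \mid RW)$ for any course of experience E, so

    $$\frac{P(SK \mid E)}{P(RW \mid E)} = \frac{P(E \mid SK)}{P(E \mid RW)} \cdot \frac{P(SK)}{P(RW)} = \frac{P(SK)}{P(RW)}.$$

    The likelihood ratio is 1, so the evidence does no differential work at all; whatever separates the two hypotheses has to come from the priors.
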

  8. Yeah, that reminds me, Jon, that I’ve been wondering what the difference is between your “coherentism” and my “foundationalism”. I take it that foundationalists (myself included) generally accept that coherence can be epistemically valuable, as long as you have the appropriate foundations to start from. (No one is going to say, e.g., that a theory that gives a coherent, unified, etc. explanation of many observations isn’t better than one that is incoherent.) So the dispute is not over that, but over whether you need foundations.

    But since you allow sensory experiences among the items in the “coherent system”, and since (I assume) sensory experiences don’t respond to beliefs, it seems to me that those act like foundations.

    My point isn’t to dispute terminology, though; rather, I’m wondering if there really is a difference between us.

  9. I’m not sure there’s much disagreement either. I think you may have an easier time getting along without propositional content for experience than I have, though you couldn’t then use the reply to the problem above. This question about content has been a focal one for me recently, and I’m less sure that there’s enough propositional content for the coherentist story to work…

  10. Michael H,

    Since we’re on the topic of mental content, I don’t think we have to say that sensory experience doesn’t respond to beliefs, or at least we don’t have to say that it doesn’t respond to knowledge. A main point of contention between those who like nonconceptual content and those who don’t is whether what we know changes what we perceive (not just how we interpret what we perceive).

    This is going to be problematic for the foundationalist (right?) but the coherentist might not mind. Perhaps this would drive your and Jon’s positions apart?

  11. I do agree that our perceptual evidence of the world around us includes that we see chairs, etc., and not just that we have chair-seemings, etc. But there’s still an interesting question, I think, of whether, restricting ourselves just to the data of seemings, the BIVH really is on an explanatory par with RWH. And I think that Stephen’s point is basically right, in this regard, and reveals what’s impoverished about the sort of “head-count” interpretation of Ockham’s Razor that Mike tried to use in comment #5. If the ordinary theory of the world is T, then the BIV theory has to have the form “There is a brain getting systematic impressions such that it will seem to the brain that (T)”. The ordinary theory will thus be simpler, in the sense in which simplicity is to be used as a guide to theory-selection.

    Indeed, there are other ways in which the former will be a preferable explanation to the latter, if we use standard metrics of goodness of explanation. The former is more conservative, for starters. Also, RWH seems to me to leave fewer unanswered questions than BIVH: both BIVH and RWH share the same physics, biology, etc., so the general scientific account of how the universe works bottoms out in the same place for both; but BIVH leaves open such rather pressing questions as: why have I been placed into this vat? Who is maintaining the vat, and for what (potentially nefarious) ends? And so on.

  12. Aaron —

    So, there are a couple of ways that experiences differ from (non-foundational) beliefs that make them look like foundations:

    1) They are non-justified justifiers. The notion of justification, I assume, doesn’t apply to experiences. I’m neither justified nor unjustified in seeing a cat.

    2) I said that they ‘don’t respond to’ other beliefs. But this isn’t really to the point, because foundational beliefs aren’t normally defined as ones that fail to respond to other beliefs. Closer to the mark would be: they aren’t based on beliefs or other experiences. Obviously they aren’t inferred from anything, since the notion of inference only applies to beliefs. But I am also denying that they have an analogous but more general relation to other representations.

    Jon —

    I’m in favor of propositional content for experiences. I think the notion can be fairly modest — e.g., I am not saying that you have to have concepts for your experiences to have the content they do. Rather, the point is just that, when you have a sensory experience, something seems to be the case. For example, something of a certain color seems to be in front of you. That thing that seems to be the case can be formulated as a proposition; I think that’s all we need.

  13. Jon W. —

    a) I’m not clear on what criterion of simplicity you’re using. I’m actually very interested in simplicity, what it is, and why it is thought to be valuable. The BIVH is perhaps syntactically more complex — the statement of it is longer, as you described it. But I’m not sure why that is the relevant metric of simplicity, rather than, say, ontological simplicity.

    b) RWH is more conservative, but I don’t think that can count for much in a debate with the skeptic. I think that when we appeal to conservativeness as a virtue, we are assuming that our belief system as a whole is justified, likely to be mostly true, and the like. I think this begs the question if you’re debating a skeptic.

    c) BIVH doesn’t require all the posits that the scientific theory of RWH has. BIVH only requires those claims that are necessary to explaining the experiences of a brain. So, e.g., BIVH could be neutral on such things as the Big Bang, evolution, the existence of billions of stars, and sub-atomic physics (but not some basic brain science). Because of this, one might think BIVH has a higher probability of truth.

    d) It’s true that BIVH leaves unanswered questions. But it’s not clear to me how much evidential weight that has. If the questions were ones to which it seemed that there couldn’t be an answer, then I’d agree that that was a serious problem. Also, RWH leaves a lot of unanswered questions too.

  14. Mike,

    (a) I think one would be hard-pressed to find cases in the history of science in which a mere head-count of objects played any role in theory selection. _Maybe_ limiting the kinds of objects has played such a role on occasion, but BIVH and RWH are both on an ontological par there. Most typically, the relevant scientific notions of simplicity involve both pragmatic and aesthetic dimensions of simplicity, and in terms of both of those, T is simpler than “There is a brain getting systematic impressions such that it will seem to the brain that (T).”

    (b) This depends, as in all skeptic-involving situations, on one’s construal of the dialectic. I took the situation to be one where the skeptic is attempting to make a challenge to us, by claiming that BIVH is just as good an explanation as RWH. So we’re not begging any questions in pointing out that, by the norms that we use to evaluate explanations (including conservatism), it’s just not a very good explanation.

    (c) Another dimension of evaluation of explanations is scope — how much of the universe is being explained by the proposed hypothesis? If what you say here is correct, then it shows a huge mark against BIVH as compared to RWH. The latter is looking to explain a large chunk of the real world; the former, if correct, explains only a tiny piece.

    (d) Unanswered questions are part of the score, when evaluating explanations. It’s not that BIVH has some and RWH has none, but rather BIVH has all the ones that RWH does (transposed into explanations about the structure of the brain-vat world), plus a number of ones that RWH does not.

    I think that (as Stephen indicated earlier), one has to be careful to recognize the distinction between merely being consistent with the data, or in some thin sense accounting for them by (non-explanatorily) predicting them (in the sense that BIVH makes the same predictions as RWH); and actually explaining them. Epistemologists worrying about skepticism tend to focus mostly on the former sort of case, because the skeptic is sometimes taken to argue, “Look, you can’t rule out BIVH!” But the data may fail to rule out a hypothesis, while still rendering that hypothesis a pretty bad explanation by comparison with another.

  15. Jon W.,

    I’m trying to see how all of the talk of explanatory virtues bears on the probability of the BIVH or RWH being true.

    a) Simplicity: I don’t see why pragmatic or aesthetic considerations are relevant to the likelihood of a hypothesis being true. (Of course, I don’t understand very well how simplicity is an epistemic virtue in general.) It seems that you’d have to presuppose something like that our aesthetic sense tracks the truth.

    About head-counting, here’s an example. When I was in college, an anthropology professor of mine discussed how the Americas came to be settled by many different Indian tribes. He said that the simplest explanation was that there was a single crossing of the Bering Strait, followed by diffusion and differentiation of the descendants of the people who crossed — as opposed to multiple strait-crossings.

    Or: imagine that it had been possible to account for anomalies in Uranus’ orbit by positing either a single additional planet, or 17 additional planets. Which would have been preferred?

    b) Conservativeness: I’m not sure I follow you here. The sort of skeptic we’re dealing with is denying that our belief system as a whole is justified or likely to be true, right? If I’m right that the conservativeness criterion rests on the assumption that our belief system as a whole is justified, do you not think we’d be begging the question to appeal to conservativeness? Would you agree that if we directly appealed to the assumption that our belief system is justified, then we’d be begging the question? Do you think appeals to conservativeness do not presuppose that our belief system is justified?

    c) Comprehensiveness: I think comprehensiveness is a virtue in some sense, but I’m not sure that “how much of the universe is being explained?” is the relevant question, as opposed to “how much of the data” or “how much of what we know is being explained?” The skeptic will say that in answer to the latter questions, the BIVH explains all the data, and everything we actually know to be the case.

    d) Unanswered questions: I don’t see how unanswered questions bear on the probability of a hypothesis being true. Unlike the case of simplicity, I not only fail to understand how it bears on probability of truth, I also don’t have any intuition that it bears on it at all.

    Example: When Robinson Crusoe sees footprints on the beach, he concludes that there is another person on the island. This hypothesis generates a lot of unanswered questions: what is this other person’s name? How old is he? What is his favorite color? And so on. But I don’t think that counts at all against the theory. I don’t just mean that the consideration is ‘outweighed’ by some countervailing virtues. It seems to me that listing the number of questions that Crusoe doesn’t know the answer to about this person is just irrelevant.

    From a broadly Bayesian standpoint, it seems that unanswered questions are relevant only to the extent that (a) all the possible answers to these questions are themselves improbable, or (b) it was antecedently probable that we would have had answers to those questions.
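
    To spell out those two clauses in a toy model (mine, and it assumes the open question Q has exhaustive, mutually exclusive possible answers $a_1, \ldots, a_n$): first, since

    $$P(H) = \sum_i P(H \wedge a_i),$$

    if every way of completing H with an answer is improbable, then H is improbable too, which is clause (a). Second, if it was antecedently probable, given H, that we would have found an answer, then our not having one is itself evidence against H, which is clause (b). Absent (a) or (b), merely listing open questions does nothing to the probabilities.
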

  16. a) I strongly doubt that the anthropological inference in question was a matter of ontological head-counting (of events); rather, I suspect that the single-crossing hypothesis better fit the data. Multiple crossings would produce a different pattern of settlement, linguistic diversity, etc.

    One might just as well wonder, btw, why ontological head-counting is a guide to probability, either. I think one can get the illusion that this is so, if one underrepresents the competitor hypotheses: something like “at least one planet” vs. “at least 17 planets”. Since the latter entails the former, but not vice-versa, one might reasonably think that the former is more probable. But the actual hypotheses are way more specific: “one planet of such and such mass, traveling at this velocity in that orbit” vs. 17 similarly-specific propositions. Neither entails the other — indeed, they won’t even be mutually consistent — and no easy comparison of probability will be possible.
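
    Semi-formally (just as a gloss on the entailment point): where $H_{17} \models H_1$, we get $P(H_{17}) \leq P(H_1)$ for free. But for the fully specific rivals ($S_1$, one planet with a particular mass and orbit, versus $S_{17}$, seventeen similarly specified planets), neither entails the other, and no such inequality falls out.
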

    Now, in actuality, surely astronomers would prefer the one-planet hypothesis to the 17. But they have many more specific reasons to do so: a general belief that planets are fairly rare; the specific belief that planets are observable, and that people have been watching the sky closely for such things for a long time. And, of course, 17 planets is likely to be a terribly _inelegant_ solution! Moreover, if there were really a competitor hypothesis like that, the scientific community at large would probably hold their bets ’til some observations could be made to tilt the scales empirically.

    Also, if I’m right about unanswered questions (below), then the one-planet hypothesis will have 1/17 as many unanswered questions as its rival.

  17. Dang it! The computer ate most of my response! I’ll reconstruct piecemeal.

    b) For starters, as I said before, I take us merely to be showing a would-be skeptical challenge to be unsuccessful; we are thus not required to sink to the skeptic’s level, to respond to him. Rather, it is incumbent upon him to show us that his hypothesis is equally explanatory by _our_ lights of what makes for a good explanation. And conservativeness is part of that. So, no legitimate questions are being begged here.
    (Anyway, I took the initial post to really be about understanding explanation, not about refuting skepticism.)

    c) You might be right here. I can’t think of any piece of science that has needed to make the distinction in that way, so I’ll just have to treat it as an open question.

  18. d) (and really all of this exchange): There are at least two fundamentally different ways of approaching the whole issue of explanation in epistemology. One way is to consider, one by one, the various putative criteria that might make for goodness of explanation; to evaluate each for how probability-promoting it seems to be (primarily on intuitive grounds); and from there approve only the probability-promoting ones as _really_ components of goodness of explanation. This approach has the merit that, whatever comes out of it, we have good reason to take there to be a tight connection between p’s explanatory quality and p’s likelihood of being true.

    The problem with that approach, though, is that its results seem miles away from the set of explanation-providing and explanation-evaluating practices that we have paramount reason to believe does a good job of getting a hold of the probable, i.e., science. The approach I prefer (it would not be amiss, in this aspect of it, to call it Quinean) takes the scientific practices to be overwhelmingly well-formed, though not at all beyond revision. It is a contingent fact, learned over generations, that the universe is such that this is a good way to form, evaluate, and revise our theories of it. And any epistemology of explanation should start (but need not end) with that fact.

    So, I take it that part of that fact is that goodness of explanation includes such factors as: simplicity, measured often in aesthetic and pragmatic terms; conservativeness; comprehensiveness (though, as I noted earlier, I’m not sure how best to measure that); minimizing unanswered questions; and such as-yet-undiscussed factors as testability (which I take it will not help distinguish BIVH and RWH).

    Finally, I take it that these criteria of goodness of explanation are meant primarily for comparative judgments — for inference to the _best_ explanation. So that’s why the Crusoe case isn’t a problem — his explanation is still the best one available, given his data.

  19. Jonathan, I like very much your Quineanism about explanation here. Even given your holistic defense, there’s still the following issue. List all the explanatory virtues, and take them one by one. Notice that none, or nearly none, are truth conducive. How is it that, when combined, truth-conduciveness magically appears? I think the best approach here is to duck the question! 🙂 That is, the lesson is that the truth connection is a bit more remote than is required by this dialectic. If I’m right, then the fact that simplicity, conservativeness, et al., are not individually truth conducive is less than ideal, but does not imply that they are not justification makers.

  20. Thanks for your replies, Jonathan. More follow-up:

    (a) I don’t remember the details of the example, but the anthropology professor was comparing theories that were actually put forward by people in the field, and he specifically made a point that his view was more consistent with Occam’s Razor, because it required fewer (only 1) crossings of the Bering Strait. So I don’t know whether the theories fit the data equally well (probably not!), but it was clear in any event that simplicity was being appealed to as a significant reason in itself to prefer one theory.

    I’m not very up on the history of science. Here are some examples of head-counting from philosophy, though: Nominalists claim that their view is better because it requires one fewer type of thing in the world. Physicalists similarly sometimes claim that their view is better because it requires one kind of thing instead of two. What do you think of these cases?

    (b) Let’s take a different example. My favorite scientific dispute: say two physicists are arguing over whether the Copenhagen interpretation of quantum mechanics or Bohm’s interpretation is better.

    A: “The CI is better because it is more conservative.”
    B: “Why is that?”
    A: “Because most physicists believe the CI, so it is more consistent with our present beliefs.”

    If it is true that most physicists believe the CI, do you think A has made a good argument?

    I think we might want to distinguish two kinds of appeal to conservativeness. In the first kind (call it “background conservativeness”), one argues that theory X is better than theory Y because X is more consistent with background knowledge, or established theories about other matters, that is not in dispute between the proponents of X and the proponents of Y. For instance, an anthropological theory should be consistent with theories in geology, genetics, etc., to the extent that they bear on one another.

    In the second kind of appeal to conservativeness (call it “question-begging conservativeness”), one appeals to the fact that theory X itself is more widely believed than Y, to show that X is better than Y.

    As you might gather, I like the first kind of conservativeness better than the second.

  21. Continuation:

    (d) You might be proposing an argument something like this:

    1. Science has done a good job of getting to the truth.
    2. The best explanation for this is that the criteria for explanations actually used by scientists are reliable (truth-conducive, etc.).
    3. Therefore, probably, the criteria used by scientists are reliable. (From 1, 2)

    We could then go on from there to defend specific criteria:

    4. Scientists use the criterion of conservativeness.
    5. Therefore, probably, the criterion of conservativeness is reliable.

    One question about this kind of inference is what criteria of explanation are used to establish (2). If it is the same as the criteria that (3) talks about, then there’s an epistemic circularity issue.

    Another question is how we know that (1) is true. If, for example, we know it on the basis of the actual arguments that scientists have for thinking their theories to be true, then there’s a second circularity worry, since those arguments wouldn’t enable us to know (1) unless the conclusion (3) were true.

    I don’t have a well-developed view about epistemic circularity. But here’s another issue. (1) and (2) might lead (only) to the conclusion that the set of criteria used are overall reliable; yet it might be that some particular criterion in that set isn’t reliable, doesn’t contribute to the overall reliability of the set, and should be dropped. E.g., maybe the success of science is mostly just due to its use of the criteria of empirical adequacy and simplicity, and the other criteria are superfluous.

  22. Mike,

    (a) With regard to the North American anthropology case, I’m certainly willing to look at the particulars. But I’d think that I can agree with your professor that the simplicity criterion may be doing a lot (though probably not all) of the work here. It’s just that the right way to understand the simplicity criterion is not in terms of head-counting. The one-crossing theory will probably be more elegant than a multi-crossing theory; it also will leave fewer open questions (I’m guessing that the initial crossing date could be pinned down moderately well, whereas any later ones would be much less clear).

    Unfortunately, philosophical practices don’t have anything even remotely like the track-record of scientific practices, so the quirky ontological fetishes of some metaphysicians just don’t have standing here.

    (b) Nope, nothing wrong with A’s argument — so long as the first statement is understood as “CI is better to the extent that it is more conservative”. This is a holistic affair, and conservatism’s just one criterion among several, which may well be overridden by the others (as of course happens with many of our most celebrated scientific achievements, like natural selection or general relativity.) What can make it look fishy is that it can sound like maybe A wants that appeal to conservativeness to settle the matter, and that indeed would be a lousy argument. But generally, a theory’s being the almost-consensus view is a pro tanto good-making feature of the theory.

    (d) I think the epistemic circularity worries can only come into play if there is an active challenge either to the vast bulk of what we take to be our accumulated knowledge, or to the scientific methods themselves. On my understanding of the dialectic, that won’t be the case.

    But I agree with you completely about the last point — that’s why I was at pains at several points to say that these criteria are not beyond revision, and that our epistemology of explanation should start, but need not end, with the fact of the bounty of scientific knowledge. Any revisions or challenges, however, have to be derived from within the framework that is taking the science fully seriously. E.g., one might show that in most of the cases where scientists have appealed to criterion X, it has been in defense of a theory that would later lose out to a rival that had a lower score on X. Indeed, suppose further that you could then propose some alternative criterion Y, that would capture the minority of cases in which X seemed to work, but would have made better selections in the cases that X botched. Then you’d have an excellent case for revising our practices to dump X and adopt Y. These norms do get debated — cf. the current literature on Morgan’s Canon — but the criteria can only usefully be tested within and against science itself, and not from the armchair.

  23. Jon W,

    I wonder what you think about Foley’s objection to epistemic conservatism. Here’s an adaptation of it to the scientific case:

    Suppose there are two theories, T1 and T2, which are the only contenders for being the best explanation in some domain. Suppose that, when the scientists are trying to decide between the two, T1 is just shy of being preferred on the basis of the normal criteria for goodness of explanation. The scientists irrationally accept T1 anyway. But as soon as they form this irrational belief (or as soon as a sufficient number of them do), it becomes rational, because now, in addition to its other virtues, T1 also has the virtue of conservativeness, so that pushes it over the threshold for justified belief.

    Now you might think the scientists have a defeater in my scenario, because they have evidence that they formed their belief irrationally. So imagine that 20 years later, the scientific community has pretty much forgotten how T1 came to be adopted. As soon as they forget (or as soon as a sufficient percentage of the scientists no longer know how T1 came to be the consensus view), T1 becomes the justified theory.
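
    A toy threshold model (mine, not Foley’s; the additive form and the weights are just for illustration) makes the structure vivid. Let justification be a weighted score over the explanatory virtues:

    $$J(T) = \sum_i w_i\, v_i(T), \qquad T \text{ justified iff } J(T) \geq \theta.$$

    Before adoption, $v_{\mathrm{cons}}(T_1) = 0$ and $J(T_1) = \theta - \epsilon$ (just shy). Once enough scientists accept T1, $v_{\mathrm{cons}}(T_1)$ flips to 1, and whenever $w_{\mathrm{cons}} \geq \epsilon$, T1 clears the threshold, though nothing about the world or the evidence has changed.
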

    As an aside, I think something like this in fact happened in the case of the Copenhagen Interpretation of QM, except that CI wasn’t just slightly shy of being the best explanation; it was far short of being the best. (And of course I don’t think CI ever became justified, but I think it is viewed as such because conservativeness is given more weight than intelligibility.)

  24. Mike, I’m not sure I understand what the force of the case is meant to be here. Yes, sometimes conservatism can pull one away from the truth. So can the other criteria. Simplicity is objectively non-truth-promoting when the universe fails to be simple; conservatism is objectively non-truth-promoting when we’ve collectively gotten onto a wrong track. And so on. None of this undermines the basic claim here, which is that the criteria all together give us a good (but very fallible, of course) guide for theory selection, and that we’re justified in believing in accord with inference to the best explanation.

    Maybe if you filled in your take on CI a bit more I could see what you have in mind, but right now I’m just not seeing the problem.

  25. The objection isn’t that conservatism can fail to lead to the truth. For all I’ve said (or for all Foley said in “Epistemic Conservatism”), the belief could be true.

    Foley’s objection was to epistemic conservatism on the individual level; his objection was that the doctrine allows cases in which a person is by hypothesis unjustified in adopting a belief, yet the belief immediately becomes justified as soon as he adopts it. I assume that most people would find this counterintuitive.

    I’m just adapting this to a social case — i.e., a community irrationally adopts some belief, but as soon as they adopt it, it’s rational. None of the other criteria of justification have this consequence.

  26. A few responses: first, even if we were just sitting around with no particular reasons to suspect that we should adopt conservatism, and so just mooting about our intuitions about it, I don’t see where this case would show very much. I agree that we don’t want it to be _generally_ the case that people could get their beliefs justified so easily; and that we don’t want our norms to be easily _gameable_ by someone who’s looking for an irresponsible shortcut to justification. But the case doesn’t show either of those things to be real threats. Rather, it shows that _under heavily convoluted & presumably rare circumstances_, a norm of conservatism might have this odd consequence. And it seems to me the right response, if we’re just intuition-mooting, would be to say: well, ok, that’s one very small mark against it; but how does it do on more central sorts of cases?

    But, anyway, we’re not just intuition-mooting here. Rather, we already have excellent evidence for this norm — namely, its deployment in the suite of extraordinarily successful norms that go into the scientific evaluation of hypotheses. Your hypothetical would have at best a small weight in the context of armchair mootage — that weight drops effectively to null in the actual context we’re considering.

    This also shows why it’s a mistake to suppose that conservatism rests on the antecedent rationality of the scientists. Which is a good thing, since there’s plenty of reason to think that the judgments of individual scientists, and of the scientific community at large, are influenced by all sorts of non-rational forces. Nonetheless, the norm of conservatism is one that seems to work as part of our package, and consideration of hypothetical cases isn’t going to show otherwise.

  27. Hm, I think we have very different methodological assumptions. I think Foley’s argument is compelling in the intuition-mooting forum, simply because the intuition he relies on is strong. When I first heard it, it convinced me that epistemic conservatism was false. I don’t think the frequency with which cases of the kind occur matters — indeed, Foley’s case may never have actually been realized; still, conservatism implies that if such a case were to occur, the belief in question would be justified, and that counterfactual strikes me as clearly false.

    Also, even though we have strong evidence that science is generally good at finding things out, I don’t think we have strong empirical evidence for the conservatism norm. This is for two reasons: (1) because the general reliability of science is compatible with some particular norm of scientific practice being dispensable (as we’ve both agreed), and (2) because, anyway, the norm of conservatism is an interpretation of scientific practice that I think is open to question. Other interpretations are possible; e.g., perhaps the norm is really “Prefer theories that are consistent with existing justified theories” or “Prefer theories that are consistent with existing theories that are not in question.” And I think more interpretations may exist.
