Reducing “Epistemic Blame”

I am very interested in what is often called the “ethics of belief.”  I am blamed for believing things I shouldn’t and in turn blame others for believing things they shouldn’t.  I think that’s all fine.  I do not, however, think of this as distinctively epistemic.  It’s hard to say what is distinctive of epistemology, but it seems that issues of practical rationality or failure of a duty of care are clear cases of things that are not distinctively epistemic.  I suspect that there is no non-evidential notion of distinctively epistemic blame.  Yet responsibilists often explicitly claim there is (Montmarquet, Axtell, Baehr).  Below the fold is one way I think one can see that there is no non-evidential distinctively epistemic notion of blameworthiness.

I want to make the dilemma I posed in “Reducing Responsibility” more explicit here.  I think it is most usefully put in the form of a recipe or flowchart for reducing an alleged item of epistemic irresponsibility to something else.

  1. Take the case of a particular instance of so-called epistemic irresponsibility–say, S taking doxastic attitude A toward p in circumstance C–and ask the following questions.
  2. Does A fit S’s evidence wrt p?
  3. If not, then there is a simple explanation–lack of evidential fit.  Still, the further questions would need to be asked, since the alternative explanations are not mutually exclusive.
  4. In C, is there much at stake in A’s being inaccurate?
  5. If the answer is No, then there appears to be no basis for the charge of irresponsibility.
  6. If the answer is Yes, then ask the following question.
  7. Are the stakes those of S?
  8. If Yes, then not inquiring further into whether p was a failing of practical rationality.
  9. If No, then ask the following question.
  10. Are the stakes those of some individual to whom S owes a relevant duty of care?
  11. If Yes, then not inquiring further into whether p is a failure of a moral duty.
  12. If No, then there is nothing at stake and so no basis for a charge of irresponsibility.
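Since the recipe above is in effect a small decision tree, it can be rendered as a procedure. Here is a minimal sketch (the function name, boolean inputs, and verdict labels are my own illustration, not anything from the post; note that, per step 3, evidential misfit does not short-circuit the later questions):

```python
from enum import Enum

class Verdict(Enum):
    EVIDENTIAL_MISFIT = "lack of evidential fit"
    PRACTICAL_FAILING = "failure of practical rationality"
    MORAL_FAILING = "failure of a moral duty of care"
    NO_CHARGE = "no basis for a charge of irresponsibility"

def classify(fits_evidence, high_stakes, stakes_are_subjects, owes_duty_of_care):
    """Walk the recipe and return every diagnosis that applies to the case."""
    verdicts = []
    if not fits_evidence:              # steps 2-3: misfit, but keep asking
        verdicts.append(Verdict.EVIDENTIAL_MISFIT)
    if not high_stakes:                # steps 4-5: nothing at stake
        verdicts.append(Verdict.NO_CHARGE)
    elif stakes_are_subjects:          # steps 7-8: S's own stakes
        verdicts.append(Verdict.PRACTICAL_FAILING)
    elif owes_duty_of_care:            # steps 10-11: someone else's stakes
        verdicts.append(Verdict.MORAL_FAILING)
    else:                              # step 12: residual case
        verdicts.append(Verdict.NO_CHARGE)
    return verdicts
```

For instance, a low-stakes case of sloppy reasoning where the attitude nonetheless fits the evidence comes out as “no basis for a charge”, which is exactly the step (5) that commenters contest.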

In this way, it seems to me, any alleged case of epistemic irresponsibility can be seen to be either a standard case of lack of evidential fit, some non-epistemic shortcoming, or no problem at all.

My issue is that I haven’t been able to find or think of a clear case of so-called epistemic irresponsibility that isn’t easily explainable as resolving into one or more of these other properties. It would need to be (or at least it would be better to be) a case where S had a doxastic attitude toward a proposition which did fit the evidence.  Otherwise, it would be impossible to show that epistemic irresponsibility was not just a case of lack of evidential fit, the error theory being that the responsibilist is latching onto the evidential norm and illegitimately appropriating that failure to do as one epistemically ought into their own conception of irresponsibility.

But I’m not very good with cases, maybe there’s one out there I haven’t seen or maybe someone can think of one.


Comments

  1. Hi Trent,

    I take it that the most controversial step in your reasoning will be this:

    “If the answer is No, then there appears to be no basis for the charge of irresponsibility.”

    I have a general question about your project. If I think about someone who reasons about matters of no practical importance and commits all manner of informal fallacies, I guess I would say things like, ‘If anyone is to blame for his mistakes, it is him’ whereas I would reject the claim that someone is to blame for mistakes if they are due to deception. Maybe ‘blame’ as I’ve used it doesn’t have to do with the reactive attitudes, so maybe it’s not the relevant notion of blame. I take it, however, that you wouldn’t say I spoke falsely if I said, ‘He’s to blame for his own mistakes’, you’d say that it’s just not the relevant notion. If so, it would be helpful to think about how we should capture the relevant notion of blame.

    As for the justification for (5), I guess I’d share the intuition that you wouldn’t take up the reactive attitudes when someone makes sloppy mistakes about matters of no real practical importance. My only reservation when it comes to endorsing the argument is this–someone might say that those attitudes are distinctive of moral assessment anyway and we shouldn’t expect things like resentment to be the mark of epistemic blame. I guess it’s fair to ask those who think there is some significant notion of blame what they have in mind. For my own part, I’m happy to work with the very thin notion of blame mentioned above, one according to which you’re to blame for the mistakes that are attributable to your poor methods and processes of reasoning and to refrain from saying that this blame carries with it the kind of condemnation that comes with moral blame.

  2. Clayton,

    1a. Right, the sense of “blame” there is like the sense of “responsible” in Greco. It has no evaluative connotation at all. Rather, it is merely attributive: it “chalks up” the action to that agent as opposed to another, as he sometimes says.

    1b. It still may be that you speak falsely, though, because it could be that that manner of speaking implicates something true by expressing something false. I don’t have strong views about that, but I don’t want to commit to the truth of that manner of speaking. It might be metaphorical, and on standard accounts of metaphors, they are false.

    2a. I am indeed puzzled by what responsibilists mean, and it has been extended reflection on this that has led me to pose this dilemma to tease out the right kind of case. I am, however, pessimistic about there being such a case.

    2b. I, too, am OK talking this way (even if falsely) and noting the thinness of such talk. However, the Thickies, as Axtell calls them, see things otherwise, and that’s what I’m probing. The charge made against Young Earth Creationists that they should know better is supposed to be a very thick kind of blame indeed. And at least some of those who make the charge seem to be imputing a particularly epistemic shortcoming. Like Plantinga on the broader criticism of orthodox Christianity in general, I’m trying to understand the de jure objection. He doesn’t much consider, as far as I can recall, the possibility that it is not an epistemic claim at all.

  3. Here are two shots at what might constitute non-evidential epistemic blame.
    1) Maybe the stakes are epistemic. The stakes for accepting P seem higher in a case where P causes me to rethink my beliefs about logic than where it causes me to rethink my beliefs about ice cream.
    2) Maybe it’s only by doing something wrong that A fits S’s evidence wrt P. Suppose Joe underestimates the probability-raising effects of Type U evidence and overestimates the same for Type O evidence. Suppose his evidence wrt P consists of U and O type evidence. Looks like he’s still blameworthy.
    2A) Since (2) seems plausibly reducible to evidential blame, here’s one that might be more difficult. Imagine a case where Joe is trying to be epistemically irresponsible (perhaps in a misguided effort to spite his epistemology profs) but is unlucky or bad at it. So A fits S’s evidence wrt P–but by no fault of Joe’s.

  4. Trent,
    You wrote, “It’s hard to say what is distinctive of epistemology, but it seems that issues of practical rationality or failure of a duty of care are clear cases of things that are not distinctively epistemic.” This claim relies on the assumption (I think) that truth is not a distinctively epistemic value. Feldman takes this view, saying that truth only has pragmatic value. But if truth could be shown to have distinctively epistemic value, then could a failure to gather more evidence, in the face of peer disagreement, say, be something for which we are epistemically blameworthy?

  5. Ray, thanks,

    ad 1) Why think that? For most people, ice cream has more utility than their beliefs about logic. A change in your beliefs about logic might not even make probable different epistemic practices, since they are largely hard-wired. The only reasons I can see to value logic beliefs over ice cream are practical reasons: you care more. If you *don’t* care more, then either you are liable to criticism for not caring or you are not. If you are not, then there is no epistemic irresponsibility. If you are, then it seems to me to be practical or moral, most likely the latter. “You are a bad person for not caring more about your logic beliefs.” Maybe, maybe not, but I still don’t see any emergent normativity.

    ad 2) It sounds like the criticism is this: “Darn you, you should have come to have A to p via some route other than phi-ing.” Okay, so what’s so bad about phi-ing? Were someone’s rights violated? Was someone’s well-being put at risk? Did I have something at stake? These sorts of questions don’t seem to me to leave anything out.

    ad 2A) This sounds like a case which could block knowledge or doxastic justification. I think etiology matters to knowledge and doxastic justification but not propositional justification, which is what evidentialism is a theory of (at core).

  6. Chris, thanks,

    I’d have to see that suggestion fleshed out more, but I do not think that truth is an epistemic notion, nor the (or even *a*) proximate epistemic goal. I think the “rationality module” of the mind aims just at making doxastic output match sensory input. Yes, evidence consists in intentional signs of truth, but that doesn’t make truth the proximate epistemic goal of the module. That module might be part of some greater whole which has as a goal believing truly, but a person, qua person, has many goals, being happy the ultimate one. Truth may be a part of that, but I don’t see how that is epistemic. It’s a good thing, I guess, but I’d rather be rational than right. Any lucky fool can be right.

  7. Trent,
    Suppose you were kidnapped by a mad scientist and awoke to find yourself in his lab, with your brain connected to his computer by a complex of wires. He says, “I am going to inject a belief into your mind about something you are very curious about–the extinction of the dinosaurs. But you may choose either to have a true belief without any evidence, or a false belief with lots of evidence. When you wake, you will not remember this conversation, and you will spend the rest of your days as my prisoner in the lab. Which do you choose?” How would you answer?

  8. Hey Trent,
    I’ve talked about this general topic w/Chris Gadsden a good deal. I wonder how you are fixing the meaning of the word ‘epistemic’. I’m inclined to take something to be epistemic if it is a constituent (not sure what I mean by that) of knowledge. But truth is a constituent of knowledge. So why isn’t truth epistemic?

  9. Chris, thanks, I actually used an example just like this in a paper arguing against alethic epistemic value monism at the International Epistemic Goodness conference at Oklahoma a few years ago. And years before that Ted Poston and I talked about pill cases with the same structure. It seems *obvious* to me that I’d prefer the false belief with the evidence over true belief sans evidence. I mean, imagine I wake up and I have this belief, then I reflect on the matter and realize I don’t have any evidence. There are two ways it could work out which your telling of the story doesn’t settle. Either I’m an evidentialist or I’m not. If I am, then I lose that belief pretty quickly anyway and so getting it in the first place is no big deal. If I’m not, then maybe I retain the belief in an unjustified manner, but that seems pretty pathetic to me. My mental faculties are made to *think*. Not just think that this or that is so but to *think about* *whether* such and such is so. That’s the mind’s proximate telos in my view.

    But it would take some lemmas to connect the question, the way you asked it, to the present issue. I don’t think epistemic normativity has anything to do with preferences.

  10. Andrew, thanks, the general principle which would secure that result doesn’t seem right. I don’t think belief is epistemic and I don’t think antigettierization is epistemic. I also don’t see any special, local reasons to get your result.

    I think that what is epistemic about knowledge is its “specific difference,” as Aristotle would say. Socrates didn’t seem to know about the Gettier problem or address it directly (would love to be corrected here), unless it’s contained in “having an account” (which I actually think is plausible). We have the class of true beliefs and knowledge is characterized by a special subset with a special property. It’s THAT property which is the epistemic one.

  11. Trent,
    Regarding the thought experiment, most people I talk to prefer the true belief or find it hard to decide. What have you found others’ intuitions to be? I’m wondering if a staunch commitment to evidentialism might produce a bias. But I do appreciate what you’ve said about it.

    Regarding what is epistemic (thanks for your comment Andrew Moon), I think your response to Andrew makes sense. But I would modify Andrew’s statement slightly to say that once you have a belief, whatever gets you further down the road toward knowledge is epistemic. I’m surprised that you wouldn’t think that a person who has non-gettiered JTB is doing better epistemically than someone who has gettiered JTB.

  12. Your response to Ray’s example in (2A) highlights what I find puzzling about this project. (And the paper, once I noticed a key footnote about propositional justification in it.) It is obvious that epistemic evaluation goes well beyond what are typically forwards-looking—ex ante or propositional—evaluations of *potential* doxastic attitudes that could be held, given certain evidence or otherwise. Not only is there such a thing as backwards-looking, ex post evaluations of *doxastic* justification, but there is such a thing as *one’s* being open to criticism *for* having attitudes with this status. If your belief happens to be based on your evidence in a reflectively lucky way, or by certain other kinds of luck—if, say, some incompetence of yours happily misfires—there is a problem with *you*. And it goes beyond any problem with your attitudes, or their fit with the evidence. Think about cases from Turri’s paper on the supposed priority of doxastic over propositional justification—e.g., where D is the attitude to have to P given evidence E, and where you reason as follows: “The tarot cards say I should believe P given E, and I’ve got E, so I’ll believe P.” The principal target of evaluations of epistemic praise and blame is a *person*, who actually hosts some definite doxastic attitude. It is close to a category mistake to see evaluations of praise and blame as being directed in the first instance at some *attitude* that one *could* have. And for two reasons—(i) the *person*, rather than her attitudes, is the *primary object* of this type of evaluation, though (ii) she is evaluable *for* arriving at her attitudes in some way. If you intend to focus on ex ante epistemic evaluation—evaluation at the *propositional* level—then your claims are true. But trivially so. You’re simply talking past the responsibilists. They are self-consciously *not* talking about this kind of evaluation. 
(Indeed, I’ve often heard one virtue epistemologist say that he cares little about propositional justification.) If you agree—as your reply to Ray suggests—that etiology matters to doxastic justification, and also agree about his intuition about that case, I don’t know how I’m supposed to take your claims.

    • Kurt, thanks, I have for years said, and I don’t think this is original to me, that not only must we distinguish between propositional and doxastic justification but between both and personal justification. I have a faint memory that Alston used this terminology. I think in my “Reducing Responsibility” I point out that persons don’t have epistemic properties, properly conceived. They have projects and purposes but are not the proper unit of epistemic evaluation.

      And I have told responsibilists that they are talking past us evidentialists. See my “E-relevant or Irrelevant?” reply to Baehr on Janusblog. That’s why I wish they’d stop criticizing evidentialism on the basis of inquiry concerns! But there is a section in my book on this nonetheless. So, please, spread the word: responsibilist objections don’t apply to what we evidentialists are talking about. 🙂

      Indeed, I’m just harping low these many years later on the same basic points Conee and Feldman made against Kornblith in the 80’s. I guess the 80’s really are back in fashion! 😉

  13. Of course, I myself would still want to distinguish doxastic unjustifiedness from epistemic blameworthiness. The former has as its *primary* target some attitude, while the latter has as its *primary* target some person. Moreover, I’d say that a person may be excusable for believing unjustifiedly—say, in some case where one has misleading higher-order evidence about a bad belief-forming procedure one has used (e.g., if some actually fallacious inference rule appeared valid to one). Doubtless you’ll have a different and maybe theory-driven intuition here. But the structural point revealed by these cases seems enough to make room for a distinctive notion of epistemic blame, which might be characterized by principles like the following VERY ROUGH one, which I give just for illustration: (P) S is epistemically blameworthy for having some doxastic attitude D iff (i) D is propositionally unjustified, (ii) S is in a position to know this, (iii) and S’s holding D is attributable to some epistemic incompetence of S’s. Notice that although attributability of an unjustified attitude to the subject is *necessary* for blameworthiness, it is not sufficient. And even if it were sufficient, I’d say that it’s a further step to the claim that blameworthiness *consists in* some nonnormative notion of attributability plus propositional unjustifiedness of an actual attitude. Rather, attributable unjustifiedness (in certain conditions) MAKES FOR a subject’s being epistemically blameworthy. And I’d worry that any reasoning that collapses MAKING FOR into the converse of CONSISTING IN is going to overgeneralize, so that we get bad conclusions in other domains (e.g., we end up saying that someone’s acting wrongly, where the wrongness is attributable to some bad character trait of his and he’s in a position to know all this, is simply identical to his being blameworthy).

  14. Kurt, regarding your second post:

    1. In your principle: why think that every incompetence is blameworthy? I’m incompetent at pinochle. So what? Why does this give rise to any *blame*? Same for epistemic incompetencies. What am I missing?

    2. I’m a particularist. My mind is going to work with cases much better than principles. Can you give me a case to work with?

    If it matters, I’m utterly convinced by Kvanvig’s two papers that propositional justification comes first, not to mention the theoretical elegance of Conee and Feldman’s well-foundedness.

    I should add, if I did not say it above, that I think etiology goes into the basing relation, and I think it’s plausible that *both* responsibilists and reliabilists might have some stuff that goes in there. They both seem to me to be talking about doxastic justification anyway, so I just wish people would stop criticizing evidentialism as a theory of propositional justification if that’s not what they are talking about. Either that or spend more time defending the thesis that doxastic (or personal) justification is basic.

  15. Thanks, Trent. First off, I quite agree that this cuts both ways, at least given the way in which certain responsibilists level their objections. I agree that it’s a mistake to object to evidentialism by pointing to cases that illustrate the slogan “etiology matters”. Why? Because evidentialism is in the first instance about propositional justifiedness, and etiology matters, most clearly, in the doxastic evaluations. But surely it’s a mistake, for exactly the same *type* of reason, to object to responsibilists by focusing on the propositional evaluations, and by noting that no clear notion of epistemic blame applies there. Of course, I think the disputes here can go deeper—people needn’t simply talk past each other or be quiet. But it’s tempting, and easy, to talk past one’s opponent in this particular dispute. That was my first worry.

  16. Secondly, I agree that one must be careful about isolating the personal evaluations from both propositional and doxastic justification. But I would not do this by applying notions in the ballpark of *justification* to persons. Here, in part, I follow a simple analogy with the practical case. Obviously, persons can’t be permissible or impermissible in any sense. I’m inclined to think that justification falls in the same broad, domain-crosscutting normative category as permissibility and impermissibility—it’s a deontic notion in some broad sense, rather than a hypological notion (one associated with blame and praise of some type, in some domain) or an evaluative notion (one associated with goodness or badness). And so I don’t think it is persons who are, in the first instance, the target of evaluations of justifiedness or unjustifiedness—rather, it’s their attitudes. But it’s a mistake to generalize from here to the conclusion that *no evaluations* apply in the first instance to persons in epistemology. (I guess I’ll have to look at what you’ve written on this, though.) One thing that’s distinctive, I would say, about concepts in the ballpark of praiseworthiness and blameworthiness is that they apply in the first instance to persons. Similarly with *some* other normative concepts, such as, perhaps, competence. It’s the person who is competent in the first instance. An attitude cannot be competent or incompetent in the first instance. It can be formed by a competent person, and thereby be competent in some *derivative* sense. (Of course, this leaves it open whether person-level competence can be analyzed into notions in some *different* normative category, such as the evaluative category in the narrow sense more familiar in the ethics literature (e.g., associated with goodness and badness). Maybe a person is competent, in some specified sense, to the extent that she reliably brings about *good* outcomes, in that sense.)

  17. Finally, I worry that you are overloading “blameworthiness” with moral connotations that can be stripped from it, even while leaving it a normative notion, not simply reducible to nonnormative attributability. I guess this goes back to what Clayton was saying at the beginning, and a certain problem with your response to that. There are many domains of evaluation. Morality is one. Prudence is another. There are indefinitely many others that come pretty cheaply if one thinks that any game, for instance, gives rise to normative standards in some very weak, non-reason-implying sense. In any of these domains, there are a number of things to evaluate. We can evaluate a person’s acts, and ask whether these acts are in accord with the rules that govern behavior in those domains. We can also evaluate the person. Take prudence, for instance. It’s prudentially impermissible to do what will bring you fifty years of future agony. Now, suppose you can’t tell that your act will bring about this outcome. We certainly can’t call you DUMB here for so acting. Still, plausibly (I think), there’s something prudentially wrong about your act. While your act was—I would say—objectively prudentially impermissible, you can’t be *prudentially* blamed for it. Your ignorance excuses you, in a sense of “excuse” that belongs to this domain. Your act may even be attributable to you here—it’s not like your ignorance makes you non-autonomous or something. So whatever it is that you’re free from, it doesn’t seem like mere nonnormative attributability. NOW, key point: maybe terms like “excuse”, “blame”, and so on, are essentially linked with morality, semantically or pragmatically. If that’s right, we might need to introduce some new terms to get at the notions at issue. But it’s easy enough to see—I would say—what these deeper notions are by *structural analogy* with the moral case, just looking at cases like this one. That’s what I’d say about your pinochle example too. 
Relative to the pinochle-domain, you are blameworthy for acting on that incompetence (modulo certain further conditions).

  18. One bottom line. In lots of domains, there’s a clear notion of excusable wrongdoing. Let “blameworthiness in a domain” be whatever it is that the excuse in the domain eliminates. Plausibly, in many of these domains, one’s act can still be *attributable* to one, even while one is excusable. Some cases of ignorance—normative or factual—seem good candidates. It is implausible to treat all these cases as cases where the wrongdoing isn’t attributable to one, in the nonnormative sense. Having an excuse from ignorance doesn’t make one’s act non-autonomous. Doesn’t it simply follow, then, that for many domains, there’s some blameworthiness concept that reduces neither to wrongness in the domain nor to mere nonnormative attributability? Now, you use the word “thin” in replying to Clayton—taking up his language, admittedly. But, as I now suggest, calling this notion “thin” can’t boil down to calling it “nonnormative”, as you had in effect suggested. If “thin” is to be contrasted with “thick” in the familiar sense from Bernard Williams, it’s at best unclear whether we should call this notion thin when it’s used in some nonmoral domain. In the prudential domain, blameworthiness seems to go hand in hand with assessments of “dumbness”, and these seem thick in that sense. Moreover, that distinction isn’t obviously relevant here—perhaps pace the responsibilists being targeted. And if “thin” simply means “nonmoral”, then the fact that the notion is thin is neither here nor there. For morality is one of many domains where there are cases worth describing as involving excusable wrongdoing, in the domain-relative sense. Given that, there’s got to be a corresponding normative property that is removed by the domain-relative excuse. If “thin” isn’t being used in one of these senses, I don’t know what it means, or why it’s dialectically bad to “go thin”.

  19. Maybe epistemology is special if there is a central deontic (as opposed (a) to evaluative in the narrow sense, or (b) hypological) category in it for which no analogue of excusable wrongdoing can arise. But that would be surprising—it would mark off epistemology not just from ethics, but from all sorts of domains of evaluation—and call for explanation. Given that analogue, and the fact that autonomous but wrong behavior can still be excusable in some cases, it looks like there will be some clear notion of blameworthiness in the domain that isn’t mere attributability of wrongness. This is just whatever it is that excuses for “wrongdoing” in that domain help to assuage or eliminate.

  20. Hey Trent,
    Thanks for the response. I wonder how you are fixing the meaning of the word ‘epistemic’. It’s certainly not a word used in ordinary English that we can have intuitions about. Or do you think that ‘epistemic’ is used in a certain way in contemporary epistemology so that it has fixed on a certain meaning? Or something else?

  21. Kurt, wow, some of what I say here is similar to what I say in my replies to your emails. It will have to be brief.

    1. It is consistent for me to say that Responsibilist critiques of evidentialism are talking past it and still argue against responsibilism as an evidentialist. I’m not saying they are inconsistent. I’m saying one kind of evaluation is clearly epistemic and indispensable and the other seems to be neither.

    2. I’m not making any generalization at all. I’m demonstrating a failure to see a new kind of normativity (and it may well be a failure, but it’s not a willful failure, since I was big on it for many years). I don’t think the notion is any less important for being moral. On the contrary, I’m inclined to think it is more important.

    3. I have endorsed Feldman’s thesis that there is something relevantly like blame that we apply in failure to fulfil role oughts. I think it is *analogous* to the use in cases where the will is involved.

    4. BIG ONE: I do *not* think normativity comes cheaply. I am all for multiple dimensions of evaluation. But I want to keep them *separate* from one another (analytically, not causally).

    5. I think useful analogies between ethics and epistemology are few and far between. I think epistemology has mostly suffered from attempts to impose an analogical structure. This is not to say that epistemology hasn’t benefitted from the value turn. I think it certainly has. But the value turn doesn’t depend upon trying to impose a similar structure. Practical rationality and theoretical rationality have little by way of common structure beyond–at a very general level–you make a Pro and Con list and weigh reasons.

    6. Re: “bottom line”: I can’t help but think you misunderstand my whole project. I’m not denying that epistemic irresponsibility is a normative charge. I’m denying that it is a sui generis normativity. I’m asserting that it is JUST PLAIN OLD MORALITY as applied to belief-forming practices. I’m saying that most of what has come under the “Ethics of Belief” banner has not been epistemology, but rather APPLIED ETHICS.

    7. Re: “excuses” I think you have it the wrong way around here. You need to convince me that I should think of epistemic normativity on a model with whatever moral normativity you have in mind here. Maybe this will help you see where I’m coming from. If there’s one thing I’m sure of in normativity it’s this: the GOOD comes before the RIGHT. That’s one reason I’m a consequentialist in ethics. So EPISTEMIC OUGHTS need to be derived from EPISTEMIC VALUES. The theory of value that I favor in this derivation is an old-fashioned teleological one. There is value in a thing fulfilling its characteristic function. I think the characteristic function of the mind is to match doxastic outputs to sensory inputs. I’m inclined to think that the facts about matching are conceptual truths. That’s something I would like to defend at length and something I think epistemologists would really do well to focus on: what determines the facts about match or fit.

  22. Andrew. I don’t have anything to add to what I said at March 23, 2012 at 2:15 pm.

    The epistemic pertains to the specific difference between true belief and knowledge. It is “warrant” in Plantinga’s sense. I suppose I do want to add one thing. It pertains to the formal conditions not the causal conditions, since we are doing philosophy and not psychology. So it’s not “warrant” in Plantinga’s sense. It’s the formal aspect of warrant. It’s what Chisholm did. 🙂

  23. I really should say that’s what I think traditional epistemology is. I think that there is now a family of similar projects. For example I think that confirmation theory is not just philosophy of science–though it is that–but also epistemology. And, yes, it’s older than Chisholm, but not, I think, what Chisholm was doing. I don’t think there’s anything there like the kind of continuity you see in the first edition of Theory of Knowledge. That’s not a criticism, it’s a taxonomic justification. But I don’t think the ethics of belief–in the traditional sense–has much similarity to any of them.

  24. Why do I care about the taxonomy? I don’t. Not much anyway. But I think the taxonomy is a teacher. It is a way of reminding ourselves–those of us, like me, who are very, very interested in the ethics of inquiry–that we should not be looking for sui generis properties. We should really, truly be doing the ethics of belief. That is, ethics. And we should therefore educate ourselves more in the way of ethics. And instead of trying to (in my opinion) “pervert” epistemology by remaking it in the image of ethics, we should “become perverts” and actually practice ethics. 🙂 Bt srsly

  25. Trent,
    Here’s where I do not follow you. Warrant includes the antigettier condition. A true belief is warranted only if it is not gettiered (i.e., a true belief counts as knowledge only if it’s not gettiered). So, I’m confused by your comment to me where you say that antigettierization is not epistemic. Since antigettierization is a component of warrant, doesn’t that make it epistemic?

    Suppose you disagree. The components of warrant are not epistemic; it’s only the components taken together that are epistemic. But this then seems to have the bad result that justification is not epistemic. (Plausibly, justification is a component of warrant but not sufficient for it.) So, it seems that you should either include antigettierization or exclude justification. I say include antigettierization.

  26. Andrew, I already addressed that above too, but let me say more.

    I’m not terribly averse to including Gettier stuff in the picture, it’s just that I sometimes worry it is too caught up in the causal part of the picture, and I don’t think that’s really of the essence of the epistemic. What I haven’t worked out for myself is how the causal and the normative go together. Part of me thinks that antigettierization is in the basing relation and so causal and so not “really” epistemic, and part of me holds to the essential dependence view, and I have a couple of versions of how that should look. So I never really meant to exclude it; I just didn’t care to include it because I think it’s peripheral. For the record, here’s my shot at an account of “essential dependence.”

  27. What puzzled me from the start was (1) the way in which you were proposing to run the reduction—and also (2) the lack of suitable reduction bases in some cases. Very briefly, here is the puzzlement about (1) again. You wanted to explain all cases of apparent epistemic blameworthiness as either involving (i) lack of fit (by all appearances, at the level of propositional justification!), or (ii) non-epistemic failings. If this was the strategy, the complaint was that it is both (a) impoverished and (b) uncharitable to the opposition. (a), because there’s a lot more to epistemic evaluation than propositional justification, and surely this will be *more useful* for the project of explaining away apparent cases of epistemic irresponsibility. (b), because epistemic blameworthiness—if it existed—would obviously be an ex post phenomenon, so *of course* you’re going to miss it if you are focusing on ex ante evaluations. Perhaps you never were focusing just on those, but several things—both a footnote in the paper, and your reply to Ray—suggested that you were.

  28. I’m puzzled by your puzzlement. I’m metapuzzled.

    1. I certainly don’t see why you use the “!” since I am, after all, *reducing* and naturally with an error theory. I think that sometimes people get misled and confuse the two. In “Reducing Responsibility” I show how that can work. But for the most part I think that’s rare. I think so-called epistemic irresponsibility cases are almost always cases of moral or prudential irresponsibility. I don’t think it’s a natural kind, so I don’t have to provide a single, natural reduction base.

    2. Again, I’m reducing, it’s *supposed* to be “impoverished.” That’s an uncharitable word, but so be it. I think there is less there than the responsibilist does, so she’ll naturally, if she wants to use uncharitable language, see my world as impoverished.

    3. “there’s a lot more to epistemic evaluation than propositional justification” Isn’t that what’s under debate here?

    4. There has been a tremendous amount of ink spilt by you guys on this here, but from almost the beginning I asked someone to give me a case. In fact, #1 in my original post says “Take the case of a particular instance of so-called epistemic irresponsibility…” So the way to break my reducing machine is to give me a case that I can’t reduce. I admitted that I’m not one of those sharp guys who can come up with counter-examples right quick (I’m fortunate to have two colleagues who can do so almost instantly). So maybe it won’t take long to find one and the monkey wrench will break the teeth of the gears off my reducing machine. But, fellas, ya gotta give me the case and let me go to work on it.
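The “reducing machine” from the original post can be sketched as a toy decision procedure. This is only an illustrative sketch: each boolean parameter stands in for a substantive judgment about the case (S taking doxastic attitude A toward p in circumstance C), and the predicate names are placeholders of mine, not an analysis.

```python
def reduce_irresponsibility(fits_evidence, high_stakes,
                            stakes_are_subjects_own, owed_duty_of_care):
    """Toy sketch of the reduction recipe from the original post.

    Returns the explanations to which the apparent epistemic
    irresponsibility reduces. Note the explanations are not
    mutually exclusive (step 3 of the recipe).
    """
    explanations = []
    # Steps 2-3: lack of evidential fit is one (non-exclusive) explanation.
    if not fits_evidence:
        explanations.append("lack of evidential fit")
    # Steps 4-5: with nothing much at stake, no basis for the charge.
    if not high_stakes:
        explanations.append("no basis for a charge of irresponsibility")
        return explanations
    # Steps 7-8: the subject's own stakes make it practical irrationality.
    if stakes_are_subjects_own:
        explanations.append("failure of practical rationality")
    # Steps 10-11: another's stakes plus a duty of care make it a moral failing.
    elif owed_duty_of_care:
        explanations.append("failure of a moral duty of care")
    # Step 12: otherwise nothing relevant is at stake.
    else:
        explanations.append("no basis for a charge of irresponsibility")
    return explanations
```

For example, a belief that fits the evidence but is held without further inquiry when the subject’s own stakes are high reduces, on this sketch, to a failure of practical rationality alone.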

  29. Let me set aside cases for just *one* second. (3) does not seem right—but this seems to *help* you, which was what I intended in the first half of the remark. The key question is this: why can’t you go beyond the propositional evaluations, and help yourself to *some* doxastic ones in the project of reduction? Even people who believe in (i) epistemic blameworthiness may think it’s distinct from (ii) doxastic unjustifiedness. (i) may suffice for (ii), but not be necessary for (ii) (e.g., in excusable cases). I believe in both, and think they’re different things, though related in *some* ways. If this is so much as a *coherent* view, there’s nothing amiss in helping yourself to the doxastic evaluations. Indeed, if you tried to explain away apparent cases of epistemic irresponsibility by saying instead that they are *mere* cases of doxastic unjustifiedness (perhaps plus some non-epistemic stuff), you’d do better. So why not do this?

  30. I see. And in my reply to Andrew I was trying to say that I don’t have that much of an aversion to appealing to doxastic justification–appeal to the basing relation is, for me, reference to doxastic justification, for it is the distinguishing feature. And I either said that explicitly above or in one of my replies to one of your emails.

    I still can’t see how to fit responsibilist virtue into the basing relation, though I’ve been saying for years that reliabilist virtue is best seen as an account of the basing relation.

    You are certainly right that I should make this explicit. At one stage I had “epistemic justification” in there where I now have “evidential justification” (or whatever) but a referee thought that begged the question. I should have tried to get away with “the traditional notion of epistemic justification” or something.

  31. Precisely. What was puzzling was (i) wording in the paper, the original post and other places that limited you to using propositional j. ideology (e.g., fit) in the error theory, in spite of (ii) admissions *later on* that there’s more to properly epistemic evaluation—esp. when it comes to doxastic justification. You’d sell the strategy better if you were up front about not being committed to using just the propositional j. ideology in the error theory. Otherwise the error theory is too extreme. BTW, the reason I spent time talking about excusable violation is because that’s the case where one would be able to pry apart doxastic unjustifiedness and epistemic blameworthiness. Whether there are any clear cases in epistemology depends on one’s other views. I think externalists should see a lot of these cases. It’s a neat strategy for handling the demon world, and Williamsonians seem to use it a lot these days. I also think they can motivate this way of chopping up the space by analogy with other domains (not just morality), but that’s a different issue. [I also apologize for the unitary paragraphs again and again—even with standard HTML tagging I can’t seem to break up paragraphs on this blog on my computer. I can get italics though!]

  32. I’m certainly happy to make explicit the potential role of doxastic justification, but in terms of actual numbers of reducing cases, I don’t think the cardinality of the union of the two is much greater than that of just propositional justification because, as I said, I think the vast majority of cases reduce to moral or pragmatic issues. In the very original version I didn’t even mention propositional J. I only added it after a reader thought my not mentioning it meant that I didn’t think it could ever be reduced to that. But then when I added my Craig the Creationist case I realized that there was a natural way to reduce at least part of the problem to propositional justification. Also, I still lack an actual case to illustrate a reduction to doxastic justification, but I’ll try to think of one for the book chapter I’m writing right now, which is the latest version of this argument.

  33. (Also, I quite agree about building in virtue-theoretic or reliabilist stuff into the basing relation in *some* cases—esp. good cases. To genuinely believe P on the basis of a good reason R, it’s not enough *merely* to (i) have R as one’s reason for believing P, and (ii) for R to be a good reason for believing P. One must also, in some sense, have R as one’s reason for believing P *because it is* a good reason for believing P. I’d use virtue-theoretic apparatus to unpack the “because” there. Errol Lord and I are writing a joint paper on this, derived from similar parts of our dissertations and agreement in p.c. about the topic. The view has a disjunctivist feel—we in effect think there are just two different basing relations in the good and bad cases, though we think the one in the good case is more fundamental. We use this view to address Turri’s objections to the standard way of analyzing doxastic justification in terms of propositional justification and basing. I also think the idea helps with speckled hens. That’s where I first used it.)

  34. Kurt, I suggested at the CSPA and at a Northwestern Brown Bag talk that things will be importantly different in the good and bad cases–though I despise anything like disjunctivism–but that the difference lies not in the basing relation but in the circumstances of the reason, which is the same in each case. My suggestion was that in each case the reason is an appearance, but in the good case, since the reason is veridical, there is reliability where it needs to be. That isn’t an intrinsic feature of the reason, though; it’s just a fact about how one is differently situated in the good and bad cases. In the good case, the reason is the same–the appearance–but there is, contingently, a reliable process instantiated, if you want to put it that way. The key fact, in my view, is that in the good case, the sign is objectively correlated with the truth of what it indicates. This requires me to appeal to a natural reference class, which is now a new problem, and also to allow knowledge in fake barn country, which has always seemed right to me anyway.

    So I don’t think the *goodness* of the reason needs to be part of the explanation of why S believes P when she has R, but only that she *considers* it good (implicitly or de re or in some weak form of awareness or direct acquaintance).

  35. But when, earlier, you did claim to think that some virtue-theoretic or reliabilist element could figure into the basing relation, what did you have in mind? I was just indicating approval of the generic theme, and noting a species I like: your using R as your reason for believing P, when R is in fact a good reason to think that P, plausibly has to manifest some competence for you to get doxastic justification from using R. It’s this competence that ensures that you are, in some sense, tracking the *fact* that R is a good reason for believing P in using R as your reason to think that P, and so using that reason *because it’s good*, rather than just accidentally. Rather than viewing this competence as an extra requirement beyond basing, we see something in the ballpark as key for believing *for a normative epistemic reason*. In Williamsonian jargon, believing for a good reason is not a mere *composite* of (a) using R as one’s reason in believing, and (b) R’s being a good reason for so believing. It is a *prime* relation. Obviously, defending the “prime” picture is something for a whole paper. But if you had a different strategy, I’d be interested to hear it. The generic strategy of using a sophisticated view of the basing relation to capture facts about doxastic justification is under-explored, and I was excited to hear you’d tried it out.

