Irresponsible Ignorance

Foley says the following about irresponsible ignorance:

If I do not have beliefs one way or the other about P, but it is epistemically rational for me to believe that I have not expended enough time and effort in arriving at an opinion about P, given its importance, then my ignorance is irresponsible. (“Response to Wolterstorff,” Contemporary Debates in Epistemology, pp. 338-9)

The concept of irresponsible ignorance is important for Foley to explain. He maintains that the fundamental notion of rationality is epistemic rationality and that our ordinary ways of assessing cognitive states can be explained in terms of this fundamental notion. Wolterstorff’s challenge is that Foley’s view leaves irresponsible ignorance unexplained, and yet it is an important way in which we evaluate cognitive states.

I don’t think Wolterstorff’s challenge is met by the above account.

The idea in the Foley account, I take it, is to determine the level of importance of a subject matter and line up effort with importance. Foley doesn’t quite say this, but I take it the idea is that effort must rise at least to the level of importance of the subject matter. Presumably, it’s OK if effort exceeds importance; what’s not OK is for it to fall short. When it falls short, and ignorance is present, then the ignorance is irresponsible.

There’s an ambiguity in Foley’s presentation, though, and either way of reading it leaves me unconvinced. To see the ambiguity, notice that Foley doesn’t require that one’s time and effort be appropriate to the importance of the issue. What’s required is something about what one reasonably believes. What’s not clear is whether the importance of the issue is inside the scope of the reasonable belief operator. Here are the two possibilities:
1. S reasonably believes (time and effort is at least as much as importance demands).
2. Importance demands T&E of level n and S reasonably believes that level n has been reached.

Reading 2 leaves me uncomfortable in the following cases.
a. You’ve reasonably taken the level of importance to be below n, but mistakenly and reasonably think you’ve expended more effort than is required.
b. You reasonably believe the importance of the matter demands a level of T&E much higher than n, but you are lazy and only reach level n. You reasonably believe that you’ve reached level n, and thus reasonably believe yourself to be irresponsible.

Reading 1 leaves me uncomfortable as well. Suppose you reasonably believe that your T&E matches the importance of the subject matter, but you are wrong: you haven’t investigated as much as the subject matter demands.

That sounds bad, but we can make it worse. The situation as described is about the importance of P. For one’s ignorance of P to fail to be irresponsible, clause 1 has to be true with respect to P. Clause 1, however, involves another proposition, Q, a conditional one involving P. Can’t one have a reasonable belief about Q and yet not have investigated this issue, the Q issue, sufficiently? As far as I can tell, Foley’s conditions for reasonable belief are no guarantee that Q has been given due diligence, and that leaves Wolterstorff’s challenge in place: the notion of responsible belief, and especially irresponsible ignorance, doesn’t appear to be explicable in terms of Foley’s notion of epistemically reasonable belief.


Comments


  1. Jon, you say,
    2. Importance demands time and effort of level n and S reasonably believes that level n has been reached.

    Reading 2 leaves me uncomfortable in the following cases.
    a. You’ve reasonably taken the level of importance to be below n, but mistakenly and reasonably think you’ve expended more effort than is required.

    So you reasonably believe that the level of importance of P is below n, or I(P) < n, and you reasonably believe that your effort exceeds n, or E(P) > n. So we have (i) and (ii).

    i. S reasonably believes I(P) < n.
    ii. S reasonably believes E(P) > n.

    Of course it turns out false that E(P) > n and, for all we know, it is also false that I(P) < n.

  2. Mike, try to enter your comment again. Remember that if you use less than and greater than symbols, you have to use their html code numbers, since otherwise they’ll be treated as opening or closing an html tag.

  3. Sorry about that. I’ll skip the greater than/less than signs.

    You say:

    2. Importance demands T&E of level n and S reasonably believes that level n has been reached.

    Reading 2 leaves me uncomfortable in the following cases.
    a. You’ve reasonably taken the level of importance to be below n, but mistakenly and reasonably think you’ve expended more effort than is required.
    —————————
    Ok we have these two being true, if I’m reading this right:
    (i) S reasonably believes that the importance of (some information) P is below n.
    (ii) S reasonably believes that his effort in learning about P is greater than n.

    Now it is false that S’s effort in learning about P is greater than n, and for all we know it is false that the importance of P is below n. I can’t tell, given the description of the case. You don’t say exactly what worries you about (a), but I’m guessing that you want to say that (given (a)) S is epistemically irresponsible (in some way) relative to P or, at least, given (a), you don’t want to say that S is epistemically responsible relative to P. But I offhand can’t see why. It can’t be that he is reasonably believing falsehoods, since epistemically responsible people can do that (just as morally responsible beings can perform wrong actions). Maybe your objection here is more global, I don’t know.

  4. Jon,

    One other quite different point. Suppose I am planning to lend my car to Jones. It crosses my mind that someone might have tampered with the steering mechanism on my car, but I have no evidence at all that this is true. It is just a passing, unsubstantiated thought. Now let P be the proposition that someone did tamper with the steering mechanism on my car. P strikes me as a very important proposition, especially given that I am about to lend my car to someone. But it cannot be true that I am epistemically irresponsible for not having the car checked out before lending it to Jones. Now imagine that, as it happens, P is true. It doesn’t seem to affect the conclusion that I need not have checked that out. It doesn’t seem irresponsible at all. So it’s not obvious how importance of information is directly related to the effort I must make to verify that information.

  5. Mike, I like the case you give in 4; very nice. On the former point, my discomfort stems from not wanting to ascribe irresponsibility in such cases. We assume, on reading 2, that the level of importance demands T&E of level n. (We should also want to know what this notion of demand or requirement is, but I’m assuming that it is a requirement deriving from one’s practical concerns). You reasonably believe otherwise, that something below n is required–call it ‘m’. Moreover, you reasonably believe that you’ve expended T&E greater than m. You still can’t tell whether the claim in question is true, so you remain ignorant. Is your ignorance irresponsible?

    I think not, and the only way I see to defend that it is irresponsible is to separate blameworthiness from responsibility here, and say that you’re not blameworthy for being ignorant but you’re irresponsible nonetheless. Not enough internalism in this story, as I see it. Merely getting a fact wrong shouldn’t have this power to make you irresponsible, even if it is a fact about how important a particular subject matter is.

  6. Jon, help me see the next step in the dialectic here. Why not just add a clause stating that it is epistemically rational for S to believe that Q has been investigated sufficiently? I see some danger of infinite regress looming, but I don’t think that’s what you were adverting to.

    So we’d have something like this (using “ERB(p)” for “it’s epistemically rational to believe p,” indexed to S; E = Effort (which we’ll assume takes some time); I(p) = Importance of (p)):

    S is inquiry-responsible wrt p iff (i) ERB(I(p)=n) & (ii) ERB(E >= n).

    Help me see the problem that still remains here.

  7. Mike, that’s a nice example, but it seems to me that we want the importance to be not just the value of the consequence if it were to happen, but the value *weighted* by its epistemic probability: i.e., its expected utility in this case. Now even though your p might be dire, given the description the epistemic probability is very low indeed, so that the expected harm is very low and so the value of the importance of the matter is quite low as well, thus not requiring much investigatory effort.
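
    To put rough numbers on that weighting (every figure here is invented purely for illustration):

      # Toy expected-disutility calculation; all numbers are made up for illustration.
      p_tampering = 1e-6              # epistemic probability of tampering, given "no evidence at all"
      disutility_crash = -1_000_000   # disvalue of the crash outcome
      cost_of_checking = -5           # the small cost of inspecting the steering

      expected_disutility_unchecked = p_tampering * disutility_crash   # = -1.0
      # Since -1.0 is a smaller loss than the -5 of checking, expected-utility
      # reasoning doesn't demand much investigatory effort in this case.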

    Sorry for the inexcusable polysyndeton!

  8. Trent, took me awhile to see what was up here; you’re talking about reading 1, right? I was focused on reading 2, and was lost…

    Your def. replaces what I took to be a conditional in reading 1 with a conjunction. No problem there, I think, so let’s use your definition. The question in my mind–and it’s just a question right now–is whether it can be reasonable to believe p while one is at the same time epistemically irresponsible in believing p. Foley doesn’t address this point, if I remember correctly, but I think on his theory this will be possible. Remember that what is rational is what conforms to your deepest standards, where this point is clarified counterfactually: what you’d settle on given unlimited reflection. Notice that no reflection at all is needed for this to be true about a belief. So even the most casually formed belief can turn out to be rational for Foley. That shouldn’t always be enough for epistemic responsibility. And if one is rational in believing the Q in question, but irresponsible in so doing, I don’t think the rationality in question makes the ignorance in question immune from the irresponsibility charge.

    But don’t ask me to say what more is needed–I’ve just begun to see that some of my assumptions about this matter are mistaken, and I’m in the process of rethinking them…

  9. Trent, thanks. But it is the epistemic probability that further investigation is supposed to disclose. That is, had I investigated, I would have discovered that the epistemic probability is nearly 1. How likely is it that someone tampered with the steering given that it looks to me like someone did? That has to be close to 1.
    Suppose we determine importance by the epistemic probability prior to investigation. In that case the NY Times has no obligation to further investigate their reporters. The chances that another reporter will file fictional stories are close to zero. But it would be badly irresponsible of them not to continue to uncover such information. So we can’t determine importance by prior epistemic probabilities.

  10. Trent,

    A quick point. The expected disutility in the case I describe is not low. It is extremely high. The bad steering mechanism could cost someone his life and that is (obviously) extremely disvaluable. So even at a low probability we have a high expected disvalue. Let me just explicitly modify the case so that this is in fact what happens. Jones borrows the car, loses control and so on. If that won’t do it, let the case include a group borrowing the car and losing their lives in an accident. It is still not true that I ought to have investigated.

  11. Mike, good food for thought, but I’m not quite convinced. Re: your first point, I had in mind the epistemic probability *at the time at which you had no evidence* per the original story, i.e. “It crosses my mind that someone might have tampered with the steering mechanism on my car, but I have no evidence at all that this is true. It is just a passing, unsubstantiated thought.” At this point, I don’t grant that the disutility is inordinately high. It now occurs to me, in fact, that someone might have messed with *my* steering; it’s epistemically possible (on my favored view of epistemic possibility anyway), but I don’t plan to investigate, because the epistemic probability at this point is trivially non-zero and the value of my life is large but finite (and then there’s the diminished utility of a life ridden with investigating every passing worry). Now, as you say, it might be that *if I were to investigate* I’d conditionalize to the point where the epistemic probability was high, but then, of course, the point is moot, since whether we should investigate is precisely what’s in question. You say that “it is the epistemic probability that further investigation is supposed to disclose” but that’s not quite right. There will be an epistemic probability at each time reflecting the structure of my confidence states at that time. If all goes well, what further investigation will reveal is the *objective* probability. Similarly with your second point. At the time at which you have no evidence the *expected* disutility is low though the *actual* disutility may be high. Finally, if you stipulate that the consequences get arbitrarily high, then I’ll just say that in fact you should have checked. Case in point: I was a part-time children’s school-bus driver as an undergrad to earn extra cash. Before each and every time you take a bus out you have to do a 21-point inspection (including steering, of course) even though there’s no special reason to expect any problem with the system.

    So it seems that standard decision theory is all we need here.
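
    A minimal sketch of the comparison I have in mind, with made-up numbers (a fuller version would also price in the habit-forming and worry costs mentioned above):

      # Hypothetical decision sketch: check the steering or not? All figures invented.
      def eu_check(cost_of_check):
          # Checking catches any fault, so the crash harm is avoided; you just pay the cost.
          return cost_of_check

      def eu_skip(p_fault, harm):
          # Skipping risks the harm with probability p_fault.
          return p_fault * harm

      harm, cost = -1_000_000, -5
      for p_fault in (1e-7, 1e-3):   # "no evidence at all" vs. some concrete evidence of a problem
          print(p_fault, "check" if eu_check(cost) > eu_skip(p_fault, harm) else "don't check")
      # prints: 1e-07 don't check, then 0.001 check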

  12. Mike, a quick additional note. Perhaps part of the problem here is that I took you quite literally when you said “I have no evidence at all that this is true.” So even though the disutility of the bad-steering outcome is high it’s weighted by a barely-non-zero (or even perhaps zero!) value which swamps it.

    I might also add that when dealing with greatly disutile outcomes which have low-cost solutions it’s easy to fall under the Pascal Principle: Better safe than sorry. So it won’t take much cudgeling to get me to say that it’s irresponsible not to check (it’s just that, again, I took “no evidence” very literally). If you had very much evidence at all it would be irresponsible not to check. [The cut-off between “nearly none” and “not much” evidence is vague, but, I think, substantive in this type of case.]

  13. Thanks Trent:

    I’m a little troubled about your suggestion that sufficient investigation will move us from the epistemic probability that H to the objective probability that H. To learn the objective probability (making the huge concession that there is such a thing) that H, you’d have to have *all* of the relevant evidence. In virtually every case, it doesn’t matter how much investigation you do, there will be relevant evidence you do not possess. In most interesting cases, you are simply not going to reach the objective probability of H no matter how long you investigate.
    Returning to the case I discussed. I don’t think it matters so much how you specifically might respond. What matters is how people do respond; what we in fact take to be worthy of investigation. So how is it that we in fact respond? Does anyone leaving the university consider it irresponsible not to investigate their cars for explosives despite the fact that there is some chance that an entire group of carpoolers could be killed by an explosion? No, no one thinks that. Does anyone consider it necessary to check their stove before lighting it (JJ Thomson’s example) despite the fact that there’s a small chance a gas explosion could kill everyone in the next apartment? No, no one thinks that. Does anyone consider it necessary to frisk every guest at their year-end pool party to see whether anyone has brought a grenade (a variation on “gone postal” cases)? No, no one, despite the fact that there is some probability that someone has brought a grenade and hundreds of people could be killed/harmed. There are countless examples of particular actions or omissions that have a very low probability of horrendous outcomes. The expected disutility in many of these cases is high. But no one (certainly not I) considers it irresponsible not to investigate these cases further.
    It is altogether beside the point to note that following a *rule* to the effect that we should always check these things is overall not maximizing. Who’s talking about rules? I’m certainly not. I am talking about the expected disutility of particular actions.
    I am claiming that there are actions that have a high expected disutility (I’ve noted a few of the thousands above) and no one believes it necessary to conduct further investigation before performing them. These are best called permissible risks, despite their expected disvalue. These sorts of action are very common.

  14. Hi Mike, I said that “if all goes well” conditionalized epistemic probability will track objective probability. Rarely if ever does all go well, but I think we all hope that epistemic probability tracks objective probabilities where objective probabilities exist. I would be happy to take one step back to personalist subjective probability (which is all I need) and say that if all goes well it tracks epistemic probability as understood as what a suitably idealized agent would hold (“suitably” would have to carefully avoid the conditional fallacy for one thing).

    Regarding the cases: we are agreed that it’s not irresponsible not to check, that I never questioned. Only I explained that fact by claiming that the expected disutility is sufficiently small owing to the fact that though the outcome would be extremely disutile, the probability (whether subjective or quasi-objective epistemic) of it occurring is so very small (your words: “I have no evidence at all”).

    Now, low expected disutility *would* explain the excusability; I’m calculating expected utility in the standard way and reading low subjective probability off of “I have no evidence,” so I feel pretty good about this so far. We’re in agreement about the cases, so we’re together on the first-order stuff.

  15. I can’t think of many things of greater disvalue than the untimely death of human beings. In these cases we have a low probability of the untimely death of several human beings. It is not hard to imagine a case with the untimely death of many more (if you think numbers count). The probability of this happening is low, sure, but not infinitesimal. So, the expected disutility of not investigating should be quite high. I agree that the probability is low. The only place we disagree is on the disvalue of the death of several human beings. I’m not sure how to argue for that.

  16. I greatly disvalue the death of any number of persons; it’s just, as I said, that the probability you get from “no evidence at all” is very low indeed. I only post to say that I don’t think it needs to be infinitesimal (though perhaps it is in the “no evidence at all” case). I think we’re not so far apart as we might seem on this.

  17. One last quick thought. As I said, it wouldn’t take much evidence to get the expected disutility high enough to make it formally irresponsible not to check. However, I think there might be a category of excusable irresponsibility where we leave off blame. I only post this idea because Jon raised a similar issue earlier and Peter Vallentyne uses the idea for the following kind of case: It’s just plain wrong to harm an innocent that good may come of it or that evil may be avoided. Standard case: I’m going to shoot everybody in the class if you don’t pick one person (not yourself) and shoot them in the stomach. Justice does not permit it and I think most would consider it reprehensible. However, it’s just a fact that when you start making the case extreme people admit they’d probably do it, and most of us admit this to ourselves before we admit it to others. Case in point: It would be unjust to lock Saddam Hussein in prison for the rest of his life (though surely better than he deserves; he apparently just loves Doritos) without a trial. But suppose in some crazy scenario you face this choice: either you arrange for him to be imprisoned without a trial (as if we don’t know what the outcome would be) or I’ll torture every living creature on Earth for everlasting time. Well, I think you’d do it and I don’t think anyone (except perhaps the ACLU and AI) would really blame you. It would be an excusable injustice. We formally admit it unjust but don’t blame you (though we might even have to punish you, but maybe not).

    I don’t like excusable injustices or separating blame from irresponsibility, but it might be forced upon us in the end even if only in rare cases.

    So my own suspicion is that almost any concrete evidence raises disutility (so to speak) enough to require investigation. But if it turned out that there were just too many cases which resulted from this, the above is probably the first thing I’d fall back upon while trying to work out the idea I mentioned earlier about the diminished utility of a life ridden with checking everything.

  18. Thanks, Trent. Let me see whether we agree on the following. Let E be my evidence that there is an explosive device wired to my car. Presumably E will include the evidence for this sort of thing happening in general and in my locale, etc. Let P be “there is an explosive device wired to my car”. Let’s agree that Pr(P/E) is low. The disutility U(P) is high. We have some disagreement on the value of Pr(P)U(P) (where Pr(P) is my updated probability of P on E).
    Sound right?
    Here’s the worry. I am standing beside my car and the idea crosses my mind that there might be an explosive device wired to it. I KNOW that there is evidence available to me that I do not possess (I haven’t looked under the car and I can do so). I KNOW that the evidence is easily obtained. And I KNOW that the evidence obtained might radically affect the value of Pr(P)U(P).
    Now you say, “but the value of Pr(P/E) is very low (and so is the value of Pr(P)U(P)), so you have no reason to check”. But the fact that Pr(P/E) is low does not entail that Pr(P/E & E’) is low. The evidence that I have for P is meagre, but that does not show that the *evidence available* for P is meagre. So the fact that Pr(P/E) is low gives me no reason to believe that, if I gather more easily obtainable evidence E’, Pr(P/E&E’) will also be low. So it gives me no reason not to investigate. For all I know Pr(P/E&E’) might be very high (as in fact it is, as the case is described). So, we arrive here: For all I know
    Pr(P/E&E’) is high; if it is high, then not investigating will have a very bad outcome; and I can easily look under the car. Should I investigate?
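
    Just to show that the low value of Pr(P/E) settles nothing here, a toy Bayes calculation with invented numbers:

      # Invented numbers: a tiny Pr(P|E) is compatible with Pr(P|E&E') being near 1.
      prior = 0.0001            # Pr(P | E): tampering is very unlikely on my current evidence
      like_if_P = 0.99          # Pr(E' | P & E): if tampered, looking under the car would show it
      like_if_notP = 0.000001   # Pr(E' | ~P & E): misleading "signs of tampering" are rare

      posterior = (like_if_P * prior) / (like_if_P * prior + like_if_notP * (1 - prior))
      print(round(posterior, 3))   # Pr(P | E & E') comes out at roughly 0.99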

  19. I’d say probably not (i.e. it’s not the case that you should check, not to be confused with its being the case that you shouldn’t check: I think it’s permissible). You needn’t check because you’ve got no reason to believe further investigation would uncover evidence of tampering. You say “the fact that Pr(P/E) is low gives me no reason to believe that, if I gather more easily obtainable evidence E’, Pr(P/E&E’) will also be low. So it gives me no reason not to investigate.” But of course not having a reason not to investigate doesn’t entail that you have a reason *to* investigate. Swinburne, in his _Epistemic Justification_, explicitly discusses the issue of when further investigation makes sense, and I think he rightly says that one important consideration is the probability that further investigation will lead to change in evidential status of important beliefs. In the case you describe my prior on there being any tampering is going to be so low I won’t expect investigation to change things. Let P = “The car’s been tampered with.” Let Q = “If I inquire further, Pr(P) will change significantly.” My prior on Q is very low. If I’ve got no special reason to check (as I don’t in your story) and I’ve got no reason not to check, the default is probably not to check. [It gets tough to calculate because paranoia-elimination has some utility but getting down and checking has some disutility, as does starting the habit of indulging paranoid urges, etc.] So if I had reason to believe that I wouldn’t be starting a bad habit and checking had very little disutility, then maybe I’d check, but here’s what I’m worried about: there might be a murderous gunman in my closet. If there were, it would be really bad and it’s so easy to check. But there might be a gunman under my bed or in my shower or in the attic or the other room. The disutility of endorsing a policy of checking on such things is huge. It’s not clear I’d value that life much. At any rate, I think it’s quite excusable not to check on such things even if doing so has positive expected utility (as I think it has not, for reasons stated (i.e. no evidence of badness, therefore negligible (or no) expected disutility), though I hasten to add once more that almost any positive evidence whatsoever would make it (inexcusably) irrational not to check). [This went much longer than I’d hoped, but I wanted to do your excellent question some kind of justice.]
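
    If it helps, here is the rough value-of-information bookkeeping I have in mind, with invented numbers (Q, as above, is the claim that further inquiry would significantly change Pr(P)):

      # All figures are hypothetical; the point is only the structure of the calculation.
      p_Q = 1e-6                # my prior that looking under the car would change anything
      benefit_if_Q = 1_000_000  # value of catching the tampering if there is something to catch
      cost_of_checking = 5      # getting down and looking, plus seeding a habit of indulging paranoia
      paranoia_relief = 1       # small utility of putting the worry to rest

      ev_of_checking = p_Q * benefit_if_Q + paranoia_relief - cost_of_checking
      print(ev_of_checking)     # = -3.0: by this toy accounting, checking doesn't pay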

  20. _I think he [Swinburne] rightly says that one important consideration is the probability that further investigation will lead to change in evidential status of important beliefs. In the case you describe my prior on there being any tampering is going to be so low I won’t expect investigation to change things._

    Agreed, your prior is low, or Pr(P/E) is low. So this makes it very unlikely on E that there is a wired explosive. But you say “In the case you describe my prior on there being any tampering is going to be so low I won’t expect investigation to change things.” But this doesn’t follow. From the fact that E is strong evidence we possess against P, and E’ is strong evidence in favor of P, it does not follow that E is strong evidence against E’. That is, your strong evidence against P is not strong evidence that investigation won’t discover E’. In the strongest case, you might expect the following to be true.

    1. [(E’ entails P) & (E disconfirms P)] -> (E disconfirms E’).

    But (1) is not a valid rule of (dis)confirmation. Indeed, it is possible that E’ entails P, E disconfirms P and E *confirms* E’. So (1) is false. A fortiori (2) is false.

    2. [(E’ confirms P) & (E disconfirms P)] -> (E disconfirms E’).

    But then you can’t conclude that E is evidence that there is no E’. Therefore E is not evidence against investigating further. I agree that we ought not to investigate further. But E cannot be the reason.
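
    Here is one toy probability space (my own construction, with arbitrary numbers) witnessing the possibility noted above, that E’ entails P, E disconfirms P, and yet E confirms E’:

      # Four exclusive, exhaustive outcomes with made-up probabilities.
      prob = {"a": 0.1, "b": 0.5, "c": 0.3, "d": 0.1}
      E_prime = {"a"}          # E'
      P       = {"a", "b"}     # E' is a subset of P, so E' entails P
      E       = {"a", "c"}

      def pr(event):
          return sum(prob[w] for w in event)

      def pr_given(event, given):
          return pr(event & given) / pr(given)

      assert E_prime <= P                        # E' entails P
      assert pr_given(P, E) < pr(P)              # 0.25 < 0.6 : E disconfirms P
      assert pr_given(E_prime, E) > pr(E_prime)  # 0.25 > 0.1 : E confirms E'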

  21. My statement about my prior wasn’t supposed to “follow”; it was merely a report of my subjective probability in that case. It wouldn’t be E that would disconfirm E’ in any event; it would be the total state of my background beliefs (including the relative (in)frequency of tampering and the lack of any evidence thereof).

    So let P and Q be as before; let ‘R’ state my belief about the relative frequency of tampering and let ‘E’ state the evidence I have for P (ex hypothesi none), let ‘B’ state all my other background beliefs. I’m saying that Pr(Q/R&E&B) is, for me, very low. Hope that helps clarify my position. Thanks for the discussion, epistemic responsibility is one of my main areas of interest. I hope Jon will blog sometime on the relationship between responsibility and praise/blame (hint hint).

  22. Toss all of it in, if you like. Let’s include R,E,B…add C and D. If I’ve left out a letter or two, say E or F, toss those in too. Even G, if you like. Makes not a jot of difference. You wind up with (1).

    1. [(E’ confirms P) & (R&E&B&C&D disconfirms P)] -> (R&E&B&C&D disconfirms E’)

    (1) is a flatly false principle of confirmation. There are any number of counterexamples to this sort of principle. Takes an historical inquiry to find them all: Jeffreys, Otte, Carnap, and on and on. Hope that clarifies my “position”.

  23. Alas, but I’m not trying to defend, apply, or invoke (1). I’m only trying to explain the way my utility assignments and confidence states would lead me to behave.

    I think this shows that I *can’t* be appealing to (1), for ex hypothesi at the time at which I’m deciding whether to investigate further, E’ doesn’t describe any of my belief-states, so I can’t have (E’ confirms P) as a premise. I’ve been in full-on personalist mode the whole time.
