Ethical Internalism in Epistemology

Think of ethical internalism as a view that insists that moral beliefs and attitudes are intrinsically motivating. This kind of internalism/externalism issue has had little play in epistemology, and its absence is puzzling.

So suppose we distinguish epistemic principles from logico-inductive ones, as we find in Chisholm. Suppose we then consider the position that insists that e can’t be evidence for p for S unless S’s being aware of e or believing e inclines S to believe p; unless, that is, e’s presence in S’s noetic system is intrinsically motivating with regard to belief that p.

I’m leaving out lots of subtleties here, but the details aren’t my present concern, which is two-fold. First, would such a view be somehow a lot less plausible than the similar kind of internalism in ethics? Second, would that kind of internalism in epistemology somehow give grounds for thinking that evidential connections are nonfactive?


Comments


  1. Pingback: Desert Landscapes » Blogosphere roundup

  2. Jon,
    As it happens, there has been a small bit of discussion of this among ethicists (i.e., non-experts in epistemology!). In particular, I’m thinking of Elijah Millgram’s paper in Noûs, “Williams’ argument against External Reasons”. He thinks it’s pretty obvious that arguments against external reasons for belief are nonstarters. As I remember, the Shope-style counterexamples play an important part in his argument. So suppose there is a theory such that if you found the reasons for the theory convincing, the theory would be false (perhaps it is a theory about, or entailing, something about what you find convincing). He seems right about that, I think. There are reasons for belief that cannot incline one to have the belief for which they are reasons.

  3. Robert, I’m not sure I understand the case properly, so let me try out what I think is going on.

    Suppose you have evidence that a theory is true. I’m not sure what comes next. Is it that if you believe the theory on the basis of the evidence, then the theory is false? That doesn’t undermine the proposal. Maybe it’s that you know that if you believe the theory on the basis of the evidence, then the theory is false? This is the sort of case that the distinction between propositional and doxastic justification is geared to solve. Sometimes belief undermines the evidence for it, so that if you follow the inclination that the evidence provides, the belief itself will undermine your justification for believing it. So in this case, if internalism were true, you’d be motivated to believe the theory, but know that if you followed your inclination, you’d have additional evidence (which you don’t now have) that the theory is false. So what you know is that for the theory to be true, the inclination will have to be resistible.

    Here’s a case I’ve used in the past to illustrate the point. Suppose you know that you’ve never squared a double-digit number. Then you have evidence that you’ve never considered the claim that 12 squared is 144. But if you follow your evidence and add the belief, the belief will be irrational.

    This case is a bit different from what I interpret the Millgram case to involve, but the same principle is at work in both: belief itself can create or undermine evidence.

    So I’m not sure yet that this kind of argument could be used to show that evidence is not intrinsically belief-motivating.

  4. Jon,
    More about the cases: The structure of one case Millgram had in mind was one in which one grasps evidence supporting a theory that has consequences for your epistemic abilities. But the theory predicts one’s inability to infer the theory from that evidence. Maybe the theory is too complicated or the inference too abstract for you, or you have some epistemic failing. If you could make the inference to the theory, then the theory would be false and the evidence suddenly inconclusive.
    A stronger case is one in which there is sufficient evidence for a theory that has consequences regarding your inability to grasp the evidence; it predicts that you cannot even find the evidence at all compelling. If the evidence inclined you to believe that theory, then the theory would be false, since it implies that you cannot find it compelling.
    It seems these kinds of cases show it is possible for there to be ‘external’ evidence supporting a theory, evidence that not only is not compelling to the person who entertains it, but cannot be compelling.

  5. Robert, these cases are interesting, but I think misdiagnosed. Let’s take them from last to first:

    A stronger case is one in which there is sufficient evidence for a theory that has consequences regarding your inability to grasp the evidence; it predicts that you cannot even find the evidence at all compelling. If the evidence inclined you to believe that theory, then the theory would be false, since it implies that you cannot find it compelling.

    There are two cases here, one where the theory predicts that the evidence can’t be compelling and the other where it predicts that the evidence can’t even incline one toward belief. The first sort is compatible with the internalistic theory I suggested, since that theory only talks about inclination toward belief. The second theory may be incompatible with it, but maybe not. If the theory says, “you can get evidence for me, but none of the evidence will motivate you to believe me,” that’s OK if the evidence is insufficient to justify believing over withholding. Another option for the internalist here is to say that you have a competing inclination that wins out, an inclination not to follow the first inclination to believe when the evidence is too weak. So suppose the theory says something stronger: “you can get evidence for me that is sufficient to justify believing me, but you won’t be inclined to believe me.” That’s possible too, if there are non-epistemic motivations that outweigh the epistemic motivation to believe. So, try one more time: “you can get evidence sufficient to justify believing me, but even if you have no competing non-epistemic motivations, you’ll have no inclination to believe me.” We’re getting closer here, but notice that for the theory to provide a counterexample, we have to suppose that the theory is true or possibly true, and I don’t see why we should think that.

    So try the first case:

    one in which one grasps evidence supporting a theory that has consequences for your epistemic abilities. But the theory predicts one’s inability to infer the theory from that evidence. Maybe the theory is too complicated or the inference too abstract for you, or you have some epistemic failing. If you could make the inference to the theory, then the theory would be false and the evidence suddenly inconclusive.

    To make this case work, the theory will have to be incomprehensible (otherwise inferring it will be easy), so we’ll need a case in which one’s evidence is evidence for a theory, but the theory is beyond one’s powers of conception. In another thread, I toyed with the idea that this is not possible–that one’s powers of conception are a good limit to what one has evidence for (even if the very same information would be evidence for the theory for someone with greater intellect). And I think that’s the right answer here for such an internalist theory. Maybe another way to put the point is that epistemic reasons are perspectival in a way that moral reasons may not be, so counterexamples to this kind of internalism may be more difficult than Millgram thinks if he’s assuming too little perspectivalism regarding epistemic justification.

  6. Jon,
    I’m going to think about your last post. Let me first try this out on you.
    I am taking your proposal to be that, necessarily, e is evidence for p for S only if, if S grasps e, then S is inclined to believe p.
    (if it’s not this, or you want to revise, then the rest is irrelevant)
    Suppose p is ‘S is unable to pick up on social cues’ and the evidence is that whenever and wherever social cues are present, S fails to pick up on them.
    It seems the evidence is evidence for S to believe he is unable to pick up on social cues.
    But if S grasped the evidence, then he wouldn’t be inclined to believe that he is unable. It would then be, after all, false that he is unable to pick up on social cues.

    In any case, I think I’m getting away from the spirit of Millgram’s idea, since he simply thinks that Williams-style arguments against externalism are too broad. If there’s anything in them, it would have to be something about reasons for actions in particular, and not just reasons tout court, that makes externalism implausible.
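
    A rough formalization of the proposal just stated, with predicate letters chosen purely for illustration (Ev(e, p, S) for “e is evidence for p for S”, Grasp(S, e) for “S grasps e”, and Incl(S, p) for “S is inclined to believe p”):

    $$\Box\,\big[\,\mathrm{Ev}(e,p,S) \rightarrow \big(\mathrm{Grasp}(S,e) \rightarrow \mathrm{Incl}(S,p)\big)\,\big]$$

    The social-cues case is then offered as one where Ev(e, p, S) plausibly holds, yet any situation in which S grasps e is a situation in which p is false, so grasping e would not incline S toward p.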

  7. Robert, your social cues example is interesting. Were you thinking of anyone in particular???

    I don’t think an internalist should clarify the view as you do, primarily because of the phenomenon of belief itself undermining and creating evidence. So if we say that what it is for e to be evidence for p for S has something to do with inclination or motivation to believe, the thesis will have to be clarified in a way that accommodates the point that e can be evidence for p for S and yet if S were aware of, or believed, e, it would cease to be evidence for p for S. I think that’s the explanation that ought to be given of the social cue case (though it is evidence for the claim that the person rarely gets social cues).

    That leaves the question how to clarify the internalist idea I just cited. The conditional account you give looks like a natural way to clarify the view, except it doesn’t accommodate the way in which belief can create and undermine evidence.

  8. I believe that an exact analogue of the plausible versions of (what metaethicists call) “ethical internalism” applies to reasons for belief, just as it does to reasons for action.

    However, I wholeheartedly agree with Robert that many formulations of ethical internalism fall to Shope-style counterexamples. Fortunately, though, ethical internalism doesn’t have to be formulated in a way that makes it vulnerable to such counterexamples.

    (Also, it’s important to remember that not all versions of ethical internalism embrace Williams’ claim that all reasons for action are “relative to the agent’s subjective motivational set”: e.g. neither Nagel’s nor Korsgaard’s version of internalism embraces Williams’ claim.)

    One version of ethical internalism is “ethical judgment internalism” (EJI), according to which there is a necessary connection between ethical judgments and motivation. One plausible version of EJI is as follows: Necessarily, if you are rational, and you judge ‘I ought to F’, then this judgment will motivate you to F. Another plausible version is: Necessarily, if you are in the habit of forming judgments about what you ought to do, then you have a general disposition to be motivated to do whatever you judge that you ought to do. (You can have such a disposition even if it isn’t manifested in every case.)

    The epistemological analogue of this is that if you’re rational, and you judge ‘There are compelling reasons to believe p’, this judgment will incline you to believe p; you must also have a general disposition to believe whatever you judge yourself to have compelling reason to believe.

    Another version of “ethical internalism” postulates a necessary connection, not between your judging that there is a reason for you to F and your being motivated to F, but between the existence of a reason for you to F and your being motivated to F.

    This version of internalism is indeed hard to formulate in a way that isn’t vulnerable to Shope-style counterexamples. But a somewhat similar thesis is true, I believe: Necessarily, if it is rational for you to F, then there is a possible process of good reasoning that could lead from your current state of mind to your F-ing. Moreover, this possible process of good reasoning consists of a sequence of steps, each of which is a step of a sort such that you have a general disposition to take steps of this sort when you consider them.

    E.g. if it is rational for you to believe p, then there is a possible process of good reasoning that leads from your current state of mind to your believing p. This process of reasoning consists of steps like instances of modus ponens; and you have a general disposition to make modus ponens inferences when you consider them.

  9. Ralph, excellent ideas here. The distinction between judgment internalism and existence internalism with respect to reasons is especially helpful. I don’t detect in your remarks any temptation to think about justification nonfactively as a result of either kind of internalism, but maybe there’s more to the story?

    I’m most attracted to the existence version of the view, though I’m not sure that there’s a consistent viewpoint here. (My remarks to Robert show one difficulty–the bare bones view has to be qualified to avoid obvious counterexamples, and it is not clear to me whether the required qualifications undermine the motivation for the view.)

  10. I agree, Ralph, that the analogy would be necessarily complicated because of the vast sea of distinctions in the area (unlike Millgram’s paper, which is focussed solely on Williams, and is not particularly concerned to work out an analog). Steve Darwall has made innocently wading into this area very difficult! The judgment/existence internalism distinction is only the beginning of those that might be of great importance to this very issue. And many of these complications come precisely from a desire to avoid Shope-style counterexamples. I myself think these cures introduce new problems, but your own versions for belief look promising. (Though, re this last proposal, it might be rational for you to believe that you have no disposition to make modus ponens inferences when you consider them, mightn’t it?)
    The most attractive aspect of the analogy is in the arena of reasons for belief, rather than judgments about them. The basic puzzle for me is this: It does seem that if there is a reason for you to believe something somehow this should be connected to your (for lack of a better word) inclination to believe it. It looks almost as if it is of the very essence of a reason to believe that it at least has the capacity to be compelling in this way–to the person for whom it is a reason at least. Getting a good clear idea of what this connection is is one problem. But another problem seems to me to be simply understanding how the connection between a reason to believe something and believing it could be routed through the believer’s motivational system at all and yet leave the believer behaving in an epistemically responsible way. (Perhaps I’ve read more into the proposal than is there.) The connection between a reason and the belief for which it is a reason seems to need to be something quite different from the connections we are likely to find plausible in the practical arena. What I’m thinking of is pretty unsophisticated, but amounts to something like the worry that believing something because you want to believe it is made no better when your wanting to believe it was engendered by a genuine reason to believe it. Being motivated to believe what you believe seems to me to be mostly a bad thing. Somehow the reason to believe must directly determine your belief *without* motivating you to believe it.
    I may be reading more into the proposal than needs to be there, however. Maybe the reason isn’t really routed through motivation.

  11. John–it’s good to hear someone is working on this, but we want to hear more! Problems you see, etc.; and do you find the view suggesting that epistemic evaluations are nonfactive?

  12. Uriah, interesting idea, that ethical internalism entails epistemic internalism. But even if you defend the spirit of ethical internalism by insulating it from the epistemic realm, don’t you think there may be worse problems left untouched?

    It seems that internalism about non-moral reasons for action needs more defense. In most respects, it seems much more attractive to be an internalist about non-moral than moral reasons for action. The existence of a prudential reason for us to do something, for instance, seems to require a connection to what we are instrumentally rationally motivated to do. But moral reasons seem to want a weaker connection to what we are instrumentally rationally motivated to do. Indeed, it is internalism about moral reasons that troubles so many people precisely because it seems (paraphrasing Frankena’s immortal words!) to trim morality to fit what happens to motivate us. That may not be fair to the plethora of internalist views out there, but I am quite sure it gets at the core of externalist uneasiness.

    As you say, one way to preserve both moral and nonmoral internalism would be to deny outright the voluntarism required for the consequence, but it would be nice if there were something about reasons for prudential and moral action strictly speaking that blocked the inference.

  13. There are lots of fascinating points here. I can’t possibly comment on them all. Here are just a couple of thoughts off the top of my head.

    On Robert’s point that it’s not clear “how the connection between a reason to believe something and believing it could be routed through the believer’s motivational system at all”: ‘motivation’ is a potentially misleading term.

    Sometimes ‘motivation’ is used to refer to (what is sometimes in the philosophy of action called) “desire” or “wanting” or “pro attitudes”. But then it wouldn’t make any sense to say that an action (or an intention) is motivated by a desire.

    So it seems that there is another use of ‘motivation’ such that to say that an action or attitude is “motivated by” some antecedent mental states is to say that there is a correct folk-psychological explanation according to which the agent has that attitude, or does that action, precisely because she is in those antecedent mental states.

    It is this second sense of ‘motivation’ that is relevant to epistemology, I think. (It’s controversial, but I think in this second sense the term refers to what is sometimes called the “basing relation”.)

    On Uriah’s worry about “‘ought’ implies ‘can’”: there are different sorts of ‘ought’, I think. Every kind of ‘ought’ implies at least logical possibility. Some kinds of ‘ought’ also imply something more like a reliable ability; but by no means all of them do. E.g. in some contexts, I think it could be quite true to say “I ought to win the lottery”, even though the speaker certainly doesn’t have a reliable ability to win the lottery (or compare Macbeth’s line on his wife’s death “She should have died hereafter”). So I’m not at all worried about Uriah’s attempted counterexample.

    On moral vs. non-moral reasons, etc.: with respect to “ethical judgment internalism”, I have generally started out by defending the version that focuses, not on judgments about relatively “thick” reasons for action (like moral reasons etc.), but on “thin” normative judgments (such as judgments about what one has all-things-considered most reason to do). If one isn’t motivated to act by these judgments, then one is simply being akratic; and I think it is fairly plausible that (i) such akrasia is irrational, and (ii) every agent has at least some (typically highly fallible) disposition not to be akratic in this way.

  14. Ralph, it appears that you’ve written some things on this topic that I’ve missed; can you point me to them? I’m especially interested in your remarks about akratic responses, since that issue comes very close to the issue I’m trying to sort through about cases where you know that something would be evidence if it were true, but you also know that were you to acquire the evidence, believing the information would block its confirming power. My hope is to explain away these kinds of cases as a problem for the internalism in question…

    I especially liked your remarks about motivation. The idea of there being a difference between routes through affective states and routes not through such states was very helpful.

    On Uriah’s worry, which I don’t think I’ve sorted through entirely yet, my first reaction, however, is to wonder whether such worries have adequately internalized the lessons of Frankfurt-style counterexamples.

    Finally, one idea for Robert and the wonderful Frankena remark. I think it’s no surprise to hear Frankena worry here: the more “fallen” you conceive human beings to be, the less plausible internalism about moral reasons will be. And the same point, I think, underlies your concern that internalism is more plausible about practical reason than about moral reason…

  15. Robert, I haven’t thought about this much, but my sense is that internalism about moral reasons is more attractive than ethical internalism about epistemic reasons (though both appear less attractive than, say, internalism about prudential reasons).

    To be sure, internalism about moral reasons faces the Frankena problem, which Brink (for instance) plays up a lot in his attacks on internalism. But that’s a matter for another discussion. Someday I’ll post my view on this on Desert Landscape. In short, my view is that a Bernard Williams-style formulation of internalism can get around it. On this formulation, crudely put, internalism is the thesis that S has a reason to phi only if S can *bring herself* to be motivated to phi – which is not the same as claiming that S has a reason to phi only if S *is* motivated to phi. The “tailoring” or “trimming” of morality then becomes much more minimal. (Check out http://www.arizonaphilosophy.com/index.php?p=38 for more on this.)

    Another issue: I would resist Ralph’s move to the second sense of motivation. In its original formulation, it is essential to ethical internalism that it connects reasons for action to motivation in the sense of a pro-attitude, or more generally a mental state that has a telic, world-to-mind direction of fit, rather than a thetic, mind-to-world direction of fit. If we formulate ethical internalism about epistemic reasons in terms of the latter type of state, then the analogy with ethical internalism might become idle.

    It is true that it sounds strange to say that “S’s phi-ing was motivated by her desire to phi.” But there are other ways to put it that sound better: e.g., “S was motivated to phi by her desire to phi.” The source of the problem is that speaking of actions being motivated is odd, whereas speaking of agents being motivated to act is not.

    Finally, regarding “ought implies can”: this principle is intended to apply to the normative “ought.” When S judges that she ought to win the lottery, if the use of “ought” is not metaphorical there, then at the very least it is a non-normative use. BTW, I find that another topic underdiscussed in the literature is the logical relation between the “ought implies can” principle and ethical internalism. But that’s a topic for 30 other posts…

  16. There are so many issues here to pursue! Like everyone else, I will only venture a couple of comments here that interest me.

    Uriah, I now see my comment on non-moral reasons for action can be read to mean exactly the opposite of what I intended. Internalism about non-moral and like reasons seems very, very attractive; so if that kind of internalism entailed internalism about epistemic reasons, and internalism about epistemic reasons were ill-considered, then that would be a worse result than if ethical internalism entailed it. Externalism about moral reasons is more palatable than externalism about non-moral reasons (to me, anyway, though I hold out hope for a defensible internalism some day).

    The Frankena motto to me encapsulates the worry I have about epistemic internalism (Jon rightly picks up on my (regrettable) Calvinist upbringing!). I can only think of this in metaphors and pictures at the moment, and the things I find unnerving may not hang on whether Uriah’s or Ralph’s views on epistemic motivation win the day. The problem is (for lack of a better way of putting it) the contingency of human nature. It’s routing evidence through contingent features of human nature that raises my concerns. Evidence and reasons, it seems, shouldn’t be trimmed to what happens to move a person to belief, whether that’s a pro-attitude or not. It’s not so much the ‘fall’ of human psychology, but its contingency in this regard, that raises concerns for me.

    Suppose, then, we make the internalism Williams-style, just for the sake of argument. Then we would say:
    ‘e is evidence for S for p only if e is related in the right way to some element in S’s epistemic motivating factors M that would move her to believe that p’

    Let the motivating factors M be somehow isolated from S’s motivations to act in some way. They are factors within S’s noetic structure that move her from evidence to belief, belief to belief, etc., whatever those things are.

    My worry then can be restated in this way: the elements of M seem every bit as contingent and variable across times and persons as are desires to act and the like. What moves S to belief changes over time, for instance. But we (well, I at any rate) want to be able to say things like ‘All the evidence at the time was there; it just wasn’t possible for me to be moved by it back then’, or ‘He just isn’t in a state in which he can be moved by the evidence in front of him.’ (Denial’s not just a river in Egypt!)
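
    A rough symbolization of this Williams-style formulation, again with illustrative symbols only (M_S for S’s set of epistemic motivating factors, R for the unspecified “related in the right way” relation, and Moves(m, S, p) for “m would move S to believe that p”):

    $$\mathrm{Ev}(e,p,S) \rightarrow \exists m\,\big(m \in M_S \wedge R(e,m) \wedge \mathrm{Moves}(m,S,p)\big)$$

    The worry just expressed is then that what belongs to M_S is as contingent and variable across times and persons as an agent’s desires are.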

  17. That’s a cool idea, Jon. Count me in. Michael Gill and I have been working on a paper that applies to metaethics (in particular, reasons for action) certain moves that parallel moves that have been made in discussions of epistemic reasons (reasons for believing). It would fit well in a conference on these sorts of issues. I’m thinking it may be tricky to carve out a conference topic that’s not too narrow and not too wide, though.

  18. Robert, one other thought on your worry about the contingency of human nature. Suppose we adopt what I take to be a realistic assessment of the human epistemic condition: we shouldn’t assume that we’re really good at telling what the truth is, and we shouldn’t assume either that we’re very good at telling what are the objective marks or indicators of truth. What we have left, once we realize the effects of fallibility here, is to pursue the truth by whatever light has been given to us. It is to form and hold beliefs that answer to our deepest standards for how to pursue the competing goals of getting to the truth and avoiding error now. In the fundamental sense of rationality relevant to epistemology, this is what it is to be rational (though I’m being intentionally noncommittal about identifying this kind of rationality with epistemic justification).

    Now, suppose we think of rationality-makers on this account. Further, assume that it is not a mere accidental generalization that human beings are interested in and curious about the way the world is, about what is true. To make the contingency of human nature a concern here, the deepest standards to which our epistemic practices need to conform in order for us to be rational would have to be standards that are somehow implicit in our makeup and yet were idle wheels in the explanatory account of the beliefs we form and hold. This is possible in particular cases, analogous to akratic action, perhaps. And if one adopts Foley’s line that one’s deepest standards are the conclusions you’d come to after an unlimited amount of time to reflect, then your worry is sufficient to undermine an ethical internalism here. But the right view of Foley’s theory, I think, is that it’s the result of wanting to operationalize too much the notion of deepest standards. One can be epistemically irrational because of conflicting concerns, such as practical well-being, and even an interest in getting access to a greater number of truths in the future. But if we control for all of these things, what we should expect is that one’s deepest standards for getting to the truth and avoiding error now are just the sorts of things that get displayed in belief, just as we should expect that, when we control for interfering factors, a person’s deepest moral standards bubble to the surface to be displayed in their actions (in both cases, of course, the bubbling up is only plausible for the all-things-considered application of the key notion, as Ralph suggested above). The crucial point is the ceteris paribus clause of course, which here as elsewhere threatens to trivialize the view. But if one can develop a theory that doesn’t reduce to triviality in this way, it would be an immensely satisfying account. Maybe even correct…

    Uriah–you’re right that a standard conference idea would be hard. Maybe the solution is to think more in terms of a workshop, with less emphasis on the presentation of drafts of papers, and more focus on a question or set of questions a bunch of us might be interested in, with the idea that publishable thoughts will come out the backdoor of the workshop rather than being brought to a conference… Such times are often much more useful than conferences anyway–if you’re like me, the parts of a conference I cherish most after returning home are the late-night conversations about ideas that have not yet developed into anything publishable but which benefit from being batted around, and sometimes beaten up, in the process…

  19. Jon, I think a workshop on the topic is a great idea, or perhaps a broader conference on unexplored areas in which ethical theory and epistemology could benefit from sharing insights.
    From your last comment, I’m starting to get a better focus on what’s bothersome about epistemic internalism, and it connects up to Uriah’s worry about Ralph’s distinction in motivation.

    I think we could be optimistic about human nature when we’re talking about our beliefs, in the sense that they are more or less oriented toward the truth, or conforming to the world. But why be optimistic about the other elements of our psychology, in particular, the desire-like parts? Return to practical matters for a moment: Desires simpliciter have an orientation to the world that is precisely the reverse of belief. While belief is deferential to the world, desires are parts of our psychology that demand change from it, that it be otherwise than what it is. If the internalist connection is to something desire-like, then it seems to be pulling in the wrong direction.

    One area of the world to which we might want our epistemic nature not to be entirely deferential is our own psychology, and, in particular, our set of beliefs. We should have attitudes about the sorts of beliefs we should and shouldn’t have, put bluntly. Suppose, then, there is some desire-like non-deferential aspect of our psychology that disposes us to change our set of beliefs to match it. That looks something like a desire, though its object is belief. That desire-like thing would have to be something like a desire for truth, a desire to have one’s beliefs conform to the world. Then, your account would be something like this:

    A given consideration wouldn’t be evidence for an agent for a given proposition unless it inclines him *in virtue of his desire for truth* to believe that proposition.

    I don’t know why we might all have a desire for truth; it sounds too high-minded. Is it constitutive of epistemic rationality? Sophism seems epistemically rational to me.

  20. Robert, you’re right that we shouldn’t construct an account that presumes that everyone has a love of truth. The route you suggest for the internalist here seems too indirect to me, though. I don’t have a view here yet, I only know what I want to be true!

    So, here’s what I want. Suppose part of the difference between there being evidence and your having evidence is some at least implicit recognition of the supporting relationship the information provides for certain claims and not others. You don’t, that is, have the information as evidence and then search around or deliberate as to what it might confirm (though our theory should explain the phenomenon that such a description points to, even if it denies the correctness of the description). That it is evidence for what it is evidence for is something that is worn on its face (akin to the assumption of some of the ancients that to do something truly bad you have to turn your face from the good alternative and focus exclusively on the good-making features of the bad action–that is what I mean by having the goodness-for-believing-p worn on the face when you have the information in the way of evidence). In this way, one’s evidence makes the truth of a view intelligible to one, at least prima facie; it makes holding the belief in question attractive in intellectual respects.

    So what does this recognition consist in so that it moves one toward belief in this way? I don’t know, but there’s something to the idea that having evidence is something beyond merely having information that, according to correct principles of confirmation, supports p. (Unless, of course, what goes into the correctness of such a principle accounts for this intelligibility.) What I want to say, though, is that the recognition or awareness that yields intelligibility makes, on its own, believing the claim attractive, without needing to be routed through some standing desire for the truth. We can’t always get what we want, of course, but I really want this!!

  21. Jon, we can’t always get what we want, but if we try sometimes, we just might find, we get what we need!

    I agree we don’t want a love of truth to do the work here, and your description of what you want brings out nicely the picture. I wonder whether this is really a kind of internalism, however.

    Let’s assume, as I think many do, that it is constitutive of belief that it is a disposition to respond to the world in the way you describe. That would mean that, among other things, evidence e for p disposes e’s possessor to conform his mental states to p. To come to possess e is nothing more than to become so disposed by e. (I say ‘among other things’, since we’re supposing this is a rational disposition of some sort.)

    This seems to be the reverse sort of ‘magnetism’ that we find in the ethical case. In the latter case, possessed information doesn’t lead to my mental states changing to p, but to the world changing to ¬p. My mental states resist in the case of practical internalism. Reasons to act are connected to resistance to the world. Reasons to believe are connected to accommodation; my mental state caves in and conforms in the case of belief.

    So perhaps we can say that there is a parallel connection to a disposition wanted in both cases. But there is a vital disanalogy in the cases as well. I think Austin described this as the difference between having a hat and trying to fit it to the right head, versus having a head and finding the hat that fits it.

  22. Very interesting.
    Nobody has addressed

    Second, would that kind of internalism in epistemology somehow give grounds for thinking that evidential connections are nonfactive?

    Here’s a brief answer: Judgment Internalism provides grounds for thinking that (judgment of) evidential connection is non-factive. Existence Internalism does not.
    In ethics, neglecting that difference leads to the mistake of thinking that the Humean theory of motivation is essentially connected to expressivism. Those aren’t essentially connected, unless the two kinds of Internalism are.

    I foresee a terminological nightmare coming. Soon, every view in epistemology will be called ‘internalism’ or ‘externalism’.

  23. I agree, Jamie, that judgment internalism would make the connection factive only if it held that only true evidential beliefs engender belief in the evidenced proposition. Those who think it is the content of the belief–thinking of a consideration under the heading of ‘evidence’, e.g.–that is motivationally important will not require the belief that e is evidence to be true. Whether or not there is a connection to existing evidence would not be important. Those who think evidence motivates, but who think it motivates only through belief or awareness of it, will have a hybrid. e is evidence for p, then, only if believing that e is evidence for p motivates belief in the evidenced proposition p.
    It is an odd view, however. It makes the existence of evidence contingent on our believing it to be so.

  24. Sorry for the bloggorhea…another thought.
    It strikes me that Jon’s first formulation of the position connects the existence of evidence with the motivational efficacy of belief in that evidence. That seems to require the evidential connection to be factive, even if the connection to motivation of existing evidence is routed through belief in it.

    Take the RHS alone, however, and ask whether we should hold that view. The issue then is why it should be true that awareness of e engenders belief in p, the proposition for which it is evidence. One answer is the content answer: S must be thinking of e under the heading of evidence for p for it to motivate belief in p. Another answer, however, is that it isn’t the distinctive nature of thinking under the heading of ‘evidence’ that gets the mind moving. It’s the evidence itself that does it. If so, then somehow it would have to be possible for there to be a connection between existing evidence for a proposition, awareness of that evidence, and then motivation to believe that proposition, requiring S to be aware of it but moving S not because S is thinking of e *as* evidence for p. e might in fact move S *both* to think of it as evidence for p *and* to believe p.

  25. Robert, I am lost. Remember that I am not an epistemologist and go gentlier.

    Ohhhhhh….

    To clarify: I understood the word nonfactive in the original to mean what nonfactual means in Kit Fine’s “The Question of Realism”. Is that how you were understanding it? Or were you understanding it to mean what factitive means? Or something else? Or maybe the original hypothesis, that I am lost, was correct.

  26. Jamie, perhaps it is I who am lost, since your original comment seemed right. And I’m gentlier than the other roughhousers ’round this blog ;-). (BTW, I don’t fully understand what Fine means by ‘factual’, remember!) I am also an interloper, as you know. Jon can help.

    I took the issue to be whether e’s evidential connection to p for S could be a matter of fact in some way, were it to be the case that necessarily e is evidence for p only if, once S is aware of e, he is inclined to believe p. Maybe being evidence just is nothing but what inclines a believer to believe.

    I suppose one way that it might work is that the awareness of e couldn’t incline S to believe p unless that state of awareness were in some way *noncognitive*. But that cognitive states don’t motivate is at least a somewhat controversial proposition among ethicists, anyway, and no special issue would be raised concerning evidence.

    Better, e’s status as evidence for p is dependent on its capacity to move S to belief, once S is aware of it. There’s something about e and its connection to p that makes it evidence, viz., once S is aware of e, S is inclined to believe p. That runs existence internalism about evidence (e is evidence only if it motivates one, under some circumstance, to believe p) through judgment internalism (the circumstance is that S is aware of e…maybe *as* evidence, I don’t know….and awareness of e is what does the motivating).

    There is a further issue, that was not raised in Jon’s original post, that you raised, which is essentially this: Is *believing* something to be evidence (rather than its actually being evidence) connected to the inclination to believe what it’s evidence for? If you think beliefs don’t motivate, then you might think that so-called belief is really some attitude you have toward e. Or you might think evidential beliefs do motivate, not in virtue of their content but in virtue of their subject matter. Then, it might be that only true evidential beliefs motivate, because evidence itself is connected to inclining the believer to believe what it is evidence for.

    That, at any rate, was my take on it. But Jon or you may show me to the door of this blog!

  27. Robert and Jamie, this is just what I was hoping to do, get some ethicists who know more about ethical internalism than most of us epistemologists do to talk about this stuff. Robert has referenced rock-n-roll and Jamie sex; all we need now is some drugs and we’d have the holy trinity of us over-40’s crowd… In that vein, Robert, you can check out any time you want, but you can never leave… but please don’t check out either…

    So, I think Robert is on track as to what I’m interested in. Let e be evidence for p. Then, if I become aware of e–either through belief or experience–then, as long as e remains evidence for p in my becoming so aware, I’m now disposed to believe p. I needn’t think of e as evidence for p for this to be true, and the disposition needn’t filter through any judgment on my part that e is evidence for p. The one complication, however, is that I want the theory of evidence itself to be subject-relative, and if this were the best of possible worlds, it would be subject-relative in just the way Bayesian conditionalization imagines it to be: the theory of evidence would be encoded in one’s conditional degrees of belief. Since this Bayesian view is false, all I can do is say, “like that, only different…” As you can see, I have a desire in search of a theory…

    What I glean from your discussion is that the view is not a version of judgment internalism. Would it be different if I endorsed the Bayesian view about the theory of evidence? Such a view would involve the idea that some implicit judgment is present to the effect that e is evidence for p when it is, but this judgment is only connected in an ancillary fashion to the inclination to believe. What is true is that e wouldn’t be evidence for p without this implicit judgment, but I don’t see my way through to an answer to the question of whether this implicit judgment is part of the story of what motivates belief when one has the evidence e…

  28. Hmmm.
    As I said, it seems to me that Judgment Internalism is reasonable grounds for nonfactualism — as reasonable as it is in ethics, anyway. Here’s an argument.

    According to JI, the state of taking e to be evidence for p is necessarily accompanied by an inclination to believe p in the presence of e. (I’m ignoring the Shopish difficulties with JI.)
    But, distinct states cannot be necessarily connected.
    So, the state of taking e to be evidence for p must be (part of?) an inclination to believe p in the presence of e.
    But inclinations to believe are not beliefs. So, the state of taking e to be evidence for p must not be a belief.

    That’s nonfactualism about evidence.

    Yeah, it’s a little rough.

  29. Bayesianism
    Darn, I thought Bayesianism was true. Hanging around this street corner is disillusioning.

    I don’t exactly see how to fit Bayesianism into this context. I guess that according to Bayesians, taking e to be evidence for p is having c(p|e) > c(p), where c(.) is your credence function and c(.|.) is defined in the usual way. Now, how exactly is that credence function connected to an inclination to believe p in the presence of e? Well, Bayesians think that we are rationally required to update by conditionalizing. That doesn’t seem to lean Bayesians toward nonfactualism. (Have I missed something?) For e to be evidence for p (for a subject S) is for c(p) and c(e&p) and c(e) to have certain values when c(.) is S’s credence function. So that’s a fact all right. It’s a fact about S.

    I wonder if I’m overusing boldface or italics. It looked so cool in Ralph’s post.
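
    Making the “usual way” explicit, the Bayesian reading described here amounts to something like the following (C_S is S’s credence function; this rendering is only a gloss on the comment above):

    $$e \text{ is evidence for } p \text{ for } S \iff C_S(p \mid e) = \frac{C_S(p \wedge e)}{C_S(e)} > C_S(p)$$

    Since the right-hand side is settled by the values C_S assigns, the evidential connection comes out as a fact, and in particular a fact about S, which is the point being made.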

  30. Jamie, putting your two comments together, here’s my worry about Bayesianism. The crucial point in your post about judgment internalism is that distinct things can’t be necessarily connected. Now suppose two things: first, that having evidence inclines necessarily toward belief, and second, that what it is to be evidence is encoded in one’s conditional judgments. Then part of the explanation of the necessary connection between having evidence and the disposition toward belief is the implicit judgment that constitutes something’s being evidence for the content of that belief. So we’re going to get some kind of necessary connection between the implicit judgment that constitutes e’s being evidence for p for S, and S’s being inclined to believe p when S comes to have evidence e. So when the inclination to believe p occurs, it is necessarily connected to S’s believing or experiencing e and S’s implicit judgment that constitutes e’s being evidence for p for S.

    So my worry, put in your language, is that if necessary connections have to be true in virtue of partial constitution, then the implicit judgment is not itself a belief. I’m not quite sure of how we get from here to nonfactualism, but if the inference works in the case of judgment internalism, it looks like it will work here as well. Or am I missing something???

  31. I get it.
    I never thought of Bayesianism as having enough metaphysical depth to include or entail one position or another about what states constitute what. It seems to me, just off hand, that there are at least a couple of Bayesian positions on your question. One is a Judgment Externalist position: judgments of evidence, a Bayesian could say, are not necessarily related to inclinations to believe, but only rationally connected. (There is a similar and somewhat popular move in metaethics.)

    The other way to go would be like this:
    Nonfactualism about judgments of evidence is true. Bayesianism is the nonfactual judgment that for each person evidential connections are constituted by conditional credences.
    That’s analogous to being an expressivist in ethics and also a utilitarian.

    I’ll think about this some more.

  32. Jamie, I see the distinction. I think, without existence internalism about having evidence, there is no question that a Bayesian can be a judgment externalist. The danger comes, it seems to me, when existence internalism is adopted, together with a subjective account of evidence itself. Then it looks like existence internalism slops over onto the implicit judgment that defines the concept of evidence.

    I’m not sure about this, though. The nature of the necessary connection here is one source of worry, since the modality tying the implicit judgment to the nature of evidence is different from the modality in which having evidence is to be inclined to believe. This issue feeds into another, since we have a kind of transitivity at work in getting from the implicit judgment to the inclination to believe, and transitivity involving mixed modalities should always raise suspicions…

  33. Jon, I didn’t follow the difference in modality remark, but would like to…help. (Could be those chemicals from my youth…oops, we got our trifecta!)

    I think you’re right that the danger is in combining existence internalism about evidence with the view that e’s being evidence for S for p is a fact about S. Even if evidential relations are factual, they can be subjective, as Jamie points out (that is, the connection between e and p is a fact about S rather than a fact about the world outside of S). That is probably what bothers me about a noncognitivist view about judgments about evidence. It’s the idea that evidential relations are in some sense dependent on subjects that puzzles me.

    But perhaps evidential relations can be dependent on S, but not subjective.

  34. Robert, the remark about different modalities was a bit incautious. What I should have said was that it’s not clear the modalities are the same. One modality is the necessity of definitions: what it is for e to be evidence for p for S is defined in terms of the conditional probabilities for S. When it comes to the necessity of internalism, tying the having of evidence to the disposition to believe, we have, perhaps, the necessity of constitution. As long as both are kinds of the same sort of necessity–perhaps metaphysical necessity–then there is no problem here. My concern is whether in fact that is the case.

  35. Let me take another try (not at the modal issue, though).

    The Bayesian says the fact that e is evidence for p for S is the fact that Cs(p|e) > Cs(p).
    That Cs is the credence function for S. And maybe that’s not exactly the right Bayesian account, but close enough for our purposes, I hope.

    The question is how this could be so if Existence Internalism is true. Cutting a couple of corners, it looks like Bayesianism could be compatible with Existence Internalism only if the fact that Cs(p|e) > Cs(p) is the fact that believing e inclines S to believe that p.

    Well, maybe it is! If you’re into armchair functionalism, that doesn’t look like too bad an example. It would have to be hedged around with ceteris paribuses and maybe a few normallys.
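
    A minimal sketch of how that armchair-functionalist identification might look, hedged as just suggested; everything below (the class, the toy numbers, the ceteris_paribus flag standing in for the “ceteris paribuses” and “normallys”) is an illustrative assumption rather than anything the discussion settles:

    ```python
    # Sketch: treat the credence fact C_S(p|e) > C_S(p) and the hedged
    # disposition -- ceteris paribus, believing e inclines S to believe p --
    # as one and the same state, given a single functional definition.

    class Subject:
        def __init__(self, credence):
            # credence maps (e_is_true, p_is_true) pairs to probabilities
            self.credence = credence

        def c(self, p=None, e=None):
            """Credence in the region where the given truth-values hold."""
            return sum(prob for (ev, pv), prob in self.credence.items()
                       if (e is None or ev == e) and (p is None or pv == p))

        def e_is_evidence_for_p(self):
            """The Bayesian fact: C_S(p|e) > C_S(p)."""
            return self.c(p=True, e=True) / self.c(e=True) > self.c(p=True)

        def inclined_to_believe_p_on_believing_e(self, ceteris_paribus=True):
            """The functional-role side of the proposed identification."""
            if not ceteris_paribus:
                return None  # disposition masked; no verdict either way
            return self.e_is_evidence_for_p()

    s = Subject({(True, True): 0.4, (True, False): 0.1,
                 (False, True): 0.2, (False, False): 0.3})
    print(s.e_is_evidence_for_p())                        # True
    print(s.inclined_to_believe_p_on_believing_e())       # True
    print(s.inclined_to_believe_p_on_believing_e(False))  # None
    ```

    The design choice is just the functionalist one: the credence inequality and the (hedged) disposition to believe are given a single definition, so identifying the two facts is built in from the start.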

  36. Jamie, you’re exactly right here. I now see that what I was doing was confusing a factive Bayesian account of evidence with what I think has to be done to Bayesianism to handle the paradoxes of confirmation. So when I spoke of one’s deepest epistemic standards, I tend to think of that as being implicitly judgment-like. But that’s a different view…

    I’m still confused by the argument you gave above, though. The one about taking e to be evidence for p. Your argument concludes that such taking is not a belief, and concludes that this is “nonfactualism about evidence.” Here’s one source of my confusion. Suppose we adopt a Fumerton-like view that the taking in question is a type of acquaintance (and acquaintance is a different mental state than belief). Would that be nonfactualism about evidence?


  37. Suppose we adopt a Fumerton-like view that the taking in question is a type of acquaintance (and acquaintance is a different mental state than belief). Would that be nonfactualism about evidence?

    No. It’s nonfactualism iff the taking is not representational — not ‘cognitive’, if you will. Sorry about that.
    I’m finding it a bit difficult to write comments in this tiny window. It makes my style too telegraphic. Anyway, I’m sticking with that excuse for now.

  38. I bet I can fix the window size! I’ll try anyway. I know how to change the size of the comment popup window, which I’ve been meaning to change anyway. I’ll try to figure out the other as well…
