Knowledge and Rational Action

As a newbie, I figure it's appropriate that my first post on CD should be a bleg (blog-beg).  I’m revising a paper in light of some great comments by Matt McGrath from the Orange Beach/USA Pragmatic Encroachment Workshop and am going to be begging for guidance in the relevant literature.  I’ll give a brief intro and then post the rest below the fold.  The broad topic is the connection between knowledge and rational action.  I don’t think the linguistic stuff on assertion has very long legs, but I’m very interested in the arguments for interest relativism from the nature of rationality.  Hawthorne’s book didn’t pay enough attention to decision theory (for my taste) or to fallibilism, but what’s interesting is that Jeremy and Matt put it front and center (rather than the old bank cases or other to-me-sketchy linguistic data or to-me-fuzzy assertion stuff).

They defend a principle which entails the following.

(FM)  If you know that p, then your strength of epistemic position regarding p is sufficient for you to be rational in acting as if p.

They give pretty precise definitions of “acting as if p” and it gets a bit baroque, but they are careful and that’s good.  What I really like about their approach–in addition to what I said just above–is that they focus on the normative component which tunes out a lot of static about the basing relation and the doxastic side of the picture.  They nicely isolate the key issue (for me).

From 2002 on–when I first read the MS of Knowledge and Lotteries–the following kind of counter-example to FM-style principles has seemed right to me.

The Specialist and the Ubertest: D is the world’s foremost expert on fatal condition C.  D examines patient P and comes to know, on the basis of her expertise, that P has C.  Now, there happens to be a rather amazing test which costs only a penny to perform and never gives false results (ever).  P is ready, willing, and able to take the test.

That’s taken from my Master’s Thesis in 2003 or 2004, but it’s essentially the case that first came to mind, as similar cases have for many people (Jessica Brown’s surgeon case is very similar, and Stew Cohen pointed out in an early PPR symposium that such cases would be easy to generate).

[I think any good definition of “acting as if p” should have it come out that in this case D should not act as if P has C, since that would mean forgoing the test, which clearly isn’t rational.  I don’t *think* we have to re-hash the intricacies of F&M’s definition of “acting as if p” for me to seek the help I need here.]

Such c-e’s have come quickly and been decisive for many of us, but Matt has always put a lot of stress on the somewhat abominable conjunction (SAC).

(SAC)  “I know that the test will come out negative, but we have to take it anyway.”

or the more focused

(SACF)  “I know that F-ing will have the best consequences, but it wouldn’t be rational for me to F.”

I have to confess that (SAC) doesn’t sound bad to me and that I don’t put much stock in (SACF), since it’s a formula, not a plausible utterance, and devoid of context.  Still, if we want to address the concerns of real-life interest relativism–or at least its defenders–we need to do the following:

TASK  Explain why the SAC’s sound bad, if true.

Unsurprisingly, I want to pull something like a WAM on the SAC’s.  In our work together, Patrick Rysiew and I have defended WAMing CKA’s (concessive knowledge attributions), and we’ve been careful to distance ourselves a bit from standard Gricean accounts of extra-semantic conveyance.  My own view is that felicity is a function of expectation.  When we hear what we expect, things are copacetic.  When we hear something unexpected, we wig out.  We don’t consciously keep track of data, of course, and we can’t always tell what’s bothering us, and the infelicity comes in degrees (doesn’t this sound more plausible than a flat-footed version of the Gricean account?).

So I think that in the normal case–by far the most common–knowledge will suffice for action, and so there’s a strong expectation that instances of “S knows that p” can acceptably be followed by instances of “S rationally acts as if p.”  So here’s the first bleg.

BLEG 1  Does anyone know of any psychological research on how fast people go from (Fa1 & Ga1), (Fa2 & Ga2), … to “Necessarily, for all ai, (Fai -> Gai)”?

I’m guessing we do this really, really fast (it’s like a dominance heuristic), so that ordinary language considerations lead to hasty generalizations.  No one, I take it, doubts that knowledge is by and large sufficient for action–think of all the little instances all day long.  What I’m doubting is that this is sufficient to insulate (FM) from the c-e’s.  That is, since we know that people over-generalize in this way, c-e’s should have their full force against claims that start “Necessarily, for all…”

Some people have really liked this gambit; a few–two in particular!–didn’t.

The other explanation that strikes me as plausible for why SAC’s sound bad–when they do–is that we are typically loose in talking about reasons.  In particular, I think a proposition is not a reason for action unless it mentions both a chance and a goal (or a good or something teleological).

E.g., suppose we’re in a context where we’re wondering whether we should cross the icy pond or walk all the way around, and we know–with, say, 95% evidential probability, reflected in rational degrees of belief–that the ice will hold us.  It is common–but a bit sloppy–to say this.

REASON-BAD  That the ice will hold me is a reason to cross the pond.

If we regiment our language a bit–though without distorting the ideas behind it–we get something more like this.

REASON-GOOD  That there’s a 95% chance that doing so will meet my goal of saving some time is a reason to cross the pond.
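To make the shape of this concrete, here is a minimal sketch of how the chance and the goal in REASON-GOOD feed an expected utility calculation.  The 95% figure is from the example above; every utility is a toy number I am making up purely for illustration.

```python
# Toy expected-utility sketch of the pond case.  Only the 95% figure comes
# from the example; the utilities are made-up illustrative numbers.

p_holds = 0.95           # evidential probability that the ice will hold

u_cross_holds = 10       # the goal from REASON-GOOD: saving some time
u_cross_breaks = -1000   # falling through the ice is very bad
u_walk_around = 0        # the safe baseline: no time saved, no dunking

eu_cross = p_holds * u_cross_holds + (1 - p_holds) * u_cross_breaks
eu_walk = u_walk_around

print(f"EU(cross) = {eu_cross:.1f}")   # -40.5
print(f"EU(walk)  = {eu_walk:.1f}")    # 0.0
print("Cross the pond?", eu_cross > eu_walk)
```

The verdict with these particular numbers doesn’t matter; the point is that what does the justifying work is exactly what REASON-GOOD makes explicit: a chance of meeting a goal, weighed against the downside.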

So that leads to my 2nd bleg.

BLEG 2  Can someone point me to places in the literature on reasons for action that seem to confirm or disconfirm my hypothesis concerning REASON-BAD and REASON-GOOD?

Thanks!


Comments


  1. Hi Trent,

    I just want to figure out exactly what you want to object to (provided you get the help you want from the literature). Suppose this proposition has probability 1 for you:

    A) crossing the pond will meet my goal of saving some time.

    Question: is A a reason you have to cross the pond? Or would you still insist that the following is the reason you have to cross the pond:

    B) There is a 100% chance that crossing the pond will meet my goal of saving some time.

    That is, do you want to say that all reasons one has have to be ultimately about probabilities? If so, I’m not sure there’s an obvious counterexample to the principles, here, because it’s not obvious that it’s epistemic weakness that’s standing in the way of “the ice is thick enough” being a reason you have. (The objection would more be to the argument from the principles to the denial of fallibilist purism, I think.)

    But if A can be a reason you have when it’s certain, I’m not sure why it can’t be a reason you have when it’s less than certain. The reason it gets to be a reason you have when it’s certain (if it does), I presume, is that its probability is high enough to allow the expected utilities to work out right. But its probability can be high enough to do that if it’s less than certain, as well.

    So, I’m not sure I see yet the case against our principles, even if REASON-GOOD is to be preferred over REASON-BAD.

    The reason mentioned in REASON-GOOD differs from the reason mentioned in REASON-BAD in two ways: first, it explicitly mentions a probability and, second, it mentions a means and a goal. I think it would be helpful to distinguish these issues.

  2. My apologies for the length of this comment…

    Just to be clear, I don’t think

    (SAC) “I know that the test will come out negative, but we have to take it anyway.”

    has to be bad. You might think you have to administer the test because that’s the protocol, or in order to relieve anxiety, or you might want to administer it because you simply desire certainty.

    But I do think that if you set up a case right there will be a piece of reasoning that seems very plausible and which is hard to explain on low-standards invariantism. I think the business about a penny makes it hard to judge. Make the cost something real — not a lot — but something that clearly matters at least a little bit! And make something hinge on the action. DeRose’s bank case B has the right structure, but so do lots of other cases that arise all the time (backing up files when finishing up one’s book, locking the doors before going on vacation, etc).

    Think about Bank Case B. Here is a piece of reasoning:

    We know the bank is open tomorrow. So, we know that if we come back tomorrow, we will deposit our checks without having to wait in the long lines today. So, it makes more sense to just come back tomorrow.

    Now, one could question the reasoning by questioning the knowledge-claims. But as far as the relation between those knowledge claims and the conclusion about what it makes sense to do, the reasoning seems attractive. The low-standards theorist has to concede that if the stakes are high enough and the inductive evidence for the bank being open tomorrow strong enough, the knowledge-claims are true and the conclusion about what makes sense to do is false. The question is then why it seems the reasoning is still good.

    I should say that one of the main reasons we wrote the book was to go beyond the main material of our 2007 PPR paper (written in 2004) — intuitions concerning clashing conjunctions. We wanted to note that certain pieces of reasoning seemed strong and then to try to give an account of why in terms of deeper principles about knowledge, reasons, and justification.

    The account was, very roughly, this. If you know that P, then P is warranted enough to be a reason to PHI; and what is warranted enough to be a reason to PHI is warranted enough to justify you in PHI-ing. This is developed in chapter 3 of the book (esp. sections 2-6).

    These principles can be brought to bear on the particular cases. Take the high stakes bank case B. Suppose you know that the bank is open tomorrow, and therefore that you’ll be able to deposit your check tomorrow. Then it looks like you have at least *a* reason to come back tomorrow, the very proposition you know – viz. that you’ll be able to deposit your check then without having to wait in long lines today. Weighing this consideration against other considerations, it would seem to win out. So, if you did know the bank was open tomorrow, you would have a justifying reason to just come back tomorrow.

    We can divide up our opponents, who deny the final conditional, into two camps. First, there are those who think that even though you have the relevant knowledge, what you know doesn’t count even as a reason you have to come back tomorrow. Jeremy points out that someone might think this on the grounds that only propositions about probabilities and expected utilities can be reasons. (See pp. 86-7 of the book for a response.) Alternatively, someone might think the knowledge isn’t a reason because you need more than knowledge for the thing to be a reason. (See pp. 69-76 for a response). Second, there are those who grant that what is known is a reason but who insist that it is defeated by other considerations. (Our response is found on pp. 77-82.)

    We were hoping that by introducing these considerations about reasons and defeat, we might have a more enlightening debate – don’t know about you, but I’m weary of dueling intuitions about what clashes and how bad the clash is.

  3. Jeremy,

    1a. I’m not sure I’m objecting to anything. Far from it: it could be that, like Jon said at the PEW, I’m finding that I don’t really disagree after all. The question will be whether what I have to say affects whether your principles can break the supervenience of a certain kind of positive epistemic status on evidence. It’s not *obvious* to me how interest relativity will creep in on the account of reasons I favor, but that could be because I’m thinking so much about decision theory these days! But, in short, you’re right, I was not clear that what I may be finding is that my beef is not with the principles themselves, but with whether they can rightly lead one to interest relativism.

    1a’. However, when you say that there might be no counter-example because it’s not epistemic weakness preventing the proposition from being a reason, that’s potentially misleading, because if my theory of reasons is correct, no knowledge claim is ever, strictly speaking, a reason. So the problem would have to do with a category mistake. Still, it seems to me that there’s a middle way. When we speak in the loose and popular sense of facts (or alleged facts) as reasons, it’s true to say that the doctor’s strength of epistemic position is sufficient to know that the patient has the disease, yet the doctor can’t rationally act as if p, can’t do what has the best results if p (and not just for reasons about protocol or anxiety).

    1b. A similar phenomenon occurred when I heard Jason read the “Knowledge and Action” paper at the International Linguistics and Epistemology conference at Aberdeen some years ago. He talked about decision theory depending on knowledge of chances. The Jeffrey in me still thinks you don’t actually need knowledge of chances, just some kind of basic belief in chances, but it does change the key of the debate.

    1c. If there is, as Fumerton suggested at the PEW, some core notion of epistemic probability used in decision theory which is not interest-relative, then *that’s* what I’m really interested in. Matt assented that there was such a notion, but he may not have meant to; it was quick, in discussion. Do you think there is such a notion?

    2. Regarding your question, I’d say A is still not, strictly speaking, of the form of a reason for action, but, rather, a standard syncopation.

    3. So the view I’m exploring here–and I suppose I’ve always taken it for granted in a way–is that the reasons for action are, strictly speaking, chances of success (so, yes, both probabilities and something teleological must be in there). I thought I’d discovered a great way to connect this with decision theory, but it turns out it was inspired by what might be a misreading of Weirich’s _Multidimensional Utility Analysis_ (esp. pp. 12ff. and 76ff.). At any rate, the resulting view is very similar to that expressed there.

    4. The basic idea is that when you make a decision rationally, you make it on the basis of a pro and con list (a pro is a reason for, a con a reason against). To start, we might put the possible good outcomes on the pro side and the possible bad outcomes on the con side. But of course–and Weirich notes that Benjamin Franklin noticed this–we’d want to weight each of these by their magnitude of goodness or badness. But then of course–as the Port Royal Logic pointed out–we’d also want to weight them by their probability. But then what we end up with as the items on our list are the individual summands in the general summation which is the formula for expected utility. But since they are the items on the pro/con list *they* are the reasons.
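    To put a toy version of that picture in one place (reusing the pond numbers from the post, which are mine and merely illustrative, and only sketching the Weirichian idea as I read it), here is the pro/con list as summands:

    ```python
    # Each item on the pro/con list is one summand U(O_i) * Pr(O_i | A);
    # EU(A) is just their sum.  The numbers are made up for illustration.

    outcomes = [
        # (description,           utility, probability given the act)
        ("save time crossing",        10,   0.95),   # a pro
        ("fall through the ice",   -1000,   0.05),   # a con
    ]

    weighted = [(desc, round(u * p, 2)) for desc, u, p in outcomes]
    pros = [item for item in weighted if item[1] >= 0]
    cons = [item for item in weighted if item[1] < 0]

    print("pros:", pros)                        # [('save time crossing', 9.5)]
    print("cons:", cons)                        # [('fall through the ice', -50.0)]
    print("EU =", sum(w for _, w in weighted))  # -40.5
    ```

    The summands (9.5 and -50.0 here), not the total, are what I am calling the reasons; the total EU is a summary of them.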

    Matt,

    1. Right, a SAC might be excused by a FM’er (gotta keep my keys straight on that one!) for the sorts of reasons you say. I was testifying that they don’t sound bad to me even with that stuff controlled for. I’m sensitive to features of this case you might not like, and will try to accommodate, but they are *just* the sorts of case that seem to some of us to go against FM. The extremity is there for the same reason it usually is in attempted c-e’s: to isolate variables and to sharpen intuitions. (Have you replied to Jessica Brown? One of her cases is different from the high-stakes situation I describe; it’s one where the pay-offs are heavily asymmetric.)

    2. Regarding the propriety of reasoning in Bank Case B, I think Hawthorne gets it just right in the section where he–very very briefly!–considers the position of simple, moderate invariantism. He says there that what we ought to say is that in such cases we’re–rightly–ignoring the very small chance of error. Such coarse-grained reasoning in BCB is an efficient heuristic shortcut. In fact, I’ll just quote him; it’s the best passage in the whole book.

    “Where the difference between zero probability and small probability makes no difference, we can use the concept of knowledge to effectively evaluate reasoning. But where that difference makes a difference, the concept of knowledge is too blunt an instrument.” (148)

    I have “Hurrah!” and “Bravo!” written in the margins. 🙂

    3. So we really are headed in opposite directions–as far as our respective research projects are concerned. I admit the datum:

    BCB-GOOD The reasoning in BCB is, prima facie, good.

    (Feel free to tweak that if you want, in fact I really would like to get the characterization *exactly* right.)

    You explain the datum by moving knowledge to the fore; I explain it per Hawthorne’s accurate surmise of what we SMI’ers hold (back of the bus, knowledge!). A hardline SMI’er might argue that BCB-GOOD is false and that knowledge talk is just totally misguided. Somebody somewhere calls it a “stone-age” concept. But that’s not me; I think there are good cog-sci reasons why BCB-GOOD is correct: it has the kind of goodness all useful heuristics have. We might call this SMI-light/enlightened, or

    SMILE Citing knowledge in practical reasoning is an appropriate heuristic for most situations.

    4. We SMILEers just think that in certain circumstances the heuristics get it wrong–issue the wrong result–and that expected utility analysis gives the right result. Just as people who have it pointed out that they’ve committed the base-rate fallacy tend to endorse the formal stuff upon reflection (even if they promptly go back to committing the fallacy), so, it seems to me, upon reflection we ought to recognize that FM sometimes issues the wrong result, and that the “deeper principles” are ones about expected utility and, ultimately, probability. (Did you mean to endorse Fumerton’s thesis that there must be *some* notion of positive epistemic status and that this should probably be the epistemic probabilities in decision theory?) [Caveat: The stuff about knowledge of chances might make knowledge foundational in a way, but it’s still a foundation *for* decision theory. I’m still pretty Jeffreyan about probability judgments, and I’ll admit to not being very satisfied with his account of what those are, but that’s a subject for another post.]
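    Since I keep leaning on the base-rate analogy, here is the standard textbook calculation with the usual toy numbers (purely illustrative, not data from anywhere):

    ```python
    # The classic base-rate setup: a rare condition (1 in 1000) and a test
    # with 99% sensitivity and a 5% false-positive rate.  Untutored judgment
    # says a positive result makes the condition likely; Bayes says about 2%.

    prior = 0.001        # P(condition)
    sensitivity = 0.99   # P(positive | condition)
    false_pos = 0.05     # P(positive | no condition)

    p_positive = prior * sensitivity + (1 - prior) * false_pos
    posterior = prior * sensitivity / p_positive

    print(f"P(condition | positive) = {posterior:.3f}")   # 0.019
    ```

    On reflection almost no one resists that answer, even though the gut keeps saying otherwise; that is the sense in which, I am suggesting, the expected utility verdict should trump the knowledge heuristic when the two conflict.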

  4. Just a few more thoughts. I won’t try to answer all your interesting points (this comment will be long enough as it is).

    I meant to say before that I would be worried if there were no linguistic evidence (or evidence from the way we speak) that suggested a knowledge-action connection. I do think there is such evidence. Saying “you know that’s not going to happen” is a common way of criticizing someone for worrying about a bad possibility. On the way back from PEW, the in-flight safety video told us that when we put on the oxygen mask we should “know that oxygen is flowing, even if the bag doesn’t inflate; so there is no need to worry.” Similarly, one can defend one’s playing it safe by saying “I just didn’t know that it would work out ok at the time.” And so on. All these patterns of use are found even when the stakes are high (e.g., it’s high stakes if I’m wearing that oxygen mask!). Another gem, from a website against the Large Hadron Collider: “Stop the LHC until we know it is safe!”

    I don’t think any of this is decisive evidence for a strong knowledge/action link, but it is some evidence. And the fact that the patterns are found in high stakes situations is a problem for any view according to which knowledge is only sufficient for actionability in low stakes cases. (Btw, we do reply to Jessica Brown in chapter 3, section 1.)

    I do worry that your approach isn’t going to explain why the move from “We know we’ll be able to deposit the checks tomorrow without the hassle of waiting in long lines” to “it makes more sense to come back tomorrow” should sound so plausible. The idea that in normal low stakes cases knowledge is enough for actionability just doesn’t seem to predict that it will seem to be enough in high stakes cases. Why is it that knowledge seems enough for actionability in these cases but “having good evidence” doesn’t seem to be enough? After all, normally having good evidence is enough for actionability.

    I think it would be interesting if in order to resist arguments like ours one needed to deny that propositions/facts about how things stand in the world — such as *the car will hit me unless I get out of the way* — can be reasons for action. That would be surprising, I’d think. You might check the literature on practical reasons on that (Dancy, Broome, Scanlon, Schroeder, Raz, etc.).

    Moreover, I’d think you’ll end up thinking that reasons are a motley crew. Reasons for belief very often are about how things stand in the world. My reason for believing I’ll die unless I jump back might be that the approaching car will hit me unless I jump back. This reason seems to be about things that matter very much to my practical life. But you’ll have to say it’s the wrong kind of thing to be a practical reason. I wonder if you’d think reasons for joy, grief, pride, etc. must be probabilistic, too, or is it just action?

    I suspect, also, that you’ll end up embracing something like our principles (this relates to what you said about the Stanley paper). Presumably, merely having a belief that action A has the highest expected utility isn’t enough to give one a justifying reason to do A — the belief could be irrational, based on wishful thinking. But then how warranted must the belief be? Must you have probability 1 for *A has the highest expected utility* for that to be among your reasons? That would be too strong, wouldn’t it? The idea that knowledge is enough to make these things a reason becomes attractive.

    I wonder why you don’t plump for the hard-core Bayesian view that scraps reasons altogether. An action is rational iff it has the highest expected utility of the available actions. End of story!

    Finally, just to be clear: Jeremy and I don’t want to rule out an expected utility account of rational action. The hope was that the two stories — the reasons story and the expected utility story — would “agree”, so that whenever a reason justified you in PHI-ing, PHI-ing would also have the highest expected utility.

  5. Hi Trent:

    I get confused in these kinds of cases that (appear to) conflate epistemic obligations with other (e.g. professional) obligations.

    (SAC) sounds abominable when you take the deontic “have to” of its second conjunct to have an epistemic flavor to it (epistemically have to); but when it carries a deontic of professional obligation, it doesn’t sound bad at all – your expert (who sounds like a doctor) may have a professional obligation to confirm or check or whatever, since their procedural guidelines require it, etc. (Same thing goes for Brown’s cases, and something similar infects many of Lackey’s testimony cases where she has a teacher or a doctor testifying that p… To such cases I want to protest that Lackey is conflating epistemic reasons with other professional or prudential reasons, and in doing so gets some traction with one’s judgments about the cases.)

    So if you keep the deontic notion epistemic across (SAC), it tends to sound bad.

    But (SACF) is a different animal, since it invokes a moral deontic notion by mentioning “best consequences”. So it too conflates deontic senses, and twice over depending on your favored moral theory: e.g. plug in “torture this prisoner” for “F” in (SACF). If you’re not a consequentialist, then (SACF) will sound fine.

    Finally, I worry about your appeal to expectations to account for felicity. Douven (Phil Review, 2006: 473ff.) once made this kind of pragmatic move (which he has since retracted) to try to account for Moorean paradoxes like “P, but I don’t know that p”. But (to borrow someone else’s example), we also aren’t used to, and don’t expect, hearing things like “I am the vegetarian chipmunk Beyonce.” Yet the Moorean construction seems far more infelicitous than the latter; indeed, the latter seems utterly felicitous (it’s grammatical, I can grasp its meaning, etc.).

  6. Matt, just got back from the Plantinga retirement celebration and have had a chance to read your comments.

    1a. “Saying ‘you know that’s not going to happen’ is a common way of criticizing someone for worrying about a bad possibility.”

    It’s also a common way of criticizing someone for hoping for a good possibility. “You know you’re not going to win the lottery, so get back to work.” So, while I admit that there’s some linguistic evidence for knowledge-action principles (KAP’s), I think there’s also evidence against them, just as there is in the case of lottery knowledge. That’s one reason I discount that type of evidence quite a bit. But we seem to be in agreement (in conversation and the closing lines of your book) that theory can triumph over cases. We both seem to be in search of the best overall theory, and that sometimes means sacrificing intuitions. My main reason for endorsing this is the ambiguity of such evidence.

    1b. In particular, I think all those examples you give are great examples of the ambiguity phenomenon.

    i. “we should ‘know that oxygen is flowing, even if the bag doesn’t inflate; so there is no need to worry.’” I.e., we should *realize* or *accept* or *be aware of the fact that*, etc.

    ii. “I just didn’t know that it would work out ok at the time.” I.e., I couldn’t be *sure*.

    iii. “Stop the LHC until we know it is safe!” I.e., know for sure.

    My point, in part, is that these statements could equally support a knowledge-certainty principle, which I don’t think either of us, as fallibilists, would accept. Though I like a lot about Austin, in a way I see him as doing exactly the wrong thing. I love the way he teases out differences in usage between seemingly synonymous phrases, but the fact is that blurring is right there in the common use that ultimately accounts for the so-called lexical meaning. The phenomenon people should really pay attention to, methodologically, is the contrast between loose and strict speech. I think this is one of the keys to a purist take on the linguistic evidence for contextualism and interest relativism. (Chisholm, Unger, and others thought this a pretty important phenomenon.)

    2a. “The idea that in normal low stakes cases knowledge is enough for actionability just doesn’t seem to predict that it will seem to be enough in high stakes cases.”

    Well, stakes are a smear, and I think it’s only in the very odd cases indeed that knowledge isn’t enough, since lots of justification is almost always enough.

    2b. “Why is it that knowledge seems enough for actionability in these cases but ‘having good evidence’ doesn’t seem to be enough? After all, normally having good evidence is enough for actionability.”

    i. This is a good question, but I do think there is a sufficient answer. One part of the issue concerns the various implicatures of “good”. I think this is very complicated. Saying of something that it was “good” is often to “damn with faint praise”. From what I’ve been able to tell, what “good” conveys is more sensitive to intonation than any term I can think of. Think of really strong and really weak expressions of “That was a good meal.” That can clearly be damning or celebrating.

    ii. Also, many purists think that knowledge requires not just good evidence, but *very* good evidence. Feldman likes “beyond a reasonable doubt” and Conee tends to lean even stronger. That can be pretty high.

    iii. We can apply both these considerations to the “good evidence” version of the challenge about why “the move from ‘We know we’ll be able to deposit the checks tomorrow without the hassle of waiting in long lines’ to ‘it makes more sense to come back tomorrow’ should sound so plausible.” What that gets us is the question: How plausible is the move from “We have very good evidence we’ll be able to deposit the check tomorrow without the hassle” or “It’s beyond a reasonable doubt that we’ll be able to…” to “it makes more sense to come back tomorrow”?

    My answer: Very plausible. My thesis: Evidence is doing all the work. [Side note: The Jeffreyan “radical probabilist” in me still resists the idea that all evidence need be knowledge, but I’m not opposed to the idea that all our evidence–if evidence is propositional instead of experiential–might *turn out* to be items of knowledge, simply in virtue of, say, our being directly acquainted with their truth-makers. This would require a pretty deflated account of belief or a rejection of K -> B (neither of which seems so bad to me), but I don’t really talk much about outright belief; degrees of belief seem to do all the work.]

    3. Re: reasons. I met with Jonathan Dancy a month or so ago and he gave me a very nice reading list on reasons; he was very helpful, and all those authors are on there. I’m inclined to accept the proposal made early on (only to be rejected) by Williams that there are (at least) two kinds of reasons. For reasons that won’t surprise you, I think the kinds of reasons that are primary are the kind a person can have and be motivated by. But it is a complicated literature and I do plan to do some writing on this.

    4. “My reason for believing I’ll die unless I jump back might be that the approaching car will hit me unless I jump back.”

    I’m not sure how technical you are intending to be here, but I don’t see why I can’t accept this. I do have a draft of a reply to the recent very nice piece on the ontology of reasons by John Turri which he discussed on CD not too long ago. There are a lot of details there too, but it’s all stuff I really like to think about. This is all such fun stuff!

    5. I typically *don’t* talk in terms of reasons for action unless I’m talking loosely. I took it that the Weirichian picture I painted above was a way of making reasons talk and EU fit together hand-in-glove. In particular, I don’t take “A has the highest EU” to be a reason but rather a summary of reasons. After all, EU flanks an *identity* symbol which has what I consider the reasons on the other side: EU = [pros] - [cons], where each reason has the form U(Oi)*Pr(Oi if A). Often, though, we loosely just refer to the Oi. When we get a little more sophisticated we consider their “intensity” (Franklin). A bit more sophistication leads to a “weight” (Port Royal Logic), and yet more to a general summation.

    PS – Sorry for missing the reference to JB; I was pretty intently looking for changes from 2007 in order to update my critique thereof. I’ll be going through it very systematically now.

  7. Mssr. Benton, (Sorry, two Matts! :-))

    1. What kinds of cases? Who’s the alleged conflater?

    2. I tend to think we can control for the non-epistemic readings of the modals, so it seems false to me that “if you keep the deontic notion epistemic across (SAC), it tends to sound bad.”

    I think I can screen off the professional obligation without having to do this, but you can just focus on some other person, a parent, say, and they’ll give the same verdict. Worried about parental obligations (whatever those are)? Then make it a robot or something. Maybe that will help.

    Plus, there’s bad and there’s bad. There’s prima facie bad and then there’s bad even upon reflection. My point has been that if we reflect on the peculiar kinds of cases that constitute the clearest kinds of counter-examples, we can see why they seem weird.

    That we’ll get such cases seems guaranteed by a set threshold of evidence for knowing and a variable threshold of evidence for acting (since the EU can remain flat as costs rise and fall only if the probability of success required to act rises and falls too). You just have to plug in the right numbers and you get trouble. I got a paper that really laid this out rejected because, by the time the reviewer got it, Cohen’s PPR symposium piece had already come out.
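    Here is the shape of the “plug in the right numbers” point, with made-up payoffs, just to show the action threshold moving while the knowledge threshold stays put:

    ```python
    # If acting as if p gains g when p is true and loses L when p is false,
    # and the safe alternative is worth 0, then acting beats playing it safe
    # only when Pr(p) > L / (g + L).  The evidence needed to *know* p stays
    # fixed; the probability needed to *act* on p climbs with the stakes L.

    def prob_needed_to_act(gain: float, loss: float) -> float:
        """Minimum Pr(p) at which acting as if p matches the safe option's EU."""
        return loss / (gain + loss)

    for loss in (1, 10, 100, 10_000):
        print(f"loss {loss:>6}: need Pr(p) > {prob_needed_to_act(10, loss):.4f}")
    # loss      1: need Pr(p) > 0.0909
    # loss     10: need Pr(p) > 0.5000
    # loss    100: need Pr(p) > 0.9091
    # loss  10000: need Pr(p) > 0.9990
    ```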

    3. Back to conflation. The “have to” in the SAC is simple practical rationality. I take it that two things in my Specialist and Ubertest case are clear: 1. the doctor knows the patient doesn’t have the disease, and 2. not running the test has negative EU. The numbers just guarantee this, unless you take a radical view on lotteries which I think will land you in utter skepticism or utterly dissociate epistemic reasons from probability (we can make the Dr’s knowledge causal and then translate it to a probability for the EU calc).
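    And here is a toy version of that translation for the Specialist and the Ubertest (every number is made up purely for illustration):

    ```python
    # Translate the doctor's expert knowledge into an evidential probability
    # just short of 1, and price getting the call wrong on a fatal condition
    # very high; the test costs a penny and never errs.

    p_right = 0.99              # the doctor's epistemic position, short of certainty
    cost_of_test = -0.01        # one penny
    cost_if_wrong = -1_000_000  # getting the call wrong on a fatal condition

    eu_skip = (1 - p_right) * cost_if_wrong   # fine if she's right, awful if not
    eu_take = cost_of_test                    # always just the penny, no error risk left

    print(f"EU(skip test) = {eu_skip:,.2f}")   # -10,000.00
    print(f"EU(take test) = {eu_take:,.2f}")   # -0.01
    ```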

    SACF also is intended to be read not morally, but simply in terms of practical rationality. If someone wants to make a moral theory identical with that, good luck (I’d kind of like to myself) but I don’t think that’s going to make a difference here.

    4. Thanks for the Douven reference! I’ll check that out. So far I’m not convinced by the problem, since, in addition to our being really surprised by an utterance in a context, the context also provides *other* expectations. This leads to a kind of harmless contrastivism. When someone says a Moorean thing, we have a particular type of expectation-thwarting: “Why would someone say *that* and then *that*?!” When someone says that other thing (which, by the way, is now my Facebook status), we have a much more general kind of surprise, one which might best be characterized as WTF!

  8. Trent, I don’t quite understand why you’re telling the story in the way you are, with the stuff about “expectations”. You’ve already put in play the line that people have made a hasty generalization, probably tacitly, to a universally-quantified principle. Why isn’t that, by itself, basically enough to accomplish the TASK from the main post? The SACs sound bad, because people have a plausible-but-false belief that they are false, based on that principle?

  9. Jonathan, I do think that’s a sufficient explanation of why the SAC’s sound bad. I hope you cog sci folks will supply me with some nice empirical confirmation!

    The reason I go on to tell the story about reasons is to explain a separate but related phenomenon: why Bank Case B reasoning sounds good.

    Matt has pressed me really hard to do that in a more theoretical way. I think the expectation stuff works here too, it’s just that I’ve been challenged repeatedly to give an alternative account of the relationship between reasons talk and EU talk.

    Since Jeremy and Matt have discussed how this looks on their view at length, I figured the least I could do was sketch my view! 🙂

  10. Sorry, I’m missing a step, I think. Why can’t the same explanation handle Bank Case B and its ilk? (At a minimum, I don’t see what the “expectations” move will help with, that the tacitly-and-illicitly-endorsed-universal-generalization doesn’t take care of already.) The would-be knower in the case ends up wanting to endorse the plan of action of going to the bank, even though they took themselves to know that it would be open; because of the illicit universal generalization, they think (mistakenly) that this means they need to yield on the knowledge claim. And it sounds good to the reader when the reader has the same tacit universal generalization operative as they follow along.

    Putting it differently: the would-be knower in the case only has to yield their knowledge because they don’t want to end up in a situation where they seem committed to a SAC. So any account that defuses SACs, thereby defuses Bank Case Bs, too. No?

    Btw, I really like Jennifer Nagel’s take on these issues, too, if you’re not already familiar with it.

  11. Jonathan, I am with you that to defuse SAC is to defuse Bank Case B. Yet I’ve been challenged to give an account of reasons talk’s connection with decision theory, and I want to meet that challenge for a number of reasons.

    One is that it seems like a cool thing to do. Another is that Fantl and McGrath do it, so I ought to do it too. Another is that one of my mentors has a theory that suggests to me a very natural way to do it. Another is that another mentor of mine is the one who challenged me, and I want to attempt to do so for strictly personal reasons as well.

    In fact, the challenge I’m attempting to address here came at the Pragmatic Encroachment Workshop (https://sites.google.com/site/orangebeachusa/), which Jennifer Nagel also attended. I do indeed think she’s done a great job of marshaling some cog-sci stuff. But I’m more interested in the epistemic status stuff than the doxastic picture.

    That is, it is not enough for me to explain why people would fail to ascribe knowledge in high-stakes cases (via lack of confidence, lack of closure due to source monitoring, or for any other reason). What I want to know is whether there are cases where one’s *strength of epistemic position* is sufficient for knowledge in one case but insufficient in another, where the only thing that changes is practical interests.

    Matt thinks type-B Bank Cases provide some evidence, but it’s the argument from reasons in Chapter 3 that’s the real meat and potatoes.

    That kind of argument carries *much* more force than the ordinary language arguments Jason uses (though of course, as in the case of Austin, there is both a laudable attention to detail and an impressive command of linguistic data).

    I’m a grandson of Chisholm, so I’m accustomed to the need for paraphrase. It never would have occurred to me to think of ordinary language as anything but a blunt instrument. Carnap had the right idea, and Maher’s explication of explication is really good. Expected utility analysis explicates the ordinary practice of practical deliberation; confirmation theory explicates the ordinary notion of theoretical deliberation.

  12. Just a few final thoughts, and then I’m done.

    You write:

    “1a. ‘Saying “you know that’s not going to happen” is a common way of criticizing someone for worrying about a bad possibility.’

    It’s also a common way of criticizing someone for hoping for a good possibility. ‘You know you’re not going to win the lottery, so get back to work.’ So, while I admit that there’s some linguistic evidence for knowledge-action principles (KAP’s), I think there’s also evidence against them, just as there is in the case of lottery knowledge.”

    I don’t get this argument. Whether “you know you’re not going to win the lottery” is true or not, it is being used to criticize action (or urge action). Why is this evidence against a knowledge/action connection?

    Second, I’m worried that your “generalization” move is going to overgeneralize. I see lots of philosophers and they are all non-billionaires. But I don’t find any clash in “She’s a philosopher but also a billionaire.” Surprising, yes; clash, no. The conditional “If X is a philosopher, then X is a non-billionaire” doesn’t seem necessarily true. And the reasoning “So-and-so is a billionaire, and so is not a philosopher” is not exactly powerful.

    You suggest that evidence beyond a reasonable doubt will seem enough for actionability. Perhaps. But at the same time, consider what the high stakes bank person, who is worrying whether it’s open the next day, will say. She’ll say things like “well, hmm, it might be closed tomorrow; and I can’t just take for granted that it’ll be open tomorrow.” It isn’t clear to me that this person doesn’t have doubts, and that the doubts aren’t reasonable. In other words, I don’t think it’s clear that in Bank Case B the speaker has justification beyond a reasonable doubt: she seems to doubt and seems to do so reasonably. You’ll need to argue that this is a mistake, just as is the case for the intuition that the Bank Case B person doesn’t know.

    On reasons. People usually distinguish between there being reasons for a person to PHI and a person having or possessing a reason to PHI. I think Feldman makes this distinction (there being evidence out there vs. one’s having or possessing it). When Jeremy and I talk of reasons in the book, we had in mind having or possessing reasons. It’s not enough, for P to be a reason you have to PHI, that P is true. It looks like one needs to know P, or at least have some justification for P, for it to be a reason you have. And then the question of how much arises. This question arises just as much if the reason is some fact about the world as if it is something like this: “Outcome O would be good to degree d and the conditional probability of getting O given I do A is r”.

  13. Good thoughts, Matt!

    1. The idea was that if we have knowledge in lottery cases–and I’m in agreement with Hawthorne 2004 that we at least sometimes do–then that causes problems for KAP’s. Maybe you think they don’t.

    2a. I think there’s a huge difference between the kinds of cases, though it’s hard to put my finger on. In the PHIL&CASH case the properties are connected in ways that we are very familiar with in everyday life, ways that we think a lot about explicitly. We know how PHIL and cash come together: PHIL goes to school, gets a job, gets a little slip in the mail, etc.

    But with K&A it’s far more subtle; it’s part of how we think and talk, and we don’t really think and talk much about how we think and talk unless we’re weirdos (as we are!).

    So we make the move from K&A, K&A, K&A, K&A… to “For all x, if Kx then Ax” without even noticing it. It’s part and parcel of my theory that K is a heuristic concept, a tool for not having to think of the details.

    2b. To me, the challenge is similar to that arising from the kind of linguistic evidence Unger gives for the claim that K entails C(ertainty) (Jason has a very detailed discussion of this). I think K often is a stand-in for C-ε, where ε is negligible. So utterances of the form “K & ~C” sound just as weird as “K & ~A”, or perhaps even weirder. Yet neither of us fallibilists thinks K entails C.

    Here’s a little chestnut on that score sent to me by my colleague Todd Burus. It’s from an NPR story on the treacherous world of global climate change reporting.

    “We miscommunicate all the time,” Matson says.
    “For example, we use the term ‘uncertainty’ all the time in science; it represents a quantitative statement of how well we know something. But think of what uncertainty means for most people — it means we don’t know.”

    Why is that? The skeptic has a simple explanation. I’m not worried too much about it, though, because I have the same kind of error theory: people are constantly treating knowledge as certain; it’s part of the epistemic adaptive toolbox. C is a stand-in for C-ε, and K and C are constantly conjoined, K&C, K&C, K&C, K&C… so the mind just goes along with “for all x, if Kx, then Cx.” The folk don’t think much about this; it’s a habit of thought and speech. But, thankfully, teach someone a bit of fallibilist epistemology and they realize the unconscious mind had over-generalized. Praise be the reflective mind! 🙂

    3. I think “reasonable doubt” is ambiguous. The speech you give is expressive of doubt and is “reasonable” in the sense that you wouldn’t have to be dysfunctional in order to fixate on that doubt a bit in high-stakes cases. However, I don’t think that kind of psychological doubt has epistemic effects (well, it might remove belief, but it doesn’t, on my view, affect strength of epistemic position directly).

    I think the standard Rich and Earl have in mind is this: for there to be a reasonable doubt concerning p is for there to be some proposition d which, if true, would substantively disconfirm p, and which I have good reason to believe. I don’t think there’s any good reason to believe the bank won’t be open. I just think that’s being mildly paranoid.

    4. Right, I’m talking about reasons people possess, which is really tricky to talk about, like the issue of whether “A has highest EU” is a reason or a statement about reasons (I take the latter position to avoid double counting).

    There are two ways to go here that I’ve been going back and forth between. One, which you won’t like, is that the item you mention is a reason I have simply in virtue of my believing it. I tend to think that the normative status of the belief is quite separate from the normative status of the decision that issues from it.

    Another way to go is to say that I do have to accept it at some level–consciously or unconsciously–based on some kind of evidence, but I don’t have to know it. This is the Jeffreyan route. It’s just probabilities “all the way down.” I think we often do know it, but I don’t see why we’d need to. This raises a tricky issue of higher-order probabilities, and I don’t have a solution to that problem, but I think that problem has a life of its own.

    As far as I can tell the issue is exactly parallel with the probability judgment component of decision theory. I judge Pr(O/A) = r. Do I *know* that? I don’t know. I know I’m not certain of it. The latter is all it takes to raise the nasty higher-order probability question.
