As a newbie, it seems appropriate that my first post on CD should be a bleg (blog-beg). I’m revising a paper in light of some great comments by Matt McGrath from the Orange Beach/USA Pragmatic Encroachment Workshop and am going to be begging for guidance to the relevant literature. I’ll give a brief intro and then post the rest below the fold. The broad topic is the connection between knowledge and rational action. I don’t think the linguistic stuff on assertion has very long legs, but I’m very interested in the arguments for interest relativism from the nature of rationality. Hawthorne’s book didn’t pay enough attention to decision theory (for my taste) or fallibilism, but what’s interesting is that Jeremy and Matt put it front and center (rather than the old bank cases or other to-me-sketchy linguistic data or to-me-fuzzy assertion stuff).
They defend a principle which entails the following.
(FM) If you know that p, then your strength of epistemic position regarding p is sufficient for you to be rational in acting as if p.
They give pretty precise definitions of “acting as if p” and it gets a bit baroque, but they are careful and that’s good. What I really like about their approach–in addition to what I said just above–is that they focus on the normative component which tunes out a lot of static about the basing relation and the doxastic side of the picture. They nicely isolate the key issue (for me).
From 2002 on–when I first read the MS of Knowledge and Lotteries–the following kind of counter-example to FM-style principles has seemed right to me.
The Specialist and the Ubertest: D is the world’s foremost expert on fatal condition C. D examines patient P and comes to know on the basis of her expertise that P has C. Now, there happens to be a rather amazing test which costs only a penny to perform and never gives false results (ever). P is ready, willing, and able to take the test.
That’s taken from my Master’s Thesis in 2003 or 2004, but it’s essentially the case that first came to mind, as similar cases have for many people: Jessica Brown’s surgeon case is very similar, and Stew Cohen pointed out in an early PPR symposium that such cases would be easy to generate.
[I think any good definition of “acting as if p” should have it come out that in this case D should not act as if she knows that P has C, since that would mean foregoing the test, which clearly isn’t rational. I don’t *think* we have to re-hash the intricacies of F&M’s definition of “acting as if p” for me to seek the help I need here.]
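The decision-theoretic structure of the case can be made vivid with a toy expected-utility calculation. (All the numbers below are hypothetical assumptions of mine, not part of the case; the point is just that for any fallibilist, sub-1 credence, the penny is dominated by the expected value of the information.)

```python
# Toy expected-utility sketch of the Specialist and the Ubertest.
# The utilities and the credence are illustrative assumptions, not from the case.

def eu_skip_test(credence_has_C, u_correct=0.0, u_wrong=-1000.0):
    """Act as if P has C without testing: fine if right, disastrous if wrong."""
    return credence_has_C * u_correct + (1 - credence_has_C) * u_wrong

def eu_take_test(cost=0.01):
    """The test never errs, so after testing D always acts correctly;
    the only downside is the penny it costs."""
    return -cost

# Even at expert-level (but fallibilist, sub-1) credence, testing wins:
credence = 0.99
print(round(eu_skip_test(credence), 6))  # -10.0
print(eu_take_test())                    # -0.01
```

With these made-up numbers, skipping the test has an expected cost a thousand times the penny, which is why foregoing the test looks irrational even though D knows.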
Such c-e’s have come quickly and been decisive for many of us, but Matt has always put a lot of stress on the somewhat abominable conjunction (SAC).
(SAC) “I know that the test will come out negative, but we have to take it anyway.”
or the more focused
(SACF) “I know that F-ing will have the best consequences, but it wouldn’t be rational for me to F.”
I have to confess that (SAC) doesn’t sound bad to me and that I don’t put much stock in (SACF) since it’s a formula, not a plausible utterance, and devoid of context. Still, if we want to address the concerns of real-life interest-relativism–or at least its defenders–we need to do the following:
TASK Explain why the SAC’s sound bad, if true.
Unsurprisingly, I want to pull something like a WAM on the SAC’s. In our work together, Patrick Rysiew and I have defended WAMing CKA’s (concessive knowledge attributions), and we’ve been careful to distance ourselves a bit from standard Gricean accounts of extra-semantic conveyance. My own view is that felicity is a function of expectation. When we hear what we expect, things are copacetic. When we hear something unexpected, we wig out. We don’t consciously keep track of the data, of course, and we can’t always tell what’s bothering us, and the infelicity comes in degrees (doesn’t this sound more plausible than a flat-footed version of the Gricean account?).
So I think in the normal case–by far the most common–knowledge will suffice for action, so there’s a strong expectation that instances of “S knows that p” can acceptably be followed by instances of “S rationally acts as if p.” So here’s the first bleg.
BLEG 1 Does anyone know of any psychological research on how fast people go from (Fa1 & Ga1), (Fa2 & Ga2), … to “Necessarily, for all ai, (Fai → Gai)”?
I’m guessing we do this really, really fast (it’s like a dominance heuristic), so that ordinary language considerations lead to hasty generalizations. No one, I take it, doubts that knowledge is by and large sufficient for action–think of all the little instances all day long. What I’m doubting is that this is sufficient to insulate (FM) from the c-e’s. That is, since we know that people over-generalize in this way, c-e’s should have their full force against claims that start “Necessarily, for all…”
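One way to see how easily “by and large” slides into “necessarily”: if knowledge suffices for action in the overwhelming majority of everyday instances, most people’s entire observed run will be exception-free, which is exactly the diet that breeds a hasty universal generalization. A back-of-the-envelope sketch (the per-instance rate is a made-up assumption of mine):

```python
# How often does an observer of n everyday instances see zero exceptions,
# when knowledge suffices for action with probability p per instance?
# (p = 0.999 and the values of n are illustrative assumptions.)

def prob_exception_free(p, n):
    """Chance that all n independent instances conform to the generalization."""
    return p ** n

for n in (100, 1000, 10000):
    print(n, round(prob_exception_free(0.999, n), 3))
# 100   -> 0.905   most people see a perfect run
# 1000  -> 0.368
# 10000 -> 0.0     only at scale do the exceptions reliably show up
```

So an exceptionless personal track record is exactly what we’d expect even if the principle fails in rare, high-stakes cases like the Ubertest, which is why such c-e’s should keep their full force against the “Necessarily, for all…” claim.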
Some people have really liked this gambit, a few–two in particular!–didn’t.
The other explanation that strikes me as plausible for why SAC’s sound bad–when they do–is that we are typically loose in talking about reasons. In particular, I think a proposition is not a reason for action unless it mentions both a chance and a goal (or a good or something teleological).
E.g., suppose we are in a context where we’re wondering whether we should cross the icy pond or walk all the way around, and we know–with, say, 95% evidential probability, reflected in rational degrees of belief–that the ice will hold us. It is common–but a bit sloppy–to say this.
REASON-BAD That the ice will hold me is a reason to cross the pond.
If we regiment our language a bit–without distorting the ideas behind it–we get something more like this.
REASON-GOOD That there’s a 95% chance that doing so will meet my goal of saving some time is a reason to cross the pond.
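The chance-and-goal structure that REASON-GOOD makes explicit can be put in miniature decision-theoretic form. (The 95% figure is from the example; the utilities are hypothetical numbers of mine, just to show both the chance and the goal doing work.)

```python
# Toy sketch of the icy-pond decision, with the chance and the goal explicit.
# The 95% figure is from the example; the utility values are made-up assumptions.

def eu_cross(p_holds=0.95, u_time_saved=20.0, u_fall_in=-100.0):
    """Crossing: a 95% chance the ice holds and I meet my goal of saving
    time, against a 5% chance of falling in."""
    return p_holds * u_time_saved + (1 - p_holds) * u_fall_in

def eu_walk_around():
    """Walking around: safe but saves no time (baseline utility 0)."""
    return 0.0

print(eu_cross() > eu_walk_around())  # True with these numbers
```

Note that the bare proposition “the ice will hold me” appears nowhere in the calculation; what does the work is the chance (0.95) paired with the goal (the time saved), which is the regimentation REASON-GOOD recommends.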
So that leads to my 2nd bleg.
BLEG 2 Can someone point me to places in the literature on reasons for action that seem to confirm or disconfirm my hypothesis concerning REASON-BAD and REASON-GOOD?