Probabilistic Detachment

A detachment rule for an operator tells you conditions under which the operator can be removed from that which it governs. So, to use a straightforward example, the detachment rule for the necessity operator is just the inference rule that allows us to infer p from []p:
[]p |- p.

I’m interested in detachment rules for the probability operator. If the operator P is taken to mean “it is probable that”, then the analogue of the inference rule for the necessity operator is a disaster:
P(p) |- p.
The counterexamples should be obvious, but I’ll give one anyway. In a ten-ticket lottery, this rule would allow you to conclude that the winning ticket is one of the first six sold.
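And the rule generalizes disastrously, in the familiar lottery-paradox way: for each ticket i, the claim that ticket i loses has probability .9, so the rule licenses all ten inferences of the form
P(ticket i loses) |- ticket i loses,
and the ten detached conclusions jointly contradict the premise that some ticket wins.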

There is pressure, however, to find some true detachment rule, even if the simple one is obviously false.

The pressure comes from the fact that statistical knowledge is possible. Sometimes a sample teaches us, for example, that most North Dakotans do not have PhDs. So some version of the following has to be true: In certain circumstances a high enough probability for p allows one to detach the probability operator and conclude that p is true.
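
Displayed in the style of the rule above, the schema at issue is something like this, with everything turning on what fills in the circumstances C:
In circumstances C: P(p) |- p.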

My interest is in what these circumstances have to be like in order for such a detachment rule to be acceptable. There are some true versions that I’m not interested in, however. For example, a body of evidence might confirm both the high probability of p and p itself. In such circumstances, the operator can be detached. But what I intend the schema to require is that there be conditions under which a belief in p can be doxastically justified by being based on the probability claim itself, not conditions where the acceptability of premise and conclusion merely has a common cause.

One more caveat. It may be that there are perfectly fine instances of the schema for some ordinary kind of justification or rationality, but not for the kind of justification that fills the third slot in the typical account of knowledge. If that is true, it would be worth finding out, since if we are entitled to various beliefs that we don’t have sufficient evidence to know, that ought to tell us something about practical reason and appropriate assertion, I would expect.

The pressure to reject all instances of the schema above when discussing the kind of justification involved in an account of knowledge comes from lottery cases. The worry is that we’ll end up having to say that in lottery cases you can know that your ticket will lose. But I’m also inclined to think it a mistake to deny that there is any such thing as purely statistical knowledge, so I’m trying to sort through all this. Before saying anything about what I’m thinking, though, I’ll stop here and see what others might think about detachment rules for probability operators like the one above.


Comments

  1. Jon,

    This isn’t a direct answer to what you’ve asked, but it seems to me that a probabilistic rule of detachment is one of the things presupposed in certain sophisticated Gettier scenarios. Consider this bit from Shope, The Analysis of Knowledge, p. 25:

    Mr. Nogot (Feldman’s Version): This case is similar to [the original Nogot case], except that S does not arrive at belief in p by relying on considerations about q [‘Nogot owns a Ford’], but instead by relying on belief in a true existential generalization from his evidence, of the form: “There is someone in S’s office who has given S evidence e.”

    This is an attempt to do an end run around “no false premise” solutions to Gettier. There are various difficulties with this scenario that are really off topic. But inter alia it looks like we’re going to need probabilistic detachment to move from Nogot’s behavior to the Gettierized claim.

    Lydia and I have more on this in our forthcoming book.

    Cheers!

    Tim

  2. Hi Jon,

    You say: “In certain circumstances a high enough probability for p allows one to detach the probability operator and conclude that p is true.”

    You then ask what these “certain circumstances” are like. How about this answer:

    Suppose that S’s total evidence E makes it rational for S to assign degree of confidence n to the proposition p. (I do not suppose that n is a real number: it could be an interval.) Then, E is sufficient evidence for S to detach p in circumstances C just in case, for S to know that p in C, S need not have a degree of confidence in p that is any greater than (any value in the interval) n.
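
    Put schematically: where n(E, p) is the degree of confidence that E makes rational for S, and k(p, C) is the least degree of confidence that knowing p in C would require of S, the proposal is that
    E is sufficient evidence for S to detach p in C just in case k(p, C) is no greater than n(E, p).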

  3. Ram’s suggestion seems to speak to the issue of the truth of the premiss (“S’s total evidence E makes it rational for S to assign degree of confidence n to the proposition p”), and perhaps to the issue of the transmission of warrant from premiss to conclusion, in addition to addressing the correctness of the inference. But a rule of detachment is a rule only of correct inference, which does not require the truth of the premiss, nor the transmission of warrant. Bearing that in mind, how about:

    Inductive Detachment K: if p being known implies p is probable to at least degree n, and p is probable to at least degree n, then p may be inferred.

    You might object that this rule of detachment is not complete, that is to say, it specifies a sufficient condition but not a necessary and sufficient condition. I’ve left it like that because it strikes me as true, but I think necessity of the antecedent is also defensible. A weaker rule that seems defensible and might undermine the necessity of the antecedent is

    Inductive Detachment J: if p being justifiably believed implies p is probable to at least degree n, and p is probable to at least degree n, then p may be inferred.
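
    In the notation of the original post, with Kp for ‘p is known’ and Jp for ‘p is justifiably believed’, the two rules come to:
    (Kp -> Pr(p) >= n), Pr(p) >= n |- p (Inductive Detachment K)
    (Jp -> Pr(p) >= n), Pr(p) >= n |- p (Inductive Detachment J)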

  4. Inductive Detachment J and K don’t seem like very good rules. Take any situation you like, and take a proposition that has a probability of zero. Since every proposition is probable to at least degree zero, both premises are satisfied by letting n = 0, and so the rules tell us that the zero-probability proposition may be inferred.
    Ram Neta’s rule sounds good (and helped me understand what we were looking for). It invalidates Cut, but that’s probably inevitable.

  5. Nick, a quick question about your detachment rule.

    Inductive Detachment K: if p being known implies p is probable to at least degree n, and p is probable to at least degree n, then p may be inferred.

    The rule K looks trivial. Doesn’t the antecedent entail that P(p) = 1? ‘p being known’ implies that p is true. But maybe you do not take (as I do) ‘p is true’ to entail that ‘P(p) = 1’. In that case, it looks like, for any value n (n

  6. I’ll try to finish the thought above (I mistakenly used a less than sign in the post).

    In that case it looks like, for any value n (n less than 1), we can construct a lottery counterexample. Let n be the probability that p = I hold a losing lottery ticket (suppose n is likewise the probability that any other player holds a losing lottery ticket). As Jon notes above, I cannot detach p from Pr(p) = n. But K licenses detaching p. It seems pretty clear that the argument goes through for any value of n. So what did I miss in K?

  7. Questions for Jon and Ram:

    Jon,
    There is one thing we know about the relevant circumstances (in addition to the high-probability requirement): There must be no internally defeating evidence. A number of authors use the no-defeater (or “no-overrider”) condition to generate solutions to the lottery paradox. Are you assuming that we’ll need to know more about those circumstances than what comes with a no-overrider condition? (The answer may be “yes”, if you don’t think the no-overrider condition is key to a resolution of the lottery. I can’t think of other reasons.)

    Ram,
    I’m afraid I can’t find an informative reading of your suggestion, if we already have a no-overrider condition. If ultima facie rationality (or “UFR”, which will, I suppose, contain a no-overrider condition, plus some causal condition that is irrelevant for present purposes) is all there is to the total evidence requirement, would it be unfair to paraphrase your suggestion as follows: “If evidence E is UFR-ready for knowledge, detachment is okay”? (Maybe this unfairly downplays the informativeness of the non-diminishing degree of confidence part.)

    But, as I understand it (and I may not, depending on his answer to the above question) Jon’s question is: What are the conditions under which highly statistically probable E is UFR-ready for knowledge? (To which, again, I’m tempted to reply: for one, only if UFR takes care of the lottery.)

    Do I miss it?

    Jamie: You’re right. Thanks! I formulated it trivially as a lower bound when I meant the greatest lower bound (it was late at night). I should have said

    Inductive Detachment K (J): if p being known (justifiably believed) implies n is the greatest lower bound on p’s probability, and p is probable to at least degree n, then p may be inferred.

    Mike: The general thought behind my suggestion was that a buck-passing account could answer Jon’s question. The circumstances condition correct detachment by determining what greatest lower bound is entailed by knowing (or justifiably believing). I don’t see knowledge implying truth to be a problem, since p can be true without Pr(p) being 1: it can be true that the train will arrive on time while the probability that it does is 80%. If knowledge implies certainty, then K is too strong, but J might do. With respect to lotteries, passing the buck means that the account you give of why, for all n, you don’t know that you will lose a lottery of size n will imply that knowing in lottery cases requires a greatest lower bound of 1. But that’s OK. The point is just to pass the buck back to the account of knowledge or justification: nothing special has to be said about the conditions for correct detachment. Inferring p from a premiss that p is probable to degree n is correct when n is greater than or equal to the greatest lower bound on degrees of probability required by knowledge (or justification) in that circumstance.

  9. There’s some sloppiness in my previous post (#7). I apologize. Hopefully, the main points are not affected.

    In any case, I should note that:

    1. where I wrote “no internally defeating evidence”, I meant no internally defeating evidence that is not itself defeated;

    2. this is better for the question I had in mind toward the end: “Under what conditions is ‘it is highly statistically probable that p’ UFR-ready for knowledge by detachment?”

    Ah, the joys of philosophy at breakneck speed!…

  10. Nick, you write,

    “it can be true that the train will arrive on time and the probability that it does can be 80%.”

    I’m guessing that you don’t mean by this that it is reasonable to assert “it is true that the train will be on time and the probability that it will be is .8”. If you assign .8 to propositions that you know are true (though you are not certain they are true), then (among other things) you’re likely to lose a whole lot gambling. If it is true that the die came up 6 (and you know this), why would you be prepared to wager anything that it did not? I’m sure you wouldn’t.

  11. Jon,

    I’m missing something. P(p) |- p will be valid only if P(p) = 1, otherwise it’s invalid (assuming P(p) is subjective).

    If P(p) is objective, then the same is true.

    I can’t see what else would license the detachment. However, P(p) |- K(p) is different (where ‘K’ means it’s known that). In that case, the issue about defeaters arises, but this is different from the original case. Is this second case what you have in mind?

  12. Jon,

    I’d be interested to know what sorts of horrible consequences must be accepted if we don’t think there is any such detachment rule. In assuming there is such a rule, are we assuming that someone could rightly detach the belief that p fully aware of the fact that the only grounds are probabilistic or statistical? If so, I’m sceptical.

    I suppose that part of my scepticism is due to the fact that I’m impressed by Nelkin’s solution to the lottery paradox and somewhat sympathetic to the suggestion floated by Adler and Weatherson that when it is salient to the subject that the grounds for p are merely statistical, the subject cannot believe p. This view is parallel to the view that if I know that the only means by which my X-ing could come about is my Y-ing, my Y-ing cannot properly be described as my intending to X but only as something weaker, like trying to X, because intention has a strong belief requirement. Maybe the view isn’t right, but it’s not without its defenders. There should be a parallel view about belief, according to which, if the means by which the question ‘Is p true?’ is settled are believed to be merely statistical, the most someone could believe is that p is likely, since ‘true belief’ is impossible unless one tacitly assumes that the means by which one settled the question were effective given the situation.

  13. When I book a flight from Philadelphia to Denver, I judge it rational to accept that I will arrive without serious incident, since I stand roughly 1 in 350,000 odds of dying on a plane trip in the U.S. in any given year, according to the National Safety Council, and stand considerably better odds than this of arriving without incident if I book passage on a regularly scheduled U.S. commercial flight. I judge these odds to be those that characterize my flight from Philadelphia to Denver. Nevertheless, I believe that there will be a fatality from an airline accident on a U.S. carrier in the coming year, even though I don’t believe that there must be at least one fatal accident each year. Indeed, 2002 marked such an exception. (According to the National Transportation Safety Board, there were 34 accidents on U.S. commercial airlines during 2002 but zero fatalities, a first in twenty years.)

    Notice that I am not speaking of my decision to board the plane, but of my belief that I stand better than 1 in 350,000 odds of getting off alive. The reason why I believe I will land safely in Denver is that I view my trip as a random member of the class of domestic trips on U.S. domestic carriers. My epistemological stance toward the belief that I land safely is distinct from whatever other actions I may be willing to take given this stance, such as booking a car or hotel. In short, I believe that I will survive this flight to Denver. I believe it on statistical grounds. And I am reasonable to believe it on statistical grounds.

  14. Greg,

    Is it possible for you to infer that you’ll be in Denver, given that you have only the sorts of grounds you insist serve as the basis of your belief, without having a thought of the following sort:
    (1) I’ll be in Denver, but there’s a chance I won’t.
    This seems to be a Moorean absurdity, and yet acknowledging the second conjunct is what you do in considering the premises, while affirming the first is what you do in judging you’ll be in Denver. I thought in general that if Bp & Bq constitutes a Moorean absurdity, you shouldn’t: Bp & Bq.
    Of course, the following doesn’t seem absurd:
    (2) I believe I’ll be in Denver, but there’s a chance I won’t.
    But that just seems a reason to think that you’re not expressing the belief you’ll be in Denver but the belief that it is quite likely that you’ll be.

    Maybe this isn’t a decisive objection, but it seems to be a cost you’d have to incur if you thought that there was a detachment rule of the sort you seem to think there must be in order for your beliefs about your travel plans to be rational.

  15. I took Jon’s argument to be based, not on the intuition that we can detach the probability operator for just any p, but on the intuition that we can detach it from p’s that are themselves statistical. So, his example is that statistical evidence for the proposition that most North Dakotans do not have PhDs licenses the inference to the proposition that most North Dakotans do not have PhDs. It’s not that we can know by detaching the probability operator that we will survive this particular plane flight (though we certainly can, that doesn’t seem to be the intuition driving his argument). It’s that statistical evidence that most people survive plane flights licenses the inference to the proposition that most people survive plane flights. In short, the most immediate way to accommodate the intuition driving Jon’s argument is to say that we can detach probability operators when the p’s themselves have (something like) probability operators included. That is:

    P(P(p)) |- P(p)

    So, statistical evidence that most balls in the vat are red licenses the inference to the proposition that most balls in the vat are red, etc. Of course, it’s hard to see a principled reason why we could detach only the first and not the second of the two probability operators. But it’s funny that we have the intuition, in a lot of cases, that it is only appropriate to detach the first.

    -Jeremy

  16. The issue of second-order probabilities infects both direct inference and estimation.

    Estimation: We may have a population, North Dakotans, and wonder about the proportion of this population that has a PhD. In this case we might draw a sample X, count the proportion of PhD holders in it, n let’s say, and (if all goes well) draw the conclusion that the population proportion lies within n plus or minus 1.96σ (σ being the standard error of n), and have 95% confidence in this interval around n.
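
    For concreteness, here is a minimal sketch of that estimation step in Python (the sample numbers are made up, and I use the textbook normal-approximation interval):

        import math

        def proportion_interval(k, m, z=1.96):
            # k PhD holders observed in a simple random sample of m
            # North Dakotans; z = 1.96 gives a roughly 95% interval.
            p_hat = k / m
            se = math.sqrt(p_hat * (1 - p_hat) / m)  # standard error of p_hat
            return p_hat - z * se, p_hat + z * se

        low, high = proportion_interval(30, 400)  # hypothetical sample
        print(low, high)                          # roughly 0.049, 0.101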

    Direct Inference: We may also know what proportion of North Dakotans are PhD holders (from a survey done on the previous model! And note, Nelkinians, that we don’t go wobbly on accepting such statements as evidence!) and want to infer whether Smith, a North Dakotan, has a PhD. This is a direct inference, and it is like the plane trip or, more generally, like drawing balls from an urn. If an urn contains 999 white balls and 1 black and you draw from this urn at random, it is reasonable to believe you will draw a white ball. That’s our epistemic position with respect to this urn, this sampling procedure, and white balls; how we choose to talk about it is another matter. And how we choose to bet is another issue still.
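
    And a toy simulation of the urn case, just to fix the numbers (illustrative only):

        import random

        def white_draw_rate(trials=100_000):
            # One random draw from an urn of 999 white balls and 1 black,
            # repeated to estimate how often the draw comes up white.
            urn = ["white"] * 999 + ["black"]
            wins = sum(random.choice(urn) == "white" for _ in range(trials))
            return wins / trials

        print(white_draw_rate())  # about 0.999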

    For this reason I find the intuitions behind Moorean arguments very thin reeds on which to hang an account of uncertain inference. Following the Moorean line guts the idea that our evidence for nearly all the things we reasonably believe is uncertain in character and admits of (epistemic) risk of error. Furthermore, I think that a strong line against the ‘linguistic turn’ in epistemology will exploit this, eventually. But getting to grips with taking uncertainty seriously will entail getting under the foundations of Knowledge and Its Limits. Most of the pieces to pull this off are already in the literature, which is not at all to say it will be easy to pull off.

    But opposition parties need slogans, so here are a couple: We are interested in our epistemic position, not our talk about our epistemic position. And how we talk is not good evidence (!) for where we stand.

    The serious (but indirect) consequence of following Nelkin’s line is to punt on the hard questions of determining epistemic position given uncertain, statistical evidence, and of what (if anything at all) follows from combinations of pieces of uncertain evidence.

  17. Mike, there are a number of separate issues, which I see are being explored by Gregory and others since your remark, so I’ll keep this brief. I agree that an assertion of the form “p is true and the probability of p is less than 1” can sound like the second conjunct retracts the first. But I was concerned with the compatibility of a proposition’s being true with that proposition’s having a probability other than 1, not with what can be correctly asserted, and only in order to make the proposal compatible with accounts of knowledge that permit a known proposition to have a probability less than 1. That being said, it seems to me that the following is reasonably assertible: “We will arrive safely. The chance of an accident is less than 1 in 10 million.” Furthermore, if intuitionism is true for truth, and truth for future-tense sentences is a matter of present tendencies, we might even say that the chance of accident being so low is what makes it true that we will land safely.

  18. Another thought which might be useful for slotting into arguments on this topic: not all reasoning is demonstrative reasoning, by which I mean that not all arguments are demonstrative arguments.

    There is an argument in Nelkin’s lottery paper against rational acceptance to the effect that you can have all of the pieces on the left-hand side (premises) of your argument, reason correctly, and yet embrace a false conclusion but be in no position to find the mistake in your reasoning or your premises. This is bad, she thinks.

    But this is the standard one applies to demonstrative arguments. It is the standard one uses to unpick a failed proof, and is not what one does to evaluate a non-demonstrative, inductive inference.

    Instead, in non-demonstrative arguments we should draw a distinction between error and mistakes. And statistical methods do just this. The aim is to control error, not eliminate it, and to give you some measure of the epistemic risk you face of accepting falsehoods and rejecting truths. Mistakes, on the other hand, are things you can eliminate. These include botched calculations, and a poor fit between the statistical model you’ve selected and your data (whether my epistemic position w.r.t. flying to Denver is like a random draw from a big urn of plane rides, for example). These mistakes are roughly similar to making an invalid inference or working with unsound premises, respectively.

    Detachment, I think, is a technical (syntactic) property that is natural to want in a formal representation of systems that behave like what I am describing, which is textbook inferential statistics. The recent trend is to notice that this behaviour doesn’t slot cleanly into standard probabilistic logic, and to (cleverly) argue that basic probabilistic logic does behave a lot like how we seem to talk about our epistemic position, and then draw the conclusion that we must not need detachment.

  19. Lots of very nice commentary here. A couple of quick questions. First, Nick, why think there is any greatest lower bound for either knowledge or justification? Maybe we can approach a positive answer here if we relativize to individuals and circumstances, but without that relativization, I think the rule K will be too permissive. And if we do relativize, I’m still not sure why we should think there is a greatest lower bound implied by knowledge or justification. I like the buck-passing idea, but I’m not convinced that there is any such implication. Or, more carefully, if there is such an implication, it will be so weak (allowing probabilities too close to .5) that detachment is permissible in cases where we shouldn’t want to allow it. What do you think?

  20. Clayton, I don’t think the conjunction you cite is absurd. The inclination to think so is just a holdover from a bad epistemology, one that hankers for certainty before making doxastic commitments. Thoroughgoing fallibilists ought to be willing to expand their doxastic commitments with the second conjunct in most cases of rational belief.

    I also think your principle about Moorean absurdities is false. What may be true is that Moorean absurdities are not assertible because of conversational implicatures that they carry, but that’s compatible with them being justifiably believed. There’s an argument in chapter 2 of my knowability book, I think, to the effect that you have to endorse a too-strong closure principle about justification to generate the result that conjunctions of the sort Moore worried about can’t be rationally believed. (I’m not being careful here about distinctions between rationality and justification, but I don’t think that matters in this context.)

  21. Jeremy, that’s an interesting suggestion. You worry about it in the last paragraph for just the reasons I would. I can’t see any reason why an unrestricted detachment rule would work for iterated probability operators. In particular, where the operator is what I suggested, we’re considering allowing the following:
    when it is more probable than not that it is more probable than not that p, we can infer that it is more probable than not that p.

    That seems as obviously mistaken as inferring that it is raining when it is more probable than not that it is raining.
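
    Here is one toy model that makes the gap concrete (any model with the same structure would do): suppose a coin’s bias toward heads is .6 with probability .6 and 0 with probability .4. Then the probability that heads is more probable than not is .6, so it is more probable than not that it is more probable than not that the coin lands heads; but the probability of heads is .6 x .6 + .4 x 0 = .36, so it is not more probable than not that the coin lands heads.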

  22. Jon: well, yes, maybe it is too weak, but here is how I thought it might work. You were interested in the circumstances in which detachment would be acceptable, and my suggestion, broadly put, is that we determine those circumstances in terms of their correlation with a probabilistic necessary condition on knowledge: the probability being greater than the minimum necessary for knowledge. However finely we have to slice circumstances for knowledge, just so finely must we slice circumstances for correct inductive inference. So if, for knowledge, we must distinguish the circumstance of merely holding a lottery ticket, in which you don’t know you will lose, from the circumstance in which you read the winning number in the paper and now do know you lost (despite the chance of the paper printing an error), so too will we have to distinguish those circumstances for correct inductive inference. So let’s call circumstances distinguished in this way epistemic circumstances. I’m allowed to do this without ending up in circularity precisely because I’m offering a buck-passing account: I can leave the non-circular definition of epistemic circumstances to the theory of knowledge.

    It turned out to be a bit tricky to specify that necessary condition, since the natural first thought is that Kp implies Pr(p) greater than x, but as Jamie pointed out, that is trivial, and we need the notion of a greatest lower bound. The completeness axiom for the real numbers guarantees that any nonempty set of reals bounded below has a greatest lower bound, and a set of probabilities is bounded below by 0, so we can be sure that any (nonempty) set of probabilities will have a glb.

    For precision, then, suppose that for a type of epistemic circumstance C the greatest lower bound on the set of probabilities of propositions known or knowable in all token circumstances of type C is x. If you are in a token circumstance of type C then, if Pr(p) is greater than x then inferring p from probably p is correct.
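    In symbols: where x is the greatest lower bound of the set of values Pr(q) for propositions q known or knowable in token circumstances of type C, the rule is that, in any token circumstance of type C, from Pr(p) greater than x one may infer p.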
    Now let’s consider your challenge: it might be that in some epistemic circumstance, C, according to my suggestion Pr(p) = 0.55 will make inferring p correct, but surely Pr(p) = 0.55 is too low to make the inference correct. OK, but consider. If this is the case then it is only so because our account of knowledge implies that in epistemic circumstance C, it is possible for Pr(p) = 0.55 and for p to be known. If that really is the case, if there really are circumstances in which p can have such a low probability and yet be *known*, then my intuition is that inferring p from Pr(p) = 0.55 is not unreasonable, and being not unreasonable is sufficient for being rationally correct. If, on the other hand, this really *isn’t* rationally correct (and perhaps it isn’t), then I’d feel entitled to pass the buck back to the account of knowledge that allows for Pr(p) = 0.55 and for p to be known. Surely the error is to be located in that account of knowledge rather than in my proposal (isn’t buck passing *great*!).

  23. Nick, you say,

    Now let’s consider your challenge: it might be that in some epistemic circumstance, C, according to my suggestion Pr(p) = 0.55 will make inferring p correct, but surely Pr(p) = 0.55 is too low to make the inference correct. OK, but consider. If this is the case then it is only so because our account of knowledge implies that in epistemic circumstance C, it is possible for Pr(p) = 0.55 and for p to be known. If that really is the case, if there really are circumstances in which p can have such a low probability and yet be *known*, then my intuition is that inferring p from Pr(p) = 0.55 is not unreasonable, and being not unreasonable is sufficient for being rationally correct. If, on the other hand, this really *isn’t* rationally correct (and perhaps it isn’t), then I’d feel entitled to pass the buck back to the account of knowledge that allows for Pr(p) = 0.55 and for p to be known. Surely the error is to be located in that account of knowledge rather than in my proposal (isn’t buck passing *great*!).

    Nick, I don’t think there’s any reason to think that the account of knowledge will imply that the doxastically hesitant can’t have knowledge. And from that it doesn’t follow that whenever the threshold of probability the doxastically hesitant require is achieved, the inference is safe. That looks like a mistake.

    Now, we could relativize the principle to individuals, and have the theory of knowledge imply for a given person with all their doxastic anxieties and hesitancies a given threshold. And, as you note above, the account will have to be relativized as well to the given propositional content, since we already know that the threshold for knowledge of lottery propositions is higher than the threshold for knowledge of perceptual propositions, for example. I wonder if this isn’t too much to expect a knowledge claim to imply.

    Your example about the lottery ticket and the newspaper raises an interesting question as well. Suppose we can’t infer that our ticket is a loser from a probability below n, but we can know that our ticket will lose by reading the result in the paper, even though the probability is below n. That is, the threshold for knowledge is one number and the threshold for safe probabilistic detachment a different number, presumably higher. That’s what I meant by suggesting that the account is too weak. The difference lies in what information the belief is being based on.

  24. Jon wrote:
    The inclination to think so is just a holdover from a bad epistemology, one that hankers for certainty before making doxastic commitments.

    That would be surprising if true, as I don’t personally believe that certainty in any form is necessary for making doxastic commitments. It just sounds weird to me and to lots of other folk, just as ‘p but I don’t know it’ sounds weird, and I’m not at all inclined to think that knowledge is required for making doxastic commitments (there’s nothing wrong with holding a Gettiered belief, for example). All I suggested was that there was something wrong with full belief that p and known uncertainty. I can’t see any reason not to weaken the commitment wrt p in the face of known uncertainty.

    You might be right that there are some things justifiably believed that you can’t have warrant to assert, but I thought it was a pretty standard assumption in the literature on Moore’s Paradox that an approach focusing only on conversational implicatures is essentially incomplete, as it does nothing to address what strikes us as odd about someone’s believing ‘p but I don’t believe p’. Maybe there’s something wrong with that assumption, or maybe there are perfectly adequate solutions to that paradox that don’t try to account for the clash in terms of defeat or norms of theoretical reason, but I’d have to see what they looked like before I was convinced that there wasn’t some cost you’d incur by saying that there’s nothing wrong with believing something like ‘Custer died at Little Big Horn but there’s a chance he didn’t’.

  25. Clayton, here’s the reason not to weaken commitment to p in the face of known uncertainty: I fully believe that full belief doesn’t need certainty. So it is perfectly fitting to know that p is not certain for me and yet be comfortable with full belief in p.

    Of course, one might insist that the meta-belief in question needs to be justified, so that if it isn’t, I don’t have a good reason. But the content is just the sort you’d expect when asking for a good reason here. It is not a reason that entails the claim in question, but good reasons are often non-entailing.

  26. Jon, OK, so these seem to be some issues that bear on correct detachment:

    1. the relation of knowledge to doxastic hesitancy;
    2. the relation of safety to correct detachment;
    3. whether an account of knowledge can reasonably be expected to slice circumstances and persons as finely as may be necessary for correct detachment;
    4. whether thresholds for knowledge and correct detachment can be distinct;
    5. the distinction between doxastic confidence and mathematical probability.

    That’s a lot to talk about! So instead I shall just make some brief remarks.

    1. I take it that knowledge implies full belief, and full belief rules out certain kinds of doxastic hesitancy; I think I can reason to probably p and then correctly reason to p, thereby coming to fully believe p; so knowledge and correct detachment alike rule out certain hesitancies because of their relation to full belief.

    2. Your introduction of the word makes it sound as if you think safety might be a criterion.

    4. Certainly, if the criterion for correct detachment is safety and the probabilistic threshold for safety is distinct from the threshold for knowledge, then my proposal will fall. However, if knowledge also requires safety, we could strengthen my buck-passing proposal by adding ‘and sufficiently safe for knowledge’ (although at that point, and depending on the exact account of safety, safety probably does all the work, so I pass the buck to safety rather than knowledge).

    5. ‘Probably p’ might express doxastic confidence that falls short of certainty, or doxastic certainty about a mathematical probability. So when thinking about detachment we might be thinking about moving from confidence that is less than full belief to full belief, from certainty in a mathematical probability to confidence, or from certainty in a mathematical probability to full belief. I’ve been uncomfortably aware that my remarks might be off the point for you, since I am thinking mostly about doxastic confidence, while you may have introduced the question mainly to address correct detachment for mathematical probability.

  27. The real puzzle about detachment as I see it is that, if it is legitimate at all, it must include a “bookkeeping” component, so that detached propositions still carry some marker indicating the support they received from their evidence. Suppose we have two propositions P1 and P2, which we have detached from their probabilistic evidence by some rule of detachment we have decided is acceptable. Let us say that the probability of P1 relative to its evidence is .75 and the probability of P2 relative to its evidence is .99. Even though we have detached full beliefs in both P1 and P2 (this is what a rule of detachment is supposed to license), we cannot lose sight of the fact that P2 is more strongly supported by its evidence than P1, because we cannot be as confident employing P1 in subsequent inferences as we can P2. An adequate account of probabilistic detachment (if there is any) has to explain this. How do we have full belief in both P1 and P2, and yet remain sensitive to the degree of support these propositions received from their respective evidence? And how does this sensitivity impact subsequent inferences?
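
    Here is one crude picture of the bookkeeping, just to fix ideas (a sketch only; it treats the supports as independent when conjoining, which is a substantial assumption):

        from dataclasses import dataclass

        @dataclass
        class DetachedBelief:
            # A fully believed proposition that still carries a record of
            # the probabilistic support it was detached from.
            proposition: str
            support: float

        def conjoin(a, b):
            # Bookkeeping for one subsequent inference: the conjunction's
            # support is computed as if the conjuncts were independent.
            return DetachedBelief("(" + a.proposition + " & " + b.proposition + ")",
                                  a.support * b.support)

        p1 = DetachedBelief("P1", 0.75)
        p2 = DetachedBelief("P2", 0.99)
        print(conjoin(p1, p2).support)  # about 0.74, lower than either conjunct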
