Competent Deduction Closure

Williamson and Hawthorne, among others, endorse a closure principle about knowledge that employs the concept of a competent deduction. The straightforward version is that if you know p and competently deduce q from p, then you know q. As most realize, there are qualifications needed, but I’ll ignore them here.
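Schematically (my gloss, not W&H’s own notation), the straightforward version can be written as:

```latex
% Informal gloss of the straightforward closure principle.
% Kp      : S knows p
% D(p,q)  : S competently deduces q from p
\[
  \bigl( Kp \land D(p,q) \bigr) \rightarrow Kq
\]
```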

Here’s a question about the principle. Suppose a competent deduction of q from p by S occurs, and suppose that the deduction is constituted by a certain sequence of brain states in S. Now, suppose we have another person, S’, just as competent at deduction as S who also knows p. Further, suppose that a mad neuroscientist produces in S’ the same sequence of brain states that constituted the deduction of q from p by S. Did S’ just competently deduce q from p?

Another question. Assume S’ comes to believe q as a result of what the neuroscientist does. Does S’ know q?


Comments


  1. There’s so much packed into this scenario, I can’t wait to see how others respond. I suppose that since competence is a property we ascribe to agents in virtue of their being agents who follow rules or, in general, do something correctly out of their own agency, and not because they are caused to do so by some intervention, the mad scientist’s interventions make the vocabulary of competence/incompetence inapplicable. So, by Williamson’s standards, it would seem that S’ does not know that q, insofar as her knowing that q is a matter of competently deducing it from p.
    I know that’s very rough, but it seems to me that we cannot use the vocabulary of competency, in the sense Williamson uses it, except insofar as we assume that the subject of the vocabulary, S, is an agent and not, say, a brain mechanism or process.
    Perhaps some other questions might be: why should we suppose that a competent deduction by S of q from p is something that can be “constituted” by brain states? What does “constituted” mean here? Is it a marker for a reductive analysis? Or do we mean by it that there is a physical substratum upon which certain cognitive/epistemic processes supervene?
    Sorry if this is very naive… Your scenario is very interesting. Thanks.

  2. Eric, W&H want a closure principle that encodes the idea that knowledge is extended by deduction, and so we want something like a complete principle, one that covers all the cases that we want explained. So I expect they’d agree with you that the answers to the two questions go together: either it’s a deduction and S knows, or it’s not a deduction and S doesn’t know.

    If we adopt your answer, the key is whether agency is involved in coming to believe q on the basis of knowing p. I’m tempted to say the same thing, but that’s a strange position to have to endorse. When we think of classic cases of inferential knowledge, we usually don’t require that agency be involved. For example, I walk in the room looking for my son, don’t see him, and come to believe that he isn’t there. This latter belief is inferential, but I don’t think I performed any act of agency in coming to believe it.

    Just so, suppose I know p and intuitively see the implication of q by p and come to believe q. I have performed no action that I can detect, though my belief in q is an inferential one. And such cases, one would think, are classic examples in which a deductive consequence of a belief comes to be known. So if such inferential knowledge doesn’t involve competent deduction, then the competent deduction principle covers only a subset of the cases we want explained.

  3. One response would be that a performance of a competent deduction is more than a series of brain states (at least, if brain states are the sorts of things your envisaged mad neuroscientist would have full control over). The performance of a competent deduction might require the performer’s application of the relevant logical rules to be appropriately (non-deviantly?) related to (e.g.) her learning of these rules.

  4. Carrie, I’m not sure that non-deviancy helps here. Non-deviancy means, variously: not caused by a mad scientist, not caused by random, highly improbable, coincidental causal factors (a la the “birth” of Davidson’s swampman), not due to the fistful of hallucinogens I downed for breakfast with a mouthful of Wild Turkey, etc…. I think there has to be a spelling out of ‘competency’ that does not define it negatively, especially in terms of the lack of deviancy. Simply put, it seems like ‘non-deviant’ is just a synonym for ‘competent,’ or ‘correct,’ or ‘appropriate’–all of which require a principled elucidation. But might it not be that any principled elucidation will end up either too general to be informative or too disjunctive to really be principled? I’m not sure…

  5. Jon,
    The mention of non-deviance was meant to be merely illustrative of the kind of thing one might want to say; the general point is that, however you want to spell it out, there seems to be more to competence in deduction than the various brain states through which the subject progresses. Of course any appeal to appropriateness or non-deviance is a promissory note pending further explication of those notions, but don’t you share the intuition that there is *some* necessary condition on competence which is not met in your mad neuroscientist case due to the strangeness of the setup? If so, we have an answer to your questions (albeit one which awaits further development).

    (PS Shameless plug: my own best efforts with notions like ‘non-deviance’ are represented in my ‘Knowledge and Explanation’ paper, available on my website :))

  6. W&H say _if you know p and competently deduce q from p, then you know q_.

    It is hard to see this as a requirement that *I* do the actual deduction. It looks like a requirement that I rely on nothing less than a competent deduction. Suppose I use a computer program that is more reliable than I am at making inferences. Instead of going through the proof, I run it on the program. It has to be true (no?) that W&H would have me committed to the conclusion of that deduction. Supposing that the neuroscientist didn’t randomly put me in the sequence of mental states resulting in q–supposing that the scientist knew exactly the sequence that correctly inferred q, that he is reliable and put me through the sequence of brain states exactly in order to have me arrive at q–I don’t see why I cannot use his reliability in drawing inferences (just as I can use a computer program).

  7. Mike, I don’t think we want a closure principle to cover the case of deduction handed over to a machine. You’re right, though, that we do want an epistemic principle that encodes how one can learn in this way.

    Carrie, I’m not quite sure that the relation to prior learning is going to be necessary. I expect that it’s at least possible for there to be innate competence, and I suspect that this is actually so as well. So substituting an etiological condition for an agency condition won’t cover all the cases that I’d like to have a closure principle address.

  8. Jon,
    I think I want to make the same sort of response here as I made to Eric – that the mention of learning was merely illustrative of the general point, which is that there seems to be *some* necessary condition on competence which is not met in your mad neuroscientist case due to the strangeness of the scenario. If we thought competence with certain rules was innate, then we might demand that, in order for a competent deduction to have taken place where the rules used were innate, the subject’s use of the rules should be appropriately related to her innate capacity to use those rules.

  9. I see, Carrie, but I worry about how whatever such a requirement is could fail to be encoded in sequences of brain states. In the original case, there was no difference between S and S’ in terms of capacities and the like, and yet one sequence of brain states is produced by the mad scientist and one is not. I’m puzzled how the very same sequence could count as appropriately related to the capacity in one case and not in the other. Perhaps the answer is that the difference is causal: in one case, the capacity is causally active in the production of the new belief, and in the other case it isn’t. But surely the mad scientist can mimic that as well; it’ll just be remote causation of the immediate causal story.

  10. “I don’t think we want a closure principle to cover the case of deduction handed over to a machine.”

    I’m not sure why. I’m sure you know there’s been lots of controversy over whether the so-called “four color theorem” has been proven. Most of the proofs require the use of some sophisticated software. Suppose the closure of mathematician S’s beliefs includes the four color theorem. And suppose we have reliable software to prove the theorem. Can’t we use the software to show that S is committed, under competent closure, to that theorem? Don’t we want our commitments under closure to include those that (i) we cannot competently deduce ourselves but (ii) can be competently deduced?

  11. Mike, it is true that the mathematician is committed to the conclusion of the software in question, but that’s not a closure principle. If we start with what a mathematician knows, perform software proofs, and even make the mathematician aware of the results, it doesn’t follow that the mathematician knows them. S/he’s committed to the results so long as knowledge of the premises is retained, along with knowledge of the adequacy of the software, but none of that implies that s/he believes the results.

  12. Jon, let me see if I’ve got it. First (1).

    (1) S starts with knowledge K, S runs the proof on the computer deriving p, S knows that p was properly derived from K and yet S might not know or believe p.

    Therefore (2) cannot be a closure principle.

    (2) If you know p and use a computer to (competently) deduce q from p, then you know q.

    But the W&H principle (3) does not have this problem and so might well be a closure principle.

    (3) If you know p and competently deduce q from p (yourself), then you know q.

    But then (3) faces an analogous worry. You look at q and learn “my God, I’ve shown that relativity theory is false!” Now you’ve competently derived that astounding conclusion (imagine). You reasonably say “I don’t believe that conclusion; I need to do the proof again.”

    In this case you might say, “well, wait. I know it was competently derived, so I don’t need to do the proof again. I know q.”
    But then in the former case you might say “well, wait. I know the theorem was competently derived, so I don’t need to do the proof myself. I know q.”

  13. Mike, the principle I started with is an abbreviated version of what W&H actually say. The full Hawthorne principle is: if you know p, and come to believe q by competently deducing it from p while retaining your knowledge of p throughout, then you know q. Your (3) would suffer from the same problem as the computer version.

    So, you might think, let’s add that you come to believe the conclusion of the computer program on the basis of that program’s correctly deducing it from p. Then you know q. I hadn’t seen that before, and I like the point; it goes to the question of how complete the W&H principle is (which they don’t claim that it is, but which a complete account of closure should be aiming for).

  14. Jon,
    Sure, causal stories are one way some people might want to develop the point. Suppose we run something along those lines. Then, you say, “surely the mad scientist can mimic that as well; it’ll just be remote causation of the immediate causal story.” I guess the response to that would be that one of two things could be going on there:
    1) The fact that the scientist interfered (at whatever stage) is enough to make it the case that there is something deviant about the causal link between the capacity and the relevant brain states (in which case we don’t have S’ performing a competent deduction)
    or
    2) if you don’t think that sounds right, then you ought to say that the scientist’s interference through remote causation of the immediate causal story amounted to the scientist’s *causing S’ to perform a competent deduction* (in which case there seems to be nothing -that is, nothing special- worrying about the application of the closure principle in this scenario).

  15. Carrie, that sounds right, and that’s the question, I think: is what the scientist does something that leaves a competent deduction in place, or not? My worry was that we’ll have to talk about agency here to decide the answer, and that’s troubling to me, since belief formation doesn’t seem to be the right kind of thing for talk of agency to encroach on. That’s the lesson of the problems for belief voluntarism. But maybe I’m overgeneralizing that concern here; I just don’t know.

  16. Jon,

    >is what the scientist does something that leaves a competent deduction in place, or not?

    It seems more palatable to say it is in the “remote causation” case than in the sort of case you originally described.

  17. Carrie, that sounds right, though I still can’t tell if it helps us avoid having to think of comp. ded. in terms that require agency. That’s my biggest worry, I think.

  18. Jon,

    > I still can’t tell if it helps us avoid having to think of comp. ded. in terms that require agency

    I’m inclined to think it does. What’s wrong in your original case is that there’s the wrong kind of link (causal link if you like) between capacity and performance, so that it’s not a competent deduction. No need for mention of agency here. And in the remote causation case (we might be inclined to think) there *is* a competent deduction because the link (causal if you like) between capacity and performance is of the right kind. (Again, no need to talk about agency.)

  19. Greetings, all and sundry! I’m just learning my way around the Internet, and this looked interesting.

    “Williamson and Hawthorne, among others, endorse a closure principle about knowledge that employs the concept of a competent deduction. The straightforward version is that if you know p and competently deduce q from p, then you know q. As most realize, there are qualifications needed, but I’ll ignore them here.

    “Here’s a question about the principle. Suppose a competent deduction of q from p by S occurs, and suppose that the deduction is constituted by a certain sequence of brain states in S. Now, suppose we have another person, S’, just as competent at deduction as S who also knows p. Further, suppose that a mad neuroscientist produces in S’ the same sequence of brain states that constituted the deduction of q from p by S. Did S’ just competently deduce q from p?”

    If S and S’ are taken to be physioepistemic twins–if the exact same brain states and brain processes in S give rise to exactly the same epistemic states and processes as in S’–then it seems reasonable to suppose that if the mad neuroscientist induces in S’ the same sequence of brain states that constituted the deduction of q from p by S, then he will also have induced in S’ the same epistemic processes and states as in S, so that yes, S’ will have been caused by the mad scientist to competently deduce q from p. If S and S’ are not taken to be physioepistemic twins, then the induction of the same sequence of brain states that constituted the deduction of q from p in S might constitute something quite different in S’, so S’ need not have just competently deduced q from p.

    What the question really seems to be getting at is either whether identical brain states and processes give rise to identical epistemic processes and states in S and S’–which seems to hinge on whether they are, in the relevant respects, physioepistemic twins–or, on the other hand, whether the mad scientist’s manipulation of the brain of S’ gives rise to epistemic processes and states in S’ at all. The former question seems to be a matter of stipulation, as human brains aren’t identical; the latter seems to be pretty clearly answerable in the affirmative, doesn’t it? Am I missing something?

    “Another question. Assume S’ comes to believe q as a result of what the neuroscientist does. Does S’ know q?”

    Again, if they are physioepistemic twins, then yes, since the same epistemic processes and states have been induced in S’ as in S; if they are not physioepistemic twins, then it’s going to depend on whether the mad scientist’s actions have induced the proper epistemic processes in S’ to competently deduce q from p.

    Keith Brian Johnson

  20. Keith, W and H may think differently here, but I think it is pretty obvious that the scientist-induced belief isn’t competent deduction and isn’t knowledge. The second claim could be made false by filling in additional details to the case, but in any case, it won’t count as knowledge based on deduction.

  21. Do I take it you think it’s obvious that the mad neuroscientist hasn’t in fact induced the same epistemic processes and states in S’ that S had when deducing q from p?

    Keith Brian Johnson
