CIA and Critical Thinking

There are several interesting items in this chapter from a CIA handbook on expertise and aggregating expert opinions, but one point raises an issue for those in the business of teaching “Critical Thinking”.

Experts are individuals with specialized knowledge…but that expertise does not necessarily transfer to other domains.[6] A master chess player cannot apply chess expertise in a game of poker; although both chess and poker are games, a chess master who has never played poker is a novice poker player. Similarly, a biochemist is not qualified to perform neurosurgery, even though both biochemists and neurosurgeons study human physiology. In other words, the more complex a task, the more specialized and exclusive is the knowledge required to perform that task.

The point raised in this quote is the finding that people tend to have difficulty transferring domain-specific expertise. Although it isn't mentioned here, those difficulties also include failing to recognize, in an unfamiliar domain, the very same problem a person finds easy to solve in a familiar one.

The question for courses in Critical Thinking is this: Are the skills taught in a typical CT course a form of expertise, and therefore subject to the limitations other forms of expertise face, namely the time it takes to acquire them and the limited scope of their applicability? Or are the skills taught in CT courses lasting and general in nature, and therefore a (partial) solution to the problem of crossing domains of expertise? Doubtless these are not exclusive cases, but the distinction is useful for focusing attention on the philosophical issue of the relationship between logic and reasoning, and on the practical (and measurable) question of whether CT courses are effective. CT course descriptions often promise a lot, and it is fair to ask whether they deliver.

I have my doubts on both counts, but welcome contrary arguments and data.


CIA and Critical Thinking — 10 Comments

  1. My off the cuff thought is that often what gets taught in “critical thinking” classes is very domain specific, and even misleading. Often, I think, reasoning which gets classified as logically bad in such classes is really good reasoning employing relevant background knowledge not made explicit in the premises of an argument. For example, it seems highly relevant to me that certain global-warming-deniers get paid by Exxon, even if this is often classified as fallacious ad hominem (Sp? I can’t spell in English, let alone Latin) reasoning. Part of what is ignored in such classification is that these are cases in which we are supposed to take some “expert’s” word for it, and that requires assessing the expert.

    So I am a skeptic about “critical thinking” classes as they are often taught, though I suspect that if you focussed on particular domains you could do better.

  2. I share your skepticism about critical thinking courses, Mark, although I think that they are put on with the right intentions. For instance, I think that we could agree that there are better means for assessing geophysicist Smith’s denial of global warming than noting his employer. There is a resilient peer-review system covering this branch of science, and there are (still) a few reputable media outlets in English that have good science reporters, nearly all of whom have been reporting a broad consensus about climate change for some time now. Smith’s terms of employment might in the end explain why his denial is spurious, but his employment status should not be considered part of the grounds for assessing his climate-change denial.

    Suppose that this is a reasonable policy, a specimen of critical thinking, as I believe that it is. Question: Do critical thinking courses effectively teach people to do this? Does the exercise of cramming this paragraph into a valid form help? Do the Latin labels get us anywhere?

    Critical thinking seems to involve untangling causal and evidential relationships among events, not finding valid or invalid form. It also calls upon people to assess the reliability of the testimony they hear about such events, and to assess the trustworthiness of the agent delivering the news.

    If this picture is correct, mainstream instruction in “informal logic and critical thinking” is founded on an impoverished methodology and should be scrapped. Current practice is akin to playing at rocket science with sling-shots.

  3. Gregory,

    I think we are in agreement on the big picture, but not perhaps the example since I would want to give some weight to a person’s possible motives in saying something, especially when we are being asked to take their saying so as a reason to think it. (As opposed to a situation where a person’s making an argument is supposed to get us to take the argument itself as our reason rather than their saying it.)

    I guess I think that what it makes sense to rely on depends on what other information you have. If you think that a person's assessment of possibly controversial information can be influenced by trying to please an employer, knowing that a person's expert opinion on a subject concerns matters of great interest to that employer seems like relevant info in assessing the weight to give the fact that they claim it. Of course, other info you have can either bolster or undermine that info, and that too will depend on what further background info you have, like knowing what peer review is, and that the peers are not themselves all in the employ of someone with an interest in the outcome.

    My point in using the example was something about which I think we agree. The epistemic value of a given bit of evidence itself often depends on background information, and informal logic classes tend to ignore that. I think that very often the arguments which get the disparaging Latin labels would be formally just fine if extra background assumptions were introduced into the formalization.

    So I think I agree that the model for these classes is impoverished. It would be interesting to think about how to do it better and also, as I think you asked, how broad such a class could be or if it would have to focus on some relatively narrow area of inquiry.

  4. Perhaps the difference is smaller still. I too think that an agent’s motives can be relevant when assessing the reliability of his testimony, but I don’t think that motives are sufficient. Or, better put, I am uneasy when I am in a situation in which I have a single agent’s testimony, information about his motives, and no other source of evidence. Fortunately, in practice, this is rare. Still, sometimes that is all the evidence one has when a judgment needs to be made. Perhaps we would have to work hard to find disagreement between us in such cases.

    My concern is to avoid being in such circumstances, because it seems to me to be a situation in which our evidence is impoverished. We cannot help but make lousy decisions in such situations. So, while I can imagine us agreeing in cases like this, I would come to it very grudgingly; I would press to gather more information, if possible, including testimony from other experts.

    Perhaps the difference, if one remains, is the premium I would put on gathering more information.

    The worry driving this is whether we Balkanize a community if we give too much credence to a person’s possible motives, or membership in a group whose motives we might imagine are uniform: just because a Republican says p does not mean that p is false. God help us all if we lose this principle.

    I don’t at all wish to suggest that you endorse any of this; rather, my worry is a general one about the consequences of this trend to blur practical interests and epistemic position.

  5. OK, I think we nearly completely agree. And your middle point, that information gathering is crucially important, is another one that I don't see much emphasized in most classes of the sort we are talking about. But then maybe I am not familiar enough with versions that do work this in. I may not have gathered enough information, which is why I characterized my initial response as off the cuff.

  6. There are rudiments of a solution. The first point is that critical thinking is a skill, like bike riding, which is learned by doing it. One doesn’t learn to ride a bike by being told to pedal and turn in the direction of a fall, and one does not learn to think better by being told which Latin labels apply to which textbook examples.

    Second, if you grant my premise that the basic activities of critical thinking involve (i) evaluating causal and evidential relationships, (ii) assessing the plausibility of the message, and (iii) assessing the trustworthiness of the messenger, then:

    One new pedagogical tool addressing (i) is the course on the qualitative methods of causal reasoning developed by Richard Scheines at CMU. (I hope to learn more about this course and report back.)

    It is also important to keep (ii) and (iii) distinct. (ii) asks you to check the message you are receiving against what you know, whereas (iii) asks you to consider the source. (ii) is concerned with belief kinematics: change, update, and merging; (iii) is concerned with trustworthiness and security (in current AI lingo), which brings in issues of deception, competence, and motive.

    One approach to critical thinking, then, would be to build a curriculum for students that cumulatively developed and then continuously exercised these skills. (i) concerns relationships between content; (ii) concerns how that *should* change a person’s view; (iii) concerns how to identify and filter out noise.

    The CMU coursework package appears to be a very nice introduction for (i) that also serves double-duty as a foundation for young scientists who will go on to learn how to construct their own experiments. Their ingenious idea is that (ideally) an entire student population, scientists and humanists alike, would have the same basic foundations. I see it as a bold proposal for solving the scientific-literacy problem.

    My ideas about (ii) and (iii) are still murky, but it should be clear that students need to be exposed to a little more than first-order logic and Bayes’ theorem. (ii) raises issues of representation that are increasingly important, and not just in philosophy and science, but commerce and medicine.
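    To make the (iii) side concrete, here is a minimal sketch of how Bayes' theorem weighs an expert's testimony differently depending on the expert's incentives. The function name and all the probabilities are my own invention, chosen only to illustrate the point:

```python
# Hedged sketch: updating on expert testimony via Bayes' theorem.
# The key inputs are how likely the expert is to assert H when H
# is true versus when H is false; a biased expert asserts H either way.
def posterior_given_testimony(prior_h, p_assert_if_true, p_assert_if_false):
    """P(H | expert asserts H), by Bayes' theorem."""
    numerator = p_assert_if_true * prior_h
    denominator = numerator + p_assert_if_false * (1 - prior_h)
    return numerator / denominator

# A reliable expert asserts H far more often when H is true:
reliable = posterior_given_testimony(0.5, 0.9, 0.1)   # -> 0.9
# An expert paid to assert H asserts it almost regardless of the truth,
# so the testimony barely moves the prior:
paid = posterior_given_testimony(0.5, 0.9, 0.8)       # -> ~0.53
```

    The point of the sketch is that the messenger's motive enters the calculation not as an ad hominem shortcut but as a likelihood term: it changes how much evidential work the testimony can do.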

  7. FWIW, I have seen very good logic students fooled by that test (I forget the name) which asks you which cards you have to turn over to verify a material conditional. This suggests that even formal logic is not a skill that transfers easily. (It also suggests that the rah-rah portion at the beginning of my logic text, which says you should take logic because you can apply it to every intellectual discipline, is false.) I wish I knew of some way to teach “skills” of critical thinking, but my own untested hunch is that there is nothing for it but to inculcate a general attitude of not taking much for granted, being aware of what you are taking for granted when you do, and having some experience with the many subtle ways inferences can go wrong. Philosophy is great training for this.

  8. Hi Heath,

    I have seen very good logic students fooled by that test (I forget the name) which asks you which cards you have to turn over to verify a material conditional.

    I think you are referring to Wason’s selection task.
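    For readers who don't know the task, it can be sketched in a few lines. The rule is "if a card shows a vowel on one side, it has an even number on the other," and the function name and card faces below are my own illustration:

```python
# Hedged sketch of Wason's selection task.
# A material conditional P -> Q is falsified only by P-and-not-Q,
# so you must turn exactly the cards that could reveal that combination.
def cards_to_turn(visible_faces):
    def is_vowel(face):
        return face in "AEIOU"
    def is_odd_number(face):
        return face.isdigit() and int(face) % 2 == 1
    # A vowel card might hide an odd number (P, possibly not-Q);
    # an odd-number card might hide a vowel (not-Q, possibly P).
    # The even-number card is irrelevant: Q tells us nothing about P.
    return [f for f in visible_faces if is_vowel(f) or is_odd_number(f)]

print(cards_to_turn(["E", "K", "4", "7"]))  # -> ['E', '7']
```

    Most subjects pick E and 4, confirming the consequent instead of probing for a counterexample, which is exactly the transfer failure the comment describes.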

  9. This comment comes 10 days later than the previous one, but I’ve only just gotten around to reading this entry.

    One additional reason that knowing about the source of an argument, and the biases of the person who offers it, is important in cases of “expert testimony” is that most such arguments are defeasible (including, but not limited to, those that are probabilistic). So the argument itself may be perfectly good, but if the expert has a bias, she may leave out additional relevant information. Unlike the “deductive paradigm” (on which argument analysis in critical reasoning courses is usually based), defeasible arguments should include the “total relevant evidence” available to the expert, since additional information can undermine the strength of such arguments. An expert with a bias may well leave out relevant information that “goes the other way”. (That’s one reason it can be so easy to lie with statistics.)
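    One vivid way omitted evidence can undermine an otherwise sound statistical argument is Simpson's paradox. The sketch below uses the often-cited kidney-stone figures (the treatment labels and variable names are my own): pooled, treatment B looks better, yet within every severity subgroup, treatment A is better.

```python
# Hedged illustration: a defeasible statistical argument reversed
# by evidence the arguer left out (the severity of the cases).
# Each entry is (successes, trials).
groups = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within each subgroup, A outperforms B:
for group, data in groups.items():
    for treatment, (s, n) in data.items():
        print(group, treatment, round(rate(s, n), 2))

# Pooled totals hide the subgroup pattern and make B look better:
for treatment in ("A", "B"):
    s = sum(groups[g][treatment][0] for g in groups)
    n = sum(groups[g][treatment][1] for g in groups)
    print("overall", treatment, round(rate(s, n), 2))
```

    The “total relevant evidence” here is the severity variable: an expert who reports only the pooled rates has told no formal lie, yet supports the wrong conclusion.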

  10. Hi Jim,

    Knowing the source of an argument and knowing the biases of that source are two different things. Although I take your point about the impact sins of omissions may have on defeasible arguments, it can be a tricky business assessing credibility and reliability of sources.
