Learning from Inferiors

Here’s an interesting experiment. Figure out how much weight you think the word of an intellectual superior should carry in defeating whatever evidence you have for believing what the superior denies. Then figure out how much weight you think the word of an intellectual inferior should carry in defeating whatever evidence you have for believing what the inferior denies. Plot each on a scale from 0 to 1 (where 0 reflects never revising in the face of contrary testimony, and 1 reflects always acquiescing). Should the results sum to 1?

10 Comments

1. Rarely would they sum to 1, I would imagine. Wouldn’t the measure generally be sub-additive?

2. Maybe. I’m being coy, though. If it is the quality of the testifier that is doing all the epistemic work, why wouldn’t we expect inverse symmetry here?

3. But Jon, if experts can be compared against one another with some better than others, and inferiors can be compared against one another with some inferior to others, why think that any superior/inferior pair will be such that the answer to your question is 1 (after doing the addition)?

I worry that any sense that this should be so reflects at least two pretty substantial assumptions. Roughly put, they are (1) that the categories of “superior” and “inferior” cannot be further disaggregated (so that any superior is as epistemically good as any other superior, and any inferior is as epistemically good — i.e., as bad — as any other inferior); and (2) that the goodness of those falling in the “superior” category and the non-goodness of those in the “inferior” category stand in such a relation that, when summed, they come to 1. I can see reasons for questioning both of these assumptions.

(I don’t think that my worries assume the falsity of inverse symmetry either; though I don’t quite know what to think of that.)

4. Good point, Sandy. So let’s control for degree of superiority and inferiority. The question then is: when sameness of degree is displayed in opposite directions, should we get inverse symmetry?

5. Wouldn’t the asymmetry between truth and falsity (there are limited ways to get it right and unlimited ways to get it wrong) provide a reason to deny the above inverse symmetry claim?

6. Ted, I doubt that will help. It sounds more like an argument for never trusting anyone or anything, ever. Not a good argument, of course, but I don’t see how it could be an argument for anything about who to trust and who not to trust.

7. Jon, granted the asymmetry between truth and falsity, you might acquiesce to a superior because she is in a better position to discern the truth, but never acquiesce to an inferior (acting in his inferiority), because the ways he can go wrong are legion. Here’s a simple example: suppose we have a system that has four and only four states (a, b, c, and d). You have two machines that yield reports on the states. One machine is your superior, and you acquiesce to it: e.g., if you think b and it says a, then you revise to a. The other machine, however, is your inferior: when it says, e.g., c, you think it got things right only by the luck of the draw. It seems you should be unmoved in this case. Agree?

8. I notice that “ways it can go wrong” doesn’t enter into the story. All that is involved is likelihood of being right. And if you always believe what the reliable machine says, and never what the unreliable one says, that gets the inverse symmetry I asked about.

9. It’s not clear from what I wrote above, but I intended the example to be one in which one acquiesces to the reliable machine only sometimes (say 7 times out of 10). The unreliable machine can go wrong in 3 ways, compared to going right in only 1 way. Does that help?

10. Also, in the example, since the reliable machine is your superior, I take it that there’s good evidence that a report from this machine is likely to be true (more likely than your own verdict). So you can learn something from this machine. But with the unreliable machine, your inferior, the probability that its report is correct is just the prior probability of the content of that report. You don’t learn anything from that. So it looks like you can occasionally acquiesce to the reliable machine, but never acquiesce to the unreliable machine.
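Comment 10’s claim (that a report from a chance-level machine leaves you with just your prior) can be checked with a short Bayesian calculation on the four-state example. This is only an illustrative sketch: the uniform-error model and the 9/10 accuracy figure for the superior machine are my assumptions, not part of the thread.

```python
from fractions import Fraction

STATES = ["a", "b", "c", "d"]

def posterior(prior, accuracy, report):
    """Bayes update on a machine's report.

    Assumed model: the machine reports the true state with probability
    `accuracy`, and its errors are spread uniformly over the other three
    states.
    """
    err = (1 - accuracy) / 3
    likelihood = {s: (accuracy if s == report else err) for s in STATES}
    unnorm = {s: likelihood[s] * prior[s] for s in STATES}
    total = sum(unnorm.values())
    return {s: unnorm[s] / total for s in STATES}

prior = {s: Fraction(1, 4) for s in STATES}

# Superior machine: accuracy well above chance (9/10 is an assumed figure).
reliable = posterior(prior, Fraction(9, 10), "c")

# Inferior machine: accuracy 1/4, i.e., exactly chance on four states.
chance = posterior(prior, Fraction(1, 4), "c")

print(reliable["c"])  # > 1/4: the report raises your credence in c
print(chance["c"])    # = 1/4: posterior equals prior; you learn nothing
```

With a chance-level reporter the likelihood of the report is the same no matter which state is actual, so the update washes out and the posterior is the prior, just as comment 10 says; only the above-chance machine moves you.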