Why do we recognize induction as a category?

Dear Colleagues,

Here’s a basic (and possibly wrong-headed) question. Probably, if I were a better student of logic and/or epistemology, I would see the answer, but at the moment I don’t.

Why do we even recognize a category of inductive inferences (or, as some would have it, inductive standards of inference)? Why not treat all inferences as deductive, but with some sort of probabilistic qualification built into the conclusions of some of those deductive inferences?

That is, instead of characterizing such inferences in these terms …

1. This is a fair, six-faced, cubical die and I shall roll it in the normal way.
So probably
2. I shall not roll a “6.”

3. The first 999 crows I saw were black
So probably
4. The next crow I see will be black.

… why not characterize them in the following terms?

1. This is a fair, six-faced, cubical die and I shall roll it in the normal way.
Therefore (certainly):
2*. Probably, I shall not roll a “6.”

3. The first 999 crows I saw were black
Therefore (“certainly”):
4*. Probably, the next crow I see will be black.

Of course, I see that there’s a difference between the above arguments and standard deductive arguments such as:

5. If Fluffy is a vixen, then Fluffy is a female.
6. Fluffy is a vixen.
Therefore,
7. Fluffy is a female.

But that difference could be expressed in terms of the presence or absence of a probability qualifier in the conclusion, and not in terms of a fundamentally different kind of inference.

So I repeat my question: why do we allow for two kinds of inference, instead of only one?


Comments


  1. One thought: There seems an important difference between inferences that are valid in virtue of their logical form, and those that are cogent or reasonable, but only as a matter of substantive rationality.

    It’s unclear what new rules of deductive inference would allow you to draw “probably” conclusions, just in virtue of the logical form of the premises. (We may take green/grue to suggest that no such purely formal rules are available.)

  2. In the same vein as Richard’s point: I can add premises to the arguments with “probably” conclusions that undermine the appropriateness of those inferences, but not so for the “Fluffy is a vixen” argument:

    1. The first 999 crows I saw were black.
    2. The subsequent 1,000,000 crows I saw were white.
    Therefore:
    3. Probably, the next crow I see will be black.

    That argument does not look so good.

  3. I agree with Richard. It’s not clear what the rules of inference would be. We’d also have to change our understanding of deductive validity as purely formal. Notice in the Fluffy argument, I need not know anything about vixens, females, or Fluffy to know that the conclusion follows from the premises. In the die argument I need to know what it means for a die to be fair, what the normal way to roll is, etc.

  4. Going one step down the Tortoise-Achilles path, we could get formally valid arguments here by adding premises.

    E.g.
    1. The first 999 crows I saw were black (and I’ve seen no non-black crows).
    2. If I’ve seen 999 instances of a thing with a certain property, and none without it, then the next instance I see will probably have that property too.
    Therefore (“certainly”):
    3*. Probably, the next crow I see will be black.

  5. Consider. I am interested in the height and weight of 10 year old children in a certain school district. I go to the schools and measure the heights and weights of a sample of the class, and given that there is no good reason to think that the sample is biased, I infer that the observed quantities in this sample are representative of the entire population.

    In textbooks one sees the proviso put in positive terms, say, that the sample is random. But what one wants is assurance that the sample is representative. And if you had that assurance, had the grounds to put it in positive terms and thus to put the argument in deductive form, you would not be in the position of needing to draw the inference: you would effectively be in the position of describing (or reasoning about) the population. All statistical reasoning would be descriptive, and science would be much easier than it in fact is.

    As for the form of this kind of reasoning, you can think of it as a Reiter default rule: the antecedent is the descriptive bit about the sample, the conclusion applies that description to the population, and the ‘justifications’ are conditions which, so long as none is provable given what you know (none, in other words, tips you off to the sample being unrepresentative), allow you to draw the conclusion.
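    A toy sketch may make the default-rule idea concrete. The predicate names below, and the shortcut of treating “provable” as “present in the knowledge base,” are my own simplifications for illustration, not Reiter’s formal system:

    ```python
    def fire_default(kb, prerequisite, defeaters, conclusion):
        """Add the conclusion if the prerequisite is known and none of
        the defeating conditions (negated 'justifications') is provable
        (here, simply: present in the knowledge base)."""
        if prerequisite in kb and not any(d in kb for d in defeaters):
            return kb | {conclusion}
        return kb

    RULE = ("sample_mean_is_m",       # antecedent: description of the sample
            {"sample_is_biased"},     # defeater: would tip you off
            "population_mean_is_m")   # conclusion about the population

    # Nothing tips us off, so the conclusion is drawn:
    kb1 = fire_default({"sample_mean_is_m"}, *RULE)

    # Add the news that the sample is biased, and the same rule stays silent:
    kb2 = fire_default({"sample_mean_is_m", "sample_is_biased"}, *RULE)
    ```

    Note that adding a premise withdraws the conclusion, which is just the non-monotonic behavior discussed in the comments below.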

  6. Mark, I’m with Lewis on this one. To put it another way, a deductive system is monotonic: add anything to the premises and the conclusion still follows if it did prior to the addition. Inductive inferences are non-monotonic, as Lewis’s example shows. So the logics are quite different.

  7. Richard, Lewis, Randy, Gregory, Colin & Jon,

    Thanks for those comments. The overall point about non-monotonicity is helpful and (to me) convincing. Just the sort of thing I was hoping for!

    Mark

  8. In most cases it seems that the conclusions of our inferences are about non-probabilistic states of affairs, not about the probabilities of propositions. Case in point: some years ago my son asked me where the milk was. “Probably in the fridge,” I said. He opened the door to the fridge, saw the milk, and said, “yes, you were right.” Discovering the milk would not have supported his (correct!) judgment if my claim had been a probability statement. My rule of thumb: in most cases, ‘probably’ functions as a conclusion indicator in the context of an argument.

  9. Dear Mark,

    The non-monotonic reasoning literature is a rich one, dating from the 1970s and christened in an AI journal special issue from 1980. But, for a (non-technical) reference which anticipates the idea of a logic for non-demonstrative inference, have a look at (Fisher 1922, 1936), referenced and discussed (briefly) in this review of the lottery paradox.

    Best, GRW

  10. Two thoughts:

    1. Characteristically inductive arguments take us from samples to populations (as GRW suggests). Often, we add two features to the conclusions of such inferences — some vagueness and some probability. For example, in the school-sampling case, we might look at a few children and then say that with probability p, the average height of the children is between h1 and h2. But those two features need not be added in order to have the *form* of an induction. You could simply conclude with a point estimate: “The average height of the children is h.” (The case is parallel to singular statistical syllogisms, like: 90% of the balls in the bucket are red; this ball was drawn from the bucket; therefore, this ball is red.)

    2. Some people have claimed that there are inductive arguments with conclusions that cannot be assigned any probability. Peirce seems to have thought this about some inductive arguments with respect to infinite populations and with respect to extrapolations to the future. John Norton gives a general argument against probability as the one true logic of induction. (See this paper (pdf), for example.) If their arguments are right, then induction is not likely to be replaceable with deduction plus a probability function.
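    The sample-to-population inference in the first thought above can be sketched numerically, contrasting the bare point-estimate conclusion with the hedged (vague and probabilistic) one. The population parameters, sample size, and 95% level below are illustrative assumptions of mine, not anything fixed by the example:

    ```python
    import math
    import random

    random.seed(0)
    # A hypothetical district of 10,000 children, heights in cm:
    population = [random.gauss(140, 8) for _ in range(10_000)]
    sample = random.sample(population, 50)

    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

    # Point-estimate conclusion: "The average height of the children is h."
    point_estimate = mean

    # Hedged conclusion: "With probability ~0.95, the average height
    # lies between h1 and h2" (normal approximation).
    margin = 1.96 * sd / math.sqrt(n)
    h1, h2 = mean - margin, mean + margin
    ```

    Both conclusions are drawn from the same sample by the same inference; the vagueness and the probability are added to the conclusion, not to the form of the induction.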

  11. One thought. Just as you can think that an argument of the form:

    99% of A’s are B’s
    Therefore
    the next A is also a B

    can be rewritten, with no loss of any formal property, as:

    99% of A’s are B’s
    therefore
    probably the next A is also a B,

    you can also think that an argument of the form:

    P&Q
    therefore
    P

    can be rewritten as:

    P&Q
    therefore
    necessarily P

    So, the probably/necessarily qualifier indicates what kind of truth-functional support there is between premises and conclusion – and that could mean they express two different kinds of inferential patterns.

  12. Despite agreement with the responses here I am quite sympathetic to the idea of thinking of inductive inferences as deductive (though this may just be a label). I think perhaps a conditional statement could be built into the premises which assigns a quantitative probability and the conclusion has the same quantitative probability. Taking Lewis’s example:
    1. The first 999 crows I saw from t1 to t999 were black.
    2. The probability of all crows being black is .99999.
    Therefore, 3. The crow I see at t1000 has a .99999 probability of being black.
    Now, Lewis’s addition does not seem to make this argument invalid:
    1. The first 999 crows I saw from t1 to t999 were black.
    2. The probability of all crows being black is .99999.
    3. The next 1,000,000 crows I see from t1000 to t1,000,000 will be white
    Therefore, 4. The crow I see at t1000 has a .99999 probability of being black.
    I think there is nothing wrong with the deductive validity of this argument. It is probably not sound. Premise 2 is an independent premise; its probability cannot be established deductively within the argument, only outside it. The probability could even be one, as in pulling a black ball out of an urn containing 5 black balls. The probability could also be given as a range, hence introducing vagueness, yet we could still have a deductive inference. I think that by bringing in a quantitative probability premise and tensed predicates we may express traditional inductive inferences as deductive ones.

    Priyedarshi Jetli

  13. @Luis Rosa: Hempel remarks (in Philosophy of Natural Science, even, if memory serves) on the difference between uncertainty attaching itself to the proposition (e.g., P, r% of Ps are Qs, therefore Q is at least as likely as r%) and uncertainty attaching itself to the consequence operator (e.g., P, r% of Ps are Qs, therefore [to degree r] Q). Keynes tried fleshing out the latter idea, which didn’t work out that well. (See Ramsey, F. P.) Kyburg worked out another version built on a scaffolding supplied by Fisher. Statistical default logic is a distillation of this approach.

    Thus, @Jonathan Livengood, I am sympathetic to Norton’s position.

    Another source of (non-probabilistic) examples may be found in the computer science Knowledge Representation and Reasoning (KRR) literature exercising the distinction between “open worlds” and “closed worlds”; the arguments and motivations for treating “negation as failure” are another keyword source. Not everything there is compelling; the 1980s and 1990s literature on “commonsense reasoning” is naive. But more recent work on implementations of systems for the semantic web, and on other inference engines for large databases, provides more persuasive (even if dry) examples, and seeing instances of concrete worry provides room for looking back at the commonsense reasoning literature with a bit more charity.

  14. Once again, folks, thanks for these comments. I don’t have enough expertise to improve on (let alone _contradict_) your comments, but I am finding them interesting and helpful all the same!

  15. “Case in point: some years ago my son asked me where the milk was. ‘Probably in the fridge,’ I said. He opened the door to the fridge, saw the milk, and said, ‘yes, you were right.’ Discovering the milk would not have supported his (correct!) judgment if my claim had been a probability statement. My rule of thumb: in most cases, ‘probably’ functions as a conclusion indicator in the context of an argument.”

    Hi, Andrew,

    I’m a bit mystified by what you wrote above. I’d say that you did make a probability claim: a claim of epistemic probability. You were telling your son (though maybe you didn’t mean to) that, for all you knew (given the evidence available to you), the milk was in the fridge. Upon finding the milk, he confirmed to you that your evidence was not misleading (though maybe that was not quite what he meant to do). The relevant fact in the exchange, however, was that he did find the milk – not quite the fact that you hadn’t been misled by the evidence. I don’t see how that contradicts the thought that what you did do was give him a piece of information that was ostensibly about your mental life. And what he did do was infer that the milk probably (as a matter of objective fact) was where you had evidence to believe it was. Or maybe he “jumped” to the conclusion that it was where you had evidence to believe it was. Still, what he did say, in reply, was that your evidence was leading to true belief. He was elliptical, though, in saying simply that your belief was true.

    Is there something I’m missing here?
