2.5. Inductive Arguments
By Stephanie Gibbons and Justine Kingsbury, adapted by Marc Chao and Muhamad Alif Bin Ibrahim
The renowned fictional detective Sherlock Holmes often speaks of deducing his conclusions from evidence. However, the reasoning he employs is not actually deductive reasoning.
In this chapter, we use the term “inductive arguments” broadly to refer to all forms of non-deductive reasoning. This includes probabilistic arguments, enumerative inferences, arguments from samples, analogies, causal reasoning, and inference to the best explanation. Although some of these forms are sometimes treated separately in philosophy, here they are all considered types of inductive arguments because they support conclusions without guaranteeing them.
Consider this excerpt from the Sherlock Holmes story “A Scandal in Bohemia”:
Dr Watson visits Holmes after a long absence. Holmes figures out that Watson has started practising medicine again, and that he has been out in bad weather lately, and that he has an incompetent servant, even though Watson has not told him any of these things.
…”my eyes tell me that on the inside of your left shoe, just where the firelight strikes it, the leather is scored by six almost parallel cuts. Obviously they have been caused by someone who has very carelessly scraped round the edges of the sole in order to remove crusted mud from it. Hence, you see, my double deduction that you had been out in vile weather, and that you had a particularly malignant boot-slitting specimen of the London slavey. As to your practice, if a gentleman walks into my rooms smelling of iodoform, with a black mark of nitrate of silver upon his right forefinger, and a bulge in the side of his top hat to show where he has secreted his stethoscope, I must be dull indeed if I do not pronounce him to be an active member of the medical profession.”
In this passage, Holmes draws three conclusions:
- Watson has been out in bad weather.
- Watson has an incompetent servant.
- Watson has resumed practising medicine.
The evidence for conclusions 1 and 2 is the leather on the inside of Watson’s left shoe, scored by six almost parallel cuts. The evidence for conclusion 3 is the smell of iodoform, a black mark of nitrate of silver on Watson’s right forefinger, and a bulge in the side of his top hat.
Even if these are good reasons to believe the conclusions, they do not guarantee them. There are other logically possible reasons for the cuts on Watson’s shoe, the smell of iodoform, etc. The unstated premise is that the hypothesis that Watson has resumed practising medicine is the best explanation for the observed facts. Such arguments can never guarantee the truth of their conclusion; it is always possible that another explanation is correct. Therefore, they are not deductive arguments. Nonetheless, non-deductive arguments can provide strong reasons to believe their conclusions. We will now discuss various types of non-deductive arguments, concluding with arguments like Holmes’s, known as inference to the best explanation.
One way to indicate that an argument is non-deductive is to precede the conclusion with “Probably”. This shows that the conclusion is not guaranteed, only likely. To test whether an argument is deductive or non-deductive, consider whether it makes sense to add “Probably” before the conclusion. For example, if someone argues that since all mice have tails and Minnie is a mouse, Minnie has a tail, it would not make sense to add “Probably” before the conclusion “Minnie has a tail”. The premises, if true, make the conclusion certain. Conversely, if the argument is “Almost all mice have tails, and Minnie is a mouse, so Minnie has a tail”, it makes sense to add “Probably”, indicating a non-deductive argument.
Non-deductive arguments are never valid and therefore never sound. However, since they do not aim to be, we do not criticise them for being invalid or unsound. Instead, we evaluate non-deductive arguments based on their “strength” and “cogency”.
Strength in the evaluation of non-deductive arguments serves a similar role to validity in the evaluation of deductive arguments. A non-deductive argument is considered strong if its premises, assuming they are true, provide substantial reason to believe the conclusion, although they do not guarantee its truth. Like validity, strength pertains to the relationship between the premises and the conclusion and is independent of the actual truth of the premises.
An argument is cogent if it is both strong and has all true premises.
When evaluating non-deductive arguments, we ask the same two questions as we do for deductive arguments:
- What is the connection between the premises and the conclusion?
- Are the premises true?
For non-deductive arguments, we address the first question by discussing strength. When we refer to cogency, we are addressing both questions.
There is an important difference between validity and strength. Validity is binary; a deductive argument is either valid or invalid. It cannot be partially valid. Strength, however, is a matter of degree. Some non-deductive arguments provide nearly complete support for the conclusion, while others offer minimal or no support.
Consider the following three arguments:
Argument One:
Premise 1: | 96% of politicians are dishonest. |
Premise 2: | Winston is a politician. |
Conclusion: | Winston is dishonest. |
Argument Two:
Premise 1: | 75% of politicians are dishonest. |
Premise 2: | Winston is a politician. |
Conclusion: | Winston is dishonest. |
Argument Three:
Premise 1: | Most politicians are dishonest. |
Premise 2: | Winston is a politician. |
Conclusion: | Winston is dishonest. |
The first argument is very strong, the second is less strong, and the third is even less strong. Strength is a matter of degree.
A strong argument is one in which the premises provide significant support (though not conclusive) for the conclusion: if the premises were true, the conclusion would likely be true.
Strength is a matter of degree, unlike validity.
A cogent argument is a strong argument with all true premises.
Probabilistic Arguments
Probabilistic arguments occur when the likelihood of the conclusion can be clearly established given the premises. These arguments closely resemble deductive arguments in their structure.
Consider this argument:
Premise 1: | All sheep in New Zealand live on farms. |
Premise 2: | Alice is a sheep in New Zealand. |
Conclusion: | Alice lives on a farm. |
Assume for a moment that Alice really is a New Zealand sheep (i.e., Premise 2 is true). The argument is valid. However, it is not sound, because the first premise is false. Premise 1 is a “hard” generalisation: it claims that all sheep in New Zealand, without exception, live on farms. There are undoubtedly some rogue sheep, some that have escaped into the bush, and some kept as pets that do not live on farms. Thus, although the argument is valid, it is unsound.
We could modify Premise 1 to a “soft” generalisation, which has a better chance of being true. A soft generalisation makes a general claim about a group but allows for exceptions. For example, if Premise 1 stated “Nearly all sheep in New Zealand live on farms”, then it would be true.
However, the argument would no longer be valid:
Premise 1: | Nearly all sheep in New Zealand live on farms. |
Premise 2: | Alice is a sheep in New Zealand. |
Conclusion: | Alice lives on a farm. |
In this argument, the premises do not guarantee the conclusion. It is possible for the premises to be true while the conclusion is false, as Alice could be one of the few rogue bush-sheep or a pet.
This type of argument is invalid, because the premises do not guarantee the conclusion. It can nonetheless be very useful: the premises provide strong support for the conclusion, and their truth is sufficient to show that the conclusion is probably true.
This type of argument is not a failed deductive argument; it does not intend for the conclusion to follow with certainty. We can indicate this in the argument frame by including the word “Probably” before the conclusion, as follows:
Premise 1: | Nearly all sheep in New Zealand live on farms. |
Premise 2: | Alice is a sheep in New Zealand. |
Conclusion: | [Probably] Alice lives on a farm. |
Arguments of this nature will vary in strength depending on how probable the premises make the conclusion. Some arguments have premises that make their conclusions very probable and are thus very strong.
Premise 1: | There are 99 black marbles in this bag and one white marble. |
Premise 2: | In my fist is a marble randomly selected from the bag. |
Conclusion: | [Probably] The marble in my fist is black. |
Here, it is 99% probable that the marble in my fist is black, making this a very strong argument.
It is important to note that the statement “The marble in my fist is black” is still either true or false. It cannot be 99% true. It is either 100% true or 100% false. The 99% applies to the probability that it is true, not to the truth itself.
We can change the probabilities in such arguments by altering the proportions of marbles:
Premise 1: | There are 75 black marbles in this bag and 25 white marbles. |
Premise 2: | In my fist is a marble randomly selected from the bag. |
Conclusion: | [Probably] The marble in my fist is black. |
This conclusion is still probable. This non-deductive argument is weaker than the previous one but remains strong enough to be useful.
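Because the premises in these marble examples fix exact proportions, the probability of the conclusion can be computed directly. A minimal sketch, using the counts from the two bags above:

```python
# Probability that a randomly drawn marble is black, for each bag
# described above (random selection means each marble is equally likely).
def p_black(black, white):
    return black / (black + white)

print(p_black(99, 1))   # 0.99 -- a very strong argument
print(p_black(75, 25))  # 0.75 -- weaker, but still strong enough to be useful
```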
In the marbles example, it is easy to accurately measure the degree of probability of the conclusion. Most ordinary probabilistic arguments lack this level of precision.
Premise 1: | Most university students do not have children. |
Premise 2: | Betty is a university student. |
Conclusion: | [Probably] Betty does not have children. |
Here, the conclusion is probable, but we cannot assign a precise degree of probability to it.
Argument Patterns
The same types of argument patterns can occur in probabilistic non-deductive arguments as in deductive arguments.
The argument “No mammals lay eggs. Perry is a mammal. Therefore, Perry does not lay eggs” is a valid argument. It follows the general pattern of Modus Ponens. (If this is unclear, try converting the generalisation in the first premise into a conditional: “If something is a mammal, then it does not lay eggs”.) However, the first premise of this argument is false. A few species of mammals do lay eggs, the most well-known being the platypus. Thus, we can soften the generalisation in the first premise:
Premise 1: | Hardly any mammals lay eggs. |
Premise 2: | Perry is a mammal. |
Conclusion: | [Probably] Perry does not lay eggs. |
This follows the Modus Ponens pattern, except it uses a soft generalisation instead of a hard generalisation in the first premise. It is a non-deductively strong argument.
It is important to remember that a fallacious argument pattern cannot be improved by weakening the generalisation. Here is an example to illustrate this point:
Premise 1: | All geese are birds. |
Premise 2: | Borka is a bird. |
Conclusion: | Borka is a goose. |
The basic pattern of this argument is the fallacy of affirming the consequent (using a generalisation instead of a conditional). There are many birds that are not geese, and Borka could be one of those.
This argument cannot be improved by weakening the generalisation in Premise 1. This would result in an argument like this:
Premise 1: | Most geese are birds. |
Premise 2: | Borka is a bird. |
Conclusion: | [Probably] Borka is a goose. |
This is a weak argument. Borka is not likely to be a goose simply because it is a bird. Once again, Borka could be another type of bird. Thus, this argument commits a non-deductive version of the fallacy of affirming the consequent. The fundamental problem with the structure of the argument cannot be resolved by changing one of the premises from a hard generalisation to a soft one.
Kinds of Soft Generalisation
Any statement that makes a claim about a group or category of things can be considered a generalisation. Hard generalisations include terms like “All”, “None”, “Always”, and “Never”. Soft generalisations include terms such as “Almost all”, “Almost none”, “Many”, “Most”, and “Some”. Some soft generalisations are useful in probabilistic arguments, while others are not.
The goal is to demonstrate that the conclusion is probable. To have any strength, the argument must show that the conclusion is more likely to be true than not.
Consider this argument:
Premise 1: | The majority of people enjoy ice cream. |
Premise 2: | Alex is a person. |
Conclusion: | [Probably] Alex enjoys ice cream. |
This argument has some strength, although not much. If the premises were true, the conclusion would be more likely to be true than not. However, if Premise 1 turns out to be false, the argument is not cogent.
We might attempt to improve the argument by further softening the generalisation in Premise 1 to make it more likely to be true. This might result in:
Premise 1: | Many people enjoy ice cream. |
Premise 2: | Alex is a person. |
Conclusion: | [Probably] Alex enjoys ice cream. |
While Premise 1 is now more likely to be true, the argument remains weak. The premises do not provide a strong reason for accepting the conclusion. The term “many” does not specify a proportion of all people, so it cannot make the conclusion probable. Words like “many” do not convey precise proportions; they merely indicate that at least several people enjoy ice cream. It is important to consider whether the generalisation is sufficiently robust to make the conclusion probable.
Extended Probabilistic Arguments
Just as deductive arguments can be extended, so can non-deductive ones. It is important to consider how any probability within the argument affects the probability of the final conclusion.
In an extended argument with a single soft generalisation, the probability of the conclusion will reflect the degree of probability in the soft generalisation:
Premise 1: | Nearly all university students write assignments on computers. |
Premise 2: | Betty is a university student. |
Conclusion 1: | [Probably] Betty writes her university assignments on a computer. |
Premise 3: | Everyone who writes assignments on a computer can read. |
Conclusion 2: | [Probably] Betty can read. |
This is a strong argument. The probability of Conclusion 2 is the same as that of the intermediate Conclusion 1, and the degree of probability of Conclusion 1 comes from the soft generalisation in Premise 1.
However, each additional soft generalisation in an extended argument will further dilute the probability of the final conclusion.
Consider this argument:
Premise 1: | Most university students hand in their assignments. |
Premise 2: | Conrad is a university student. |
Conclusion 1: | [Probably] Conrad hands in his assignments. |
Premise 3: | Most students who hand in their assignments pass their courses. |
Conclusion 2: | [Probably] Conrad passes his courses. |
Here, the inference from Premise 1 and Premise 2 to Conclusion 1 is not particularly strong. It is further weakened by the soft generalisation in Premise 3. By the time Conclusion 2 is reached, the probability assigned to the final conclusion by the premises is low. This argument is not strong.
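The dilution effect can be made vivid by attaching illustrative numbers. Suppose, purely for illustration, we read each “most” as 70% and treat the two steps as independent (real arguments rarely license either assumption); the product then shows how much probability the final conclusion receives via this chain of reasoning:

```python
# Illustrative numbers only: read each "most" as 0.7 and assume the
# two generalisations are independent of one another.
p_hands_in = 0.7          # P(Conrad hands in his assignments)
p_pass_if_hands_in = 0.7  # P(Conrad passes, given he hands in his assignments)

# Probability the conclusion gets via this chain of reasoning.
p_conclusion = p_hands_in * p_pass_if_hands_in
print(round(p_conclusion, 2))  # 0.49 -- below 0.5, so "probably" is no longer earned
```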
If the dilution issue is not clear from that example, consider this one, where the problem is more evident:
Premise 1: | Most of those currently in the university library are university students. |
Premise 2: | Conrad is currently in the university library. |
Conclusion 1: | [Probably] Conrad is a university student. |
Premise 3: | Most university students drink in the evenings. |
Premise 4: | It is evening. |
Conclusion 2: | [Probably] Conrad is drinking. |
It is unlikely (though not impossible) that Conrad is drinking in the library. Even if all the premises of this argument were true, the conclusion is not likely. This is because the group of people likely to be in the university library in the evening is different from those who are likely to be drinking.
Sometimes the generalisations in an extended argument will be strong enough to make the final conclusion probable, and sometimes they will not. There is no precise way to determine the probability of the conclusion when using imprecise quantifiers such as “nearly all” and “few”. Instead, consider the number and type of generalisations made and make a judgement call about whether the probability of the conclusion has been diluted too much.
A Note on the Use of “Probably”
When presenting non-deductive arguments in standard form, we often insert “[Probably]” before the conclusion to indicate that the argument is intended to be non-deductive. The square brackets signify that this word is not part of the conclusion or of the argument itself; it merely indicates the type of argument being used. While this can be helpful, it says nothing about whether the argument succeeds. Consider the following example:
Premise 1: | Nearly all dogs have four legs. |
Premise 2: | Fido is a dog. |
Conclusion: | Fido has four legs. |
This is a strong argument. It remains strong regardless of whether “Probably” is placed before the conclusion.
Furthermore, inserting “Probably” before the conclusion does not indicate that an argument is strong, nor will it improve a weak argument. Consider this example:
Premise 1: | Nearly all dogs have four legs. |
Premise 2: | Fido has four legs. |
Conclusion: | [Probably] Fido is a dog. |
This is not a strong argument, as it commits the fallacy of affirming the consequent. The presence of “Probably” cannot change this. You should view “Probably” as a useful way to indicate that an argument is non-deductive, but it does not provide any information about the argument’s success.
Enumerative Inferences
Imagine a turkey living happily on a turkey farm. Each morning, the farmer brings corn for it to eat, which is enough to keep the turkey happy. One morning, as the farmer approaches, the turkey might think, “Hooray, breakfast!”. If the turkey is reasoning at all, it is reasoning non-deductively: every morning so far, the farmer has brought corn, so today he will bring corn again. Unfortunately, it is Christmas morning, and the turkey makes a grave mistake by running happily towards the farmer, who this time is carrying an axe.
The turkey’s reasoning, if it was reasoning at all, was cogent: it was based on true premises that provided substantial reason to believe its conclusion. Nevertheless, the conclusion was false. The moral is that no matter how strong your non-deductive argument, it is still possible for the conclusion to be false.
Someone reasoning in this manner is taking a large number of observed cases and inferring that a pattern will continue. They have collected data and are extrapolating from it to formulate a conclusion. Inferences of this type are sometimes called “inductive inferences”. However, since this is not the only type of induction, we refer to them as “enumerative inferences”. This term reflects the process of collecting a number of cases and reaching a conclusion about a new case based on that list.
Enumerative inferences differ from probabilistic arguments. Consider this probabilistic argument:
Premise 1: | There are 75 black marbles in this bag and 25 white marbles. |
Premise 2: | In my fist is a marble randomly selected from the bag. |
Conclusion: | [Probably] The marble in my fist is black. |
In this argument, the proportions of the contents of the bag are known, and because this is a mathematical example, the degree of probability of the conclusion can be precisely calculated (it is 75% likely that the marble in my fist is black).
Now, suppose I have a bag of 100 marbles, but I know nothing about their colours. I draw out the first 99 marbles, and they are all black. Based on this, I conclude that the 100th marble will also be black. My argument looks like this:
Premise 1: | Marble 1 is black. |
Premise 2: | Marble 2 is black. |
Premise 3: | Marble 3 is black. |
⋮ | |
Premise 99: | Marble 99 is black. |
Conclusion: | [Probably] Marble 100 is black. |
I cannot assign a precise degree of probability to this conclusion; for all I know, the remaining marble could be any colour at all. However, it seems more reasonable to suppose it is black, given the contents of the bag so far, than to suppose it is any other colour.
In everyday life, we reason like this frequently. When I assume the sun will rise tomorrow, I am extrapolating from many instances of the sun rising. This has happened every day of my life, and I expect it to continue. Similarly, I assume that if I get hit by a bus, I will be injured, based on what usually happens when people are hit by buses and my past experiences with large, heavy objects. Even the belief that the laws of physics will continue to apply is justified through an enumerative inference. Such arguments are very important and useful.
Not all enumerative inferences are strong, and they can be difficult to assess. Consider the marbles example again. When I know there are 100 marbles in the bag and the first 99 are black, it seems reasonable to conclude that the 100th marble will be black. But what if I did not know how many marbles were in the bag? What if I had only drawn 10 marbles? Can I still justifiably conclude that the next marble will be black?
Several factors must be considered when assessing an enumerative inference:
- sample size
The more data collected, the stronger the enumerative inference. This is why an inference about the colour of the next marble is stronger when 99 marbles have been tested compared to only 9. The sample size of sunrises is enormous, making us very confident that the sun will rise tomorrow.
- sample size relative to the total population
If I know there are a million marbles in the bag and I have tested 99, I will feel less confident about the next marble than if there are 100 marbles in the bag and I have tested 99.
The size of the total population can also vary depending on the conclusion. Sometimes a conclusion is about the next case alone. Consider:
Argument 1:
Premise 1: | The sun has risen every day of my life. |
Conclusion: | [Probably] The sun will rise tomorrow. |
I feel very confident about this conclusion. The sample size is all the days of my life, and the total population is all the days of my life plus one (i.e., the next day). The sample is a large proportion of the total population.
Compare:
Argument 2:
Premise 1: | The sun has risen every day of my life. |
Conclusion: | [Probably] The sun will rise every day for the rest of my life. |
and
Argument 3:
Premise 1: | The sun has risen every day of my life. |
Conclusion: | [Probably] The sun will rise every day forever. |
In Argument 2, the total population is unknown, but I optimistically assume I am halfway through my life. This means I am extrapolating from known cases to about the same number of future cases.
In Argument 3, the conclusion is so broad that it is unlikely to be true. We know the world will end someday, so there will eventually be a last day. A conclusion that extends too far beyond its sample results in a sample that is a very small proportion of the total population.
- sample collection method
Suppose I am given 10 bags of marbles, each containing 10 marbles. If I take 10 marbles from the first bag and they are all black, I have some reason to think all the marbles in all the bags are black, but it is not a particularly strong reason. If I take one marble from each bag and they are all black, I have a much stronger reason to think all the marbles are black.
Generally, the more data collected and the more random the data collection, the stronger the enumerative inference. If you find yourself rejecting almost all enumerative inferences, your standard for reasonableness is likely too high. We use these inferences all the time, and it would be impossible to function without them. You would have no reason not to step in front of a bus.
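The bags example can be simulated. The composition here is a hypothetical worst case, chosen to show why a spread-out sample is more informative: nine bags contain only black marbles, and one contains only white.

```python
# Hypothetical composition: nine bags of black marbles, one bag of white.
bags = [["black"] * 10 for _ in range(9)] + [["white"] * 10]

# Method 1: take all ten marbles from the first bag.
from_one_bag = bags[0]

# Method 2: take one marble from each bag.
one_from_each = [bag[0] for bag in bags]

print("white" in from_one_bag)   # False -- this sample misses the white bag entirely
print("white" in one_from_each)  # True -- the spread-out sample catches it
```

The single-bag sample would lead us, wrongly, to conclude that all the marbles are black; the spread-out sample of the same size does not.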
It is important to note that the possibility of being wrong is not sufficient grounds for rejecting an enumerative inference. This possibility of inferring a false conclusion from true premises is a feature of all non-deductive arguments. Consider the turkey example again. The turkey is justified in its conclusion, even though it will eventually be wrong. One day the sun will not rise, but that does not mean you are unjustified in believing it will rise tomorrow.
Arguments from Samples
When we sample or survey some (but not all) members of a group and then draw a conclusion about the group as a whole, we are engaging in non-deductive reasoning. Consider the following example:
A nationwide poll of a random sample of thousands of homeowners revealed that 70% of them oppose increases in social welfare payments. Therefore, it was concluded that roughly 70% of the adult population of New Zealand opposes such increases.
In this case, there was an evident issue with the argument: all the individuals surveyed were homeowners, yet the conclusion was drawn about the entire adult population, not just homeowners. The sample was not representative.
Now, suppose we conduct a more accurate survey: instead of only asking homeowners, we draw our sample randomly from the adult population of New Zealand. Suppose the results indicate that 55% of the sample of thousands of adult New Zealanders oppose increases in social welfare payments. We then conclude that 55% of all adult New Zealanders oppose such increases.
This is a stronger argument than the previous one because the sample from which we are generalising is representative (as far as we can tell) of the wider population. However, it is important to note that the argument remains non-deductive. Unless every single member of the wider population is polled (in which case it is no longer an argument from a sample), the conclusion that what is true of the sample is also true of the wider population is not guaranteed.
In addition to the representativeness of the sample, we must also consider the size of the sample. In the example above, if we had only surveyed 10 randomly selected New Zealand adults instead of thousands, we should not generalise the results to the entire adult population of New Zealand as the sample size would be much too small.
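A standard statistical rule of thumb (not part of this chapter, but consistent with its point) quantifies how much sample size matters: the 95% margin of error for a proportion p observed in a random sample of n people is roughly 1.96·√(p(1−p)/n). Taking “thousands” as 2,000 for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# 55% opposed in a random sample of 2,000: the poll pins the figure down well.
print(round(margin_of_error(0.55, 2000), 3))  # about 0.022, i.e. +/- 2 points

# The same 55% from only 10 respondents tells us almost nothing.
print(round(margin_of_error(0.55, 10), 3))    # about 0.308, i.e. +/- 31 points
```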
Analogy
An analogy highlights the similarities between two different things.
For example, if I say, “The mind is like a computer: it takes certain inputs, processes them, and then produces results”, I am drawing an analogy. I am suggesting that the mind and a computer are alike in some way. However, I am not presenting an argument or drawing any conclusion from this analogy. I am merely drawing parallels, perhaps to encourage you to think about the mind differently or to illustrate how the mind functions.
An argument by analogy, on the other hand, involves pointing out the similarities between two or more things and then drawing a conclusion based on those similarities.
Consider a scenario where I am deciding whether to purchase a particular car. It is a ten-year-old Honda Civic with 75,000 kilometres on the odometer, a little rust, and only one previous owner who drove it carefully and serviced it regularly.
I might reason as follows:
My previous car was a Honda Civic, and when I bought it, it was ten years old with 75,000 kilometres on the odometer, a little rust, and only one previous owner who had maintained it well. That car served me well for five years, requiring minimal repairs. Therefore, this car, being similar to my last one, will likely serve me well too, and I should buy it.
Premise 1: | Car A was a Honda Civic, ten years old, with 75,000 kilometres, a little rust, well-maintained, and it served me well for five years with minimal repairs. |
Premise 2: | Car B is a Honda Civic, ten years old, with 75,000 kilometres, a little rust, well-maintained. |
Conclusion: | Car B will last five years and require minimal repairs. |
General Structure of Arguments by Analogy:
Premise 1: | A has characteristics W, X, Y, and Z. |
Premise 2: | B has characteristics W, X, and Y. |
Conclusion: | B will also have characteristic Z. |
Considerations for Evaluating an Argument by Analogy:
- How similar are the things being compared?
- Are the similarities relevant? (For example, if the similarities mentioned were all about colour, which is irrelevant to the car’s performance, it would not be a strong argument.)
- Are there any relevant differences between the things being compared?
- How many similar cases are we dealing with? (For instance, if I had owned three cars with similar characteristics that all served me well, the argument would be stronger.)
Causal Reasoning
Suppose several individuals experience upset stomachs after a dinner party. Here are the details of what the various attendees consumed:
Foods eaten by those who became ill:
- person A: ham, potato salad, coleslaw
- person B: ham, rice salad, lettuce salad
- person C: ham, pasta salad, carrot salad.
Foods eaten by those who did not become ill:
- person D: chicken, rice salad, coleslaw
- person E: sausages, pasta salad, lettuce salad
- person F: bean salad, potato salad, carrot salad.
It is likely that the ham caused the illness.
Why?
- All those who became ill consumed ham.
- All those who did not become ill did not consume ham.
- There is no other food item that was exclusively consumed by those who became ill.
- Consuming spoiled ham is a known cause of upset stomachs, and we understand the mechanism by which this occurs (unlike, for example, a shared characteristic such as wearing red shirts).
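The elimination reasoning in these bullet points can be written out as set operations: find what every ill guest ate, then strike out anything a well guest also ate. A sketch using the menu above:

```python
# Foods eaten by each guest, from the example above.
ill = [
    {"ham", "potato salad", "coleslaw"},             # person A
    {"ham", "rice salad", "lettuce salad"},          # person B
    {"ham", "pasta salad", "carrot salad"},          # person C
]
well = [
    {"chicken", "rice salad", "coleslaw"},           # person D
    {"sausages", "pasta salad", "lettuce salad"},    # person E
    {"bean salad", "potato salad", "carrot salad"},  # person F
]

# Foods every ill guest ate, minus anything any well guest ate.
suspects = set.intersection(*ill) - set.union(*well)
print(suspects)  # {'ham'}
```

Only the ham survives the elimination, which is exactly why it is the prime suspect.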
It is important to note that the ham could have caused the illness even if not all ham-eaters became ill. Consuming ham might have increased the probability of illness without guaranteeing it, as some individuals may have stronger constitutions.
Causal statements assert that one thing causes or does not cause another. For example, smoking causes lung cancer, drinking coffee after dinner keeps me awake, and reading logic textbooks after dinner makes me sleepy.
Causal statements are common in both everyday conversation and scientific research. Understanding the effects of actions is crucial for decision-making. Doctors need to know the causes of diseases to treat them effectively, and airlines need to determine the causes of plane crashes to prevent future incidents.
Causal arguments consist of a causal claim supported by reasons for believing that claim. For instance, if American Airlines claims that a plane crash occurred because the altimeter malfunctioned and visibility was poor due to low clouds, their reasons might include records showing the altimeter reading fifteen thousand feet just before the crash, despite the mountain’s summit being far below that altitude, and a tape recording of the pilot’s exclamation upon seeing the mountain emerge from the fog. Listing these reasons as premises and the causal claim as the conclusion forms a causal argument.
Causal arguments are non-deductive. In the plane crash example, even with all the evidence, it is not 100% certain what caused the crash. However, it can still be a strong argument.
Consider a more general causal claim: Attending St Peter’s Cambridge leads to better NCEA results. Suppose statistical analysis shows that the average marks of St Peter’s students are higher than the national average. Does this provide good reason to believe the causal claim?
No, not on its own. Correlation does not imply causation. Other possibilities should be considered before accepting such a causal argument:
- Coincidence: It might be pure chance that St Peter’s students performed better.
- Common cause: There might be an underlying factor that both increases the likelihood of attending St Peter’s and achieving good marks, such as having wealthy parents or parents who prioritise education. To rule out these alternatives, a more complex study should be conducted. Compare a group of students similar to St Peter’s students in all relevant respects except for the school they attend. If St Peter’s students perform better than this control group, and all other relevant factors have been considered, the causal claim is more justified.
- Opposite direction of causation: Sometimes, the cause and effect are mistaken. For example, New Hebrides Islanders believed lice caused good health because healthy individuals were infested with lice, while sick individuals were not. In reality, lice left their hosts when they developed a fever. Thus, getting sick caused the absence of lice, not the other way around. Observing the order of events can help determine the direction of causation.
Having a theory to explain the causal process is also important. Discovering that lice do not like high temperatures provides additional reason to believe that illness causes the absence of lice rather than the absence of lice causing illness.
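The common-cause worry can be made concrete with a toy simulation (the school, probabilities, and mark values here are all invented for illustration): parental wealth raises both the chance of attending a hypothetical selective school and exam marks directly, while the school itself contributes nothing. The attendees still come out ahead on average, which is exactly why the bare statistic does not establish the causal claim.

```python
import random

random.seed(0)

# Toy model: wealth is a common cause of school attendance and marks.
# The school itself has NO causal effect on marks in this model.
def simulate_student():
    wealthy = random.random() < 0.3                       # 30% of students are wealthy
    attends = random.random() < (0.6 if wealthy else 0.1) # wealth raises attendance odds
    marks = 60 + (10 if wealthy else 0) + random.gauss(0, 5)  # wealth raises marks directly
    return attends, marks

students = [simulate_student() for _ in range(100_000)]
school = [marks for attends, marks in students if attends]
other = [marks for attends, marks in students if not attends]

def avg(xs):
    return sum(xs) / len(xs)

# Attendees score higher on average despite zero causal effect from the school.
print(f"Attendees' average mark:     {avg(school):.1f}")
print(f"Non-attendees' average mark: {avg(other):.1f}")
```

The correlation here is real, but it is produced entirely by the lurking variable, which is why a study must control for such factors before the causal claim is accepted.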
Here is a causal argument:
Premise 1: Most people who take mega-doses of Vitamin C when they have a cold recover within a week.

Conclusion: Mega-doses of Vitamin C cure colds.
We are justified in believing this conclusion only if we have considered and ruled out likely alternatives.
It might be that people naturally recover from colds within a week, with or without Vitamin C. This can be tested by collecting data on the recovery speed of people who do not take mega-doses of Vitamin C.
Alternatively, those who take Vitamin C might also engage in other health-promoting behaviours, such as eating chicken soup and going to bed early, which could be the actual factors contributing to their quick recovery. To test this, observe a control group that is similar to the test group in all relevant respects (diet, sleeping habits, etc.) except for taking Vitamin C, and compare the two groups.
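The control-group comparison just described can be sketched as a toy simulation (all the behaviours and probabilities are invented for illustration): recovery in this model depends only on getting extra rest, but Vitamin C takers happen to rest more often, so the raw comparison flatters Vitamin C. Comparing only people with the same rest habits makes the apparent effect disappear.

```python
import random

random.seed(1)

# Toy model: recovery within a week depends on rest, not on Vitamin C.
# Vitamin C takers just happen to rest more often (a lurking variable).
def person(takes_vitc):
    rests = random.random() < (0.8 if takes_vitc else 0.4)
    recovers = random.random() < (0.9 if rests else 0.5)
    return rests, recovers

vitc_group = [person(True) for _ in range(50_000)]
control = [person(False) for _ in range(50_000)]

def rate(group):
    return sum(recovers for _, recovers in group) / len(group)

# Raw comparison: Vitamin C takers recover more often...
print(f"Vitamin C group recovery rate: {rate(vitc_group):.2f}")
print(f"Control group recovery rate:   {rate(control):.2f}")

# ...but matching on rest habits removes the apparent effect.
rested_vitc = [recovers for rests, recovers in vitc_group if rests]
rested_control = [recovers for rests, recovers in control if rests]

def frac(xs):
    return sum(xs) / len(xs)

print(f"Rested, with Vitamin C:    {frac(rested_vitc):.2f}")
print(f"Rested, without Vitamin C: {frac(rested_control):.2f}")
```

Matching the groups on everything except the factor under test is what lets the comparison bear on the causal claim rather than on the lurking variable.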
Inference to the Best Explanation
An inference to the best explanation starts from a phenomenon or observation that calls for explanation and concludes that the best available explanation of it is true. Sherlock Holmes’ reasoning presented earlier is likely an example of an inference to the best explanation, although he does not explicitly outline all the steps. For instance, he concludes that Watson has resumed practising medicine from the bulge in Watson’s top hat, which would be caused by carrying a stethoscope (implicitly assuming that only practising doctors carry stethoscopes).
To fully articulate the argument, Holmes would need to consider alternative explanations for the bulge in the top hat and justify why the explanation that Watson has resumed practising medicine is the best one. Even so, it is evident that this is Holmes’ line of reasoning: Watson’s return to medicine is the best explanation for the bulge in his top hat, and that gives us reason to believe that Watson has indeed resumed his medical practice. This form of reasoning is common in both detective stories and science. Often, the reason we believe in certain unobservable entities (such as electrons) is that their existence provides a good explanation of observable phenomena (e.g., the fact that lights turn on when a switch is flipped).
The argument from design, which argues for the existence of God, can be construed as an inference to the best explanation. It can be outlined as follows:
Observations:
- Organisms are complex and intricate.
- They are well adapted to their surroundings.
- Their parts work together to enable the whole organism to function.
Possible Explanations:
1. God designed organisms to be just the way they are.
2. Organisms evolved by natural selection without any supernatural involvement.
3. Organisms evolved by natural selection, but God designed them to do so.
4. God created organisms 6,000 years ago in such a way that it appears they have been around much longer and have evolved.
5. Organisms came to be the way they are through completely random processes.
Explanation 5 is not a very good explanation. When evaluating which explanation is best, the first consideration is whether the observations would be surprising if the explanation were true, or whether they would be expected. Typically, we seek explanations for surprising phenomena, and a good explanation makes the phenomenon unsurprising. Explanation 5 fails this test: it leaves the complexity and intricacy of organisms unexplained.
The other four explanations pass this initial test. To complete the argument, since the conclusion is that God exists, we need reasons to believe that 1, 3, or 4 is a better explanation than 2. There is evidence against 1 (such as fossils and vestigial organs). However, there is no scientific evidence that decisively distinguishes between 2, 3, and 4. One reason for preferring 2 might be its simplicity; one reason for preferring 3 might be that it explains more, such as the origins of life, which 2 does not. (We are not resolving this question here; this is merely an illustration of how inferences to the best explanation work.)
Another example: suppose you observe that milkmaids do not contract smallpox even when smallpox is widespread. The explanation might be that milkmaids contract cowpox, a relatively mild illness, which provides immunity to smallpox.
The fact that milkmaids do not contract smallpox does not conclusively prove that cowpox provides immunity to smallpox. There could be other explanations. Perhaps cows have magical properties that protect those who spend time with them from smallpox. Perhaps milkmaids consume more milk, which contains a substance that protects against smallpox.
What makes these alternative explanations less plausible?
- Consistency with other accepted theories: What seems like a good explanation depends partly on your background assumptions. For example, I assume that magic does not operate in the world, so I do not need to appeal to magic to explain everyday phenomena. You might not share this assumption.
- Results of experimental testing: Do other milk drinkers have immunity from smallpox? (As it happens, they do not.)
Inferences from evidence to explanations are not deductively valid. It is always possible that the explanation is incorrect, despite the evidence.
Chapter Attribution
Content adapted, with editorial changes, from:
How to think critically (2024) by Stephanie Gibbons and Justine Kingsbury, University of Waikato, is used under a CC BY-NC licence.