Chapter 5: Logical Fallacies

What does it mean to say that an argument contains a logical fallacy? Simply put, a logical fallacy is an error in reasoning. While most people try to be reasonable, there are a number of ways that humans frequently make errors in judgment. Looking for logical fallacies is an excellent way of evaluating the rigor of a source. Avoiding logical fallacies in academic writing is also important to make sure that the work that is submitted reflects a critical and informed thought process. Note that most people, including academics and researchers, are prone to errors in reasoning. Therefore, it’s important to always evaluate logical progressions, even (or especially) when we agree with the conclusions.


Argument from anecdote

The anecdotal fallacy is when someone uses personal experience or an isolated example as a means of proving a point (or of discrediting a related point). One reason people are vulnerable to the anecdotal fallacy is that narratives and personal experiences can feel more “real” than numbers or statistics. For example, if a study says that a particular car manufacturer is more likely to have safety problems than a competitor (e.g., 200 problems per 1000 vehicles compared to 50 problems per 1000 vehicles), those numbers are less emotionally powerful than my personal experience with a car made by that manufacturer that I owned for 10 years “without a problem.” Unfortunately, those stories don’t actually make the favored manufacturer safer. They just mean that the car I owned was fine, perhaps because it was never placed in a situation where the safety concerns were an issue.
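To see why such anecdotes are expected even when the statistics are accurate, it helps to do a quick calculation. The sketch below (in Python, using only the hypothetical rates from the example above, not real safety data) shows that most owners of the less safe manufacturer’s cars will still honestly report no problems:

```python
# Hypothetical rates from the example above: 200 problems per 1000
# vehicles vs. 50 per 1000. Assume each vehicle independently either
# has a safety problem or does not.
rate_worse = 200 / 1000    # 20% of the first manufacturer's vehicles
rate_better = 50 / 1000    # 5% of the competitor's vehicles

# Probability that one randomly chosen owner never sees a problem:
print(f"Trouble-free odds, less safe manufacturer: {1 - rate_worse:.0%}")
print(f"Trouble-free odds, safer manufacturer:     {1 - rate_better:.0%}")
# Even for the less safe manufacturer, four out of five owners can
# honestly say their car was fine, so one trouble-free story tells
# us almost nothing about the underlying rates.
```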

This does not mean that a source is flawed as soon as it uses narrative or example. What it means is that a narrative or example does not prove anything. It doesn’t even really do much to support something. However, a narrative or example can do a great job of explaining or of illustrating something.

What’s the difference?

The difference is in terms of how the narrative is being used. Imagine an argument that involves deciding whether or not someone should eat crunchy vegetables as part of a healthy diet. A story about a man who choked on a carrot and died does not prove that eating crunchy vegetables is more dangerous than the alternative. However, if I have already established that crunchy vegetables are rich in fiber, and that fiber is part of a healthy diet, then a story about a man who suffered from digestion problems until he added crunchy vegetables is not being asked to prove anything. It is putting a human face on a point that has already been made.

In crude terms, the story never proves the point. Instead, the point must be proven with actual evidence, and that evidence can then be supported with an example that shows the process or event in action. Students conducting research need to be wary of sources that use examples or individual stories to “prove” things instead of proving the point through other means and then adding stories as illustration.

Proof in this case would need to include all of the same basic components of any chain of argument. First, is what we think we are seeing really there? In other words, when people start eating crunchy vegetables, do we know that it’s the vegetables that are providing the benefit, or is it something else (for example, just the act of thinking about being healthier)?

One way to examine this is to see what happens if people think about being healthier and are given non-crunchy vegetables to eat (but are told they are healthy) while other people of the same age and general health are given crunchy vegetables to eat. Over enough time and with enough difference in results, it becomes reasonable to say that the vegetables (or the belief) are involved in the difference.
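A rough simulation can make this experimental design concrete. The sketch below is purely illustrative: the group sizes, baseline score, and effect sizes are all invented, and the only thing it borrows from the text is the comparison between a belief-only group and a group that actually eats crunchy vegetables.

```python
import random

random.seed(42)

def health_score(true_effect, belief_effect):
    """One person's health score: an invented baseline plus effects plus noise."""
    return 50 + true_effect + belief_effect + random.gauss(0, 5)

# Control group: believes the food is healthy, but gets no real benefit.
control = [health_score(true_effect=0, belief_effect=2) for _ in range(500)]
# Treatment group: same belief, plus an actual (invented) effect of the vegetables.
treatment = [health_score(true_effect=5, belief_effect=2) for _ in range(500)]

mean = lambda xs: sum(xs) / len(xs)
print(f"Control mean:   {mean(control):.1f}")    # roughly 52
print(f"Treatment mean: {mean(treatment):.1f}")  # roughly 57
# A persistent gap between the groups suggests the vegetables
# themselves, and not just the belief, are doing something.
```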

Note that the experiment described above helps to answer a second question: which is the cause and which is the effect? Assume for the moment that there is in fact some sort of correlation between eating crunchy vegetables and being healthier. Let’s imagine that we have noticed a trend in people in general, in that the people who eat crunchy vegetables are only half as likely to get certain diseases as people who do not. Great! Does that prove that eating crunchy vegetables is healthy? It does not. Why not? Because we still need to determine whether being healthy makes people crave crunchy vegetables, or whether eating crunchy vegetables actually makes people healthier. Actually, it’s more complicated than this, because the correlation could be coincidence (it just sort of happened) or it could be caused by a third factor (people who are well-off enough to have access to fresh and crunchy vegetables might also have access to other things that make them healthy, like fresh air and enough free time to exercise).
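The “third factor” possibility can also be made concrete with a toy model. In the hypothetical sketch below, an invented affluence variable drives both vegetable access and health, while the vegetables themselves do nothing, and the two still end up strongly correlated:

```python
import random

random.seed(1)

def person():
    """Vegetable consumption and health both track affluence; neither causes the other."""
    affluence = random.gauss(0, 1)
    vegetables = affluence + random.gauss(0, 1)
    health = affluence + random.gauss(0, 1)
    return vegetables, health

data = [person() for _ in range(10_000)]
veg = [v for v, h in data]
hea = [h for v, h in data]

# Pearson correlation, computed by hand to stay dependency-free.
n = len(data)
mv, mh = sum(veg) / n, sum(hea) / n
cov = sum((v - mv) * (h - mh) for v, h in data) / n
sd = lambda xs, m: (sum((x - m) ** 2 for x in xs) / n) ** 0.5
print(f"correlation: {cov / (sd(veg, mv) * sd(hea, mh)):.2f}")  # about 0.5
# Vegetables and health correlate even though neither causes the
# other; the invented third factor (affluence) drives both.
```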

What this means is that any story, true or hypothetical, is typically only useful in explaining what other, more objective, evidence shows. However, stories can be really good at providing that kind of explanation!


The Ad Populum Fallacy

Coming from the Latin for “to the people” (and sometimes called the bandwagon fallacy), this fallacy involves thinking that because an idea is popular or believed by many, the idea must be valid. In arguments, it frequently manifests as assuming that a point does not need to be proven or established because “everyone knows” it is true already. Another version assumes that the more popular idea is inherently “better” than the less popular idea. Reality, however, is not democratic in nature. Consider the once widespread belief that “people only use 10% of their brain,” which alleges that most people have vast reserves of intellectual capacity that go untapped. This single misconception has been used to justify a number of other beliefs and suppositions. However, there is very little evidence to support it and a significant amount of evidence to contradict it.

Someone appealing to the belief that “everyone knows we only use 10% of our brain” is actually beginning from a flawed position. Not only do a number of people know better than to believe this falsehood, but the fact that many people believe it does not make it true.

This creates a very tricky balancing act. First, someone engaging in argument needs to be sure that they use sources that do not make assumptions based on what “everyone knows,” but it is often difficult to sort out what needs to be proven and what can be assumed for the sake of efficiency. For example, even very basic neurology texts do not bother to explain how the authors know that human beings have brains. The simple test is whether or not support is at least available on demand, and whether or not that support is independent of the opinions or beliefs of individuals. A more complex test is whether or not the argument in question depends on the assumption.

If an argument talks about psychic powers being potentially real because they tap into the “unused 90%” of the brain, then that argument depends upon the belief that a significant portion of the brain goes unused. On the other hand, if an argument talks about how people learn new languages and it references the existence of brains, but the data is present to show that languages really are acquired in a consistent fashion, then it no longer really matters (for this argument and only for this argument) whether the organ doing the learning is the brain or the heart. Indeed, once upon a time, many people did believe that thinking took place in the heart; they were no more right then than they would be now, because belief doesn’t make something real.


Correlation Fallacy

One consistent mistake people make is seeing that two events occur in proximity to one another and then trying to associate them by saying that one caused the other. Because this is sometimes the case, it can be easy for researchers to trick themselves into thinking that it is always the case. However, correlation does not mean causation, and oftentimes the cause can be mistaken for the effect, the cause and effect can both be effects of a third factor, or they can simply be unrelated phenomena.

Take for example the age-old adage in American football that teams that run the ball more often are likely to win. Even if this turned out to be true statistically, that fact does not tell us whether teams that are winning run the ball in order to slow down the game (and thereby preserve their lead) or whether running the ball helped them get the lead (thereby demonstrating a usable strategy). Which is the cause and which is the effect? As has been pointed out by the statistical analysis organization Football Outsiders multiple times, the actual data supports the trend of running to protect a lead, not running to create a lead. In fact, two or more instances in a row of a quarterback kneeling with the football have a higher correlation with winning than running does, but few people would suggest kneeling to win. Why? Because here it is more obvious which is the cause and which is the effect.

However, many people will observe an event and assume that the event causes things that come afterward (this is a specific fallacy called Post Hoc, Ergo Propter Hoc: after the thing, therefore because of the thing). One of the original arguments made to allege the (completely disproven) link between autism and MMR vaccines was to note that autism rates seemed to increase as vaccination rates increased. Even beyond the biological and chemical flaws with these claims, the purely numerical problems with this misuse of statistics included ignoring the simultaneous increase in efforts to correctly identify autism (in other words, actual rates didn’t go up, but tracking improved) and ignoring the fact that when vaccination rates later fell, autism rates continued to climb.

A good way of examining whether or not an event is causing other events is by looking for controls on the effect (natural or artificial) and looking for mechanisms of action. What does this mean? The first means that cause and effect can be evaluated when one is absent and the other still happens. If autism rates climb (or hold steady) even as vaccination rates decline, then it seems unlikely that the MMR vaccine is causing the increase. Likewise, if autism rates hold steady (or go down) as vaccination rates increase, that also creates a disconnect.
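This kind of “natural control” check can be illustrated with invented numbers. In the sketch below, the supposed cause rises for ten periods and then falls, while the effect climbs steadily the whole time; the effect’s indifference to the cause’s reversal is exactly the kind of disconnect described above.

```python
# Invented time series for illustration only.
cause = [50 + 3 * t for t in range(10)] + [80 - 3 * t for t in range(10)]
effect = [10 + 2 * t for t in range(20)]

# If the cause were real, the effect should slow or dip once the cause falls.
rise_phase = effect[9] - effect[0]     # change in effect while cause rises
fall_phase = effect[19] - effect[10]   # change in effect while cause falls
print(f"effect change while cause rises: +{rise_phase}")  # +18
print(f"effect change while cause falls: +{fall_phase}")  # +18
# The effect grows at the same rate either way, which undercuts the
# claim that the first series is driving the second.
```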

The second crosscheck is trickier, and it again requires that researchers understand what they are looking at. Is there a reasonable explanation for how the supposed cause could actually create the supposed effect? In the case of the MMR vaccination issue, an uneducated person might believe this was the case: some versions of the vaccine used a form of mercury as a preservative, and some forms of mercury had been shown to have neurological effects on people.

However, there are a number of problems with this explanation. First, not all versions of the MMR vaccine had this preservative, and the data for autism rates were roughly the same where the preservative was used and where it was not. Second, the form of mercury used in the vaccines was not the same form of mercury that had the neurological effects in question. Third, the types of neurological effects associated with mercury are not similar to autism. There are other, technical, issues as well (such as the absence of lingering mercury poisoning in the subjects who were studied). In short, knowing even a little bit more about the subject does much to undermine the connection.

However, a simple search engine query will still reveal a number of people who repeat the disproven, logically unsound argument.


The Middle Ground Fallacy

Many people have a very well-meaning impulse to seek compromise and to try to be “fair,” claiming that everyone might have a valid point. Unfortunately, this reflex is very easy to manipulate. Worse, it is grounded in an emotional reaction and not in evidence. To some extent, it depends upon the bandwagon/ad populum fallacy. Essentially, the middle ground fallacy assumes that if two (or more) people have differing opinions, then each of them must have at least some validity to their points. Therefore, the “right” answer must involve compromise.

Imagine that some people believe that the pyramids of Egypt were created by architects and laborers thousands of years ago using a combination of engineering principles and hard work, building off of the earlier burial structures known as mastabas. Now imagine that other people believe that the same pyramids were made without local predecessors by extraterrestrials using antigravity technology. The middle ground fallacy would suggest that some pyramids were made by aliens, or perhaps that aliens made the mastabas and then left behind their antigravity devices so the locals could figure out pyramids on their own.

While this example is (hopefully) laughable, trickier examples exist. In fact, canny manipulators can end up presenting their own viewpoints as the compromise. If I want college tuition to stay the same as it currently is, all I need to do is find someone arguing that tuition needs to be raised and someone else saying that it needs to be lowered. I then present my “fair” compromise of keeping it constant!

Sources that commit the middle ground fallacy frequently do so by offering incomplete support for one of the two extreme positions, arguing that simply finding people who believe those positions is enough of a reason to offer a compromise (for why this doesn’t hold up, see the earlier section on ad populum). Instead of defaulting to compromise, a strong argument should present evidence in support of its position and put countervailing evidence into context.

Consider the pyramids example. There is strong archaeological, historical, and cultural evidence in support of the “architects and laborers” origin of these works. There is nothing resembling the evidence for the “extraterrestrials and antigravity” explanation. There is therefore no need for compromise.


Observational Selection

Observational selection is the fallacy of counting only the examples that support your position and ignoring the rest. Also known as card-stacking, cherry-picking, and the fallacy of exclusion, it is a fallacy because it does not take into consideration all (or even most) of the data. The classic illustration is how believers in astrology frequently count the times that a horoscope or other prediction seems to turn out “right” as evidence in favor of astrology while ignoring the predictions that did not work out as evidence against astrology.

In student writing, an equivalent might be a pro-gun rights paper where a student mentions the number of times citizens with guns stop violent crimes while ignoring the violent crimes committed by citizens with guns.

One of the easiest ways for sources to cherry-pick data is to offer numbers or stories without context. If my friend tells me that he tripled the number of books he read from one year to the next, that doesn’t tell me if he added two books (from one to three) or if he added twenty (from ten to thirty).
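A quick worked example shows how much the missing baseline matters. Both hypothetical readers below “tripled” their yearly book count, but the absolute changes are very different:

```python
# Two invented readers who each "tripled" their yearly book count.
for before, after in [(1, 3), (10, 30)]:
    print(f"{before} -> {after}: {after // before}x increase, "
          f"+{after - before} books in absolute terms")
# 1 -> 3:   3x increase, +2 books in absolute terms
# 10 -> 30: 3x increase, +20 books in absolute terms
# A source quoting only the ratio leaves out the context a reader
# needs to judge how big the change really is.
```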

