Essay 1 Instructions
Reread pages 397-404, “Informal Fallacies,” in Writing Arguments and review the handout from Yourlogicalfallacyis.com. Think about when you have encountered fallacies in your life: times when you have perpetrated fallacies and times when you have accepted them. Pick a prominent example. Please note that I do not want you to invent a fallacy.
Using what you have learned about fallacies, analyze the example you have selected. Be as specific and detailed as possible. Be sure to identify which fallacy you have chosen from those in the text or from the handout. Draw conclusions and explain how you plan to use your newfound skills in the future.
Although content is going to be primary, organization, spelling, and grammar are also important. Ensure that they are all under your control before you hand anything in. In other words, I expect college-level writing. You should use this as an opportunity to show off your ability to think clearly, precisely, and logically, as well as to write elegantly! Treat all your writing as if it were a job application. It is hard for me to imagine that you could even begin to cover this topic in fewer than 750 words. To ensure a good grade, you will probably want to write more!
To receive full credit for this assignment, you must submit your writing to Brightspace by noon on the day specified in the syllabus. As always, ensure that your work is in Microsoft Word format and uses APA formatting.
(Pages 397-404 inserted below):
In this appendix, we look at ways of assessing the legitimacy of an argument within a real-world context of probabilities rather than within a mathematical world of certainty. Whereas formal logic is a kind of mathematics, the informal fallacies addressed in this appendix are embedded in everyday arguments, sometimes making fallacious reasoning seem deceptively persuasive, especially to unwary audiences. We begin by looking at the problem of conclusiveness in arguments, after which we give you an overview of the most commonly encountered informal fallacies.
In real-world disagreements, we seldom encounter arguments that are absolutely conclusive. Rather, arguments are, to various degrees, “persuasive” or “nonpersuasive.” In the pure world of formal logic, however, it is possible to have absolutely conclusive arguments. For example, an Aristotelian syllogism, if it is validly constructed, yields a certain conclusion. Moreover, if the first two premises (called the “major” and “minor” premises) are true, then we are guaranteed that the conclusion is also true. Here is an example:
All ducks are feathered animals. (major premise)
Quacko is a duck. (minor premise)
Therefore, Quacko is a feathered animal. (conclusion)
This syllogism is said to be valid because it follows a correct form. Moreover, because its premises are true, the conclusion is guaranteed to be true. However, if the syllogism follows an incorrect form (and is therefore invalid), we can’t determine whether the conclusion is true. Here is an example of an invalid syllogism:
All ducks are feathered animals. (major premise)
Clucko is a feathered animal. (minor premise)
Therefore, Clucko is a duck. (invalid conclusion)
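To make the contrast easier to see at a glance, the two patterns can also be written in modern predicate-logic shorthand. (This notation is our own gloss, not part of the textbook’s pages; here D means “is a duck” and F means “is a feathered animal.”)

\[ \text{Valid form: } \forall x\,\bigl(D(x) \rightarrow F(x)\bigr),\; D(\text{Quacko}) \;\vdash\; F(\text{Quacko}) \]
\[ \text{Invalid form: } \forall x\,\bigl(D(x) \rightarrow F(x)\bigr),\; F(\text{Clucko}) \;\nvdash\; D(\text{Clucko}) \]

In the first pattern the conclusion follows necessarily from the premises; in the second it does not, which is exactly the “formal fallacy” described next.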
In the valid syllogism, we are guaranteed that Quacko is a feathered animal because the minor premise states that Quacko is a duck and the major premise places ducks within the larger class of feathered animals. But in the invalid syllogism, there is no guaranteed conclusion. We know that Clucko is a feathered animal but we can’t know whether he is a duck. He may be a duck, but he may also be a buzzard or a chicken. The invalid syllogism thus commits a “formal fallacy” in that its form doesn’t guarantee the truth of its conclusion even if the initial premises are true. From the perspective of real-world argumentation, the problem with formal logic is that it isn’t concerned with the truth of premises. For example, the following argument is logically valid even though the premises and conclusion are obviously untrue:
Even though this syllogism meets the formal requirements for validity, its argument is ludicrous. In this appendix, therefore, we are concerned with “informal” rather than “formal” fallacies because informal fallacies are embedded within real-world arguments addressing contestable issues of truth and value. Disputants must argue about issues because they can’t be resolved with mathematical certainty; any contestable claim always leaves room for doubt and alternative points of view. Disputants can create only more or less persuasive arguments, never conclusive ones.
The study of informal fallacies remains the murkiest of all logical endeavors. It’s murky because informal fallacies are as unsystematic as formal fallacies are rigid and systematized. Whereas formal fallacies of logic have the force of laws, informal fallacies have little more than explanatory power. Informal fallacies are quirky; they identify classes of less conclusive arguments that recur with some frequency, but they do not contain formal flaws that make their conclusions illegitimate no matter what the terms may say. Informal fallacies require us to look at the meaning of the terms to determine how much we should trust or distrust the conclusion. In evaluating arguments with informal fallacies, we usually find that arguments are “more or less” fallacious, and determining the degree of fallaciousness is a matter of judgment.

Knowledge of informal fallacies is most useful when we run across arguments that we “know” are wrong, but we can’t quite say why. They just don’t “sound right.” They look reasonable enough, but they remain unacceptable to us. Informal fallacies are a sort of compendium of symptoms for arguments flawed in this way. We must be careful, however, to make sure that the particular case before us “fits” the descriptors for the fallacy that seems to explain its problem. It’s much easier, for example, to find informal fallacies in a hostile argument than in a friendly one simply because we are more likely to expand the limits of the fallacy to make the disputed case fit.

In arranging the fallacies, we have, for convenience, put them into three categories derived from classical rhetoric: pathos, ethos, and logos. Fallacies of pathos rest on flaws in the way an argument appeals to the audience’s emotions and values. Fallacies of ethos rest on flaws in the way the argument appeals to the character of opponents or of sources and witnesses within an argument. Fallacies of logos rest on flaws in the relationship among statements in an argument.
This is perhaps the most generic example of a pathos fallacy. Arguments to the people appeal to the fundamental beliefs, biases, and prejudices of the audience in order to sway opinion through a feeling of solidarity among those of the group. Thus a “Support Our Troops” bumper sticker, often including the American flag, creates an initial feeling of solidarity among almost all citizens of goodwill. But the car owner may have the deeper intention of actually meaning “support our president” or “support the war in ______.” The stirring symbol of the flag and the desire shared by most people to support our troops are used fallaciously to urge support of a particular political act. Arguments to the people often use visual rhetoric, as in the soaring eagle used in Wal-Mart corporate ads or images of happy families in marketing advertisements.
This fallacy persuades an audience to accept as true a claim that hasn’t been proved false or vice versa. “Jones must have used steroids to get those bulging biceps because he can’t prove that he hasn’t used steroids.” Appeals to ignorance are particularly common in the murky field of pseudoscience. “UFOs (ghosts, abominable snowmen) do exist because science hasn’t proved that they don’t exist.” Sometimes, however, it is hard to draw a line between a fallacious appeal to ignorance and a legitimate appeal to precaution: “Genetically modified organisms may be dangerous to our health because science hasn’t proved that they are safe.”
To board the bandwagon means (to use a more contemporary metaphor) to board the bus or train of what’s popular. Appeals to popularity are fallacious because the popularity of something is irrelevant to its actual merits. “Living together before marriage is the right thing to do because most couples are now doing it.” Bandwagon appeals are common in advertising where the claim that a product is popular substitutes for evidence of the product’s excellence. There are times, however, when popularity may indeed be relevant: “Global warming is probably caused by human activity because a preponderance of scientists now hold this position.” (Here we assume that scientists haven’t simply climbed on a bandwagon themselves, but have formed their opinions based on research data and well-vetted, peer-reviewed papers.)
Here the arguer appeals to the audience’s sympathetic feelings in order to support a claim that should be decided on more relevant or objective grounds. “Honorable judge, I should not be fined $200 for speeding because I was distraught from hearing news of my brother’s illness and was rushing to see him in the hospital.” Here the argument is fallacious because the arguer’s reason, while evoking sympathy, is not a relevant justification for speeding (as it might have been, for instance, if the arguer had been rushing an injured person to the emergency room). In many cases, however, an arguer can legitimately appeal to pity, as in the case of fund-raising for victims of a tsunami or other disaster.
This fallacy’s funny name derives from the practice of using a red herring (a highly odiferous fish) to throw dogs off a scent that they are supposed to be tracking. It refers to the practice of throwing an audience offtrack by raising an unrelated or irrelevant point. “Debating a gas tax increase is valuable, but I really think there should be an extra tax on SUVs.” Here the arguer, apparently uncomfortable with the gas tax issue, diverts the conversation to the emotionally charged issue of owning SUVs. A conversant who noted how the argument has gotten offtrack might say, “Stop talking, everyone. The SUV question is a red herring; let’s get back to the topic of a gas tax increase.”
Arguers appeal to false authority when they use famous people (often movie stars or other celebrities) to testify on issues about which these persons have no special competence. “Joe Quarterback says Gooey Oil keeps his old tractor running sharp; therefore, Gooey Oil is a good oil.” Real evidence about the quality of Gooey Oil would include technical data about the product rather than testimony from an actor or hired celebrity. However, the distinction between a “false authority” and a legitimate authority can become blurred. For example, in the early years of advertising for drugs that treat erectile dysfunction, the makers of Viagra hired former senator and presidential hopeful Bob Dole to help market the drug. (You can see his commercials on YouTube.) As a famous person rather than a doctor, Dole would seem to be a false authority. But Dole was also widely known to have survived prostate cancer, and he may well have used Viagra. To the extent a person is an expert in a field, he or she is no longer a “false authority.”
Literally, ad hominem means “to the person.” An ad hominem argument is directed at the character of an opponent rather than at the quality of the opponent’s reasoning. Ideally, arguments are supposed to be ad rem (“to the thing”), that is, addressed to the specifics of the case itself. Thus an ad rem critique of a politician would focus on her voting record, the consistency and cogency of her public statements, her responsiveness to constituents, and so forth. An ad hominem argument would shift attention from her record to features of her personality, life circumstances, or the company she keeps. “Senator Sweetwater’s views on the gas tax should be discounted because her husband works for a huge oil company” or “Senator Sweetwater supports tax cuts for the wealthy because she is very wealthy herself and stands to gain.” But not all ad hominem arguments are ad hominem fallacies. Lawyers, for example, when questioning expert witnesses who give damaging testimony, often make an issue of their honesty, credibility, or personal investment in an outcome.
This fallacy is closely related to ad hominem. Arguers poison the well when they discredit an opponent or an opposing view in advance. “Before I yield the floor to the next speaker, I must remind you that those who oppose my plan do not have the best interests of working people in their hearts.”
The straw man fallacy occurs when you oversimplify an opponent’s argument to make it easier to refute or ridicule. Rather than summarizing an opposing view fairly and completely, you basically make up the argument you wish your opponent had made because it is so much easier to knock over, like knocking over a straw man or scarecrow in a corn field. See pages 125–126 for an example of a straw man argument.
This fallacy occurs when someone makes a broad generalization on the basis of too little evidence. Generally, the evidence needed to support a generalization persuasively must meet the STAR criteria (sufficiency, typicality, accuracy, and relevance) discussed in Chapter 5 (pages 92–93). But what constitutes a sufficient amount of evidence? The generally accepted standards of sufficiency in any given field are difficult to determine. The Food and Drug Administration (FDA), for example, generally proceeds cautiously before certifying a drug as “safe.” However, if people are harmed by the side effects of an FDA-approved drug, critics often accuse the FDA of having made a hasty generalization. At the same time, patients eager to have access to a new drug and manufacturers eager to sell a new product may lobby the FDA to quit “dragging its feet” and get the drug to market. Hence, the point at which a hasty generalization passes over into the realm of a prudent generalization is nearly always uncertain and contested.
Sometimes called by its Latin name pars pro toto, this fallacy is closely related to hasty generalization. In this fallacy, arguers pick out a part of the whole or a sample of the whole (often not a typical or representative part or sample) and then claim that what is true of the part is true for the whole. If, say, individuals wanted to get rid of the National Endowment for the Arts (NEA), they might focus on several controversial programs funded by the NEA and use them as justification for wiping out all NEA programs. The flip side of this fallacy occurs when an arguer picks only the best examples to make a case and conveniently forgets about examples that may weaken the case.
The Latin name of this fallacy means “after this, therefore because of this.” The fallacy occurs when a sequential relationship is mistaken for a causal relationship. (See Chapter 12, page 259, where we discuss this fallacy in more depth.) For example, you may be guilty of this fallacy if you say, “Cramming for a test really helps because last week I crammed for my psychology test and I got an A on it.” When two events occur frequently in conjunction with each other, we’ve got a good case for a causal relationship. But until we can show how one causes the other and until we have ruled out other causes, we cannot be certain that a causal relationship is occurring. For example, the A on your psych test may have been caused by something other than your cramming. Maybe the exam was easier, or perhaps you were luckier or more mentally alert. It is often difficult to tell when a post hoc fallacy occurs. When the New York Police Department changed its policing tactics in the early 1990s, the crime rate plummeted. But did the new policing tactics cause the drop in the crime rate? Many experts suggested other causes. Economist Steven Levitt, for example, attributes the declining crime rate to the legalization of abortion in the 1970s (and hence to a decline in unwanted children who might grow up to be criminals).
Arguers beg the question when they provide a reason that simply restates the claim in different words. Here is an example: “Abortion is murder because it is the intentional taking of the life of a human being.” Because “murder” is defined as “the intentional taking of the life of a human being,” the argument is circular. It is tantamount to saying, “Abortion is murder because it is murder.” In the abortion debate, the crucial issue is whether a fetus is a “human being” in the legal sense. So in this case the arguer has fallaciously “begged the question” by assuming from the start that the fetus is a legal human being. The argument is similar to saying, “That person is obese because he is too fat.”
This fallacy occurs when an arguer oversimplifies a complex issue so that only two choices appear possible. Often one of the choices is made to seem unacceptable, so the only remaining option is the other choice. “It’s my way or the highway” is a typical example of a false dilemma. Here is a more subtle one: “Either we allow embryonic stem cell research, or we condemn people with diabetes, Parkinson’s disease, or spinal injuries to a life without a cure.” Clearly, there may be other options, including other approaches to curing these diseases. A good extended example of the false dilemma fallacy is found in sociologist Kai Erikson’s analysis of President Truman’s decision to drop the A-bomb on Hiroshima. His analysis suggests that the Truman administration prematurely reduced numerous options to just two: either drop the bomb on a major city, or sustain unacceptable losses in a land invasion of Japan. Erikson, however, shows there were other alternatives.
The slippery slope fallacy is based on the fear that once we put a foot on a slippery slope heading in the wrong direction, we’re doomed to slide right out of sight. The controlling metaphor is of a slick mountainside without places to hold on rather than of a staircase with numerous stopping places. Here is an example of a slippery slope: “Once we allow app-based ride services to compete with regular taxi companies, we will destroy the taxi business and the livelihood of immigrant taxi drivers. Soon anyone who wants to pose as a ride service will be able to do so, using uninspected vehicles and untrained drivers, leading to more accidents and crimes against passengers.” Slippery slope arguments are frequently encountered when individuals request exceptions to bureaucratic rules: “Look, Blotnik, no one feels worse about your need for open-heart surgery than I do. But I still can’t let you turn this paper in late. If I were to let you do it, then I’d have to let everyone turn in papers late.” Slippery slope arguments can be very persuasive—and often rightfully so because every slippery slope argument isn’t necessarily a slippery slope fallacy. Some slopes really are slippery. The slippery slope becomes a fallacy when we forget that we can often dig a foothold into the slope and stop. For example, we can define procedures for exceptions to rules so that Blotnik can turn in his paper late without allowing everyone to turn in a paper late. Likewise, a state could legalize app-based ride services, but regulate them to prevent a complete slide down the slope.
In Chapter 11 on definition and resemblance arguments, we explained that no analogy is perfect (see our discussion of analogies on pages 225–226). Any two things being compared are similar in some ways and different in other ways. Whether an analogy is persuasive or false often depends on the audience’s initial degree of skepticism. For example, people opposed to gun control may find the following argument persuasive: “Banning guns on the basis that guns accidentally kill people is like banning cars on the basis that cars accidentally kill people.” In contrast, supporters of gun control are likely to call this argument a false analogy on the basis of dissimilarities between cars and guns. (For example, they might say that banning cars would be far more disruptive to our society than banning guns would be.) Just when a persuasive analogy turns into a false analogy is difficult to say.
The name of this fallacy means “it does not follow.” Non sequitur is a catchall term for any claim that doesn’t follow from its premises or is supported by irrelevant premises. Sometimes the arguer seems to make an inexplicably illogical leap: “Genetically modified foods should be outlawed because they are not natural.” (Should anything that is not natural be outlawed? In what way are they not natural?) At other times there may be a gap in the chain of reasons: “Violent video games have some social value because the army uses them for recruiting.” (There may be an important idea emerging here, but too many logical steps are missing.) At still other times an arguer may support a claim with irrelevant reasons: “I should not receive a C in this course because I currently have a 3.8 GPA.” In effect, almost any fallacy could be called a non sequitur because fallacious reasoning always indicates some kind of disconnect between the reasons and the claim.
Sometimes arguers try to influence their audience’s view of something by creating a loaded label or definition. For example, people who oppose the “estate tax” (which calls to mind rich people with estates) have relabeled it the “death tax” in order to give it a negative connotation without any markers of class or wealth. Or to take another example, proponents of organic foods could create definitions like the following: “Organic foods are safe and healthy foods grown without any pesticides, herbicides, or other unhealthy additives.” “Safe” and “healthy” are evaluative terms used fallaciously in what purports to be a definition. The intended implication is that nonorganic foods are not safe and healthy.
Individual task: For each argument on page 404, explain in writing the extent to which you find the argument persuasive or fallacious. If any argument seems doomed because of one or more of the fallacies discussed in this appendix, identify the fallacies and explain how they render the argument nonpersuasive. Remember that it is often hard to determine the exact point where fallacious reasoning begins to kick in, especially when you consider different kinds of audiences. So in each case, consider also variations in audience. For which audiences would any particular argument appear potentially fallacious? Which audiences would be more likely to consider the argument persuasive?
Group task: Share your analyses with classmates.
Visit Appendix Informal Fallacies in MyWritingLab to complete the For Writing and Discussion and to test your understanding.
If you have any questions, please contact me.