I love philosophy and science. I also love flowcharts because they can compress many pages of instruction into a simple chart. And three researchers from George Mason University and the University of Queensland have combined these three loves in a paper about climate change denialism. In their paper, they create a flowchart that shows how to find over a dozen fallacies in over 40 denialist claims! In this post, I’ll explain this argument-checking flowchart. First, we will identify a common denialist claim and then evaluate the argument for it.

The Argument-checking Flowchart

Step 1: Identify Claim

Claims are just propositions that are true or false. The Eiffel Tower is 324 meters tall. Pure water is H₂O. Nick’s cat is the most adorable thing, ever. You get the idea.

Here’s a common claim that the flowchart paper mentions (≠ endorses): “Climate change is natural” (Cook, Ellerton, and Kinkead 2018). Now that we have a claim, we need to find its argument.

Step 2: Construct Argument

People often make free-floating claims — claims for which they provide no argument. In that case, the claim cannot even be evaluated. To borrow a phrase from Wolfgang Ernst Pauli, the claim would be “not even wrong” — i.e., it would be worse than wrong.

But occasionally people actually argue for their claim, even if only implicitly. Here’s an implicit argument for the aforementioned climate change denying claim: “Earth’s climate has changed naturally before, so current climate change is natural.” We can formalize the argument like this:

Premise 1: The climate has changed in the past through natural processes.

Premise 2: The climate is currently changing.

Conclusion: The climate is currently changing through natural processes.

Step 3: Determine Inference

Arguments can be deductive or inductive. A deductive argument’s premises, if true, guarantee the truth of its conclusion. An inductive argument’s premises merely make its conclusion more or less probable — thus admitting the possibility that the conclusion is false. One way to test whether an argument relies on deductive or inductive inference is to check whether and how its premises support its conclusion.

Step 4: Check validity (a.k.a., support)

Now we can start to evaluate the argument itself. If an argument is deductive, then we can show that its premises do not support its conclusion so long as we can describe a counterexample: a case in which all the premises are true, but the conclusion is false.

Deductive support?

The argument that we are considering is not deductively valid. After all, it could be true that the climate has changed naturally in the past (premise 1) and that the climate is currently changing (premise 2), and still be the case that the climate is currently changing through unnatural (i.e., human-caused) processes. So these premises do not deductively support their conclusion.
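One way to make the counterexample hunt concrete is to brute-force every truth assignment over the simple claims involved. The sketch below (the variable names are my own shorthand, not from the paper) treats each claim as an independent propositional variable and searches for a world where the premises hold but the conclusion fails:

```python
from itertools import product

# P1: "the climate changed naturally in the past"
# P2: "the climate is currently changing"
# C : "the current change is natural"
variables = ["past_natural", "currently_changing", "currently_natural"]

def counterexamples(premises, conclusion):
    """Return every truth assignment where all premises hold but the conclusion fails."""
    found = []
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(p(world) for p in premises) and not conclusion(world):
            found.append(world)
    return found

premises = [
    lambda w: w["past_natural"],        # Premise 1
    lambda w: w["currently_changing"],  # Premise 2
]
conclusion = lambda w: w["currently_natural"]

# A non-empty result means the argument is deductively invalid.
print(counterexamples(premises, conclusion))
```

The search finds exactly the counterexample described above: a world where both premises are true and the conclusion is false.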

Inductive support?

But maybe the argument is inductively valid. If it is, then the premises would make the conclusion more probable. In this case, they might. If climate change has been natural in the past (premise 1), then that does impact the probability we assign to its being natural in the present and future (conclusion). This assumes, of course, that premise 1 and premise 2 are all that we know about climate change — a big assumption to say the least.
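Premise 1’s evidential bearing on the conclusion can be sketched with a toy Bayesian update. All numbers here are hypothetical, chosen only to illustrate that evidence can raise a conclusion’s probability without guaranteeing it:

```python
# Toy Bayesian update: how much does "the climate changed naturally before"
# (evidence E) raise the probability that the current change is natural (H)?
# Every number below is made up purely for illustration.
prior_h = 0.5            # P(H): before considering the evidence
p_e_given_h = 0.9        # P(E | H)
p_e_given_not_h = 0.6    # P(E | not-H): past natural change is likely either way

# Total probability of the evidence, then Bayes' theorem.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(round(posterior_h, 3))  # 0.6
```

The evidence nudges the probability from 0.5 to 0.6: support, but nowhere near certainty. That gap is exactly what the next paragraph exploits.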

It is important to note that inductive arguments cannot refute claims. That is because inductive arguments admit the possibility that their conclusion is false. So even if this inductive argument were flawless, the argument could not fulfill the climate change denialist’s goals. This is because an inductive argument cannot refute any of the claims that the denialist wants to deny. It can merely render them more or less probable. (This is what the flowchart means by “refutation requires deductive logic”).

Hidden Premise?

Let’s return to the invalid deductive version of the argument. When we find that an argument is invalid, we should try to fix it. One way to do that is to imagine a premise that makes the rest of the premises support their conclusion — a.k.a., a hidden or suppressed premise. Here is a hidden premise that does that:

(Hidden) Premise 3: If something happened naturally in the past, then it happens naturally in the present and future.

But there is a problem with this premise: it might be false. Think about it. Humans manufacture all sorts of stuff that used to occur only naturally. So when we fix the argument’s premises so that they support their conclusion, we find a new problem: not all of the premises are true. And that means that the conclusion might not be true. So, once again, even the improved version of the argument seems to fail.
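To see that the hidden premise really does restore validity, we can again brute-force every truth assignment. Treating Premise 3 as a material conditional, no assignment makes all three premises true while the conclusion is false. A minimal, self-contained sketch (the names are my own shorthand):

```python
from itertools import product

variables = ["past_natural", "currently_changing", "currently_natural"]

def is_valid(premises, conclusion):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False
    return True

p1 = lambda w: w["past_natural"]                                  # changed naturally before
p2 = lambda w: w["currently_changing"]                            # changing now
p3 = lambda w: (not w["past_natural"]) or w["currently_natural"]  # hidden Premise 3

conclusion = lambda w: w["currently_natural"]

print(is_valid([p1, p2], conclusion))      # False: the original argument is invalid
print(is_valid([p1, p2, p3], conclusion))  # True: the hidden premise restores validity
```

Validity is restored, but that only settles the form of the argument: if Premise 3 is false, the argument is still unsound.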

Step 5: Ambiguity checking

Suppose that the meaning of a premise or conclusion is unclear. You get this a lot from pseudo-profound bullshit like, “The energy of the universe is perfection.” It’s not at all clear what that means. And if the premises and/or conclusion of an argument are unclear, then we cannot evaluate the argument. Again, it’d be “not even wrong.”

In our example, the hidden premise contains ‘natural’. That word is famously ambiguous. Does it mean something like “not caused by humans”? If so, does that rule out being partly caused by humans? And what do we mean by ‘caused’? Does it mean something like “created from nothing” or something more modest like “the consequence of…”? Until it is clear precisely what ‘natural’ means, we cannot evaluate the argument. (We can, of course, ask the people giving the argument to clarify.)

Fallacies Related to Ambiguity

By analyzing the argument for clarity, you can find not only ambiguity but other problems — like those in this table, adapted from Cook et al.’s supplementary materials.

Ambiguity: Using ambiguous language/terminology in premises to lead to a misleading conclusion.

Equivocation: Using the same word (or phrase) with two different meanings. Equivocation is a subset of the ambiguity fallacy.

Misrepresentation: Misrepresents a situation or scientific understanding.

Oversimplification: A specific type of misrepresentation. Simplifies a situation in such a way as to distort scientific understanding, leading to erroneous conclusions.

Now, suppose that someone clarifies the terms of their argument. Fixing the argument like this changes the argument. And that might break the argument in a new way. So fixing one part of the argument does not necessarily fix all of the argument — as we found when we made our argument valid with a hidden premise. The lesson we are learning here is that we will need to re-evaluate the argument every time we fix/clarify it.

Step 6: Fact-checking

Fact-checking can reveal more fallacies — like the ones in this table, also adapted from the aforementioned supplementary materials.

Appeal to Conspiracy: Proposes a secret plan among a number of people, generally to implement a nefarious scheme such as conspiring to hide a truth or perpetuate misinformation. (See also The Bias Fallacy.)

Cherry picking: Selectively chooses data leading to the desired conclusion that differs from the conclusion arising from all the available data.

Fake experts: Cites dissenting non-experts who are promoted as highly qualified while not having published any actual climate research.

False cause: Post hoc ergo propter hoc — after this, therefore because of this. Automatically attributes causality to a sequence or conjunction of events.

False dichotomy: Presents only two alternatives, while there may be another alternative, another way of framing the situation, or both options may be simultaneously viable.

False equivalency: Assumes that two subjects that share a single trait are equivalent.

Impossible expectations: Demands unrealistic standards of certainty before acting on the science. In particular, expects deductive proof from inductive reasoning.

Magnified minority: Presents a small dissenting group as larger and more significant than they really are.

Single cause: Assumes there is a single, simple cause of an outcome.

Slothful induction: Ignores relevant and significant evidence when inferring to a conclusion.

Why is fact-checking the last step?

We can wait to check the facts behind an argument until the end of our evaluation. Why? Two reasons.

First, we can often show how an argument fails before we ever get around to fact-checking. Think about the power of that point for a second: even if all of the premises of someone’s argument are true, their argument can still (and often does) fail. So fact-checking is cool I guess, but argument-checking is at least as important — I’m looking at you, journalists.

Second, people often give bad arguments for true conclusions. So even if we evaluate the argument and find that it fails, we will still need to fact-check the conclusion. Because even if their argument is wrong, the conclusion might be right — more on that in the next post on how arguments are supposed to work.

The “Evaluate The Argument” Challenge

Now that you know how to use the flowchart, practice using it. Find an argument whose conclusion aims to refute something that you believe — because it is easier to find problems in arguments that disagree with us. Then evaluate the argument using the flowchart above. Write out the results of each step: claim, argument, etc. Once you have completed each step, summarize why and how the argument is good or bad on Facebook, Twitter, Reddit, or wherever. And then link back to this post so that people can join our argument-checking team.

Further Reading

Maybe you want to dive into the deep end of arguments about climate science. Have I got the book for you! This book “offers insight into the reasons we should believe what climate models say about the world but addresses the issues that inform how reliable and well-confirmed these models are.” And if you want to learn more about the philosophy of climate science for free, then check out the Climate Science entry in the Stanford Encyclopedia of Philosophy.