Fact checkers perform a vital public service. The truth, however, is contentious. So fact checkers take criticism from all sides. Sometimes, they deserve it. For example, Greg Marx wrote in the Columbia Journalism Review,

But here’s where the fact-checkers find themselves in a box. They’ve reached for the clear language of truth and falsehood as a moral weapon, a way to invoke ideas of journalists as almost scientific fact-finders. And for some of the statements they scrutinize, those bright-line categories work fine.



A project that involves patrolling public discourse, though, will inevitably involve judgments not only about truth, but about what attacks are fair, what arguments are reasonable, what language is appropriate. And one of the maddening things about the fact-checkers is their unwillingness to acknowledge that many of these decisions—including just what constitutes “civil discourse”—are contestable and, at times, irresolvable.

Whether or not fact checkers wield it as a "moral weapon," they certainly use the "language of truth and falsehood," and some of them attempt to define "bright-line categories." This is most true of PolitiFact and The Fact Checker, which give clear-cut, categorical rulings on the statements they cover, and whose rulings currently form the basis of the malarkey score here at Malark-O-Meter, which rates the average factuality of individuals and groups.

The language of truth and falsehood does "invoke ideas of journalists as almost scientific fact-finders." But it isn't just that language that bestows upon the art of fact checking an air of science. Journalists who specialize in fact checking do many, though not all, of the things that scientists do. They usually cover falsifiable claims, flicking a wink into Karl Popper's posthumous cup of tiddlies. They always formulate questions and hypotheses about the factuality of the claims they cover. They usually test those hypotheses against empirical evidence rather than unsubstantiated opinion.

Yet fact checkers ignore much of the scientific method. For instance, they don't replicate (then again, neither do many scientists). Moreover, fact checkers like PolitiFact and The Fact Checker use rating scales that link only indirectly, and quite incompletely, to the logic of a claim. To illustrate, consider PolitiFact's description of its Truth-O-Meter scale.

True – The statement is accurate and there’s nothing significant missing.

Mostly True – The statement is accurate but needs clarification or additional information.

Half True – The statement is partially accurate but leaves out important details or takes things out of context.

Mostly False – The statement contains some element of truth but ignores critical facts that would give a different impression.

False – The statement is not accurate.

Pants on Fire – The statement is not accurate and makes a ridiculous claim.

[Malark-O-Meter note: Remember that the malarkey score treats "False" and "Pants on Fire" statements the same.]
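To make the ordinal character of the scale concrete, here is a minimal sketch that maps Truth-O-Meter rulings onto integer scores. The mapping and the averaging function are hypothetical illustrations, not Malark-O-Meter's actual malarkey score formula; the only detail taken from the text is that "False" and "Pants on Fire" collapse to the same value.

```python
# Hypothetical mapping of Truth-O-Meter rulings to an ordinal score.
# This is an illustration, not the actual malarkey score formula.
RULING_SCORES = {
    "True": 0,
    "Mostly True": 1,
    "Half True": 2,
    "Mostly False": 3,
    "False": 4,
    "Pants on Fire": 4,  # treated the same as "False" per the note above
}

def average_ruling_score(rulings):
    """Average the ordinal scores over a list of ruling labels."""
    return sum(RULING_SCORES[r] for r in rulings) / len(rulings)
```

Note what the scale throws away: two statements can both land on "Mostly False" for entirely different logical reasons, and the score cannot tell them apart.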

Sometimes, fact checkers specify in the essay component of their coverage the logical fallacies a claim commits. Yet neither the Truth-O-Meter scale nor The Fact Checker's Pinocchio scale specifies which logical fallacies were committed, or how many. Instead, PolitiFact and The Fact Checker use a discrete, ordinal scale that combines accuracy in the sense of correctness with completeness in the sense of clarity.

By obscuring the reasons why something is false, these ruling scales make it easy to derive factuality metrics like the malarkey score, but difficult to interpret what those metrics mean. More importantly, PolitiFact and The Fact Checker make themselves vulnerable to the criticism that their truth ratings are subject to ideological biases because...well...because they are. Their apparent vagueness makes them so. Does this make the Truth-O-Meter and Pinocchio scales worthless? Probably not. But we can do better. Here's how.



When evaluating an argument (all claims are arguments, even political sound bites), determine whether it is sound. To be sound, all of an argument's premises must be true, and the argument must be valid. To be true, a premise must adhere to the empirical evidence. To be valid, an argument must commit no logical fallacies. The problem is that fact checkers' ruling scales conflate soundness and validity. The solution is to stop doing that.
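The soundness/validity distinction above can be sketched in a few lines of code. The `Premise` and `Argument` types here are hypothetical, and the sketch adopts the paragraph's own working definition that an argument is valid when it commits no named fallacies; it is not Malark-O-Meter's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Premise:
    text: str
    true: bool  # does the premise accord with the empirical evidence?

@dataclass
class Argument:
    premises: list
    fallacies: list = field(default_factory=list)  # named fallacies found

    def is_valid(self):
        # Valid, per the working definition above: no logical fallacies.
        return not self.fallacies

    def is_sound(self):
        # Sound: the argument is valid AND every premise is true.
        return self.is_valid() and all(p.true for p in self.premises)
```

Separating the two checks is the whole point: an argument with true premises can still be unsound because it reasons fallaciously, and a single ordinal rating hides which failure occurred.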

If and when Malark-O-Meter grows into a fact checking entity, it will experiment with rating scales that specify and enumerate logical fallacies, assessing both the soundness and the validity of an argument. I have an idea of how to implement this on the web that is so good, I don't want to give it away just yet.

Nor would we be starting from scratch. There are thousands of years of formal logic scholarship stretching into the modern age. Hell, the philosopher Gary N. Curtis publishes an annotated, interactive taxonomic tree of logical fallacies on the web.

Stay tuned to Malark-O-Meter, where I'm staging a fact check revolution.