In many ways, 2016 was the year of fake news. Its greatest hits include a conspiracy theory that David Brock is involved in a pedophile ring run out of the basement of a D.C. pizza parlor, which led a gunman to fire shots at its entrance (never mind that the store doesn’t even have a basement); a claim that an FBI agent involved in the Clinton email scandal was murdered; a fake Trump quote calling the Republican party dumb; and a baseless claim that 3 million illegal immigrants voted in the presidential election.

Dubious news has, of course, been a part of American journalism for centuries. During the American Revolution, Benjamin Franklin used a fabricated issue of a reputable Boston newspaper to spread a false story about a Native American plot to collude with the British. Over a century later, sensationalist headlines in New York papers inflated public concern about conditions in Cuba, and arguably did much to generate the public will for the Spanish-American War. As William Randolph Hearst said himself, “You furnish the pictures, I’ll furnish the war” (well… he probably never said it). So what makes our generation different? Is journalistic error something we just have to accept? Is this age of media worse, or are its effects merely magnified? What, if anything, prevents us from dealing with this problem at least as well as our ancestors did?

After all, this is 2016; we have algorithms… and data… and technology. If we just get Google and Facebook to step up and do the work for us, filter the fake news and promote what’s important, then everything will be OK… won’t it? While promoting accuracy in the news is, of course, a worthy principle, I think this treatment has wider side effects: it threatens to widen political polarization and quell open discourse. Every solution has costs, you may say, but news filters are less like chemotherapy and more like a lobotomy. In fact, I think Facebook’s solution should be to reverse its trend of censorship and leave the burden on users to determine what to read, what to believe, and what to share. It isn’t just the creation and dissemination of news that is in trouble; it’s how we engage with it. How you think is more important than what you think, and right now the way we think is broken.

“I’m a Democrat, would a Democrat believe this?”

When you see a headline pop up in your Facebook feed, how do you evaluate it? Do you read the story word for word? Check whether sources are cited to back up its statistics? Weigh the strength of its logic? Consciously or not, you probably rely less on content and more on questions like: Who shared this? Who wrote it? Do I tend to agree with this source? I’m a Democrat, would a Democrat believe this?

In a society dominated by newspapers, like that of Franklin or Hearst, there were precious few sources of news, and each subsisted on a carefully built reputation. When journalists became incentivized to step beyond the truth (to drive a political agenda, in the case of Franklin, or simply because sensationalism sells, in the case of Hearst), some of them inevitably would. But any time they stepped out of line, there would be backlash and distrust among readers; sales would decline. The collective power of citizens to evaluate source credibility constituted a natural system of checks and balances. Granted, The National Enquirer and the Globe still sold papers with faulty and sensational headlines, but they were disbelieved by most of the population, and they didn’t shake trust in the system as a whole.

Then came social media, where 62% of adults now get at least some of their news, according to Pew. Sites like Facebook, Twitter, and Reddit flattened the information hierarchy and democratized the way stories are shared, lowering the barriers to widespread publishing. This increased the volume of headlines, but it also spiked the variety of news producers creating them. Now, news is consumed à la carte. Anyone can have a microphone for a few seconds, amplified directly by user interest. As a result, readers can no longer easily evaluate the reputation of every news source they see, and they have begun to retreat to identity groups. People fall back on increasingly polarized political parties to define which sources they trust and, increasingly, which ones they read. Surveys of trusted news sources show mirror-opposite profiles for liberals and conservatives.

Social media has also accelerated this trend by targeting content based on each user’s past interactions. The more liberal you become, the more you engage with liberal news, the more Facebook fills your feed with news shared by your liberal friends, and the more liberal you become. You can get the same polarizing effect from your search results as well. Google gives you tailored links you are more likely to click on, but you also skew your results with how you word your search. Just try it for yourself: “Was 9/11 an inside job” yields much different results than “9/11 conspiracy theory”. If you go into a search with a certain point of view, you are likely to find some kind of support, no matter how absurd the view.
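To make that feedback loop concrete, here is a toy simulation. It is entirely my own invention (the scoring rule, the drift rate, and every number are illustrative assumptions, not Facebook’s actual ranking code): a feed ranks stories by agreement with the user’s current lean, and the user’s lean drifts toward whatever the feed shows.

```python
import random

# Toy model of an engagement-driven feed (illustrative only; not any
# platform's real algorithm). Story and user "lean" run from -1.0
# (one political pole) to +1.0 (the other).

random.seed(0)
lean = 0.1  # the user starts out only mildly partisan

for day in range(30):
    # Twenty candidate stories, each with its own political lean.
    stories = [random.uniform(-1.0, 1.0) for _ in range(20)]
    # "Relevance" ranking: prefer the stories that agree most strongly
    # with the user's current lean, and show the top five.
    feed = sorted(stories, key=lambda s: s * lean, reverse=True)[:5]
    # Reading the feed nudges the user toward the average of what they saw.
    lean += 0.2 * (sum(feed) / len(feed) - lean)

print(f"lean after 30 days: {lean:+.2f}")  # a mild lean has become a strong one
```

Run it and the mild +0.1 lean climbs toward the extreme of the scale within a few weeks; flip the starting sign and it races the other way. The loop needs no malice, just a ranking rule that optimizes for agreement.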

This kind of targeting also favors our short-term ‘fight or flight’ thinking, which rewards immediate pleasure, over our slower, long-term thinking, which is more concerned with truth and morality. People aren’t likely to associate their clicks or page views with any sense of moral duty, whereas purchasing a newspaper is a somewhat clearer act of support for a publication. Writing a story isn’t as careful an act as it used to be, either, because many of the outlets producing news don’t expect any given story to go viral. Editors used to publish knowing the microphone was in their hands, so the responsibility they assumed was serious. Today it’s like having drone pilots drop bombs from the safety of a trailer in North Dakota: there is a human somewhere driving the decision, but it’s harder to feel the weight of that decision the farther removed you are from the outcome.

There Are No Facts, Only Interpretations

Of course, the media share some of the blame as well. The rise of pundits in the 24-hour news cycle didn’t just degrade the quality of the news through lower factual accuracy (fact checkers have rated roughly 20% of CNN pundit claims false, versus 60% for Fox); it also eroded objectivity by mixing fact and opinion. While science has embraced a trend of openness, the media stayed information-greedy, treating privileged information as an advantage in the news cycle rather than a public good. Even for outlets that are open with primary sources and dedicated to fact checking, like PolitiFact or FactCheck.org, it’s difficult to get through to people when the truth is complicated. A mere 29% of Americans (12% of Trump supporters) trust media fact checkers; that’s lower than the share of people who think the FDA is withholding a cure for cancer (40%).

Take a claim like Trump’s assertion that 3 million illegal immigrants voted for Clinton. PolitiFact wrote a 2,000-word rebuttal, laying out expert opinion, studies of voter registration, and reasoning about the scale of the claim to reach the logical conclusion that it was unfounded. The article describes a study cited to support the claim, and links to a counter-study debunking its accuracy titled “The perils of cherry picking low frequency events in large sample surveys”… zzzzz. Geez, I studied statistics and that put me to sleep. The point is: following a claim all the way down the rabbit hole and rebutting it on first principles is difficult. Anyone who is motivated to believe something, and doesn’t have lots of time to weigh the evidence, will be able to find some kind of crude rationale. Without the reputation of news sources to fall back on, we are left in a chasm of haphazard dogmatism.

When there are no longer such things as facts, and no one trusts the media, we are ripe to be seriously misled. Fake news headlines spread like wildfire: some from sites masquerading as more reputable sources, some from sites consistently slinging out bullshit and never being called on it (here’s a categorized list of dubious and click-baity sites). People believe what they want to believe, however detached from reality. Americans these days are dangerously ill-informed about political and economic facts, prone to believe conspiracy theories, and often confident in their wrong beliefs.

How Facebook Could Widen the Great Divide

It seems natural, given the role of platforms like Facebook and Twitter in the spread of fake news, that we would ask them to build in safeguards against it. One proposal is for human curators to remove fake stories from newsfeeds, or to add a badge indicating that a story has been debunked. Another approach is to address the credibility of the source, either by promoting trusted outlets in Facebook’s relevance algorithm or by displaying banner information about ‘trustworthiness’ under each shared link. Ultimately, these are all versions of the same tactic: since we can no longer evaluate the reputations of a gigantic number of news producers, we shift the burden of credibility to a smaller number of news aggregators and social media platforms. Facebook, by determining what gets flagged or filtered, would cease to be just a platform and would become an arbiter of truth.

Winston Churchill once said, “A lie can travel halfway around the world before the truth has a chance to get its pants on” (not really… sorry to fool you twice). Viral stories often reach the majority of their eventual viewers in a matter of hours, so any meaningful fact check would have to be completed, along with an algorithm to find and tag variations of the story across sources, in an impossibly tight window. Studies show that retractions spread less far and carry less impact than the incorrect stories they correct. Worse, the more times you hear a claim, the more likely you are to believe it.

Then we must contend with the blurry definition of what exactly constitutes a false claim. Should Facebook condemn any contention given without accompanying evidence? What if the claim is true but the evidence is not shown or referenced? How should we deal with articles whose titles are misleading but which don’t specifically contain false information? For claims ‘known’ to be false, what level of confidence should we require before tagging them? If instead we decide to flag news by source, where do we draw the line for reputable: 90% accurate in its claims? 80%? Even with reasonable guidelines for filtering, mistakes are going to be made. At some point, some Facebook users will start to question the validity of the tagging and filtering mechanism, and perhaps some of them will conclude that it doesn’t conform to their views. And if the definitions of filterable falsehoods are so stringent that no one questions them, then nothing really useful will be accomplished, because it isn’t just fabrication or misrepresentation of objective facts that we need to worry about. It’s also incomplete inquiry, poor moral reasoning, and biased analysis, all of which are (in certain contexts at least) appealing to our credulity at an alarming rate.
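To see how arbitrary that line-drawing is, here is a minimal sketch; the source names and per-source accuracy scores are fabricated for illustration (no platform publishes numbers like these):

```python
# Hypothetical per-source accuracy estimates -- names and values invented.
SOURCES = {
    "wire-service.example": 0.97,
    "partisan-blog.example": 0.84,
    "hoax-site.example": 0.30,
}

def flag_untrusted(sources: dict, threshold: float) -> list:
    """Return the sources whose estimated accuracy falls below the cutoff."""
    return sorted(name for name, accuracy in sources.items() if accuracy < threshold)

# The whole policy debate collapses into one arbitrary number: at 80% the
# partisan blog passes; at 90% it is suppressed right alongside the hoax site.
for threshold in (0.80, 0.90):
    print(f"{threshold:.0%}: {flag_untrusted(SOURCES, threshold)}")
```

At an 80% cutoff only the hoax site is flagged; at 90% the partisan blog is swept in too, and its readers now have every reason to call the mechanism biased.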

When we ask Facebook to filter or tag news, we are asking it to become like a newspaper editor and shoulder the burden of credibility for its sources. The problem is that when people inevitably judge the character of the filtering and tagging mechanism, they will weigh how well it aligns with their worldview. Imagine if, instead of seeking different news sources, liberals and conservatives sought out different social networks and scoured the web with different search engines. Obviously the threat of splitting their audience in half would dissuade Facebook or Google from getting into the news filtering business, but also consider the social implications. Imagine the polarization that would occur on a Reddit dedicated to conservatives upvoting content they all agreed with, or a Facebook full of liberals sharing only with like-minded individuals. Note that a more polarizing shift wouldn’t even require a full fracture in social media networks; it could just be that users take more of their controversial discussions offline or to a different medium (in fact, email is still a dominant medium for sharing fake news).

It’s also possible that sites like Facebook and Twitter have too much structural advantage for a shift like this to occur. People like Facebook because their friends are on it, and it has too much social capital built up for anyone (ahem… Google+) to challenge it. Perhaps people will simply accept the factual authority of social media sites, or ignore their fact checking entirely. This picture may be more dangerous than we think. Facebook already took a shot at having humans curate trending stories, and it ended the practice abruptly after complaints that the team was suppressing conservative news. While Facebook may already have trust issues that would make this task difficult, the larger problem is that there is no company or group I would trust to be the unchecked arbiter of truth for the whole population. Once we view Facebook as an institution from which citizens cannot escape (much like the state), the moral imperatives of free speech apply.

The New Gatekeeper

Samuel Johnson, upon compiling his landmark English dictionary, was approached by a lady of London who complimented Dr. Johnson on omitting the profane and improper words. “Madame,” he slyly replied, “I find, however, that you have been looking for them” (this one is probably at least partially true). This parable illustrates the futility of prior restraint. Banning the discussion of a bad idea will never eradicate the idea itself; it merely prevents us from publicly disarming its substance. It may also prevent us from learning from a different point of view, for the argument of the heretic may contain a kernel of truth.

The problem with Facebook unilaterally labeling or filtering information is that it would be swimming in murky waters of facts, arguments, and opinions. It may be that I would agree with every judgment Facebook makes. In fact, Facebook could align itself with the will of the overwhelming majority of users, and still the suppression of the minority view would be a problem. When we suppress or marginalize an opinion (even an incorrect one), we don’t just infringe on the rights of the person expressing it; we also infringe on the rights of everyone else whose right it is to listen and hear the argument. Freedom of speech means nothing if it protects only the speech one agrees with. It exists to protect individuals from the will of the majority, and to protect the majority from a blindness brought on by the pressures of conformity to popular opinion. If all but one member of the human race held a single opinion unanimously, the protection of the dissident’s right to expression would become even more important. We cannot assume that certainty of knowledge from one person’s perspective constitutes absolute certainty of knowledge. The only reliable means of filtering fact from fiction is the uninhibited discourse of the populace. If Facebook and Google are to be the new, inescapable means by which information is shared, then they must embrace this principle.

So what can social media sites do? Facebook and Google have already landed an economic blow by banning known fake news networks from their ad networks. They also sit on a trove of data about how news spreads; they could conduct a lot of useful research to help trusted news sources figure out how to spread the truth more effectively, and to promote digital media literacy. Primarily, though, I think Facebook and Google should base their relevance engines on the direct actions of humans on a given article, and cease targeted predictions, which interject an amoral agent into the mix, tipping the scales of relevance in unseen ways. Facebook in particular should also roll back the current rules by which it censors content on a slew of subjects including violent speech, hate speech, Holocaust denial, and even attacks on Turkey’s first president, Kemal Atatürk. As soon as you filter one type of speech, you have given a de facto endorsement to any speech you didn’t filter.
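To illustrate the distinction between ranking by direct human actions and ranking by targeted prediction, here is a small sketch; the articles, share counts, and click probabilities are all invented for illustration:

```python
from dataclasses import dataclass

# "Direct actions" ranking counts what humans actually did with an article;
# "targeted prediction" ranking guesses what this particular user will click.
# All data below is fabricated for illustration.

@dataclass
class Article:
    title: str
    shares: int             # explicit human actions, the same for every viewer
    predicted_click: float  # per-user model output: the unseen thumb on the scale

articles = [
    Article("City council passes budget", shares=120, predicted_click=0.02),
    Article("You won't BELIEVE this outrage", shares=40, predicted_click=0.35),
]

by_direct_actions = sorted(articles, key=lambda a: a.shares, reverse=True)
by_targeted_model = sorted(articles, key=lambda a: a.predicted_click, reverse=True)

print([a.title for a in by_direct_actions])  # the budget story ranks first
print([a.title for a in by_targeted_model])  # the outrage bait ranks first
```

The first ranking is at least auditable, since everyone sees the same share counts; the second quietly optimizes each feed for whatever that user is predicted to click, which is exactly the unseen tipping of the scales described above.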

How can we engage better with news?

The success of free discourse depends, of course, on the ability of argumentation and dialogue to consistently favor fact, truth, and wisdom. It is therefore incumbent upon us as a populace to commit to responsible discourse. Below are a few of my own precepts for engaging constructively (in a social media environment in particular):

Divorce your opinion from your identity. Your opinions are mostly a product of your environment anyway.

Don’t take a position on every issue. Everyone likes to have talking points so as not to look dumb; we need a culture where taking no position is seen as preferable to taking a wrong one.

Seek dissenting viewpoints. Try to learn something that may change your mind, rather than trying to score debate points.

Vote with your page views, and with your shares. I try to give every news source a chance, but if I can’t establish its legitimacy I’m not going to share it.

Reserve your trust for established sources. Trust is earned, it is not the default position.

Recognize and combat your own bias. If the truth of a claim would benefit you, you should be especially skeptical of that claim.

Pick your battles. If you want to contradict someone, be ready to go down the rabbit hole and argue a point on first principles. Remember that being offended at a differing viewpoint is not, in itself, an argument.

This isn’t an easy problem to solve. The problems and their causes are hard to see. Blame is shared. Progress is incremental. Mistakes will be made. Gandhi once said, “Be the change you want to see in the world” (I’m starting to wonder if any of the quotes I read are real). Perhaps I’m idealistic, but I think that before we ask the powers that be to do our thinking for us, we should give this concept a shot.