There’s a family of quotations, routinely misattributed to the likes of Winston Churchill and Mark Twain, that run something like this: “A lie gets halfway around the world before the truth has a chance to put its pants on.” Wherever the line comes from, it is correct. Misinformation moves quickly. And when misinformation is shared, it does immediate damage and continues to do so even after it is corrected.

At sensitive moments — for instance, during or just ahead of an election — misinformation can have a significant, even decisive, effect on an outcome. So, too, can irresponsible media coverage.

In 2017, Nate Silver wrote, “The Comey letter probably cost Clinton the election.” The letter, sent just before Americans went to the polls, discussed the FBI investigation into Hillary Clinton’s use of a private server while she was secretary of state. The letter was covered extensively and disproportionately by the media; it further compromised the stumbling Clinton campaign. The letter wasn’t misinformation, but it was proof of concept: A tactical (mis- or dis-)information strike at just the right time can be powerful, especially if the media fumbles its coverage.

In a free society, the public sphere is populated by diverse voices, minimally constrained. In the digital public sphere, it is easier than ever to weaponize information for political purposes. Bad information is an old problem. But the speed, reach, volume, ease and effect of sharing bad information have given the old problem a new, more sinister, character.

In July, Caroline Orr of the National Observer first analyzed the #trudeaumustgo hashtag and found that it was “driven in part by inauthentic activity including artificial amplification and automation.” Later, Marc Owen Jones, an assistant professor of Middle East studies and digital humanities at Hamad bin Khalifa University, looked into another suspicious hashtag, #Trudeaucorruption. He found that it was mostly run by the same accounts as #trudeaumustgo. Faceless, nameless bots and trolls are shaping what we talk about and how we talk about it.

What is to be done? Researchers and journalists ought to help citizens understand and manage information online, especially during elections. Ahead of the October vote in Canada, there are a handful of projects and groups (for instance, the Digital Democracy Project) studying and monitoring the quality of information circulating online. BuzzFeed News and the Toronto Star have teamed up to cover misinformation. We need more such efforts.

Building institutional capacity to push back against misinformation is a good start. The media must also practice extraordinary forbearance and good judgment when deciding what to share and when to share it, lest they become useful tools for misinformation peddlers. The more of this work done in real time, the better.

Misinformation efforts — at home and abroad — succeed by preying on our psychological limits (we are often biased, emotional rationalizers) and the social, political, cultural and economic cleavages of states. They leverage digital media platforms and technologies to exploit us. Now more than ever, we must cooperate across partisan and other divides to resist misinformation efforts and protect the public sphere and democracy, despite our worst impulses.