The Automatic Weapons of Social Media

It’s time for the platforms to admit their response is flawed, and work together to protect our civil discourse.

This is not an easy essay to write, because for more than 30 years I have believed that technology companies are a force for good. And for the past ten years, I’ve been an unabashed optimist when it comes to the impact of social platforms like YouTube, Twitter, and even Facebook. I want to believe they create more good than bad in our world. But recently I’ve lost that faith.

What’s changed my mind is the recalcitrant posture of these companies in the face of overwhelming evidence that their platforms are being intentionally manipulated to undermine our democracy. This is an existential crisis, both for civil society and for the health of the businesses being manipulated. But to date the response from the platforms has been the equivalent of politicians’ “hopes and prayers” after a school shooting: Soothing murmurs, evasion of truly hard conversations, and a refusal to acknowledge the core problem: Their automated business models.

I’m not advocating the elimination of those models, any more than any sane person would advocate the elimination of guns. I am, however, arguing for curbs on the most destructive elements of those models: The machines that create the carnage. In the case of guns, it’s weapons of warfare like the AR-15. In the case of the platforms’ models, it’s self-service dashboards that allow anyone with rudimentary knowledge of a platform’s engagement algorithms to leverage that platform’s APIs to target public discourse.

Case in point: Russian-linked accounts pivoting to gun issues in the wake of the Parkland shootings this week. That these bots and trolls are actively sowing discord is not in dispute — read this from NPR, or this from Wired. The top trending hashtags over the past 24 hours were all driven by identified propaganda accounts. Independent researchers have irrefutable proof that these actors are leveraging social media — in this case Twitter — to force divisive and often false narratives into our public discourse. (The same is true of #BlackPanther propaganda, yet another proof point of my argument.)

Why, when faced with exactly these facts in the past, have companies taken the stance that “hoaxes happen, but they are ultimately discredited by our user community”?

Perhaps it’s because the complexity and scope of these platforms are beyond comprehension or control. This is the only non-cynical conclusion I can draw, and when it comes to advertising, I’ve even argued as much. “We built this thing, it’s super complicated, and people are outsmarting us, the creators, on our own platform. It’s out of our hands!”

But I don’t believe that explanation when it comes to dealing with bots and trolls. Information warfare leveraging social media is sophisticated and nuanced, as this Molly McKew thread superbly demonstrates. But does that mean it’s beyond the abilities of the smartest engineers and product managers in the world?

No, a more reasonable explanation for why Twitter, Facebook, and Google have not taken a more ambitious approach to stopping abuse of their platforms is that they are afraid to do so. Taking strong action would place limits on the driving force of their growth and their profits: Automation. And it would require that they acknowledge that their working interpretation of the law which protects them from liability, specifically Section 230 of the Communications Decency Act, is flawed. I’ve broken down the issues behind 230 elsewhere, so let me dig into the subject of automation.

Just like guns, social media platforms can be automated — they all have APIs that allow accounts to post content in an automated fashion. And just like guns, when accounts are automated, they can do significant damage.

There are plenty of good use cases for automation — publications posting their articles, developers creating chatbots, artists pranking society, advertisers creating customized messaging for specific audiences. But when malicious actors get their hands on the tools, divisive carnage ensues.
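To make the "good use case" concrete, here is a minimal sketch of the kind of benign automation a publication might run: composing status updates for new articles and queuing them for a platform's posting API. The endpoint URL, payload shape, and 200-character cap below are illustrative assumptions, not any real platform's actual API.

```python
# Hypothetical sketch: a publication auto-posting its new articles.
# POST_ENDPOINT and the payload format are illustrative assumptions.
POST_ENDPOINT = "https://api.example-platform.com/statuses/update"  # not a real API

def build_status(article):
    """Compose a status payload from an article's headline and URL."""
    headline = article["headline"]
    # Assume the platform caps status length; truncate long headlines.
    if len(headline) > 200:
        headline = headline[:197] + "..."
    return {"status": f"{headline} {article['url']}"}

def queue_posts(articles):
    """Build one payload per article; a real client would POST each one."""
    return [build_status(a) for a in articles]

if __name__ == "__main__":
    articles = [
        {"headline": "City council approves transit plan",
         "url": "https://pub.example/transit"},
    ]
    for payload in queue_posts(articles):
        # A real client would send this, e.g.:
        # requests.post(POST_ENDPOINT, json=payload, auth=credentials)
        print(payload["status"])
```

The point of the sketch is how little it takes: the same dozen lines that let a newsroom share its reporting let a troll farm push a payload of divisive hashtags through the identical API, at machine speed.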

The results seriously threaten democracy. Bots have targeted the FBI and Robert Mueller and, aided by a President well aware of where his support truly emanates, have driven significant swings in public opinion, corrupting the very essence of our nation’s rule of law.