Photo: John Locher (AP)

Presidential candidate Beto O’Rourke wants to stem the spread of hate speech on the internet after several recent mass shootings, including one in O’Rourke’s hometown of El Paso earlier this month, have been linked to online extremism. One solution proposed in a recent blog post on his campaign website would hold online platforms accountable for what their users post.


It involves amending a law you might be familiar with at this point: Section 230 of the Communications Decency Act. You know, the one conservative lawmakers have been fighting to change for months now over perceived bias online, despite not appearing to fully understand what the law actually does.

This legislation gives websites and providers legal immunity from whatever awful crap their users may get up to. It’s why a site like Twitter doesn’t face legal action if some rando egg tweets out threatening messages or libel.


O’Rourke’s plan would change this. In his post, he suggests eliminating this immunity so social media platforms and providers could be sued if they “knowingly promote content that incites violence.” So when a mass shooting is streamed live on Facebook or disturbing manifestos get posted to 8chan minutes before another tragedy, those websites could be held partially legally responsible for hosting that content.

Another of his proposals would tack a new requirement onto social media platforms’ terms of service, demanding they ban hateful activities, which his post defines as those that “incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation or disability.” To avoid potential abuses, platforms would have to make this process transparent and provide a way for users to appeal these decisions.

Given that nearly all of the internet’s most frequented sites allow users to post their own content, this new policy would have severe repercussions. While content moderation should be a priority for online platforms, particularly given the rise in online extremism O’Rourke references, this kind of across-the-board accountability would discourage companies from allowing users to post at all. If its users become a liability, Twitter’s going to be a lot more concerned about what that rando egg is tweeting, or what anyone is tweeting for that matter, because providing a platform to speak online would carry enormous financial and legal risk.