Last week, the Knight First Amendment Institute urged Rep. Alexandria Ocasio-Cortez (D-NY) to unblock critics on Twitter. The Knight Institute has led a push to treat politicians’ social media accounts as public forums, filing a successful lawsuit against President Donald Trump for his Twitter-blocking habits. Ocasio-Cortez argued that the issue was more nuanced, though: she said she was blocking “less than 20 accounts” and that it was for harassment, not political viewpoints.

Whether or not that’s accurate, the law suggests that blocking should be a last resort. A court ruling from last year encouraged politicians to use Twitter’s “mute” function instead, which lets a user simply avoid seeing tweets from someone they dislike. It’s a fair way to protect the First Amendment online. But if the goal is to actually encourage meaningful speech, this hands-off approach raises real problems, even if it’s ultimately the right legal one.

Court rulings against Trump have emphasized that his tweets’ reply sections create an “interactive space” that’s accessible to millions of people. Anybody who clicks on one of Trump’s tweets can see the replies, and, taken together, the responses function similarly to an offline public town hall. Trump isn’t required to read anybody’s messages, so he’s free to use the mute function, which simply hides people’s tweets from his personal timeline. But blocking somebody prevents them from seeing or engaging with his tweets at all. And politicians generally can’t deny people the chance to amplify their voices by replying to a social media post, any more than they could kick them out of a meeting for expressing an unpopular opinion.

On the internet, speech can be a censorship tool

Social media poses some unique problems that physical spaces don’t, however. It can operate at a scale that wouldn’t be possible offline, and it’s easy to hijack a conversation or amplify a point of view with automated posts or a handful of dedicated people acting in bad faith. Trolls can attack anyone who participates in a conversation, not just politicians, and they can do it across all of social media, not just in a single thread or post. This can turn supposedly open spaces into deeply hostile or unnavigable ones — not just for public figures like Trump or Ocasio-Cortez, but for anybody who wants to engage with them.

As writer and law professor Tim Wu, journalist Zeynep Tufekci, and many others have pointed out, new tactics like troll armies and spammed responses have made traditional First Amendment protections less effective at promoting free speech online. “It is no longer speech or information that is scarce, but the attention of listeners,” explained Wu in a 2017 Knight Institute blog post. “No one quite anticipated that speech itself might become a censorial weapon, or that scarcity of attention would become such a target of flooding and similar tactics.”

Muting doesn’t offer a real solution to these problems. It might stop Trump or Ocasio-Cortez from seeing something they don’t want to see, but for everyone else reading a thread, repetitive or abusive comments can still drown out good-faith responses. Muting also notably doesn’t prevent trolls from sharing posts with their non-muted followers, who can then join the fray. And there’s no real equivalent to “muting” a Facebook or Instagram comment at all.

As we’ve discussed before, this is complicated and relatively uncharted legal territory. American Civil Liberties Union staff attorney Vera Eidelman stresses that people should have expansive First Amendment rights on government social media pages, whether that means protections for entire accounts or individual comments. “I think that it’s essentially the same line, because it’s excluding someone’s speech on the basis of their viewpoint from a designated public forum,” she says. (This only applies to public officials’ accounts, not entire social media platforms, which can ban whoever they want.)

“This is something that society has dealt with every time that a broader mass of people have had access to systems of mass publication.”

Offline government meeting organizers can impose some rules on people’s speech, as long as they’re reasonable “time, place, and manner” restrictions. That includes setting speaker time limits or kicking out people who seriously disrupt the meeting — although something can’t be “disruptive” simply because it’s politically offensive. Some spaces might be considered limited public forums where comments have to adhere to a particular topic, although Eidelman notes that these requests can also unfairly set the political terms of a conversation.

Most blocking lawsuits so far have focused on determining whether politicians’ accounts are public spaces at all, not how to regulate the time, place, or manner of speech on them. Now that courts have repeatedly confirmed they’re public, we might start to see a more nuanced discussion of what the rules for those spaces are. But real-life town halls can be incredibly contentious. Around the rise of the tea party in 2009 and 2010, attendees shouted down members of Congress or even hung lawmakers in effigy, and a leaked memo outlined hard-nosed tactics for “rattling” representatives. It’s doubtful that online town halls would be any more restrained.

Eidelman also emphasizes that the internet hasn’t necessarily changed everything. Bots, for example, can let people amplify a message artificially, but so can printing flyers and leaflets, which is protected speech. She notes that while online spaces can get flooded with content, people can still generally post and read comments without space or time limits, as opposed to a physical meeting where only a set number of people can speak. “I think this is something that society has dealt with every time that a broader mass of people have had access to systems of mass publication,” she says.

And while abuse and harassment are huge issues online, they’re also nebulous terms inevitably seen through a lens of partisan politics. Mike Masnick of Techdirt notes that Trump could easily claim critics are harassing him; he’s already got a penchant for complaining about “presidential harassment.” There’s a serious risk in letting political figures control who gets access to them. In 2017, ProPublica spoke to dozens of people who described being cut off from their elected officials on Facebook and Twitter.

At the same time, we’ve watched parts of social media become less accessible simply because they aren’t moderated, as the angriest users drive some of the most vulnerable away. Some spaces, like Trump’s Twitter feed, will probably always be churning fonts of chaos. But other government pages might want to balance a good-faith acceptance of criticism with rules that help sincere citizen speech win out over hostile, repetitive spam. This might be hard to achieve, and in the long run, it might not be worth the risk of political censorship. In the short term, though, it could be another example of how more speech doesn’t necessarily mean freer speech for everyone online.