The House Judiciary Committee will hold a hearing on “The Filtering Practices of Social Media Platforms” on April 26. Public attention to this issue is important: calls for online platform owners to police their members’ speech more heavily inevitably lead to legitimate voices being silenced online. Here’s a quick summary of a written statement EFF submitted to the Judiciary Committee in advance of the hearing.

Our starting principle is simple: Under the First Amendment, social media platforms and other online intermediaries have the right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn’t mean they should.

We all want an Internet where we are free to meet, create, organize, share, associate, debate, and learn. We want to make our voices heard in the ways that technology now makes possible. No one likes being lied to or misled, or seeing hateful messages directed against us or flooded across our newsfeeds. We want our elections free from manipulation, and we want the speech of women and marginalized communities not to be silenced by harassment.

But we won’t make the Internet fairer or safer by pushing platforms into ever more aggressive efforts to police online speech. When social media platforms adopt heavy-handed moderation policies, the unintended consequences can be hard to predict. For example, Twitter’s policies on sexual material have resulted in posts on sexual health and condoms being taken down. YouTube’s bans on violent content have resulted in journalism on the Syrian war being pulled from the site. It can be tempting to attempt to “fix” certain attitudes and behaviors online by placing increased restrictions on users’ speech, but in practice, web platforms have had more success at silencing innocent people than at making online communities healthier.

Indeed, for every high-profile case of despicable content being taken down, there are many, many more stories of people in marginalized communities, already targets of persecution and violence, whose legitimate speech is silenced. The powerless struggle to be heard in the first place; social media can and should help change that reality, not reinforce it.

That’s why we must remain vigilant when platforms decide to filter content. We are worried about how platforms are responding to new pressures to filter the content on their services. Not because there’s a slippery slope from judicious moderation to active censorship, but because we are already far down that slope.

To avoid slipping further, and maybe even reverse course, we’ve outlined steps platforms can take to help protect and nurture online free speech. They include:

Provide better transparency

Foster innovation and competition, e.g., by promoting interoperability

Establish clear notice and consent procedures

Offer robust appeal processes

Promote user control

Protect anonymity

You can read our statement here for more details.

For its part, rather than instituting more mandates for filtering or speech removal, Congress should defend safe harbors and protect anonymous speech. It should encourage platforms to be open about their takedown rules and to follow a consistent, fair, and transparent process, and it should avoid promulgating any new intermediary requirements that might have unintended consequences for online speech.

EFF was invited to participate in this hearing and we were initially interested. However, before we confirmed our participation, the hearing shifted in a different direction. We look forward to engaging in further discussions with policymakers and the platforms themselves.