The Web loves to wave the flag of free speech. Look no further than Twitter during the Arab Spring, or activists who turn to YouTube. And of course, there’s Reddit.

We moderate two of Reddit’s flagship subreddits—IAmA and AskReddit. IAmA empowers anyone in the world to pose questions about an individual’s profession, life, or even just something interesting that happened to them. Notable IAmAs include President Barack Obama, NSA whistleblower Edward Snowden, the American Civil Liberties Union, and the Electronic Frontier Foundation. AskReddit facilitates discussion around open-ended questions, some serious, some not so serious.

WIRED Opinion: Courtnie Swearingen and Brian Lynch are moderators of two of Reddit's largest communities.

IAmA has over 8 million subscribers, and AskReddit currently boasts the largest subscriber base at over 9 million. They receive tens of millions of unique visitors each month and generate thousands of discussions. These subreddits are generally positive in nature, and there are many others like them. They are focused on the same goal: discussion and community.

But where there are people on the Internet, there is also horrible, negative commentary. For simplicity’s sake, we will refer to communities that support racist or otherwise discriminatory viewpoints as hate speech communities.

In the wake of the recent upheaval with Reddit, the new leadership has committed to helping the moderators mitigate these types of speech. This is a great step, as combating it on our own is untenable. The things people say online, with or without a connection to their real identity, provide a disturbing glimpse into some of the darker parts of humanity. Many times, our female moderators have endured rape threats in public comments and private messages (though it’s not always limited to women). Most moderators on our teams have been called disgusting and racist names, threatened with physical violence, and some have been subjected to doxxing attempts. More than once, female moderators were told they needed to just “shut up, lay back, and be a good woman.” When users are unsure of the gender of the moderators, they tend to hurl threats of rape or insults based on sexual orientation—often at the same time for added effect!

While social media can push huge charitable campaigns, build businesses, and fuel revolutions, it also acts as a clever staging area for the worst viewpoints to assemble and attempt to dominate discourse. Steve Huffman, Reddit’s CEO, understands this blurry line between fostering free speech and harboring hate. “There is value in free speech, and it's hard to get the good parts if you throw away the bad. In the not so distant past, discussing gay marriage would have been considered obscene by many.”

Reddit, along with the majority of the Internet, struggles with how to regulate this in the best interests of its users, its business, and its desire to allow free speech. Reddit also struggles with the existence of hate speech sheltered under the umbrella of free speech principles—principles the site supports. But what Reddit is demonstrating is that it’s possible to provide these communities a space to congregate without supporting or contributing to the perpetuation of their ideas.

The origins of all this can be traced to the United States Supreme Court case Reno v. American Civil Liberties Union. In that case, the Court held that the Internet is afforded the same protections given to the print press, and struck down an overbroad portion of the Communications Decency Act limiting obscene material. The First Amendment ultimately allows for speech of many kinds, including burning a cross in the yard of a black family (R.A.V. v. City of St. Paul) or the rights of corporations to donate to political causes or candidates through the use of Super PACs (Citizens United v. FEC).


The right to be free of government persecution for certain speech is vigorously defended. The ACLU has defended the KKK and the rights of the Westboro Baptist Church. The principles defended in these cases extend to other forms of speech. However, corporations and Web platforms are not obligated to provide rights in line with the full extent of the First Amendment. First Amendment applicability does not extend to private actors—those that are not government entities or acting on behalf of the government.

Free speech on the Internet is a wonderful idea, but it can be infuriating in practice. This is particularly true when companies want to champion those ideals while also needing to sustain growth.

Social media platforms have deployed a few different strategies for coping with this—most revolve around the outright banning of content. Sites like Facebook and YouTube hire thousands of content moderators to keep the most unsavory elements out of their feeds. Instagram uses banned hashtags to mitigate pornography. Tumblr banned blogs that encouraged self-harm.

But the fundamental issue with content bans is that they’re inherently limiting and fail to kill off the groupthink that drives the content in the first place.

The problem with controversial or hateful speech is typically tied to how social media is built. Users pick out viewpoints that reflect their own, exclude other ideas, and exist in echo chambers that amplify and reinforce their thoughts. They are right; everyone else is wrong. Once a community like this hits a critical mass, the ideas it screams propagate across the site, and its users take great offense at dissenting or competing viewpoints. This is not limited to hate speech but applies to all ideas. The behavior gets noticed when ideas that would typically be frowned on are suddenly being screamed through a bullhorn.

The ACLU and ThisIsTheMovement.org recently did an AMA relating to the one-year anniversary of the Michael Brown incident in Ferguson. This AMA was linked to in some of the racist communities on Reddit, and their related off-site chat rooms. As a result, the AMA was flooded with comments suggesting blacks have lower IQs than whites and are inherently prone to violence, among other things. These are not the kinds of comments you typically see in an AMA, yet due to the growth of these communities, the ideas are rapidly spreading across Reddit.

Reddit is navigating relatively uncharted waters now. And as a company, it tends to wear its heart on its sleeve—you can easily see the intent of any community-related policy or decision it makes.

It appears there are two approaches Reddit has taken to dealing with “hate speech” communities. One is outright banning the community in question. Most recently, this approach was taken with the controversial subreddit r/fatpeoplehate. Almost every time a subreddit of decent size was banned, its community members made it a personal mission to flood other communities with its ideas in protest. This influx of hate-based speech on other forums caused tremendous damage to the reputation of Reddit for new members considering joining. New members only saw posts related to the banning of some niche communities, and had no way of knowing this wasn’t business as usual. The outright banning of content tends to create a sense of martyrdom in the name of free speech, meaning some of Reddit’s darkest ideas spread to unrelated communities.

Reddit’s second approach, and its new method under Huffman, involves isolating and refusing to support communities beyond the bare bones of the site’s architecture. They are allowed to exist, but only in a small corner with no resources and no permission to aggressively promote an agenda across the site. This allows the site to choke out racists and bigots without giving them motivation to act as martyrs for free speech. There are also plans to roll out new tools for the community to combat racism and sexism on its own with admin assistance. As part of this plan, the administrators have created a subreddit called ModSupport, where moderators and administrators can openly talk about the problems that moderators face, what tools they are using to combat them, and how admins can help support the moderators in their goals.


The community members and moderators have a lot of visibility into the problems that this behavior causes, and into the way new tools might be implemented. As moderators, we work hard to maintain a semblance of decorum on our subreddits, and almost daily have similar (although smaller-scale) discussions on free speech versus harmful speech. We make judgment calls on the issue several times per day, and have utilized some of the strategies that Reddit is now implementing. Our unique experience at the forefront of this tenuous battle is what puts moderators in a great position to help Reddit achieve the balance it’s looking for.

This tactic isn’t only more cooperative; it’s also a historically proven strategy against hate groups. Stetson Kennedy used a similar approach to dismantle the Klan by gathering data, disseminating it to remove the group’s mystique, and ultimately funneling its secrets to the Superman radio show, wherein Superman would destroy the KKK. It empowered the rest of the population to ridicule what turned out to be literally ridiculous, and helped weaken the Klan’s ideals. Reddit’s ambitions follow a similar strategy: Empower the community to kill the basic ideas associated with hate speech.

Kennedy’s approach is an apt comparison for what Reddit is trying to do now. Reddit's administration has an opportunity to utilize the community, product features, and its leadership to call out hateful speech, and the harm it does to discourse at large. Echo chambers like those that exist on certain sections of Reddit (or in other groups on other social media platforms, blogs and sites) are a detriment to their respective communities, and to society. Users whose negative beliefs are validated and reinforced by self-selecting groups that share these same beliefs can feel empowered to take that sentiment elsewhere—not only into the rest of the Internet, but into real life.

Whether online platforms build additional safeguards directly into the content submission process or, like Reddit, utilize their own infrastructure, hopefully they will not only contain such speech but also reduce the likelihood that it finds its way into an echo chamber. Reddit has declared its intention to contain hateful subreddits, and to relegate them “to the corner,” so to speak. But for the strategy to be truly effective, the entire community has to help put them in that corner, too.

We need Reddit to quickly and efficiently utilize whatever strategies it ultimately chooses. Now is the time for the platform to make a change. Balancing free speech principles and productive discourse is a challenging endeavor, and with the right tools and carefully crafted site-wide policy, massive communities can self-regulate. And Reddit can lead the charge.