Social-media platforms like Facebook, Instagram, and YouTube have long relied on human moderators to manually comb through content and remove violent and offensive material that ranges from racist and sexist hate speech to graphic video of mass shootings. Often working on contract, at minimum wage with few benefits, moderators can find themselves pulling long hours while being pummeled with content that takes a serious toll on their mental health.

Automoderators are an attempt to mitigate the tedium and negative effects of such work. Developed by Redditor Chad Birch as a way to augment his ability to moderate the r/gaming subreddit, AutoMod is a rule-based tool for identifying words that violate a given page's posting policies. It has since gone into wide use—Reddit adopted it sitewide in 2015, and the hugely popular game-streaming platforms Twitch and Discord followed suit soon after.

Whether AutoMod is actually a time-saver is questionable, though. On the one hand, automoderators are very good at what they do—if they're programmed to find swear words, they will find and block posts that contain them without fail. AutoMod can also send notifications to posters about problematic content, which Jhaver says is "educational," in that authors can learn what was wrong with whatever they posted.
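The core mechanism is simple enough to sketch in a few lines. The following Python is an illustration only, not Reddit's actual AutoMod configuration format; the word list and feedback message are placeholders:

```python
import re

# Placeholder rule list -- a real subreddit's banned-word list would be
# maintained by its moderators, not hard-coded like this.
BANNED_WORDS = {"heck", "darn"}

def check_post(text: str) -> tuple[bool, str]:
    """Return (allowed, feedback).

    Keyword matching is tireless: any post containing a listed word is
    blocked every single time, which is both the tool's strength and,
    as discussed below, its weakness.
    """
    words = set(re.findall(r"[a-z']+", text.lower()))
    hits = words & BANNED_WORDS
    if hits:
        # The removal notice doubles as feedback to the author,
        # explaining what triggered the removal.
        return False, f"Removed: contains banned term(s): {sorted(hits)}"
    return True, ""
```

Note that a rule like this has no notion of context: it fires whether the word is used as an insult or merely quoted in a discussion.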

That’s not a small feat. As Jhaver and his colleagues note, about 22% of all submissions on Reddit between March and October of 2018 were removed. That comes out to about 17.4 million posts in that time period.

But suppose a flagged word is important for context in a post—a discussion in 2016 of soon-to-be-president Donald Trump's infamous comment about grabbing a woman's genitals, for example. Such posts get flagged because of the offensive language, even though discussing that language is the point of the post in the first place. Jhaver says this frustrates users, who then have to go back and ask moderators to reinstate the post.

And in a social-media world where troubling content increasingly consists of offensive memes, live-streams of shootings, or other visual, textless content, AutoMod’s reliance on finding keywords is a big liability.

Robert Peck, a moderator for the large subreddits r/pics and r/aww, knows this all too well. Each of those pages is image driven, and each has millions of followers posting far more content than anyone could be reasonably asked to sift through.

Still, he says that even though it cannot analyze images, AutoMod has made his work easier. "Users add descriptors to images directly, and we can check those titles," he says. "We look for account fattening, or spam from accounts that automate posts. They often use parentheses. We can tell AutoMod to look for those patterns."
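The kind of title check Peck describes can be approximated with a regular expression. This is a hedged sketch of the general idea, not the actual rules r/pics or r/aww use; the specific pattern (parenthesized boilerplate in a title) is an assumption drawn from his quote:

```python
import re

# Automated spam accounts often append parenthesized boilerplate to
# image titles, e.g. "Cute cat (follow my page)". Flag any title
# containing a parenthesized phrase for human review.
PAREN_SPAM = re.compile(r"\([^)]*\)")

def looks_like_bot_title(title: str) -> bool:
    """Return True if the title matches the suspicious pattern."""
    return bool(PAREN_SPAM.search(title))
```

A rule this crude would obviously also flag innocent titles that happen to use parentheses, so in practice it would feed a review queue rather than remove posts outright.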

Like it or not, AutoMod and its ilk are the future of social-platform moderation. Such tools will probably always be imperfect, because machines are still a long way from truly understanding human language. But this is what automation is supposed to be all about: saving people time on tedious or objectionable tasks. Being able to concentrate on posts that require a human touch makes a moderator's job that much more valuable, and allows both moderators and posters to focus on having better conversations.

It won’t solve the problem of people posting nasty, malicious, or otherwise deleterious content—that will still be one of the thorniest problems afflicting the modern internet. But it is making a difference. Peck says he’s grateful for AutoMod’s ability to help him “batch process” posts. “It’s a powerful piece of technology and quite user friendly—nowhere near the difficulty of programming an equivalent bot,” he says. “[AutoMod] is my most powerful tool, and I’d be lost without it.”