Speaking about Facebook users, Zuckerberg said, “I’m personally quite worried that the isolation from being at home could potentially lead to more depression or mental health issues.” To prepare for the potential onslaught, Facebook is ramping up the number of people working on moderating content about things like suicide and self-harm, he added. Another concern is the spread of misinformation—always an issue online, but particularly during a public health crisis. As part of its wider response to Covid-19, Facebook also announced it’s rolling out a Coronavirus Information Center in the News Feed, where people can get updated information about the pandemic from authoritative sources.

As complaints over the spam glitch grew on Tuesday, those affected as well as some former Facebook employees wondered if it could be connected to the company’s recent workflow changes. “It looks like an anti-spam rule at FB is going haywire,” Facebook’s former security chief Alex Stamos said on Twitter. “Facebook sent home content moderators yesterday, who generally can't [work from home] due to privacy commitments the company has made. We might be seeing the start of the [machine learning] going nuts with less human oversight.”

Facebook’s vice president of integrity, Guy Rosen, quickly swooped in to clarify: “We’re on this—this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce. We're in the process of fixing and bringing all these posts back,” he wrote in a reply to Stamos on Twitter. (When asked for more detail as to what happened Tuesday evening, Facebook policy communications manager Andrew Pusateri directed WIRED to Rosen’s tweet.)

But researchers say problems like Tuesday night’s could become more common in the absence of a robust team of human moderators. YouTube and Twitter announced Monday that their contractors would be sent home as well, and that they too would be relying more heavily on automated flagging tools and AI-powered review systems. Leigh Ann Benicewicz, a spokesperson for Reddit, told WIRED on Tuesday that the company had “enacted mandatory work-from-home for all of its employees,” which also applies to contractors. She declined to elaborate about how the policy was impacting content moderation specifically. Twitch did not immediately return a request for comment.

With fewer moderators, the internet could change considerably for the millions of people now reliant on social media as their primary mode of communication with the outside world. The automated systems Facebook, YouTube, Twitter, and other sites use vary, but they often work by detecting things like keywords, automatically scanning images, and looking for other signals that a post violates the rules. They are not capable of catching everything, says Kate Klonick, a professor at St. John's University Law School and fellow at Yale’s Information Society Project, where she studies Facebook. The tech giants will likely need to be overly broad in their moderation efforts, to reduce the likelihood that an automated system misses important violations.
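To see why signal-based flagging is prone to the over-broad removals Klonick describes, consider a toy sketch of the approach. This is purely illustrative, assuming a simple keyword-and-heuristic design; it is not the actual system used by Facebook or any other platform, whose real pipelines combine machine-learning classifiers, image hashing, and behavioral signals.

```python
# A toy signal-based moderation filter -- an illustrative sketch only,
# not any platform's real system. It shows how keyword matching and
# crude heuristics can flag rule-breaking posts but also sweep up
# legitimate content.

SPAM_KEYWORDS = {"free money", "click here", "miracle cure"}

def flag_post(text: str, link_count: int = 0) -> bool:
    """Return True if the post trips any simple spam signal."""
    lowered = text.lower()
    # Signal 1: the post contains a known spam phrase.
    if any(keyword in lowered for keyword in SPAM_KEYWORDS):
        return True
    # Signal 2: many links packed into a short post is a common spam pattern.
    if link_count >= 3 and len(lowered.split()) < 20:
        return True
    return False
```

The failure mode is easy to reproduce: a post debunking a scam ("There is no miracle cure for Covid-19") matches the same keyword as the scam itself and gets flagged, which is exactly the kind of false positive that human reviewers would normally catch.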


“I don’t even know how they are going to do this. [Facebook’s] human reviewers don’t get it right a lot of the time. They are amazingly bad still,” says Klonick. But the automatic takedown systems are even worse. “There is going to be a lot of content that comes down incorrectly. It’s really kind of crazy.”

That could have a chilling effect on free speech and the flow of information during a critical time. In a blog post announcing the change, YouTube noted that “users and creators may see increased video removals, including some videos that may not violate policies.” The site’s automated systems are so imprecise that YouTube said it would not be issuing strikes for uploading videos that violate its rules, “except in cases where we have high confidence that it’s violative.”

As part of her research into Facebook’s planned Oversight Board, an independent panel that will review contentious content moderation decisions, Klonick has reviewed the company’s enforcement reports, which detail how well it polices content on Facebook and Instagram. Klonick says what struck her about the most recent report, from November, was that the majority of takedown decisions Facebook reversed came from its automated flagging tools and technologies. “There's just high margins of error; they are so prone to over-censoring and [potentially] dangerous,” she says.