Facebook has problems. Fake news. Terrorism. Russian propaganda. And maybe soon regulation. The company’s solution: Turn them into artificial-intelligence problems. The strategy will require Facebook to make progress on some of the biggest challenges in computing.

During two congressional sessions last month, CEO Mark Zuckerberg referenced AI more than 30 times in explaining how the company would better police activity on its platform. The man tasked with delivering on those promises, CTO Mike Schroepfer, picked up that theme in a keynote and interview at Facebook’s annual developer conference Wednesday.

Schroepfer told thousands of developers and journalists that “AI is the best tool we have to keep our community safe at scale.” After the congressional hearings, critics accused Zuckerberg of invoking AI to mislead people into thinking the company’s challenges are simply technological. Schroepfer told WIRED Wednesday that the company had made mistakes. But he said that for Facebook—with more than 2 billion people on its service each month—AI is the only way to address them.

Even if the company could afford to have humans check every post, it wouldn’t want to. “If I told you that there was a human reading every single one of your posts before it went up it would change what you would post,” Schroepfer says.

Facebook already uses automation to police its platform, with some success. Since 2011, the company has used a tool called PhotoDNA, originally developed by Microsoft, to detect child pornography, for example. Schroepfer says the company’s algorithms have steadily improved enough to flag other images it wants to keep off its platform.
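PhotoDNA itself is proprietary, but it belongs to the family of perceptual hashes: reduce an image to a coarse fingerprint, then compare fingerprints so that near-duplicates of a known banned image still match. A toy "average hash" (my illustration, not Microsoft's algorithm) shows the idea:

```python
# Illustrative sketch only: a simple perceptual "average hash".
# Real systems like PhotoDNA use far more robust fingerprints.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (e.g. a downscaled image)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: brighter than the image's mean, or not.
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; small distance = likely the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# A known image's fingerprint vs. a slightly altered copy of it.
known   = [[10, 200], [30, 220]]
altered = [[12, 198], [33, 219]]  # minor pixel noise
print(hamming(average_hash(known), average_hash(altered)))  # prints 0
```

Because matching happens on fingerprints rather than raw pixels, small edits to a flagged image do not defeat the check.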

First came nudity and pornography, which Schroepfer describes as “on the easier side of the spectrum to identify.” Next came photos and videos that depict “gore and graphic violence”—think ISIS beheading videos—which at a pixel-by-pixel level are difficult to distinguish from more benign imagery. “We're now fairly effective at that,” Schroepfer says.

But tough problems remain. Schroepfer says Facebook in recent months has been investing “a whole heck of a lot more” into the teams working on problems like election integrity, bad ads, and fake news. “It's fair to say we've pivoted a whole lot of the energy of the company over the last number of months towards all of these issues,” he says. Zuckerberg said earlier this week that he expected to spend three years building up better systems to catch unwanted content.

Facebook’s plan for an AI safety net faces larger challenges on problems that require machines to read, not see. For software to help fight fake news, online harassment, and propaganda campaigns like that mounted by Russia during the 2016 election, it needs to understand what people are saying.


Despite the success of web search and automated translation, software is still not very good at understanding the nuance and context of language. Facebook’s director of AI and machine learning, Srinivas Narayanan, illustrated the challenge in Wednesday’s keynote using the phrase “Look at that pig!” It might be welcome to someone sharing a snap of their porcine pet, less so as a comment on a wedding photo.

Facebook points to some progress with algorithms that read. On Wednesday, the company said that a system that looks for signs a person may harm themselves had prompted more than 1,000 calls to first responders since it was deployed late last year. Language algorithms helped Facebook remove almost 2 million pieces of terrorist-related content in the first quarter of this year.

Schroepfer says Facebook has improved its systems for detecting bullying by training them on fake data from software taught to generate insults. In a process called adversarial training, both the abuse hurler and blocker become more effective over time. That places Facebook among a growing number of companies using synthetic, or fake, data to train machine learning systems.
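The adversarial loop described above can be sketched schematically. Everything here is a toy of my own construction, not Facebook's pipeline: a "generator" mutates known insults into the kinds of obfuscations abusers try, and a "blocker" is retrained on each synthetic batch so both sides sharpen over time:

```python
# Schematic sketch of adversarial training on synthetic data (toy code).
import random

INSULT_SEEDS = ["you are a loser", "what an idiot"]

def generate_variants(seed, n=3):
    """Crude 'adversary': perturb seeds with common obfuscation tricks."""
    tricks = [
        lambda s: s.replace("o", "0"),   # letter-for-digit swaps
        lambda s: s.upper(),             # shouting
        lambda s: s.replace(" ", "  "),  # spacing games
    ]
    return [random.choice(tricks)(seed) for _ in range(n)]

class Blocker:
    """Stand-in for a learned model: memorizes normalized forms."""
    def __init__(self):
        self.known = set()
    def normalize(self, text):
        return "".join(text.lower().replace("0", "o").split())
    def train(self, examples):
        self.known.update(self.normalize(e) for e in examples)
    def flags(self, text):
        return self.normalize(text) in self.known

blocker = Blocker()
for _ in range(3):  # each round: generate harder examples, retrain
    synthetic = [v for seed in INSULT_SEEDS for v in generate_variants(seed)]
    blocker.train(INSULT_SEEDS + synthetic)

print(blocker.flags("Y0U ARE A L0SER"))  # prints: True
```

In a real deployment both sides would be learned models rather than hand-written rules, but the shape of the loop is the same: synthetic abuse in, a tougher filter out.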