“We can now catch this sort of thing — proactively,” Mr. Schroepfer said.

The problem was that the marijuana-versus-broccoli exercise was a sign not just of progress but also of the limits Facebook was hitting. Mr. Schroepfer’s team has built A.I. systems that the company now uses to identify and remove pot images, nudity and terrorist-related content. But the systems do not catch all of those images, because unexpected content keeps appearing, and millions of nude, marijuana-related and terrorist-related posts continue to reach the eyes of Facebook users.
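To make the idea concrete, here is a minimal sketch of the kind of classifier such systems are built on: a pretrained image model fine-tuned on labeled examples of two easily confused classes. The model choice (ResNet-18), the folder layout and the hyperparameters are illustrative assumptions, not details of Facebook’s actual systems.

    # A minimal sketch, assuming PyTorch and torchvision are installed
    # and labeled images sit in data/train/marijuana and
    # data/train/broccoli. Hypothetical setup, for illustration only.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Standard preprocessing for an ImageNet-pretrained backbone.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # One folder per class; ImageFolder assigns the labels.
    train_set = datasets.ImageFolder("data/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32,
                                         shuffle=True)

    # Reuse a pretrained backbone; swap in a new two-class head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    # For brevity, train only the new head.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

A model like this can only be as good as its training set, which is why content it has never seen anything like can slip past it.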

Identifying rogue images is also one of the easier tasks for A.I. It is harder to build systems that identify false news stories or hate speech. False news stories can easily be fashioned to appear real. And hate speech is hard because machines struggle to recognize linguistic nuance. Nuances differ from language to language, and the context around a conversation evolves rapidly as it unfolds, making it difficult for machines to keep up.
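A toy example suggests why wording alone is not enough. The word list and sentences below are invented for illustration: a scorer that merely counts flagged words, with no model of speaker, target or intent, rates an insult and a harmless chore identically.

    # A naive keyword scorer: it sees words, not context, so it cannot
    # tell abusive usage from benign usage of the same term.
    OFFENSIVE_TERMS = {"trash"}  # invented placeholder for real lexicons

    def keyword_score(text):
        return sum(1 for word in text.lower().split()
                   if word in OFFENSIVE_TERMS)

    attack = "those people are trash"      # abusive usage
    benign = "please take out the trash"   # harmless usage
    print(keyword_score(attack), keyword_score(benign))  # both print 1

Real systems are far more sophisticated than this, but the underlying difficulty is the same: meaning lives in context that is hard to encode.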

Delip Rao, head of research at the A.I. Foundation, a nonprofit that explores how artificial intelligence can fight disinformation, described the challenge as “an arms race.” A.I. learns from what has come before. But often there is nothing to learn from: behavior changes, and attackers create new techniques. Inevitably, it becomes a game of cat and mouse.

“Sometimes you are ahead of the people causing harm,” Mr. Rao said. “Sometimes they are ahead of you.”

On that afternoon, Mr. Schroepfer tried to answer our questions about the cat-and-mouse game with data and numbers. He said Facebook now automatically removed 96 percent of all nudity from the social network. Hate speech was tougher, he said: the company caught 51 percent of it on the site. (Facebook later said this had risen to 65 percent.)

Mr. Schroepfer acknowledged the arms race element. Facebook, which can automatically detect and remove problematic live video streams, did not identify the New Zealand video in March, he said, because it did not really resemble anything uploaded to the social network in the past. The video gave a first-person viewpoint, like a computer game.

In designing systems that identify graphic violence, Facebook typically works backward from existing images — images of people kicking cats, dogs attacking people, cars hitting pedestrians, one person swinging a baseball bat at another. But, he said, “none of those look a lot like this video.”
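One way to picture that limitation: matching systems compare new uploads against fingerprints of previously identified material, so genuinely novel footage has nothing to match against. The sketch below uses perceptual hashing via the open-source imagehash library as an illustrative stand-in for such matching; the file paths and distance threshold are invented for the example, and this is not Facebook’s actual technology.

    # Compare an upload against hashes of previously identified
    # violating images; if nothing in the reference set resembles it,
    # no match fires.
    from PIL import Image
    import imagehash

    # Fingerprints of content that has been caught before.
    known_hashes = [imagehash.average_hash(Image.open(p))
                    for p in ["known/clip_frame1.png",
                              "known/clip_frame2.png"]]

    def looks_familiar(path, max_distance=8):
        h = imagehash.average_hash(Image.open(path))
        # Hamming distance to each known hash; truly novel content
        # stays far from everything in the reference set.
        return any(h - known <= max_distance for known in known_hashes)

    print(looks_familiar("upload/new_frame.png"))

A first-person video that resembles nothing in the reference set, as Mr. Schroepfer described, would sail past a matcher built this way.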