Wait—did Facebook just address a problem before it became a colossal nightmare?

That’s one way of looking at the company’s new policy on deepfake videos. In a blog post published Monday, Monika Bickert, Facebook’s vice president of global policy management, announced that deepfakes will be joining nudity, hate speech, and graphic violence on the list of Facebook’s categories of banned content. Over the last few years, the company has developed a reputation for reacting to problems only after they publicly blow up in Mark Zuckerberg’s face—whether it’s the spread of hate speech, Russian influence campaigns, or data breaches. So it’s notable that Facebook is taking a strong position on deepfakes before the technology has actually gotten sophisticated enough to create truly convincing fake videos of real people—a potentially apocalyptic scenario for humanity’s ability to tell truth from falsehood.

But there’s another way to look at the policy, which is that it doesn’t do very much to address the types of misleading videos that are already much more prevalent on the platform. (The vast majority of deepfakes currently involve crafting pornographic videos using real women’s faces—a terrible problem, but one already covered by Facebook’s ban on porn.) To run afoul of the policy, a video has to meet two criteria: It must be manipulated “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say,” and it must be “the product of artificial intelligence or machine learning.” What this leaves out, of course, is so-called “shallow fake” or “cheap fake” videos—the kind of selectively edited or out-of-context snippets that humans are already adept at creating and spreading. Bickert’s blog post explicitly says that the policy doesn’t apply to video “that has been edited solely to omit or change the order of words”—even though that can be just as misleading as a total fake. Last week, for example, a video made the rounds in which Joe Biden said, in an apparently xenophobic spirit, “Our culture is not imported from some African nation or some Asian nation.” In fact, the full quote in context made clear that Biden meant Americans have only themselves to blame for not taking sexual violence seriously enough.

“I think the new ban on AI-driven deepfakes is a step in the right direction, but it’s disappointing that Facebook’s new policy apparently won’t result in the removal of provably false videos doctored with less advanced means,” said Paul Barrett, the deputy director of NYU’s Center for Business and Human Rights and an expert on political disinformation. He pointed to high-profile examples like last year’s viral video doctored to make Nancy Pelosi appear to be slurring her speech. Facebook emphasizes that this type of content is subject to its third-party fact-checking program, and that when a video is flagged as false or misleading, users must click through a prominent disclaimer before viewing or sharing it. But Facebook won’t take the post down.

It’s not hard to see why the company would be more comfortable banning deepfakes than more old-school forms of misleading video. A system that can rely on automated software to sniff out the presence of AI-enabled manipulation doesn’t depend as heavily on human determinations of what’s true and what isn’t. On the other hand, the policy exempts parody and satire, which would seem to require precisely the kind of interpretive judgment that the company abjures to the point of outsourcing fact-checking to third parties.