On Monday, Facebook announced a new policy to ban artificial intelligence-generated “deepfakes” as well as videos “edited or synthesized … in ways that aren’t apparent to an average person.” In theory, that sounds like a welcome effort to curb disinformation.

But the new policy won’t, for example, cover subtly edited videos like last year’s slowed-down viral Nancy Pelosi clip. And politicians and their ads remain off limits to Facebook’s third-party fact checkers. A company spokesman told me Tuesday that if Facebook determines a politician has shared manipulated media in an ad, Facebook will remove it. But, as far as I can tell, bogus content, even outright lies, is still allowed, as long as it isn’t manipulated by artificial intelligence.

And the company left a big loophole: Facebook will not censor political speech if it is in the public interest to see it. “If a politician posts organic content that violates our manipulated media policy, we would evaluate it by weighing the public interest value against the risk of harm. Our newsworthy policy applies to all content on Facebook, not just content posted by politicians,” the spokesman wrote in an email.

In an effort to assuage the public on political disinformation, Facebook has only muddied the waters. Would Facebook allow distribution of a deepfaked video if it were shared by the president as a rationale for the bombing of Iran? Such a hypothetical post would certainly be newsworthy, though such a video would be a clear violation of Facebook’s rules. What if the president shared a less manipulated “shallowfake” video, like the Pelosi clip, of a potential 2020 opponent? These are, admittedly, edge cases, but when it comes to foreign policy, national security and election integrity, edge cases are often the highest-profile and most consequential ones.