For months, Facebook has struggled with deceptively altered videos, but late Monday evening, the company cracked, announcing that it would ban deepfakes as part of a new policy on manipulated media. But while Facebook’s new policy bans many of the most egregious examples of manipulation, the new push has been bogged down by concerns over the new rules’ limitations and some unusual confusion about exactly what the policy covers.

Most ominously, the new policy has been met with harsh criticism from members of Congress — the very people Facebook was hoping to impress ahead of a congressional hearing about online deception that’s scheduled for Wednesday morning.


In the initial blog post outlining the change, Facebook’s vice president of global policy management, Monika Bickert, said videos that have been edited “in ways that aren’t apparent to an average person and would likely mislead someone” and were created by artificial intelligence or machine learning algorithms would be removed under the new policy. Yet, two of the most hotly criticized altered videos — one edited to make House Speaker Nancy Pelosi sound drunk and another clipping a statement from former Vice President Joe Biden, suggesting he made racist remarks — were likely created using widely available editing software similar to iMovie or Photoshop.

Because those videos weren’t edited to make Biden or Pelosi say new words, they probably wouldn’t be covered by Facebook’s new policy. If Facebook’s fact-checkers rated them as false or misleading, the company could still add links to news articles debunking them, but it would likely leave them up. That’s what happened with the Pelosi video last year, and it appears to be the company’s plan for future cases.

Not surprisingly, many members of Pelosi’s staff are unimpressed with the new rules. In a statement to The Verge responding to the updated policy, Pelosi’s deputy chief of staff Drew Hammill acknowledged the change, but said, “Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation.”

Even Biden’s campaign railed against Facebook for doing the bare minimum to fight disinformation online. Biden campaign spokesman Bill Russo echoed the Pelosi team’s statement, saying, “Facebook’s announcement today is not a policy meant to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress.”

“Banning deepfakes should be an incredibly low floor in combating disinformation,” Russo continued.


In the hours after the new policy was announced, even Facebook officials seemed confused about whether the company was banning deepfakes in political ads. After some flip-flopping with reporters over whether politicians could pay to promote deepfakes, Facebook landed on prohibiting them from being used in advertising. “Whether posted by a politician or anyone else, we do not permit manipulated media content in ads,” a Facebook spokesperson told The Verge.

This decision is a slight reversal of what Facebook and its CEO Mark Zuckerberg said last year about moderating political speech. Last October, President Donald Trump’s reelection campaign ran an ad on Facebook making misleading claims about Joe Biden and his son Hunter’s relationship with the Ukrainian government. That ad set off a debate over whether social media platforms like Facebook and Twitter should allow politicians to lie in digital advertisements. Weeks after the false Biden ad was posted, Twitter announced that it would be banning all political advertisements. Facebook didn’t announce any changes, effectively allowing politicians around the globe to smear their opponents online unchecked.

“I shared inaccurate information earlier. I sincerely regret the error and need to correct the record. With regard to our new policy, whether posted by a politician or anyone else, we do NOT permit in ads manipulated media content, as defined here: https://t.co/CAtmBPczlG,” Andy Stone (@andymstone) tweeted on January 7th, 2020.

That decision to let politicians say whatever they want in ads began to erode on Tuesday, when Facebook said it wouldn’t allow deepfakes anywhere on its platform. Since 2016, however, Facebook has given itself a newsworthiness exemption when deciding whether to remove posts that violate its community standards.

According to a September blog post, Facebook’s newsworthiness exemption doesn’t apply to advertising, but it does apply to feed posts. That could give politicians some wiggle room when it comes to posting manipulated videos: if Facebook deems a future deepfake or shallowfake “newsworthy,” the video could be left up for people to like and share across the platform.

One thing’s for sure: lawmakers are paying attention. Tomorrow, the House Energy and Commerce Committee will be holding a hearing on deepfakes and synthetic media. Bickert, who authored Monday’s blog post, will be representing Facebook and taking questions from lawmakers.

“As with any new policy, it will be vital to see how it is implemented and particularly whether Facebook can effectively detect deepfakes at the speed and scale required to prevent them from going viral,” House Intelligence Committee Chairman Adam Schiff (D-CA) said in a statement on Tuesday.