Facebook announced late Monday night that it has banned manipulated videos — also known as deepfakes — ahead of the 2020 election.

The move was announced in a blog post confirming a report from The Washington Post.

“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” Monika Bickert, Facebook's vice president of global policy management, wrote in the post.


Bickert is set to testify before the House Energy and Commerce Committee on deepfake technology along with experts in the field.

Facebook's new policy explicitly does not cover parody or satire videos, or videos that omit or change the order of words.

Instead, the policy focuses on videos that have been “edited or synthesized” by technology like artificial intelligence in a way that is not "apparent to an average person."

That distinction means the new policy would likely not cover the video of Speaker Nancy Pelosi (D-Calif.) that went viral last year, which had been edited to make her appear intoxicated.

It also would seemingly not apply to the video clip, heavily circulated on Twitter last week, that appeared to show Democratic presidential candidate and former Vice President Joe Biden espousing white nationalist talking points.

Pelosi spokesman Drew Hammill hit Facebook over the rule change, tweeting that the "real problem is Facebook’s refusal to stop the spread of disinformation."


This is not Facebook's first effort to tackle deepfake content.

In September, the company launched the "Deepfake Detection Challenge," inviting researchers to compete for prize money by creating methods to automatically identify manipulated content.

The social media giant has also partnered with Reuters to provide courses for newsrooms on how to identify manipulated media.