Facebook, Twitter and Google aren't prepared for deepfakes ahead of the US presidential election, a top Congressman said after the tech giants sent letters last week about how they deal with high-tech doctored videos and other kinds of media manipulation. The companies "have begun thinking seriously" about the challenges, said Rep. Adam Schiff, a Democrat from California, but it's their responsibility to prevent their platforms from being weaponized.

"It's clear they are far from ready to accomplish that," Schiff said in a statement.

Deepfakes are sophisticated video forgeries, created automatically by artificial intelligence, that can make people appear to be doing or saying things they never did. Though computer manipulation of video has existed for decades, artificial intelligence is making deepfakes and other so-called synthetic media more accessible and harder to detect.

The companies -- YouTube-owner Google, Facebook and Twitter -- all responded in letters dated Wednesday to questions Schiff sent them two weeks earlier. After he oversaw Capitol Hill's first hearing about deepfakes, in June, Schiff sent written questions to the companies about manipulated media, deepfakes and a simplistically doctored video of House Speaker Nancy Pelosi that went viral earlier this year.

In its response, Facebook repeated stats that its third-party fact-checker Lead Stories released in May about how far the Pelosi video spread on Facebook's network. The original video got 2.3 million views and was shared 46,500 times before it was flagged as false, and views and shares dropped after Facebook's policies kicked in, the company said.

Twitter said it was aware of two variants of the Pelosi video -- one that had nine retweets and 797 video views, and a different one shared by President Donald Trump's account. That one was retweeted 31,100 times and had 6.37 million video views.

In its letter, Google didn't specify what the video's reach had been on YouTube, instead requesting "a closed-door briefing."

And none of the companies addressed the fact that additional copies of the Pelosi videos are being shared and viewed on their platforms.

As for deepfakes specifically, Facebook said it was considering options to better deal with them, which follows CEO Mark Zuckerberg's comments in June that Facebook may treat deepfakes as a different beast than the manipulated media it's used to. Facebook's letter also promised to publicly announce any significant changes to its approach to manipulated media as it learns more about machine-powered fakes.

Facebook also said it recently updated its policies so that any content identified as false or misleading by third-party fact-checkers is automatically cut off from running ads and making Facebook money.

Twitter said it'll remove deepfakes disrupting election integrity when the company becomes aware of them. It said deepfakes of "intimate media" made without the subject's consent -- basically, revenge porn or celebrity face-swaps into pornography -- would cause the original poster's account to be suspended.

Google was the most vague on deepfakes, saying it's involved in "advancing research and best practices" to defend against them and that its recommendation algorithms are always being developed to promote authoritative sources, which are less likely to mislead using a deepfake. Google also didn't respond to messages asking for more details.