Lawmakers and experts are sounding the alarm about "deepfakes," forged videos that look remarkably real, warning they will be the next phase in disinformation campaigns.

The manipulated videos make it difficult to distinguish between fact and fiction, as artificial intelligence technology produces fake content that looks increasingly real.

The issue has the attention of lawmakers from both parties on Capitol Hill.

“It is almost too late to sound the alarm before this technology is released — it has been unleashed … and now we are playing a bit of defense,” Senate Intelligence Committee Vice Chairman Mark Warner (D-Va.) told The Hill.


Asked whether this is the next phase of disinformation campaigns, Warner replied "Absolutely."

Experts say it is only a matter of time before advances in artificial intelligence technology and the proliferation of those tools allow any online user to create deepfakes.

“It is regarded by political and technology experts as the next weapon in the disinformation warfare,” Fabrice Pothier, senior advisor with the Transatlantic Commission on Election Integrity, told The Hill.

Pothier worries that technological advances will make it harder to detect false or doctored videos.

“I think it is probably going to be very hard to just use the human eye to distinguish something that is fake from something that is real,” he said.

Intelligence experts say the threat from deepfakes is immense and warn of potentially dangerous scenarios if the technology to create them is not reined in.

“This technology should be considered criminal, counterterrorism or even counterespionage behavior,” said Bob Anderson, principal at The Chertoff Group.

For example, experts posed hypotheticals such as terrorist groups like ISIS or al Qaeda manufacturing videos of American soldiers committing atrocities on the battlefield as propaganda; videos falsely showing political candidates making controversial remarks before an election; or CEOs announcing incorrect financial projections.

The fallout from such scenarios could be disastrous.

Sen. Marco Rubio (R-Fla.), a member of the Senate Intelligence Committee, warned that the threat needs to be taken seriously.

“America’s enemies are already using fake images to sow discontent and divide us. Now imagine the power of a video that appears to show stolen ballots, salacious comments from a political leader, or innocent civilians killed in conflict abroad,” Rubio told The Hill in a statement.

Peter Singer, a fellow studying war and technology at New America, said deepfakes will “definitely be weaponized” whether it is for “poisoning domestic politics” or by hostile nation-state actors to gain an edge on the battlefield.

Hany Farid, a computer science professor at Dartmouth College, said many forces are coming together to create a “perfect storm” to facilitate the rapid spread of fake content.

“We have the ability to create misinformation. We have the ability to easily distribute it widely. And then we have a welcoming public that is going to consume what is circulating around without giving it a second thought,” Farid told The Hill.

The 24-hour news cycle and expansion of social media platforms will only compound the problem, experts say.


Deepfakes are already here. In one prominent incident, actress Scarlett Johansson was victimized by deepfakes that superimposed her face onto pornographic videos.

“Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” she told The Washington Post in December, calling the issue a “lost cause.”

Other women who are less well known have recounted similar harrowing stories, claiming videos were produced for sinister intentions ranging from revenge to humiliation.

“Once it gets more widespread and cheaper, anyone can do it,” Pothier added, predicting that could be the case in as little as one or two years.

Experts also pointed to the reaction to other manipulated content to highlight the threat.

Last year, a doctored image purported to show a gun control advocate who survived the shooting at a Parkland, Fla., high school tearing up a copy of the Constitution. The actual, undoctored image showed the activist tearing up a bullseye target from a gun range.

Gun rights activists and Russian trolls, though, helped spread the false image, said Singer.

Other cases have resulted in bloodshed. Last year, Myanmar's military was believed to have pushed fake news on Facebook fanning anti-Muslim sentiment, which ignited a wave of killings in the country.

Experts and lawmakers said they expect a technological race between the tools used to create deepfakes and methods to counter them.

Warner says the solution requires collaboration between the tech community and policymakers. Attempts to legislate the issue could quickly become ineffective as technology changes.

“If we just do this legislatively without active involvement of technology companies, we are not going to get it right,” Warner warned.

Pothier said his group, which aims to combat election interference, is working with the London-based artificial intelligence firm ASI Data Science to develop a program that any user could run to determine "whether a video or audio file is a deepfake or not.”

Experts and lawmakers also discussed requiring manipulated videos to have a disclaimer noting they are edited.

Pothier said the notification could be like disclosures on campaign ads detailing who funded the advertisement.

Warner indicated to The Hill that he may introduce deepfake legislation which would include measures on validating identities.

Some experts said tech companies must also take more responsibility to remove fake content from their platforms.

Singer said one idea, which he jokingly referred to as the " 'Blade Runner' rule," is the concept that the public should have a “right to know whether you are interacting with a robot or not, or with something that is fake or not.”

Farid noted that many platforms are already “unmanageable” and don’t have the infrastructure in place to quickly take down forged media.

It's also an unprecedented challenge due to the growth of the internet. For example, 400 hours of video are uploaded to YouTube every minute, and 350,000 tweets are sent on Twitter per minute.

Farid also argued that firms are not incentivized to remove this type of content.

“Let’s stop pretending like Silicon Valley is not exactly like every other industry in the world, they are,” he said.

The Pentagon, in particular its Defense Advanced Research Projects Agency (DARPA), is also working on programs to help rapidly identify whether content is a deepfake.

There is also the question of legal recourse, which remains a gray area. Some argue there should be a way for victims to push back, while others will say the content is protected under the First Amendment.

“You could regulate commercial speech and fraudulent speech — there may be areas where the A.I. technology is used for parody that are protected. But if the intent is to deceive, there is nothing that I think that protects that type of abusive practice,” Rep. Adam Schiff (D-Calif.), chairman of the House Intelligence Committee, told The Hill.

Farid said First Amendment speech must be balanced with the new, emerging challenges of maintaining online security.

“What do we want to do about the people creating pornographic videos with Scarlett Johansson's face superimposed on other people? Is that something we want to allow legally in society? We need to think about that,” he said.

Those legal questions are certain to grow as more sophisticated deepfakes go online.

Other countries are already working to ban deepfakes.

Australia, Farid noted, banned such content after a woman was victimized by fake nudes, and the United Kingdom is also working on legislation.

Singer, however, threw cold water on the idea of an outright ban, noting that there are positive aspects to the technology, citing its innovative use by the advertising, marketing and film industries.

One such example, he noted, was the use of AI to create a realistic image of a young Harrison Ford, instead of using a less convincing look-alike actor, in the Star Wars movie “Solo.”

The debate over deepfakes and doctored content is only intensifying.

Warner and other lawmakers say the U.S. must be better prepared to combat potential threats than it was in 2016 or even the 2018 midterms.

But for all of the focus on government and tech companies, Farid said the public must also share responsibility.

“We have to stop being so gullible and stupid of how we consume content online,” Farid said.

“Frankly, we are all part of the fake news phenomenon.”