What’s the first thing most of us do when we wake up in the morning? We reach for our smartphones and scan the headlines from our preferred news sources. We check Facebook, Twitter and Instagram.

As we consume endless images and videos, we’ve come to implicitly trust these sources to keep us updated on what is happening in the world.

But should we?

False information on the internet is becoming increasingly sophisticated with deepfakes: videos that use artificial intelligence and machine learning algorithms, trained on huge amounts of data, to manipulate even a single image into a video sequence that makes it appear as if a person did or said something they never actually did.

Deepfakes are one of the most alarming trends I have witnessed as a Congresswoman to date.

Video sharing technology allows us a first-hand look at events we otherwise could not have witnessed. Because of social media and videos, viewers in the US can watch on-the-ground footage from the protests in Hong Kong. We have access to the atrocities being carried out in Sudan. No matter where we are, we’re able to access events around the world by simply powering on our smartphone.

Yet with deepfake technology becoming increasingly sophisticated and available, this access is being complicated in potentially dangerous ways. Without a process in place, creators of maliciously intended deepfakes face no real consequences for producing videos that are hugely destructive to our societies.

Deepfakes in our democracy

What happens when it’s easy for anyone with a laptop and access to the internet to fake a video of a state leader declaring war on the United States, or vice versa?

With the 2020 election only months away, the threat of election interference is perhaps the most menacing and urgent when it comes to deepfakes.

We saw what happened with manipulated footage of Nancy Pelosi, and more recently the doctored video featuring Bernie Sanders in a disturbing singing performance. If the American public can be made to believe and trust altered videos of presidential candidates, our democracy is in grave danger. We are leaving ourselves vulnerable to continued foreign meddling.

Experts have been looking for ways to counter deepfakes, including training machine-learning programs to log the common mannerisms of public figures, match habitual facial expressions and gestures with certain actions, and flag content as suspect when those patterns don't line up.

Studying deepfake patterns has also revealed useful tells: in fake videos, for example, subjects tend to blink less often than real humans do, which makes some fakes detectable.
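To make the blink-rate tell concrete, here is a minimal sketch of how such a check might work. It assumes an upstream face-analysis step (not shown) has already labeled each video frame as eyes-open or eyes-closed; the function names, frame rate, and threshold are illustrative placeholders, not part of any production detector.

```python
def count_blinks(eyes_closed_per_frame):
    """Count blinks: each run of consecutive eyes-closed frames is one blink."""
    blinks = 0
    previously_closed = False
    for closed in eyes_closed_per_frame:
        if closed and not previously_closed:
            blinks += 1
        previously_closed = closed
    return blinks

def flag_suspicious(eyes_closed_per_frame, fps=30, min_blinks_per_minute=6):
    """Flag a clip whose blink rate falls well below typical human rates.

    Adults blink roughly 15-20 times per minute; the 6-per-minute cutoff
    here is an illustrative placeholder, not a validated threshold.
    """
    duration_minutes = len(eyes_closed_per_frame) / fps / 60
    if duration_minutes == 0:
        return False
    rate = count_blinks(eyes_closed_per_frame) / duration_minutes
    return rate < min_blinks_per_minute

# Example: a 60-second clip (1800 frames at 30 fps) with only two blinks.
frames = [False] * 1800
frames[300:305] = [True] * 5    # blink 1
frames[1200:1205] = [True] * 5  # blink 2
print(flag_suspicious(frames))  # prints True: 2 blinks/minute is suspiciously low
```

Real detectors are far more elaborate, but the principle is the same: compare a measurable behavior in the video against its known human baseline.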

I have put forth a proposal for the DEEPFAKES Accountability Act, the first of its kind in the House, to hold perpetrators accountable and protect victims of deepfake content. But by the time a deepfake video is detected, it has already been out there and has had the chance to influence untold numbers of internet viewers. Without a comprehensive, non-partisan partnership with industry experts in place, merely detecting when content is fake does not adequately prevent damage.

In other words, Congress needs to work with experts, from data scientists and machine-learning engineers to big tech leaders. As we develop better ways to detect fake content, we need to establish legal recourse, including irremovable digital watermarks and disclaimers on altered video content. We need to update statutes for identity theft and false impersonation to include digital impersonation. Social media companies need access to training on the latest deepfake detection technology. And victims of deepfakes need a private right of action so they have the opportunity to vindicate their reputations in court.

We need to work together to stop deepfakes from becoming the defining feature of the 2020 elections.

Deepfakes in money

The problem doesn’t stop at the elections, however. Deepfakes can alter the very fabric of our economic and legal systems. Recently, a deepfake video of Facebook CEO Mark Zuckerberg bragging about abusing data collected from users circulated on the internet. Its creators said it was produced to demonstrate the power of manipulation and had no malicious intent, yet it revealed how deceptively realistic deepfakes can be.
A statement by the CEO of any influential company can have an immediate impact on a company’s stock price, and therefore the stock market overall. Placed in the right way, deepfakes are a ready tool for economic manipulation, potentially interfering in financial markets by targeting high-profile industry leaders and companies.

Deepfakes in justice

Deepfakes threaten our criminal justice system. With the technology so readily available, virtually anyone could fake a surveillance video and submit it as evidence in a criminal trial. Deepfakes can look realistic enough that a jury would be unable to detect the fraud.

These instances are far less visible than videos of CEOs and politicians, but they have had a devastating impact on people’s lives. AI can easily change what reality looks like; similar algorithms have already assigned criminal records to the wrong person and, in some cases, been instrumental in sending people to jail. In one instance, a teenager was wrongly arrested based on data from Apple’s face recognition.

If deepfake technology continues to evolve without a check, video evidence could lose its credibility during trials. Innocent people could be charged for crimes they did not commit. And people who are guilty of crimes could go free, potentially posing a threat to civilians.

Deepfakes are a threat to the truth on which we base our democracy. If we get ahead of this threat now, we can still prevent permanent damage to the fabric of our society in the future.

There are great benefits to advancements in technology, but we need to look at our creations with a critical eye, and call them out when they inflict harm on our justice system, economy, and society as a whole.

We need to take action now. If we don’t, we’re on borrowed time before the threat of deepfakes becomes dire for our elections, our government, and our society.