On February 7, a day ahead of the Legislative Assembly elections in Delhi, two videos of Manoj Tiwari, president of the Delhi unit of the Bharatiya Janata Party (BJP), criticising the incumbent Delhi government of Arvind Kejriwal went viral on WhatsApp. In one video Tiwari spoke in English; in the other, he spoke in Haryanvi, a dialect of Hindi. “[Kejriwal] cheated us on the basis of promises. But now Delhi has a chance to change it all. Press the lotus button on February 8 to form the Modi-led government,” he said.

One may think this 44-second monologue was part of standard political outreach, but one thing about it was not standard: these videos were not real. This is what the original video was:

It’s 2020, and deepfakes have become a powerful and concerning tool that allows people to manipulate or fabricate video and audio content on the internet to make it seem convincingly real. They are much like the face animations in Hollywood films, though not nearly as expensive, and with a dark side. Since the technology’s introduction in 2017, A-list celebrities have seen their faces grafted onto existing pornographic videos, making deepfakes an infamous tool for misuse.

When the Delhi BJP IT Cell partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, it marked the debut of deepfakes in election campaigns in India. “Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.”

Tiwari’s fabricated video was used widely to dissuade the large Haryanvi-speaking migrant worker population in Delhi from voting for the rival political party. According to Bakshi, these deepfakes were distributed across 5,800 WhatsApp groups in Delhi and the National Capital Region (NCR), reaching approximately 15 million people.

So it’s not surprising that the prospect of building campaign businesses using deepfakes to influence the masses has alarmed fact-checking organisations and policy wonks. Many think deepfakes would take the ongoing war on disinformation and fake news to a whole new level—one that has already been dubbed a “public health crisis”.

Political Deepfakes: Genie out of the bottle

Ever since deepfakes blew up in 2017, the technology has been used extensively to create fake porn videos from existing celebrity footage and AI algorithms; in fact, 96 percent of deepfake videos online are non-consensual pornography. This unprecedented, and therefore problematic, spread has much to do with the fact that most of the code required to fabricate videos is publicly available on code-repository websites, making such videos easy to create.

On the political front, the technology gained attention first in 2018, when a comedian impersonating Barack Obama delivered a PSA video on how deepfakes can be deceptive.

In a lesser-known incident, a video appearance by Ali Bongo, the president of the East African nation of Gabon, was believed to be a deepfake, culminating in an unsuccessful coup by the country’s military. But the political fallout on account of deepfakes has been fairly limited, until now.

With deepfake election campaigns, though, we are crossing over into an era where it’s going to be impossible to trust what we see and hear. The original video of Tiwari, seated in front of a green wall and talking to the camera, was used to produce a forged version in which he says things he never actually said, in a language he doesn’t even know. In this case, the speech was scripted, vetted and approved by the BJP before the deepfakes were created. But it’s not difficult to imagine someone faking a video to issue threats or incite hatred against a specific section of the population.

While many of the most popular deepfake videos are complete face swaps, a subtler version alters only the lip movements of an original video to match the target audio. The Ideaz Factory claims to have done the latter for Tiwari’s video. “We used a ‘lip-sync’ deepfake algorithm and trained it with speeches of Manoj Tiwari to translate audio sounds into basic mouth shapes,” says Sagar Vishnoi, the chief strategist at The Ideaz Factory. The firm hired a dubbing artist to impersonate Tiwari reading the script in Haryanvi, and the resulting audio was then superimposed on the video.
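The mapping Vishnoi describes, from audio sounds to basic mouth shapes, can be illustrated at a toy level. The sketch below is purely conceptual and is not the firm’s actual code: real lip-sync systems learn this mapping with neural networks trained on footage of the target speaker, but the core idea is translating speech sounds (phonemes) into visually distinct mouth shapes (“visemes”) that are then rendered onto the source video, frame by frame.

```python
# Toy illustration of the audio-to-mouth-shape ("viseme") mapping at the
# heart of lip-sync deepfakes. Real systems learn this mapping from data;
# here it is a hand-made lookup table, purely for illustration.

# Hypothetical phoneme-to-viseme table. Many phonemes share one viseme,
# because they look the same on the lips (e.g. "p", "b", "m").
PHONEME_TO_VISEME = {
    "p": "closed", "b": "closed", "m": "closed",
    "f": "teeth-on-lip", "v": "teeth-on-lip",
    "aa": "open-wide", "ae": "open-wide",
    "iy": "spread", "eh": "spread",
    "uw": "rounded", "ow": "rounded",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence (e.g. from dubbed audio) to mouth shapes,
    one per time step; unknown sounds fall back to a neutral mouth."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# A real pipeline would then composite each mouth shape onto the face
# region of the source footage before re-encoding the video.
dubbed_audio = ["m", "aa", "iy", "uw"]
print(phonemes_to_visemes(dubbed_audio))
# ['closed', 'open-wide', 'spread', 'rounded']
```

This is why only the mouth region needs to be altered: the rest of the original footage, posture, lighting and gestures included, stays untouched, which is what makes this class of deepfake harder to spot than a full face swap.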

BJP’s Bakshi says the response to those videos has been encouraging. “Housewives in the group said it was heartening to watch our leader speak my language,” he said, recounting a comment from a WhatsApp group. After the “viral” response, the party went ahead with a second video of Tiwari, this one in English, targeted at “urban Delhi voters.”

VICE shared the videos with researchers at the Rochester Institute of Technology (RIT) in New York, who believe these were indeed deepfakes but are awaiting confirmation from their purpose-built software for automatically detecting deepfakes. The Ideaz Factory declined to share more information on the technology it used, but Saniat Javid Sohrawardi, a deepfake researcher at RIT, says that “judging by the timeline of their work, I'd think that they used Nvidia's vid2vid code.” The only other well-known algorithm for this task is face2face, an application that was used to make the Obama deepfake video.

In India, though, deepfakes still have some rough edges. In Tiwari’s videos, a few members of WhatsApp groups pointed out a brief anomaly in the mouth movement. But Vishnoi maintains that, minor kinks aside, “we have used a tool that has so far been used only for negative or ambush campaigning and debuted it for positive campaign.” He admits that the technology his firm uses is not yet mature enough to synthetically generate the target’s voice. But the firm plans to scale Tiwari’s “positive” deepfake campaign to the upcoming Bihar elections and the 2020 US elections.

Tarunima Prabhakar, cofounder of Tattle, a civic tech project building a searchable archive of content circulated on WhatsApp, says, “The problem with the 'positive' campaign pitch is that it puts the genie out of the bottle.” Even if the firm self-regulates and declines to produce nefarious videos, other, possibly less overt, companies will find ways to weaponise the technology. “To say only some forms of deepfakes are allowed by political parties, allows for a lot of subjectivity and interpretive power on who defines those forms,” Prabhakar says.

Blurring lines of truth

The growing number of deepfake tools and services has made it easy for non-experts to create deepfake videos. The Ideaz Factory is just one of several firms that have sprung up in India to profit from this access. There are deepfake portals and individual operators across the world advertising custom deepfakes for as little as $30. Needless to say, this low barrier to entry has produced more covert deepfake operators, with the number of deepfakes online doubling to 14,678 in 2019.

This also means that most deepfake content will inevitably bypass the fact-checkers and tech experts trying to curb the menace. Pratik Sinha, the founder of AltNews, an Indian fact-checking website that verifies claims and assertions made on social media, tells VICE, “At this point in time, it’s impossible to fact-check or verify something that you don’t recognise is doctored.” When VICE shared the videos with Sinha to check their validity, AltNews was unable to identify them as fake. “This is dangerous,” says Sinha, whose organisation has fact-checked thousands of morphed images and manipulated videos in its three years of operation. “It’s the first time I’ve seen something like this emerge in India.”

In a country like India, where digital literacy is nascent, even low-tech video manipulation has led to violence. In 2018, more than 30 deaths were linked to rumours circulated on WhatsApp in India. “Deepfakes are going to be a supercharger on the kind of misinformation we have,” Sinha said. While tools to reliably detect deepfakes are not yet widely available, researchers have begun developing a few. Reality Defender, a browser plugin for detecting fake videos, is one of them.

However, experts like Sinha believe that no firm should be allowed to run a legitimate business around deepfakes for election campaigns in India. In October last year, the US state of California passed a bill making it illegal to circulate deepfake videos of politicians within 60 days of an election; the legislation was signed to protect voters from misinformation. But Prabhakar adds that in India, outlawing deepfakes is doomed to fail in implementation, as they would never be openly endorsed by political parties. “They would only continue to be operated by shadow firms,” she says.

However, there could be a solution. Vishnoi thinks there should be government policy on misinformation as a whole, and that the way to counter negative deepfakes is through awareness campaigns. Dr Matthew Wright, the director of the Center for Cybersecurity at RIT, sees deepfakes in election campaigns as “a potentially positive use case as long as there is disclosure.” “Why should our political leaders only be accessible to those who can read, assuming the translation is easily available in the right written language?” he asks. “But if it’s used deceptively, that’s a different story, and I’m sure some will blur the lines.”