
This fall, Gabon was facing an odd and tenuous political situation. President Ali Bongo had been out of the country since October, receiving medical treatment in Saudi Arabia and London, and had not been seen in public. People in Gabon and observers outside the country were growing suspicious about the president’s well-being, and the government’s lack of answers only fueled doubts; some even said he was dead. After months of little information, on December 9th, the country’s vice president announced that Bongo had suffered a stroke in the autumn but remained in good shape.

Despite such assurances, civil society groups and many members of the public wondered why Bongo, if he was well, had not made any public appearances, save for a few pictures of him released by the government along with a silent video. Amid the speculation, the president’s advisors promised that he would be delivering his customary New Year’s address.


But when Gabon’s government released the video, it raised more questions than it answered. Some Gabonese, seeing the video, thought there was little left to doubt about their president’s health. But Bongo’s critics weren’t sold. One week after the video’s release, Gabon’s military attempted an ultimately unsuccessful coup—the country’s first since 1964—citing the video’s oddness as proof something was amiss with the president.

While a variety of theories about the video have circulated in the country, Bruno Ben Moubamba, a Gabonese politician who has run against Bongo in the previous two elections, argues that the video is a so-called deepfake: the video equivalent of a photoshopped image, in which software creates forged footage of people saying and doing things that they never actually said or did.

While most media coverage of deepfakes has focused on horror scenarios of their use against the U.S. and other Western countries, experts warn that deepfakes could wreak the most havoc in developing countries, which are often home to fragile governments and populations with nascent digital literacy. Rather than, say, a fake video of Amazon CEO Jeff Bezos announcing his retirement and triggering a stock dive, misinformation in some countries could lead to coups or gender- and ethnically motivated violence, threatening the stability of entire states.

The types of text- and image-based misinformation that have led to killings after spreading across Facebook in Sri Lanka and Myanmar, and on WhatsApp in India, could be supercharged by even more convincing deepfakes. Deepfakes could also become a tool wielded by autocratic governments to misinform and oppress their citizens. It’s not difficult to imagine a dictator faking a video of a foreign adversary issuing threats to scare a population into line with an authoritarian government, or, say, a power-hungry government using a deepfaked video of a president to mask health ailments.


While experts say it’s impossible to definitively conclude whether Bongo’s New Year’s address is a deepfake, Moubamba says his belief is fed by a number of factors. He correctly points out that Bongo’s face and eyes seem “immobile” and “almost suspended above his jaw.” He also rightly noted that Bongo’s eyes move “completely out of sync with the movements of his jaw.”

“The composition of several elements of different faces, fused into one are the very specific elements that constitute a deepfake,” Moubamba said.

In the broadcast, Bongo remains seated in front of a placid, coral pink backdrop. The camera cuts between two frontal angles of Bongo, which could signal that the short video was edited and not shot in one continuous take.

Some, particularly Bongo’s critics and political opponents, weren’t satisfied. Julie Owono of the digital rights organization Internet Without Borders said that Gabonese activists were skeptical of Bongo’s seemingly rigid appearance in the video, and critics took to Twitter to point out elements that made them think it might be a deepfake. In addition to a general sense that something about the video felt off, according to Owono, they zeroed in on two specific aspects: how little Bongo blinks during the broadcast (around 13 times over two minutes, less than half the typical rate), and that his speech patterns seemed different than in other videos of him, both possible signs of a deepfake.
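The blink-count check the critics applied echoes a heuristic deepfake researchers have explored: early face-swapping models were rarely trained on images of people with their eyes closed, so synthetic faces tended to blink far less often than real ones. As a rough sketch of how such a check might work (the function names and threshold values here are hypothetical illustrations, not the method Gabonese observers actually used), one could count blinks in a per-frame “eye openness” signal of the kind a facial-landmark tracker produces:

```python
# Illustrative sketch, not the detection method used on the Bongo video.
# Assumes a facial-landmark tracker has already produced one eye-openness
# value per video frame (higher = eyes more open); values are hypothetical.

def count_blinks(eye_openness, threshold=0.2, min_closed_frames=2):
    """Count blinks: runs of at least min_closed_frames consecutive
    frames where the eye-openness measure drops below threshold."""
    blinks = 0
    closed_run = 0
    for value in eye_openness:
        if value < threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:  # blink still in progress at end
        blinks += 1
    return blinks

def blinks_per_minute(eye_openness, fps=30):
    """Convert a raw blink count into a rate comparable to the
    human average of roughly 15-20 blinks per minute."""
    minutes = len(eye_openness) / (fps * 60)
    return count_blinks(eye_openness) / minutes if minutes else 0.0

# Toy signal: mostly open eyes (~0.3) with two brief closures.
signal = [0.3] * 50 + [0.1] * 3 + [0.3] * 50 + [0.05] * 3 + [0.3] * 44
print(count_blinks(signal))  # 2
```

An anomalously low rate from a check like this would not prove a video is fake, for the same reason the experts quoted here hedge: illness, medication, or bright studio lighting can also suppress blinking.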

One video expert Mother Jones spoke with doesn’t disagree with Moubamba’s assessment.

“I just watched several other videos of President Bongo and they don’t resemble the speech patterns in this video, and even his appearance doesn’t look the same,” says Hany Farid, a computer science professor at Dartmouth who specializes in digital forensics. “Something doesn’t look right,” he said, while noting that he could not make a definitive assessment.


Uncertainty is widespread: Deeptrace Labs, a fake-video detection firm, told Mother Jones it did not believe the video was a deepfake, though it cautioned that it, too, could not make a definitive assessment.

Julie Owono, the executive director of the digital rights organization Internet Without Borders, who flagged the deepfake controversy in Gabon to Mother Jones, explained that Bongo’s critics believe he has every reason to hide the true status of his health to protect his family’s 43-year-long rule. If at any point Gabon’s president is found to be unfit to lead, the country’s constitution says the Senate president should become interim president and a special election should be held within 60 days. The Bongo family has dominated presidential elections since 1967, when Bongo’s father, Omar Bongo, became president. (Ali took over in 2009.) There has been speculation that Bongo’s Gabonese Democratic Party wanted to be certain they had a candidate ready before potentially triggering that process by conceding anything damning about the president’s health.

“It’s already a national debate whether or not President Ali Bongo was healthy,” Owono said. “The fact that he wasn’t able to designate a successor opens the field—opens the elections—to a wide array of candidates not chosen by the ruling family. So civil society groups believe this gives the government the motivation to lie.”

Environmental activist Marc Ona Essangui, a prominent member of Gabonese civil society, believes that the video is concerning and part of government attempts to mask health issues that could make Bongo unfit to serve. But he said over email that he thinks it actually is Bongo in the video, and that any differences in his appearance and speech patterns could be explained by a stroke.

Deepfake experts like Aviv Ovadya, the chief technologist for the University of Michigan’s Center for Social Media Responsibility, along with Dartmouth’s Farid, said that while it’s very difficult to know if the video is actually a deepfake, just the possibility is damaging.

“Whether or not it is real, it’s still creating the uncertainty,” Ovadya said. That uncertainty creates costs, he explained, either for media organizations forced to spend time and resources examining such videos, or for societies that are thrown into debates about authenticity.


Lower-tech versions of disinformation have already led to violence in some countries, and civil society and activist groups in those places expect deepfakes to make media manipulation even worse. If deepfakes do take hold, social media platforms like Facebook, which has already been condemned by the United Nations for its role in helping spread misinformation and hate, will be ground zero for their proliferation.

“Everybody is praising how helpful AI will be for African governments. But no one is mentioning the risks, which are not science fiction,” Owono said. “We’ve seen what’s possible with written content, but we haven’t even seen yet what’s possible with video content.”

“It’s hard enough to verify information as it is, with parts of the country off limits and the press under threat. But this would essentially be rumors on steroids, which Myanmar is utterly unprepared for,” said Victoire Rio, an advisor to the Myanmar Tech Accountability Network. Myanmar, where Rio is based, has already seen some of the world’s most harrowing impacts of misinformation. The Myanmar government and anti-Muslim activists have used Facebook to spread false information that has incited increased ethnic violence against the country’s repressed Rohingya minority.

Rio explained that Myanmar and countries like it with fragile, conflict-ridden governments are still acclimating to the Internet’s rapid information flow; deepfakes add a complicated layer to an already new media landscape.

“Myanmar basically went from information scarcity to information overload. People used to get their info from friends and family and rely on that interpersonal bond and trust,” she explained. “The digital world is a whole different beast, and it is going to take people time to develop resilience to disinformation tactics.”

Internet Without Borders’ Owono says internet platforms’ attempts to stem misinformation in the developing world pale in comparison to their efforts in western countries. “Platforms have to come up with solutions. The more time they take to tackle issues the worse things will get,” she warns.


Tech companies insist they are paying attention. A YouTube spokesperson directed Mother Jones to a report on its efforts to fight disinformation, which says the company is working with experts on deepfakes and “investing in research to understand how AI might help detect such synthetic content as it emerges.” The report notes that YouTube and its parent company Google are exploring ways to help civil society, academia, newsrooms, and governments develop their own detection tools for AI-based misinformation tools like deepfakes.

A Facebook spokesperson said that the company is also paying attention to the looming threat of deepfakes.

“We’ve expanded our ongoing efforts to combat manipulated media to include tackling deepfakes,” a spokesperson said, adding that the company is “investing in new technical solutions, learning from academic research, and working with others in the industry to understand deepfakes and other forms of manipulated media.” The company said that it has engineering teams focused on designing tools to spot manipulated pictures, audio and video.

Smaller, fragile nations don’t have the checks that help limit deepfakes’ impact in stronger nations, and will be particularly reliant on the efforts of companies like Facebook and YouTube to counter them. While experts predict that no country will be immune, developed nations with a robust, independent press and other democratic checks are less likely to be susceptible to government attempts to manipulate public opinion with deepfake videos.

Farid told Mother Jones that roughly half a dozen politicians from developing countries all around the world have asked him to analyze videos often purporting to capture them in a compromising sexual situation. (Citing confidentiality agreements, Farid declined to reveal the politicians’ names or home countries.) In a new era where video’s authenticity will be robustly questioned, Farid worries the lines of truth will be blurred in ways harmful to domestic stability.

“In some ways it doesn’t matter if it’s fake. That’s not the underlying issue. It can be used to just undermine credibility and cast doubt,” Farid said. “I don’t think we’re ready as a society. Our legislators aren’t ready. Technology companies aren’t ready. It’s going to hit us hard and we’re going to be scrambling to try to contain it.”