Project Maru is not nearly as forgiving as the human brain. When Gfycat's engineers ran deepfakes through Maru, it would register that a clip resembled, say, Nicolas Cage, but not closely enough to issue a positive match, because the face isn't rendered perfectly in every frame. Using Maru is one way that Gfycat can spot a deepfake—it smells a rat when a GIF only partially resembles a celebrity.
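Conceptually, that "resembles but doesn't quite match" signal can be sketched as a thresholded similarity test. This is a minimal illustration, not Gfycat's actual implementation; the embeddings, threshold values, and function names are all hypothetical stand-ins for whatever face-recognition model Maru uses:

```python
import numpy as np

# Hypothetical thresholds: above POSITIVE_MATCH the clip is confidently the
# celebrity; between the two, it resembles them only partially -- the
# telltale signature of an imperfectly rendered deepfake.
POSITIVE_MATCH = 0.80
PARTIAL_MATCH = 0.50

def cosine_similarity(a, b):
    """Similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_clip(frame_embeddings, reference):
    """Compare per-frame face embeddings against a celebrity's reference
    embedding and average the similarity across frames."""
    sims = [cosine_similarity(f, reference) for f in frame_embeddings]
    mean_sim = sum(sims) / len(sims)
    if mean_sim >= POSITIVE_MATCH:
        return "match"        # consistent with genuine footage
    if mean_sim >= PARTIAL_MATCH:
        return "suspicious"   # partial resemblance: possible deepfake
    return "no_match"
```

A real system would use learned face embeddings (for example, a 128-dimensional vector per detected face) rather than raw vectors, but the thresholding logic is the same.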

Maru likely can't stop all deepfakes alone; it might have even more trouble in the future as they become more sophisticated. And sometimes a deepfake features not a celebrity's face but that of a civilian—even someone the creator personally knows. To combat that variety, Gfycat developed a masking tech that works similarly to Project Angora.

If Gfycat suspects that a video has been altered to feature someone else's face (say, if Maru registered only a partial match for Taylor Swift), the company can “mask” the victim's mug and then search to see if the body and background footage exist somewhere else. For a video that places someone else’s face on Trump’s body, for example, the AI could search the internet and turn up the original State of the Union footage it borrowed from. If the faces don't match between the new GIF and the source, the AI can conclude that the video has been altered.
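A toy sketch of the idea: blank out the face region, reduce what remains to a coarse signature, and check whether two clips share the same body and background. This is not Gfycat's actual pipeline; the bounding box, the average-pooled signature, and the tolerance are all stand-ins for a real face detector and perceptual hash:

```python
import numpy as np

def mask_face(frame, box):
    """Zero out the detected face region (box is hypothetical detector
    output) so only body and background pixels are compared."""
    x0, y0, x1, y1 = box
    masked = frame.copy()
    masked[y0:y1, x0:x1] = 0
    return masked

def frame_signature(frame, size=8):
    """Crude signature of a grayscale frame: average-pool into a size x size
    grid, then binarize. Stands in for a real perceptual hash."""
    h, w = frame.shape
    frame = frame[: h - h % size, : w - w % size]  # trim to divisible dims
    bh, bw = frame.shape[0] // size, frame.shape[1] // size
    pooled = frame.reshape(size, bh, size, bw).mean(axis=(1, 3))
    return pooled > pooled.mean()

def same_scene(frame_a, box_a, frame_b, box_b, tolerance=4):
    """Once faces are masked, do two frames share body and background?
    Compares signatures by Hamming distance."""
    sig_a = frame_signature(mask_face(frame_a, box_a))
    sig_b = frame_signature(mask_face(frame_b, box_b))
    return int((sig_a != sig_b).sum()) <= tolerance
```

If the masked frames match but the unmasked faces don't, the clip was likely pasted together from source footage found elsewhere.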


Gfycat plans to use its masking tech to block out more than just faces in an effort to detect different types of fake content, like fraudulent weather or science videos. “Gfycat has always relied heavily on AI for categorizing, managing, and moderating content. The accelerating pace of innovation in AI has the potential to dramatically change our world, and we'll continue to adapt our technology to these new developments,” Gfycat CEO Richard Rabbat said in a statement.

Not Foolproof

Gfycat’s technology won’t work in at least one deepfake scenario: a face and body that don't exist elsewhere online. For example, someone could film a sex tape with two people, and then swap in someone else's face. If no one involved is famous and the footage isn't available elsewhere online, it would be impossible for Maru or Angora to find out whether the content had been altered.

For now that seems like a fairly unlikely scenario, since making a deepfake requires access to a large corpus of videos and photos of someone. But it’s also not hard to imagine a former romantic partner using videos of a victim stored on their phone, footage that was never made public.

And even for deepfakes that feature a porn star or celebrity, sometimes the AI isn't sure what's happening, which is why Gfycat employs human moderators to help. The company also uses other metadata—like where a clip was shared or who uploaded it—to determine whether it's a deepfake.

'I can't stop you from creating fakes, but I can make it really hard and really time-consuming.' Hany Farid, Dartmouth College

Also, not all deepfakes are malicious. As the Electronic Frontier Foundation pointed out in a blog post, examples like the Merkel/Trump mashup featured above are merely political commentary or satire. There are also other legitimate reasons to use the tech, like anonymizing someone who needs identity protection or creating consensually altered pornography.

Still, it's easy to see why so many people find deepfakes distressing. They represent the beginning of a future where it's impossible to tell whether a video is real or fake, which could have wide-ranging implications for propaganda and more. Russia flooded Twitter with bots during the 2016 presidential election campaign; during the 2020 election, perhaps it will do the same with fraudulent videos of the candidates themselves.

The Long Game

While Gfycat offers a potential solution for now, it may be only a matter of time until deepfake creators learn how to circumvent its safeguards. The ensuing arms race could take years to play out.

"We're decades away from having forensic technology that you can unleash on a Pornhub or a Reddit and conclusively tell a real from a fake," says Hany Farid, a computer science professor at Dartmouth College who specializes in digital forensics, image analysis, and human perception. "If you really want to fool the system you will start building into the deepfake ways to break the forensic system."