The database includes 3,000 AI-generated videos made using various publicly available algorithms.

The context: Over the past year, generative algorithms have become so good at synthesizing media that what they produce could soon become indistinguishable from reality. Experts are now racing to find better methods for detecting these so-called deepfakes, especially with the 2020 US presidential election approaching.

Deepfake drop: On Tuesday, Google released an open-source database containing 3,000 deepfake videos as part of its effort to accelerate the development of deepfake detection tools. It worked with 28 actors to record videos of them speaking, making common expressions, and doing mundane tasks. It then used publicly available deepfake algorithms to alter their faces.

State of the art: Earlier this month, Facebook announced that it would be releasing a similar database near the end of the year. In January, an academic team led by a researcher from the Technical University of Munich created one called FaceForensics++ by applying four common face-manipulation methods to nearly 1,000 compiled YouTube videos. With each of these data sets, the idea is the same: to create a large corpus of examples that can help train and test automated detection tools.

Cat-and-mouse game: But once a detection method has been developed to exploit a flaw in a particular generation algorithm, the algorithm can easily be updated to correct for it. As a result, some experts are now trying to develop detection methods that would still work even if synthetic images became flawless, rather than relying on the artifacts of any one algorithm. Others argue that reining in deepfakes won't be accomplished through technical means alone: it will also require social, political, and legal solutions to change the incentives encouraging their creation.
