“You can already see a material effect that deepfakes have had,” said Nick Dufour, one of the Google engineers overseeing the company’s deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

For decades, computer software has allowed people to manipulate photos and videos or create fake images from scratch. But it has been a slow, painstaking process usually reserved for experts trained in the vagaries of software like Adobe Photoshop or After Effects.

Now, artificial intelligence technologies are streamlining the process, reducing the cost, time and skill needed to doctor digital images. These A.I. systems learn on their own how to build fake images by analyzing thousands of real images. That means they can handle a portion of the workload that once fell to trained technicians. And that means people can create far more fake stuff than they used to.
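The core idea behind these systems is generative modeling: learn the statistical patterns of real data, then sample new data that fits those patterns. A deliberately toy sketch of the principle, using made-up 1-D "brightness values" in place of images and a plain Gaussian fit in place of a neural network (nothing here reflects any specific company's system):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for "thousands of real images": here, just 1-D brightness values
# drawn from a made-up distribution.
real_images = rng.normal(120.0, 15.0, 5000)

# The simplest possible generative model: estimate the data's distribution...
mu, sigma = real_images.mean(), real_images.std()

# ...then sample brand-new "images" from that learned distribution. Real
# systems (GANs, diffusion models) learn millions of parameters instead of
# two, but the learn-then-sample principle is the same.
fakes = rng.normal(mu, sigma, 10)
print(fakes.round(1))
```

The fakes are new values the model has never seen, yet they are statistically indistinguishable from the real ones, which is exactly what makes the output of the full-scale versions hard to catch.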

The technology used to create deepfakes is still fairly new, and the results are often easy to spot. But it is evolving quickly. While the tools used to detect these bogus videos are also improving, some researchers worry that they won’t be able to keep pace.


Google recently said that any academic or corporate researcher could download its collection of synthetic videos and use them to build tools for identifying deepfakes. The video collection is essentially a syllabus of digital trickery for computers. By analyzing all of those images, A.I. systems learn how to watch for fakes. Facebook recently did something similar, using actors to build fake videos and then releasing them to outside researchers.

Engineers at a Canadian company called Dessa, which specializes in artificial intelligence, recently tested a deepfake detector that was built using Google’s synthetic videos. It could identify the Google videos with almost perfect accuracy. But when they tested their detector on deepfake videos plucked from across the internet, it failed more than 40 percent of the time.
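The Dessa result illustrates a general pitfall in machine learning: a classifier that keys on the artifacts of one generator can miss the artifacts of another. A toy sketch of that failure mode, using a hypothetical 1-D "artifact score" feature, synthetic data, and a hand-rolled logistic regression (none of this is Dessa's or Google's actual system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "artifact score": real videos cluster near 0, while fakes
# from the generator seen in training cluster near +3.
real = rng.normal(0.0, 1.0, 1000)
fake_train = rng.normal(3.0, 1.0, 1000)

X = np.concatenate([real, fake_train])
y = np.concatenate([np.zeros(1000), np.ones(1000)])  # 1 = fake

# Fit a logistic regression detector by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((p - y) * X)
    b -= 0.1 * np.mean(p - y)

def detect(x):
    """Return True where the detector flags a sample as fake."""
    return (1.0 / (1.0 + np.exp(-(w * x + b)))) > 0.5

# High accuracy on fakes like those it trained on...
acc_seen = detect(rng.normal(3.0, 1.0, 1000)).mean()
# ...but fakes from a different generator leave different artifacts and land
# elsewhere in feature space, so the detector waves them through.
acc_unseen = detect(rng.normal(-3.0, 1.0, 1000)).mean()
print(f"seen-generator detection rate:   {acc_seen:.2f}")
print(f"unseen-generator detection rate: {acc_unseen:.2f}")
```

The detector draws its boundary between the two clusters it was shown; fakes that fall on the "real" side of that boundary, like deepfakes plucked from across the internet, slip past it.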