Facebook’s A.I. research team (FAIR) has revealed a neural network that can map skins onto people in videos in real time.

Known as DensePose, the neural network can change the bodies of dozens of people in a video at once, allowing you to make everyone in the footage wear the same clothes or have the same skin color.

“It uses a convolutional neural network that was built by first creating a human-annotated data set and then training a ‘teacher’ AI. In total, 50,000 images of human body parts were scrutinized by humans who then annotated more than 5 million data points which provided the training data for the network,” The Next Web reported. “Once the system understood how humans see other humans, it was ready to train its ‘learner’ how to see people the same way.”
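The quoted pipeline — humans annotate a sparse set of points, a “teacher” densifies those labels, and the “learner” then trains on the dense targets — can be illustrated with a toy sketch. Everything below (the nearest-neighbour “teacher,” the array shapes, the function name) is an illustrative assumption, not FAIR’s actual code:

```python
import numpy as np

# Toy stand-in for the teacher/learner supervision described above:
# humans label only a sparse set of points; a "teacher" fills in dense
# labels; the "learner" would then train against the dense targets.
# The nearest-neighbour teacher here is purely illustrative.

rng = np.random.default_rng(0)
H, W = 16, 16

# Sparse human annotations: distinct (row, col) pixels, each with a label
flat = rng.choice(H * W, size=12, replace=False)
points = np.stack(np.unravel_index(flat, (H, W)), axis=1)  # (12, 2)
labels = rng.random(12)

def teacher_densify(points, labels, shape):
    """Give every pixel the label of its nearest annotated point."""
    rows, cols = np.indices(shape)
    coords = np.stack([rows.ravel(), cols.ravel()], axis=1)  # (H*W, 2)
    # Squared distance from every pixel to every annotated point
    d2 = ((coords[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return labels[d2.argmin(axis=1)].reshape(shape)

dense_target = teacher_densify(points, labels, (H, W))

# Sanity check: the dense target agrees with the sparse annotations
# at the annotated pixels (a real learner would be a CNN, not a check)
for (r, c), v in zip(points, labels):
    assert dense_target[r, c] == v
print(dense_target.shape)  # (16, 16)
```

The point of the densification step is that a convolutional network is far easier to train against a full-frame target than against a handful of scattered points.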

“The end result is an AI that uses a 2D RGB image as input and applies it to any number of humans in a video,” they explained. “Instead of putting a celebrity’s face on someone else’s body, you could change the way people look in a video as if editing a Minecraft skin.”
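The “Minecraft skin” comparison is apt: once a network predicts, for each body pixel, a coordinate into a flat texture image, re-skinning reduces to a texture lookup. A minimal sketch of that sampling step, with a random UV map standing in for the network’s predictions (all names and shapes here are illustrative assumptions):

```python
import numpy as np

# Re-skinning as a texture lookup: given per-pixel (u, v) coordinates
# into a flat "skin" image, paint each body pixel with the texel it
# maps to. The random UV map stands in for a network's predictions.

rng = np.random.default_rng(2)
tex = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # new "skin"

H, W = 8, 8
uv = rng.random((H, W, 2))        # predicted per-pixel UV in [0, 1)
mask = rng.random((H, W)) > 0.5   # which pixels belong to a person

# Map UV coordinates to texel indices and sample the texture
tu = (uv[..., 0] * tex.shape[0]).astype(int)
tv = (uv[..., 1] * tex.shape[1]).astype(int)
frame = np.zeros((H, W, 3), dtype=np.uint8)
frame[mask] = tex[tu[mask], tv[mask]]  # paint the skin onto body pixels
print(frame.shape)  # (8, 8, 3)
```

Because the skin is just an image indexed by UV coordinates, swapping outfits for every person in a frame means swapping one texture, which is what makes the real-time claim plausible.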

The Next Web notes that while “there have been other 2D image-mapping neural networks, this one is the first to put it all together in real-time and effectively ‘connect the dots’ without a depth sensor.” The team behind DensePose wants to use the technology to replace “character modeling in video games.”

The technology is similar to that used in “deepfakes” videos, where the faces of popular celebrities are mapped onto the bodies of porn stars in videos using A.I. to create fake, but realistic celebrity porn.

Pornhub and other websites recently banned deepfakes videos, which portrayed celebrities including Gal Gadot, Emma Watson, Scarlett Johansson, Maisie Williams, and Aubrey Plaza in explicit scenarios.

“I just found a clever way to do face-swap,” declared the programmer behind many deepfakes videos in an interview last year. “With hundreds of face images, I can easily generate millions of distorted images to train the network… After that if I feed the network someone else’s face, the network will think it’s just another distorted image and try to make it look like the training face.”
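The trick the programmer describes — distort one face many times, train a network to undo the distortions, then feed it a different face — can be sketched with a toy model. The real systems use convolutional autoencoders; the linear map, noise-as-distortion, and every name below are illustrative assumptions only:

```python
import numpy as np

# Toy version of the quoted face-swap trick: generate many distorted
# copies of face A, train a model to map each one back to the clean
# face A, then feed it face B -- the model "corrects" B toward A.
# A linear map trained by gradient descent stands in for the network.

rng = np.random.default_rng(1)
D = 64                    # a flattened "face" of 64 pixels
face_a = rng.random(D)    # the training face

# Millions of distortions in the quote; 500 noisy copies here
X = face_a + 0.2 * rng.standard_normal((500, D))

# Train W to undo the distortion: minimise ||X @ W - face_a||^2
W = np.zeros((D, D))
lr = 0.01
for _ in range(200):
    grad = X.T @ (X @ W - face_a) / len(X)
    W -= lr * grad

# Feed it someone else's face: the output lands nearer the training
# face than the input was -- the network "thinks" B is a distorted A
face_b = rng.random(D)
out = face_b @ W
print(np.linalg.norm(out - face_a) < np.linalg.norm(face_b - face_a))
```

The sketch shows why the method generalises so cheaply: the model never learns what face B looks like, only how to pull any input toward the face it was trained on.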

“Every technology can be used with bad motivations, and it’s impossible to stop that,” he continued. “The main difference is how easy [it is] to do that by everyone. I don’t think it’s a bad thing for more average people [to] engage in machine learning research.”

Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington, or like his page at Facebook.