Here’s another example—skip ahead to around the 1-minute mark, which is where things get really surreal:

So, why would scientists want to do this—other than to show off their simultaneously dazzling and mega-creepy computer skills?

“One of the applications is in augmented reality and virtual reality,” said Ira Kemelmacher-Shlizerman, an assistant professor of computer engineering at the University of Washington, and one of the authors of a paper that is set to be presented at the International Conference on Computer Vision this month. “We’re not there yet, but our research leads to that.”

In other words, imagine if a video call didn’t just feature a person’s voice and two-dimensional face on a screen—but instead brought a projection of that person into the room with you. “What if you could do that, just to take it forward a few steps and make it more realistic? Make it like the person is in front of you in three dimensions, so you can see the actual expressions in detail,” Kemelmacher-Shlizerman told me.

There may also be applications in Hollywood. When the actor Brad Pitt played Benjamin Button in the story of a man who ages in reverse, filmmakers created his aging effect by coding his face in great detail in a lab setting. “The expressions were later used to create a personalized blend shape model and transferred to an artist created sculpture of an older version of him,” Kemelmacher-Shlizerman and her co-authors wrote in their paper. “This approach produces amazing results, however, [it] requires actor’s active participation and takes months to execute.”

What’s special about the latest work by Kemelmacher-Shlizerman and her colleagues is that the modeling comes from pre-existing photo collections—hence the focus on Hanks, who has been photographed a lot over the years. Other researchers in this space have relied on specific environments and lighting levels for their subjects in order to make realistic models.

“In the optimal setup, you’d say, ‘Let’s go to a lab, put 20 cameras around the room, decide on some lighting, and constrain all sorts of environmental conditions,’” Kemelmacher-Shlizerman said. “The big breakthrough in our research is we’re doing it in completely unconstrained environments, unlike other research in this space.”

The algorithm Kemelmacher-Shlizerman and her colleagues used assesses differences in lighting across huge troves of photographs by looking at changes in texture and various shadows on a person’s face. In one image, a shadow might be cast under a person’s nose; in another, a shadow might fall under that person’s eyes. “We can use the shadowing to estimate the shape of a person’s face,” she said.
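Recovering shape from shading like this is closely related to the classic technique of photometric stereo. The sketch below is not the authors’ actual pipeline (which handles uncontrolled, unknown lighting); it is a minimal illustration of the underlying idea, assuming a simple Lambertian reflectance model and known lighting directions for each photo:

```python
import numpy as np

def estimate_normals(intensities, light_dirs):
    """Recover per-pixel surface normals from aligned images of the same
    face under different lighting, assuming Lambertian reflectance.

    intensities: (k, h, w) array -- k images, each h x w, roughly aligned
    light_dirs:  (k, 3) array    -- unit lighting direction per image
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)  # one column of k brightness values per pixel
    # Lambertian model: I = L @ g, where g = albedo * surface normal.
    # Solve the least-squares system for every pixel at once.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # g has shape (3, h*w)
    albedo = np.linalg.norm(g, axis=0)                  # surface brightness
    normals = g / np.maximum(albedo, 1e-8)              # unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The key intuition matches the quote: where a shadow falls (how dark each pixel is under each light) constrains how the surface at that pixel is oriented, and the per-pixel normals can then be integrated into a 3-D face shape.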

The purpose of the puppeteering effect, she said, is to see how good a job they did in actually capturing a person’s likeness. (She acknowledged that this technology may open the door to impersonation without a subject’s consent, but said the same mechanism that creates a model in the first place could be used to “see if it’s fake or not.”)