A new method in “deepfakes” can take the “style” or likeness of one person or object and transfer it to another, a potentially nefarious technique that could push the fake news epidemic into uncharted territory.

Deepfakes are fake videos that have been manipulated by artificial intelligence to make someone or something appear to be something else. Researchers at Carnegie Mellon University expanded on this technology, using a machine-learning algorithm to transfer the facial expressions and mannerisms of one video’s subject onto another.
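At a high level, methods like this learn two mappings between unpaired video domains and train them to be inverses of each other, so no matched before/after footage is needed. The sketch below illustrates that cycle-consistency idea in miniature; the function names and toy 1-D “frames” are illustrative assumptions, not the CMU team’s actual formulation.

```python
# Illustrative sketch of cycle-consistency, the core idea behind
# unpaired style-transfer methods. G maps domain A to domain B;
# F maps B back to A. Training pushes F(G(a)) back toward a and
# G(F(b)) back toward b, which is what lets the system learn
# without paired examples. (Toy stand-ins, not the study's model.)

def cycle_consistency_loss(G, F, frames_a, frames_b):
    """Mean absolute reconstruction error after a round trip
    A -> B -> A and B -> A -> B."""
    loss = 0.0
    for a in frames_a:
        loss += abs(F(G(a)) - a)  # a -> fake b -> reconstructed a
    for b in frames_b:
        loss += abs(G(F(b)) - b)  # b -> fake a -> reconstructed b
    return loss / (len(frames_a) + len(frames_b))

# Toy 1-D "frames": G shifts a value up, F shifts it back down.
G = lambda x: x + 2.0
F = lambda x: x - 2.0
print(cycle_consistency_loss(G, F, [0.0, 1.0], [5.0, 6.0]))  # perfect inverses -> 0.0
```

When the two mappings are perfect inverses, the round-trip loss is zero; a real system minimizes this loss (alongside adversarial terms) over millions of video frames rather than toy numbers.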

In the study, recently presented at the European Conference on Computer Vision (ECCV 2018) in Munich, researchers used comedians John Oliver and Stephen Colbert as examples.

The experiment showed Colbert’s face and features becoming slightly contorted to mirror the way Oliver speaks. The resulting video of Colbert is low-resolution and somewhat fuzzy, and it visibly appears to have been tampered with.

Other examples included a daffodil blooming like a hibiscus, Barack Obama speaking with Martin Luther King Jr.’s mannerisms, and Donald Trump speaking with Obama’s.

“This method could help filmmakers work quicker and cheaper or help autonomous cars learn how to drive at night,” researchers wrote in a video that accompanied the study, adding that it could also be used to colorize black-and-white movies.

While the research shows promise for industries like film and automotive, person-to-person deepfakes could be the next hurdle in the battle against fake news and misinformation.