Another day, another deepfake: but this time they can sing.

New research from Imperial College London and Samsung’s AI research center in the UK shows how a single photo and audio file can be used to generate a singing or talking video portrait. Like previous deepfake programs we’ve seen, the researchers use machine learning to generate their output. And although the fakes are far from 100 percent realistic, the results are impressive considering how little data is needed.

By combining this real clip of Albert Einstein speaking, for example, with a photo of the famous physicist, you can quickly create a never-before-seen lecture:

Getting a bit wackier, why not have everyone’s favorite mad monk, Grigori Yefimovich Rasputin, belting out the Beyoncé classic ‘Halo’? What a karaoke night that would be.

Or how about a more practical example: generating video that not only matches the input audio, but is tweaked to communicate a specific emotion. Remember, all that was needed to create these clips was a single picture and an audio file. The algorithms did the rest.

As mentioned above, this work isn’t completely realistic, but it’s the latest illustration of how quickly this technology is moving. Techniques for generating deepfakes are becoming more accessible every day, and although research like this is not available commercially, it didn’t take long for the original deepfakers to bundle their techniques into easy-to-use software. The same will surely happen with these new approaches.

Research like this is understandably making people worried about how it will be used for misinformation and propaganda — a question that is currently vexing US legislators. And although you can make a good argument that such fears in the political realm are overblown, deepfakes have already caused real harm, particularly to women, whose likenesses have been used to create embarrassing and shaming non-consensual pornography.

Getting Rasputin to sing Beyoncé is just a bit of light relief at this point, but we don’t know how weird and terrible things might get in the future.