Researchers at the Samsung AI Center in Moscow have developed a way to create "living portraits" from a very small dataset, in some cases as little as a single photograph.

The paper, "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models," was published on the preprint server arXiv on Monday.

The researchers call this few- and one-shot learning: the model can be trained on just one image and still produce a convincing animated portrait. With a handful more shots, eight or 32 photographs, the realism improves further.