Google is using more or less the same neural network approach as, say, Prisma does. Simply put, the algorithms break pictures down into easily understandable parts, "learn" the artistic style of a painting (such as its color palette and brush stroke technique), and combine them into a new image. But as Google explains, its style transfer tech is more complex: it can learn from multiple paintings -- whether different works from the same artist or movement, or separate genres altogether -- and, through "interpolation," create an entirely new type of filter that merges distinct styles.
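To get a feel for what "interpolation" means here, the core idea can be sketched in a few lines of Python. This is a toy illustration under one assumption: that each learned style boils down to a vector of numbers the network uses to restyle an image, so blending styles is just a weighted average of those vectors. The function name and the toy vectors below are invented for illustration, not taken from Google's code.

```python
# Toy sketch of style interpolation: each "style" is assumed to be a
# vector of parameters, and a new style is a weighted average of them.
# The vectors and weights here are made up for demonstration.

def interpolate_styles(style_vectors, weights):
    """Blend several style parameter vectors into one new style vector."""
    if len(style_vectors) != len(weights):
        raise ValueError("need exactly one weight per style")
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize weights to sum to 1
    length = len(style_vectors[0])
    return [
        sum(w * vec[i] for w, vec in zip(norm, style_vectors))
        for i in range(length)
    ]

# Two toy "styles" as four-number parameter vectors (purely illustrative).
monet = [1.0, 0.2, -0.5, 0.8]
hokusai = [0.0, 1.0, 0.5, -0.2]

# A 70/30 blend yields a brand-new in-between style.
blended = interpolate_styles([monet, hokusai], [0.7, 0.3])
```

Because the blend is controlled by a handful of weights, sliding them continuously is cheap, which is what makes the on-the-fly adjustment in the video demo plausible.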

Apparently, the search giant's system requires minimal computing power and is simple enough that it can be applied to live video. As the demo above shows, you can even adjust, on the fly, how strongly any one of several different styles transforms a video. Like Google's other experiments in using neural networks to colorize black-and-white photos or create trippy art, this advanced style transfer tech appears to be firmly in the research stage right now. Google does intend to release the source code for the project in due course, though, and we'd be pretty surprised if something akin to this didn't eventually become a fancy new feature in Google Photos.