Animation is beautiful, but creating moving pictures is incredibly labor-intensive. The visual and art departments that worked on the movie Moana alone numbered close to 300 people, according to the credit listings on IMDb. But a new process developed by researchers at Princeton has the potential to drastically simplify some parts of the process, with mesmerizing results.

The tool lets users select the part of a static image they want animated: raindrops in a storm scene, for example, or steam particles moving through a combustion engine.

The user then manipulates that part of the image to specify how fast the animation should move, at which point an algorithm takes over and extrapolates those instructions to all the other similar objects in the picture. The technique has the potential to save animators a lot of time, while also making it far easier for amateurs to make things like cinemagraphs — photographs in which a single part of the image is animated.
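The core idea — demonstrate a motion on one object and let the software apply it to every similar object — can be illustrated with a toy sketch. This is purely a hypothetical illustration, not the researchers' actual algorithm: the `Sprite` type, the label-matching rule, and the constant per-frame velocity are all assumptions made for the example.

```python
# Hypothetical sketch (not the published method): the user animates one
# example object, and its motion is propagated to every other object
# tagged as visually similar.

from dataclasses import dataclass

@dataclass
class Sprite:
    x: float
    y: float
    label: str  # e.g. "raindrop", "steam" -- stands in for visual similarity

def propagate_motion(sprites, example_label, velocity, frames):
    """Apply the user-demonstrated per-frame velocity (dx, dy) to every
    sprite sharing the example's label; other sprites stay still.
    Returns a list of frames, each a list of (x, y) positions."""
    dx, dy = velocity
    timeline = []
    for t in range(frames):
        frame = []
        for s in sprites:
            if s.label == example_label:
                frame.append((s.x + dx * t, s.y + dy * t))
            else:
                frame.append((s.x, s.y))
        timeline.append(frame)
    return timeline

# Usage: two raindrops and a cloud. The user demonstrates one raindrop
# falling; the same motion is applied to the other raindrop, while the
# cloud is untouched.
scene = [Sprite(10, 0, "raindrop"), Sprite(40, 5, "raindrop"), Sprite(25, 50, "cloud")]
anim = propagate_motion(scene, "raindrop", velocity=(0, -3), frames=3)
```

A real system would have to decide *which* objects count as "similar" automatically, which, as the researchers note below, is exactly where hand-drawn imagery makes things hard.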

“The person provides clues about what aspects of the scene they would like to animate,” explains co-author Adam Finkelstein in a statement. “The computer removes much of the difficulty and tedium that would be required to create the animation completely by hand.”

[Animation: An algorithm helped transform what was a static image into this animation. Credit: Nora S. Willett, Rubaiat H. Kazi, Michael Chen, George Fitzmaurice, Adam Finkelstein, Tovi Grossman]

It’s a technique that was harder to develop than you’d think. Machine learning is very good at identifying things in photographs, which are bound by the rules of nature and relatively consistent. Images drawn by the human hand, naturally, are not: every artist has their own particular style.

“There’s such a wide range of drawing styles,” explains Nora Willett, a graduate student in Princeton’s Department of Computer Science and the paper’s lead author. “There’s just not enough data to train a machine to recognize every single fantastical drawing.”

To overcome this obstacle, the researchers designed an interface that made it easier for humans and machine learning to work together. They started with the Autodesk SketchBook Motion app, which can create animations but requires users either to make them by hand or to compile dozens of layers through another app like Adobe Photoshop.

To test their interface, Willett’s team recruited six people with varying degrees of animation experience, two of whom were skilled enough to create animations on their own. They presented their new method just last week at the Association for Computing Machinery’s Symposium on User Interface Software and Technology.