In the not-so-distant future, characters might practice kung-fu kicks in a digital dojo before bringing their moves into the latest video game.

AI researchers at UC Berkeley and the University of British Columbia have created virtual characters capable of imitating the way a person performs martial arts, parkour, and acrobatics, practicing moves relentlessly until they get them just right.

The work could transform the way video games and movies are made. Instead of planning a character's actions in excruciating detail, animators might feed real footage into a program and have their characters master those movements through practice. Such a character could then be dropped into a scene and left to perform the actions on its own.

The same algorithm can be used to teach a wide range of challenging physical skills. (Credit: Berkeley Artificial Intelligence Research)

“An artist can give just a few examples, and then the system can generalize to all different situations,” says Jason Peng, a first-year PhD student at UC Berkeley, who carried out the research.

The virtual characters developed by the researchers use an AI technique known as reinforcement learning, which is loosely modeled on the way animals learn (see “10 Breakthrough Technologies 2017: Reinforcement Learning”).

The researchers captured the actions of expert martial artists and acrobats. A virtual character experiments with its motion and receives positive reinforcement each time it gets a little closer to the motions of that expert. The approach requires a character to have a physically realistic body and to inhabit a world with accurate physical rules.
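The idea of rewarding the character for getting closer to the expert's motion can be sketched as a simple reward function. This is a minimal, illustrative version only: the function name, the pose representation, and the error scale are assumptions for the sketch, not the researchers' actual formulation, though imitation-based systems of this kind commonly turn a pose-tracking error into a reward in a similar way.

```python
import numpy as np

def imitation_reward(char_pose, ref_pose, scale=2.0):
    """Return a reward in (0, 1] that grows as the character's pose
    approaches the reference pose from the motion-capture clip.

    char_pose, ref_pose: arrays of joint angles (radians).
    scale: illustrative constant controlling how sharply the reward
    falls off as the pose error grows.
    """
    err = np.sum((np.asarray(char_pose) - np.asarray(ref_pose)) ** 2)
    # A perfect match gives reward 1.0; larger errors decay toward 0,
    # so every step closer to the expert earns a slightly higher reward.
    return float(np.exp(-scale * err))
```

At each simulation step the character's current joint angles would be compared against the corresponding frame of the captured clip, and the reinforcement-learning algorithm would adjust the character's control policy to push this reward higher over time.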