Google’s artificial intelligence (AI) subsidiary DeepMind has released a paper detailing how its AI agents have taught themselves to navigate complex virtual environments, and the results are weird, wonderful, and often extremely funny.

The agents in the simulations were programmed with a set of sensors (these let them know things like whether they were upright or whether a leg was bent) and a single drive: keep moving forward. Everything else you see in the video (the agents' jumping, running, using their knees to scale obstacles, and so on) is the result of the AI working out, through reinforcement learning, how best to keep moving forward.
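DeepMind's actual agents were trained with large-scale policy-gradient reinforcement learning in a full physics simulator, but the core loop described above (sensor readings in, actions out, reward only for forward progress) can be sketched in toy form. Everything in the snippet below is an illustrative assumption, not DeepMind's method: a one-joint "walker" whose only sensor is its joint angle, a two-parameter linear policy, and simple hill-climbing search standing in for the real learning algorithm.

```python
import random

def rollout(policy, steps=50):
    """Run one episode: the policy maps the sensor reading (joint angle)
    to a torque, and the ONLY reward signal is forward velocity."""
    position, angle = 0.0, 0.0
    total_reward = 0.0
    for _ in range(steps):
        sensors = [angle]  # what the agent "knows": is its leg bent?
        torque = policy[0] * sensors[0] + policy[1]
        angle = max(-1.0, min(1.0, angle + 0.1 * torque))
        velocity = 1.0 - abs(angle)  # toy dynamics: fastest when upright
        position += velocity
        total_reward += velocity  # reward = forward progress, nothing else
    return total_reward

def train(iterations=200, seed=0):
    """Hill-climbing stand-in for reinforcement learning: perturb the
    policy at random and keep any change that moves the agent farther."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    best_reward = rollout(best)
    for _ in range(iterations):
        candidate = [p + rng.gauss(0, 0.1) for p in best]
        reward = rollout(candidate)
        if reward > best_reward:
            best, best_reward = candidate, reward
    return best, best_reward
```

The point of the sketch is the shape of the problem: nothing in the code says "stay upright", yet a trained policy learns to hold the joint steady because that is what maximizes forward progress, just as DeepMind's agents were never told to jump or crouch.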

The complexity of the agents’ movements is a testament to how far AI has come in recent years. Agents in simulations like these often break down when faced with unfamiliar environments, but DeepMind’s developed startlingly sophisticated movements to traverse obstacles.

These agile AIs aren’t the first to impress, though. A DeepMind AI has previously demonstrated superhuman performance on an object recognition task, and a team at the University of Cambridge has developed an AI system capable of more abstract cerebral tasks, such as reading emotions and detecting pain levels.

The groundwork being laid by experiments such as these is pivotal to the integration of AI into society. Eventually, researchers will be able to incorporate these advancements into the programming of future robots capable of navigating your home or the streets, ushering in an age of truly seamless human-robot interaction.