Don't feel sorry for Dyret the robot. At first glance, the scrawny quadruped looks pathetic as it struggles to walk without collapsing. But keep watching, and you’ll see it start to improve—walking slowly, yet ever more proficiently. Dyret the robot is teaching itself to walk. Or, according to a new class of robotics researchers, evolving.

Machines like Cassie the biped or SpotMini the robot dog are quickly mastering locomotion, thanks to line after line of meticulous code. But Dyret is different—it learns to walk on a given surface, say carpet or ice, through trial and error. It adapts to its environment, not with the reams of explicitly coded instructions that drive traditional robots, but with special algorithms and limbs that automatically shorten and lengthen to shift the robot’s center of gravity. It’s called evolutionary robotics, and it’s a potentially powerful way to get machines to master novel terrain on their own, no hand-holding required.

Getting a non-biological machine to evolve the way organisms do in nature means following the rules of natural selection. Organisms evolve in part because of mutations, which—if they’re beneficial—may give an individual something like better coloration to fit its environment. This helps the individual survive to sire more offspring, thus propagating the genes that code for fitness. By way of death, natural selection kicks the less-than-ideal genes out of the population.

With Dyret, researchers begin by generating eight random ways, or “solutions,” for the robot to walk, which include varying leg lengths. Generally speaking, these solutions are mediocre at best. “You combine several of them and then you get new solutions, a new generation,” says Tønnes Nygaard, a roboticist with the Engineering Predictability With Embodied Cognition project at the University of Oslo. Think of it like parents giving birth to new children. The code that powers Dyret takes solutions and slightly modifies them, “which is the mutation you have in nature,” Nygaard adds.
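The process Nygaard describes—random starting solutions, combining them into a new generation, and slightly modifying the results—is the core loop of an evolutionary algorithm. Here is a heavily simplified sketch, not Dyret's actual code: the parameter count, value ranges, and mutation settings are all illustrative assumptions.

```python
import random

# Illustrative assumption: a "solution" is a short list of gait parameters
# (think leg lengths, step frequency). Real systems encode far more.
NUM_PARAMS = 4

def random_solution():
    """Generate one random candidate way of walking."""
    return [random.uniform(0.0, 1.0) for _ in range(NUM_PARAMS)]

def crossover(parent_a, parent_b):
    """Combine two parent solutions into a child, parameter by parameter."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(solution, rate=0.1, scale=0.05):
    """Slightly modify a solution -- "the mutation you have in nature"."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in solution]

# Start, as the researchers do, with eight random solutions.
population = [random_solution() for _ in range(8)]

# One child of the next generation: combine two parents, then mutate.
child = mutate(crossover(population[0], population[1]))
```

Crossover mixes traits from two parents, while mutation injects small random changes—without it, the population could never discover parameter values that no parent started with.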

Take a look at the GIF below. The system is trying different ways to walk, as motion-capture equipment tracks how far and how fast the robot goes. On top of that, a sensor in the robot itself calculates how stable each gait is. Good “solutions” get good scores. “There's a higher chance of selecting the ones that are more stable or faster,” says Nygaard. Thus the robot can improve, generation after generation, like a species adapting to an environment.
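The scoring-and-selection step can be sketched in the same simplified terms. The fitness function below, its equal weighting of speed and stability, and the dictionary fields are all assumptions for illustration, not the project's real scoring code—but the key idea matches what Nygaard describes: better gaits are more likely to be picked as parents, without weaker ones being ruled out entirely.

```python
import random

def fitness(gait):
    """Score a gait from its measured speed and stability.
    The 50/50 weighting is an arbitrary illustrative choice."""
    return 0.5 * gait["speed"] + 0.5 * gait["stability"]

def select_parents(population, k=2):
    """Fitness-weighted selection: a higher score means a higher chance
    of being chosen, mirroring survival odds in natural selection."""
    scores = [fitness(g) for g in population]
    return random.choices(population, weights=scores, k=k)

# Hypothetical measurements for a population of eight gaits.
population = [{"speed": random.random(), "stability": random.random()}
              for _ in range(8)]

parents = select_parents(population)
```

Because selection is probabilistic rather than a hard cutoff, occasionally mediocre gaits still reproduce—which preserves diversity and keeps the population from converging too early on one so-so way of walking.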

Which means if you plop Dyret in a new environment, you don’t have to write new code to help it cope with, say, a slippery surface—it simply adapts its gait. “The robot doesn't know that now we've changed its surface,” says Nygaard. “It's simply trying to walk as quick and stable as possible given the situation it's in.”