The little robot finger’s favorite color is blue. Wave a handful of blueberries in front of it and the finger will follow, transfixed. If you’re wearing a blue shirt, congratulations, you’re its new best friend. If you painted everything around it blue, the robo-digit could well have a heart attack.

You can program a robot to fall in love with a color easily enough. But this robot is thinking in a fundamentally different way—not with line after line of complicated code, but with simulated neurons.

“These neurons, each one may cause a little twitch in the muscle,” says the robot’s master, USC biomedical engineer Terry Sanger. “It can push the muscle left, right, up, and down. All the robot knows is when it sees blue things it wants to go toward blue things and avoid everything else.”

The system is a glimpse at a potentially powerful approach to robotic intelligence: To create machines that move more naturally, maybe the trick is to first make them, in a sense, dumb. Maybe you replicate the primitive functioning of neurons, governed by a relatively simple supervisory code, instead of relying on complicated algorithms.

Across the USC campus from Sanger’s office lives Kleo the robotic cat. Well, more like struggles than lives—Kleo is a remotely piloted machine that ambles awkwardly. But biomedical engineer Francisco Valero-Cuevas has big plans for Kleo: Get it walking on its own with the help of simulated neurons on a chip that can mimic the operation of neurons in a biological spinal cord.

But why not just mimic the brain? “The spinal cord is not just some cables that go from brain to muscle,” says Valero-Cuevas. “The spinal cord has its own low-level circuits that do a lot of the micromanagement of muscles. So our goal is to reverse engineer the entire system.”

That begins with the neurons. Scientists know generally how neurons are arranged in a spinal cord. What’s less clear are the strengths of the connections between neurons as they form into networks that drive, say, the movement of legs.

So Kleo would start with lots of simulated neurons connected to each other with random strengths, or perhaps the same strengths. “You have Kleo just sitting there doing nothing and the neurons are spiking at random,” says Valero-Cuevas. “And then one of these random spike patterns causes an accelerometer to feel forward progression. That minute forward progression is fed back to the system and says, Hey, for that spiking pattern, reinforce the connections among neurons that did that.”

This is known as reinforcement learning. Bit by bit, Kleo’s artificial spinal cord learns which neuronal connections, and connection strengths, trigger the desired outcome. Some move the robot ever so slightly forward and are rewarded. Over many, many iterations, Kleo could begin to crawl and eventually walk.
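In miniature, that learning loop can be sketched in a few lines of code. This is a toy model, not Valero-Cuevas’s actual system: the network size, spike probabilities, learning rate, and the stand-in “accelerometer” (which here simply rewards two particular neurons firing together) are all invented for illustration.

```python
import random

random.seed(1)

N = 8           # neurons in the toy "spinal cord" (size is arbitrary)
BASELINE = 0.2  # probability of a spontaneous random spike
LEARN = 0.05    # reinforcement step size

# Connections start with random strengths (the article notes equal
# strengths would also work as a starting point).
w = [[random.uniform(0.0, 0.3) for _ in range(N)] for _ in range(N)]
spikes = [0] * N
w01_start = w[0][1]

def forward_progress(spikes):
    # Stand-in for the accelerometer: assume, purely for illustration,
    # that neurons 0 and 1 co-firing nudges the robot forward.
    return spikes[0] == 1 and spikes[1] == 1

for _ in range(5000):
    # Each neuron fires from background noise plus weighted input
    # from the previous step's spikes.
    drive = [sum(w[i][j] * spikes[j] for j in range(N)) for i in range(N)]
    spikes = [1 if random.random() < min(1.0, BASELINE + drive[i]) else 0
              for i in range(N)]
    if forward_progress(spikes):
        # Reward: strengthen connections among the neurons that just
        # fired together, making that spike pattern more likely to recur.
        for i in range(N):
            for j in range(N):
                if i != j and spikes[i] and spikes[j]:
                    w[i][j] = min(1.0, w[i][j] + LEARN)

print(f"connection 0->1: {w01_start:.2f} -> {w[0][1]:.2f}")
```

The key property is the feedback loop: reinforced connections make the rewarded spike pattern more likely, which earns more reinforcement. No line of this code says “walk forward,” yet the network drifts toward the behavior the reward signal favors.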

Yeah, it’s not exactly brilliance in motion, and it’ll be awkward at first. “Where is the intelligence here?” Valero-Cuevas asks. “You realize there is no intelligence. It’s all dumb parts, but put together the emergent behavior is at the very least useful.” A single neuron means nothing, but networked together, those neurons build something special.

Such a system could be big for robotics. To get a robot to move, typically you’ve had to program its actions. Move leg, balance, move other leg, etc. It’s hard as hell to do, as evidenced by the bumbling antics of entrants in Darpa's Robotics Challenge.

By reverse-engineering how the spinal cord drives movement in biological beings, roboticists could get lower-level behavior like walking to develop automatically without complicated algorithms. “The way we walk through the world is not by estimating the contact forces with our feet or trying to identify every single thing in the field of view or trying to estimate to precise levels what our velocity is,” Sanger says. “We don't do that. We just see and we feel and we move.”