A while back, I wrote a blog post about spherical robots. I had taken it upon myself to learn a bit more about robotics with the intention of building a simple autonomous robot. Well, over two years later, I’m at a point where I can do some actual robotics work. I look back on what I’ve learned… and the rabbit hole of a trek that led me here.

The Coursera Rabbit Hole

What goes into a robot? Well, naively I thought that you hook up servos and sensors to some kind of microcontroller and away you go. It’s the Lego Mindstorms version of robotics. Of course, that *IS* one way to look at robots… but really, that’s just the beginning… the “Hello World” program of building robots. I wanted to build something a little more sophisticated than what the “recommended age 12-adult” crowd is building.

Well, for the answer we look to the Control of Mobile Robots class offered on Coursera. In this class, Dr. Magnus Egerstedt introduces Control Systems. The class itself does not get into the hardware of building robots, but digs into the abstraction layers necessary for successfully modeling and controlling things that interact with the real world.

What exactly do I mean when I say “control things?” Well, think about yourself for a moment. If you’re standing and someone shoves you, you are able to react in a way that keeps yourself stable and standing… or, simply put, you’re in control of your body. Keeping yourself upright is a complex set of muscle movements that needs to be carried out correctly, but you don’t need to think about how to control each muscle… you just do it. The instinctive impulse to lean or step is handled by your innate control system.
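To make that concrete, here’s a toy sketch of the feedback idea the class formalizes: a proportional controller drives an error back toward zero, a bit like recovering from a shove. The “lean angle”, gain, and plant model below are made up for illustration, not taken from the course.

```python
# Toy closed-loop sketch (all names and numbers are my own, not from the
# course): a proportional controller pushes a simulated "lean angle" back
# toward zero after a sudden shove.

def simulate_recovery(shove=1.0, gain=0.5, steps=30, dt=0.1):
    """Repeatedly apply u = -gain * error and return the angle history."""
    angle = shove            # the shove knocks us off balance
    history = [angle]
    for _ in range(steps):
        u = -gain * angle    # control action proportional to the error
        angle += u * dt * 10 # simplistic plant: angle responds to the input
        history.append(angle)
    return history

trace = simulate_recovery()
print(f"start: {trace[0]:.2f}, end: {abs(trace[-1]):.6f}")
```

The point isn’t the physics (which is deliberately fake here), it’s the loop: measure, compute an error, act, repeat.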

It’s not enough, however, for a robot to be “controllable”. My goal is to build a robot that’s autonomous. That requires some form of higher-level artificial intelligence. It turns out that Coursera offers another class geared toward exactly this: Artificial Intelligence Planning! In this class, Dr. Gerhard Wickler and Professor Austin Tate take you through a survey of programmatic problem solving algorithms. I was amused to learn that, like so many other computer science problems, artificial intelligence problem solving comes down to a search algorithm.

At the end of this course, you’ll be able to write a program that, given some set of circumstances and a corresponding set of possible actions, will figure out what to do to accomplish its goals; assuming, of course, that some sequence of actions can actually achieve the goal.
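A minimal version of that idea can be sketched as forward search over states. The toy domain below (facts, actions with preconditions/effects, breadth-first search) is my own illustration of planning-as-search, not an algorithm lifted from the course.

```python
from collections import deque

# A minimal forward-search planner sketch (toy domain of my own invention):
# states are frozensets of facts; each action has a name, preconditions,
# facts it adds, and facts it deletes. Breadth-first search finds a plan.

def plan(initial, goal, actions):
    """Return a list of action names reaching `goal`, or None if impossible."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                        # all goal facts hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                     # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Tiny example: pick up a block, then move to the goal room.
actions = [
    ("pickup", {"at_block"}, {"holding"}, set()),
    ("move",   {"at_block"}, {"at_goal"}, {"at_block"}),
]
print(plan({"at_block"}, {"holding", "at_goal"}, actions))  # → ['pickup', 'move']
```

If no sequence of actions reaches the goal, the search exhausts the state space and returns None; which is exactly the “assuming some set of actions can achieve the goal” caveat.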

This led me to the next problem… perception. An autonomous robot has sensors, and it needs to be able to figure out some “state” of the universe in order to use its problem solving capabilities. How on earth do you map images/sounds/echolocation to logical states of the universe? Through machine learning. As it turns out, Coursera offers a LOT of classes on exactly this. The most notable of these is Coursera co-founder Andrew Ng’s class on Machine Learning… Here, you learn all kinds of algorithms for automatically classifying and identifying logical states based on noisy or confusing input.
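As one hedged illustration of that mapping (my own toy example, not a specific algorithm from any of those classes): a nearest-centroid classifier can turn a noisy range reading into a discrete state like “near” or “far”.

```python
import random
from statistics import mean

# Toy sketch: map noisy sensor readings to logical states. The labels,
# distances, and noise levels here are invented for illustration.

def fit_centroids(samples):
    """samples: {label: [readings]} -> {label: mean reading}."""
    return {label: mean(vals) for label, vals in samples.items()}

def classify(centroids, reading):
    """Pick the label whose centroid is closest to the reading."""
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - reading))

random.seed(0)
training = {
    "near": [random.gauss(10, 2) for _ in range(50)],  # noisy ~10 cm echoes
    "far":  [random.gauss(80, 5) for _ in range(50)],  # noisy ~80 cm echoes
}
centroids = fit_centroids(training)
print(classify(centroids, 12.3))  # → near
print(classify(centroids, 75.0))  # → far
```

Real perception pipelines are far richer than this, but the shape is the same: noisy measurements in, a logical state the planner can reason about out.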

A more advanced class that I really enjoyed focused on state-of-the-art neural networks. The class is called Neural Networks for Machine Learning and is taught by Dr. Geoffrey Hinton. It goes into great depth on various kinds of neural networks, and it totally blew my mind. I have no doubt that the correct application of neural nets with the right kind of self-motivating planner will lead to formidable AIs.

Putting it all together

First, let’s talk about hardware. Below is a list of the hardware I’m going to use, and I’ll pair each piece with what I feel is its human-anatomy counterpart.

I’m going to use an Arduino Uno as the primary interface with all of my robot’s actuators. It represents the spinal cord and instinctive nervous system of the robot. The Arduino is a very simple microcontroller that isn’t terribly fast, and it doesn’t have much in the way of memory, but it does have extremely easy interfaces with motors and sensors. This makes it ideal for running a closed-loop system. (See the Control of Mobile Robots class for details.)

Connected to the Arduino will be a Raspberry Pi. The Pi will be the brains of the robot. All higher-order problem solving will occur here. The brain and the spinal cord will talk to each other using SPI, with the Raspberry Pi naturally acting as the master of the SPI bus. As the robot gets more complex, it might be necessary to attach more than one microcontroller (maybe not all Arduinos)… especially if I start working with more complex sensors.
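Whatever goes over that SPI bus will need some agreed-upon message format between the Pi and the Arduino. Here’s a sketch of one way to frame commands; the command IDs, frame layout, and XOR checksum are entirely hypothetical choices of mine, and on real hardware the bytes would be shifted out with an SPI library rather than printed.

```python
# Hypothetical framing for Pi -> Arduino commands over SPI. The command id,
# frame layout [cmd, len, *payload, checksum], and XOR checksum are all
# assumptions for illustration, not an established protocol.

CMD_SET_MOTOR = 0x01  # hypothetical command id

def make_frame(cmd, payload):
    """Build a frame and append a simple XOR checksum."""
    body = [cmd, len(payload)] + list(payload)
    checksum = 0
    for b in body:
        checksum ^= b
    return body + [checksum]

def parse_frame(frame):
    """What the slave side would do: verify the frame, return (cmd, payload)."""
    *body, checksum = frame
    check = 0
    for b in body:
        check ^= b
    if check != checksum:
        raise ValueError("corrupted frame")
    cmd, length, *payload = body
    if length != len(payload):
        raise ValueError("bad length")
    return cmd, payload

frame = make_frame(CMD_SET_MOTOR, [0x7F, 0x10])  # e.g. two motor speeds
print(parse_frame(frame))  # → (1, [127, 16])
```

A checksum of some kind earns its keep here: SPI between two boards on a moving robot is exactly the sort of link where a corrupted byte shouldn’t silently become a motor command.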

The supports and overall skeleton of my robot will be created with Shape Lock. It’s a material that can be melted, shaped, and re-used over and over. It claims to be machinable (using my Dremel) and durable. I imagine that if I need stronger load-bearing parts, I can prototype in Shape Lock and carve some other material based on the prototype. Wood is a likely candidate.

Okay. The big pieces are out of the way. Now the fun stuff. What sensors and servos will I use? I have a variety of electric motors, solenoids, and steppers that I picked up from Adafruit. It’s likely that my first robot will be a simple differential drive deal… but eventually I’d like to go back to my ideas in the original blog post and create a spherical robot. In the end, the actual drive system and sensors don’t matter that much… they’re just the accessories of the Mr. Potato Head. All interchangeable.
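For the differential drive option, the math from the controls class is pleasantly small: two wheel speeds determine a forward velocity and a turn rate, which update the robot’s pose. The wheel radius and axle length below are made-up numbers, not measurements of any actual chassis.

```python
from math import cos, sin

# Sketch of the differential-drive ("unicycle") kinematics covered in the
# controls class. Wheel radius and axle length are assumed values.

WHEEL_RADIUS = 0.03  # meters (assumed)
AXLE_LENGTH = 0.10   # distance between the wheels, meters (assumed)

def step_pose(x, y, theta, wl, wr, dt):
    """Advance the pose (x, y, heading) given left/right wheel speeds (rad/s)."""
    v = WHEEL_RADIUS * (wr + wl) / 2             # forward speed
    w = WHEEL_RADIUS * (wr - wl) / AXLE_LENGTH   # turn rate
    return x + v * cos(theta) * dt, y + v * sin(theta) * dt, theta + w * dt

# Equal wheel speeds drive straight; unequal speeds turn.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(10):
    x, y, theta = step_pose(x, y, theta, 5.0, 5.0, 0.1)
print(f"x={x:.2f} y={y:.2f} theta={theta:.2f}")  # → x=0.15 y=0.00 theta=0.00
```

This is also why the drive system feels interchangeable: swap in different actuators and only this little kinematics layer changes, while the planner and perception code above it stay the same.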