In terms of process that worked fairly well — I learned how to work with Arduino and interface with the motors and Bluetooth. I also had to dust off my soldering skills, though not much, as most of my wiring was done on prototyping breadboards. I used a great Android app from Stony Brook University to steer the robot.

Work in progress

The next step was adding sensors, so that the car would have more awareness of itself. I got an HC-SR04 ultrasonic sensor (sonar) to detect obstacles, and four wheel encoders as odometry sensors, to count wheel rotations and thereby calculate speed and distance traveled. I also got a 9DoF IMU — an inertial measurement unit with a magnetometer — so I could detect the heading of the car. The goal was to add simple obstacle avoidance logic: when the car senses an obstacle ahead, it steers away at a preset angle and drives on until it meets the next obstacle.

Ready for simple obstacle avoidance

It was funny to see the car bumping around the house, but the logic itself was trivial — I had a “planet explorer” toy from 1984 doing exactly that. The difference was still hidden at that point: my car knew where it was heading and how far it had traveled.

To start using this advantage, my next step was to finally get to use ROS (Robot Operating System). My Arduino would be a ROS node, connected via WiFi to a computer running the ROS core. ROS is essentially an efficient publish/subscribe framework with some conventions on semantics, though the learning curve was somewhat steep in the beginning. In the end I could interact with Robaka remotely, using a state-of-the-art technology stack. Even just for debugging purposes, things like live plotting of sensor data and recording everything for later playback are very helpful.

Around the same time NVIDIA announced its new product for AI and robotics developers — Jetson Nano, a 0.5 teraflops computer (128-core GPU, 1.5 GHz 64-bit CPU, 4 GB RAM) running Linux. The Jetson platform is supported by Isaac, NVIDIA's robotics SDK and simulator, and it also supports ROS.

I decided to make the Nano the mobile ‘brain’ of Robaka, no longer depending on a wireless connection to a computer — and eventually to support the robot with machine learning algorithms like object detection and image recognition. The Nano can do all the fancy CV things like YOLO in real time, but requires a decent mobile power source for that.

I also had to replace the Arduino Uno with an Arduino Due, because of the memory and I/O limitations of the Uno. The Due is a much more powerful board, running a 32-bit ARM CPU clocked at 84 MHz, with 512 KB of flash memory, 96 KB of SRAM, and 54 I/O pins — compare that to the Uno at 16 MHz, with 32 KB of flash and just 2 KB of SRAM!

Arduino + Jetson Nano

This is how Robaka looked with the Jetson Nano, a camera, and a WiFi module hooked up to it. As you can see, I installed one more ‘floor’ on the chassis sandwich to accommodate the Nano and the power bank. The onboard camera made it much easier to track the robot driving around the house without having to walk behind it. I also added two more sonars, for better selection of an obstacle-free path without rotating the robot, and had to move the IMU up and away from the magnetic field of the motors. (And yes, I’m a bit sentimental about my very first DIN PC keyboard from 1995, which you can see in the background.)

I was carefully approaching the SLAM topic, only to find out that sonar accuracy was far below what’s needed for decent indoor localization. It was good enough for obstacle detection and basic avoidance, but that’s it. I knew that basically everybody uses LIDARs for SLAM, but had always thought they were very expensive and that I could get away without one. It turns out, however, that these days you can get an indoor 2D LIDAR for as little as 90 EUR! Without further delay I ordered one, installed it on Robaka, and finally all the pieces ‘clicked together’. The accuracy and update rate of the LIDAR were overwhelming.