Moving one step at a time to reduce speed

Requirements

Programming knowledge (Python / JavaScript)

Arduino

4 x 150Ω resistors

4 x 100Ω resistors

4 optocouplers

Breadboard + jump wires

Soldering iron kit

RC car

Voltage step-up converter (optional)

Smartphone with camera streaming app (iOS / Android)

Smartphone car holder

Building the circuit

The idea is to open and close switches on the remote controller from a script.

Optocoupler

I needed to find a way to mimic the action of pushing a physical button from a piece of code.

I first thought of using a basic switch controlled from the Arduino, but the remote controller I have requires 9V, while the Arduino can only handle 5V.

Here comes the solution: “Optocouplers”

In electronics, an opto-isolator, also called an optocoupler, photocoupler, or optical isolator, is a component that transfers electrical signals between two isolated circuits by using light.

— Wikipedia

This way, we can have 2 circuits logically connected but electrically isolated. The Arduino is now safe and I can open/close the remote controller’s circuit in different places.

But wait: the optocoupler requires 1.3V, and the Arduino GPIOs output 5V.

From the optocoupler datasheet

Arduino outputs 5V

A common and easy way to reduce the voltage is to build a voltage divider.

Voltage divider formula

According to the formula, I picked a 100Ω and a 270Ω resistor.

R1 = 270Ω; R2 = 100Ω; Vin = 5V
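As a quick sanity check, the divider output can be computed directly from the formula with the values above:

```python
# voltage divider: Vout = Vin * R2 / (R1 + R2)
r1 = 270   # ohms
r2 = 100   # ohms
vin = 5.0  # volts

vout = vin * r2 / (r1 + r2)
print(round(vout, 2))  # prints 1.35 -- close to the 1.3V the optocoupler needs
```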

Now we can build the circuit …

… and check if everything is working as expected

Close enough

We can now safely connect it to the optocoupler.

Optocouplers have a dot printed on the package; this marks the pin where the positive wire goes.

Also, once everything was done and working, I discovered voltage regulators which would make things even easier.

Voltage regulators will always output the predefined voltage

Connecting the remote controller

When I opened the remote controller, I noticed the buttons were making connections between two parts when pressed. This means we can solder wires on each part, hook it to the optocoupler and control it with the Arduino.

Forward / Backward / Left and Right connected to optocouplers

Powering the remote controller with the Arduino (optional)

As mentioned before, the Arduino outputs 5V and the remote controller requires 9V provided by a battery.

Since I want this system to be plug & play, I didn’t want to have to deal with the battery.

A voltage step up converter allows me to get 9V out of the Arduino and connect it to the remote controller.

White wires are connected to the 5V output and ground of the Arduino. Red and black to the remote.

We are now done with the hardware part, let’s move on to the code.

Controlling the car with Python through the Arduino

Arduino sketch

The Arduino is waiting for strings from the serial port and the python script will send those commands.

I have added the colour mapping of the wires to the pins.

Each wire is connected to an optocoupler

// arduino code
// read the command string from the serial port
// and turn the associated pin on or off
String str;

void setup()
{
  Serial.begin(250000);
  pinMode(8, OUTPUT);  // left - brown
  pinMode(9, OUTPUT);  // right - orange
  pinMode(10, OUTPUT); // backward - white
  pinMode(11, OUTPUT); // forward - blue
}

void loop()
{
  if (Serial.available() > 0)
  {
    str = Serial.readStringUntil('\n');
    if (str.equals("left_on")) {
      digitalWrite(9, LOW);
      digitalWrite(8, HIGH);
    }
    if (str.equals("right_on")) {
      digitalWrite(8, LOW);
      digitalWrite(9, HIGH);
    }
    if (str.equals("backward_on")) {
      digitalWrite(11, LOW);
      digitalWrite(10, HIGH);
    }
    if (str.equals("forward_on")) {
      digitalWrite(10, LOW);
      digitalWrite(11, HIGH);
    }
    if (str.equals("left_off")) {
      digitalWrite(8, LOW);
    }
    if (str.equals("right_off")) {
      digitalWrite(9, LOW);
    }
    if (str.equals("backward_off")) {
      digitalWrite(10, LOW);
    }
    if (str.equals("forward_off")) {
      digitalWrite(11, LOW);
    }
  }
}

Sending command from Python

Using the serial connection, we are now able to send commands to the Arduino.

You can try any of these commands in the serial monitor.

First find out the port of your Arduino:
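One quick way to find it (a stdlib sketch, assuming the usual macOS `/dev/cu.usbmodem*` or Linux `/dev/ttyACM*` / `/dev/ttyUSB*` naming; your device name will differ):

```python
# list serial devices that could be the Arduino
import glob

ports = (glob.glob('/dev/cu.usbmodem*')
         + glob.glob('/dev/ttyACM*')
         + glob.glob('/dev/ttyUSB*'))
print(ports)
```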

Now we should be able to control the car from a Python script

from serial import Serial

import time

# use the port found above
arduino = Serial('/dev/cu.usbmodem1411', 250000)
time.sleep(2)  # give the serial connection time to initialise

arduino.write(b'forward_on\n')

time.sleep(1)

arduino.write(b'forward_off\n')

Machine learning and Neural Network

Before we go further, let’s pause for a while to think about how we are going to build the self driving feature.

We are going to use supervised machine learning, which means we need to:

Collect good samples of data

Translate those samples to numbers

Train our model

Collecting data

With the smartphone and the car holder attached to the roof of the car we will be able to see what’s in front of the car.

Using any app that serves the camera image at a URL, plus a REST API with an endpoint for each direction, we can control the car and see where it's going.

We can then use it to collect our data.

# REST API to control the car
# each endpoint moves the car in one direction for a short time
from serial import Serial
import requests
from PIL import Image
from io import BytesIO
import time
import uuid

from flask import Flask
from flask_cors import CORS

arduino = Serial('/dev/cu.usbmodem1411', 250000)
# wait for the serial port to be ready
time.sleep(2)

app = Flask(__name__)
CORS(app)

# check that everything is working
for command in [b'forward', b'backward', b'left', b'right']:
    arduino.write(command + b'_on\n')
    time.sleep(0.05)
    arduino.write(command + b'_off\n')
    time.sleep(0.05)


def save_data(command):
    # request the image many times
    # for some reason the app doesn't always return
    # the latest image if you request it only once ¯\_(ツ)_/¯
    for _ in range(0, 10):
        response = requests.get('url of camera')
    # save the image for future use
    # the direction is stored at the end of the filename
    Image.open(BytesIO(response.content)).convert('L').save('../data_tmp/{}_{}.jpg'.format(uuid.uuid1(), command))


@app.route("/forward")
def forward():
    save_data(0)
    arduino.write(b'forward_on\n')
    time.sleep(0.10)
    arduino.write(b'forward_off\n')
    return 'forward'


@app.route("/left")
def left():
    save_data(1)
    arduino.write(b'left_on\n')
    time.sleep(0.5)
    arduino.write(b'forward_on\n')
    time.sleep(0.10)
    arduino.write(b'forward_off\n')
    time.sleep(0.5)
    arduino.write(b'left_off\n')
    return 'left'


@app.route("/right")
def right():
    save_data(2)
    arduino.write(b'right_on\n')
    time.sleep(0.5)
    arduino.write(b'forward_on\n')
    time.sleep(0.10)
    arduino.write(b'forward_off\n')
    time.sleep(0.5)
    arduino.write(b'right_off\n')
    return 'right'


if __name__ == "__main__":
    app.run(host='0.0.0.0')

Driving the car from your browser

Let’s write a bit of JavaScript to send HTTP requests to our server and drive the car from our keyboard.

The HTML file, loading FastClick and Mousetrap:

<html>
  <head>
    <title>Remote</title>
  </head>
  <body>
    <div id="forward">Forward</div>
    <div id="left">left</div>
    <div id="right">right</div>
    <div id="backward">Backward</div>

    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/fastclick/1.0.6/fastclick.min.js"></script>
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mousetrap/1.6.0/mousetrap.min.js"></script>
    <script type="text/javascript" src="index.js"></script>
  </body>
</html>

Then bind the keys and buttons to HTTP requests:

if ('addEventListener' in document) {
  document.addEventListener('DOMContentLoaded', function() {
    FastClick.attach(document.body); // remove the 300ms tap delay on mobile
  }, false);
}

// send a request to switch on a direction
function switchOn(direction) {
  fetch('http://localhost:5000/' + direction)
    .then(response => response.text())
    .then(response => console.log(response))
    .catch(error => console.log(error));
}

// send a request to switch off a GPIO and turn off the LED
function switchOff(direction) {
  fetch('http://localhost:5000/' + direction + '-off')
    .then(response => response.text())
    .then(response => console.log(response))
    .catch(error => console.log(error));
}

// bind the buttons on the UI
['forward', 'backward', 'left', 'right'].forEach(key => {
  document.getElementById(key).addEventListener('mousedown', () => switchOn(key));
  document.getElementById(key).addEventListener('mouseup', () => switchOff(key));
});

// keep track of which keys are down
const keysDown = {
  up: false,
  down: false,
  left: false,
  right: false,
};

// map keys to directions
const keyDirections = {
  up: 'forward',
  down: 'backward',
  left: 'left',
  right: 'right',
};

// bind the keyboard keys
['up', 'down', 'left', 'right'].forEach(key => {
  Mousetrap.bind(key, () => {
    if (!keysDown[key]) {
      keysDown[key] = true;
      switchOn(keyDirections[key]);
    }
  }, 'keydown');
  Mousetrap.bind(key, () => {
    keysDown[key] = false;
    switchOff(keyDirections[key]);
  }, 'keyup');
});

Now, we open the webpage and start driving the car to collect data.

Translate an image to numbers

First we need to understand how a Neural Network works.

The easy version is that it takes an array of numbers as input and gives back an answer.

E.g. we want our NN to understand that we are performing a multiplication:

// data samples to teach multiplication
input = [
  [1, 2],
  [2, 4],
  [5, 10],
  ...
]
output = [2, 8, 50, ...]

We train our model, then hopefully we ask it to predict [3, 2] and it returns 6.
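As a toy illustration of this idea (not the project's code), here is a sketch with scikit-learn's MLPRegressor; the dataset is the one above plus a few made-up pairs, and a model this small won't predict multiplication perfectly:

```python
from sklearn.neural_network import MLPRegressor

# toy dataset: pairs of numbers and their product
X = [[1, 2], [2, 4], [5, 10], [3, 3], [4, 2], [6, 5]]
y = [2, 8, 50, 9, 8, 30]

model = MLPRegressor(hidden_layer_sizes=(10,), solver='lbfgs',
                     max_iter=5000, random_state=1)
model.fit(X, y)

# ask the model for 3 x 2 -- hopefully something near 6
print(model.predict([[3, 2]]))
```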

Images are made of pixels, and each pixel has a colour defined by its levels of Red, Green, and Blue (RGB).

This sounds like something we could use to convert our image into numbers: an array of arrays representing each pixel's colour, [[R, G, B], ...].

// representation of an image with colours
[
  [123, 212, 12],
  [43, 17, 121],
  [3, 54, 90],
  ...
]

However, working with all the colours triples the amount of data, and we're only interested in detecting patterns and shapes.

To simplify the process, I propose training our model on the brightness of pixels (aka grayscale) instead of their colour.

We end up with an array of numbers, each element representing the brightness of one pixel.

// representation of a black and white image

[0, 255, 127, ...]
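For reference, a colour pixel can be reduced to a single brightness value with the standard ITU-R 601 luma weights, which is roughly what PIL's convert('L') does; a minimal numpy sketch:

```python
import numpy as np

# a tiny 3-pixel "image": one pure red, one green, one blue pixel
pixels = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]])

# ITU-R 601 luma weights, as used by PIL's convert('L')
weights = np.array([0.299, 0.587, 0.114])
brightness = pixels @ weights
print(brightness.round().astype(int))  # one brightness value per pixel
```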

I first started with scikit-image but OpenCV seems to be better for the job.

We are going to do a few things:

Blur the image to remove details

Convert it to black and white only (binary)

Resize to 25 x 25

Convert to an array

# convert an image to an array
# adapt the params to the image and the brightness of the room
import cv2
import numpy as np

img = cv2.imread('image.jpg')
img = cv2.blur(img, (5, 5))
retval, img = cv2.threshold(img, 140, 255, cv2.THRESH_BINARY)
img = cv2.resize(img, (25, 25))
image_as_array = np.ndarray.flatten(np.asarray(img))

# display the result
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Grayscaled image

Blur the image to remove the details of the wooden floor

Binarized image with OpenCV

Why do we need to resize the image?

The minimum size the app allows us to use is 192 x 144, which means our array will have 27,648 elements. We don’t need such a huge sample to detect patterns. By reducing the size of the image to 25 x 25, we will deal with arrays of 625 elements instead, improving training speed (no one wants to wait hours to see the result).

Train the model

We first load and process the previously saved images generated while driving the car manually, and use them as our training set. Then we train the model and save it for later.

This is shamelessly taken from the scikit-learn documentation with a few tweaks to adapt it to the use case.

import numpy as np
from os import listdir
from os.path import isfile, join
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import cv2

X = []
y = []

# load all the images and convert them
files_name = [f for f in listdir('data') if isfile(join('data', f)) and f != '.DS_Store']
for name in files_name:
    try:
        # load the image
        img = cv2.imread(join('data', name))
        # blur to remove details
        img = cv2.blur(img, (5, 5))
        # convert to binary
        retval, img = cv2.threshold(img, 210, 255, cv2.THRESH_BINARY)
        # resize to improve performance
        img = cv2.resize(img, (24, 24))
        # convert to array
        image_as_array = np.ndarray.flatten(np.array(img))
        # add our image to the dataset
        X.append(image_as_array)
        # retrieve the direction from the filename
        y.append(name.split('_')[1].split('.')[0])
    except Exception as inst:
        print(name)
        print(inst)

# split for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# scale the data
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

clf = MLPClassifier(solver='lbfgs', alpha=100.0, random_state=1, hidden_layer_sizes=50)
clf.fit(X_train, y_train)
print('score: ', clf.score(X_test, y_test))

Notes:

clf.score on a single train/test split isn't very reliable; you could use cross-validation instead:

from sklearn.model_selection import cross_val_predict
from sklearn import metrics

predicted = cross_val_predict(clf, X, y, cv=5, verbose=2, n_jobs=8)
print('CV: ', metrics.accuracy_score(y, predicted))

Once validated, train your model again with the full dataset:

clf = MLPClassifier(solver='lbfgs', alpha=100.0, random_state=1, hidden_layer_sizes=50)
clf.fit(X, y)

Let the computer drive the car

from skimage import io
import numpy as np
import joblib  # sklearn.externals.joblib is deprecated in recent scikit-learn
from serial import Serial
import time
import cv2

arduino = Serial('/dev/cu.usbmodem1411', 250000)
time.sleep(2)

# check that everything is working
for command in [b'forward', b'backward', b'left', b'right']:
    arduino.write(command + b'_on\n')
    time.sleep(0.1)
    arduino.write(command + b'_off\n')
    time.sleep(0.1)

CAMERA_URL = 'http://192.168.1.211:8080/live.jpg'
ARDUINO_SERVER = 'http://localhost:5000'

clf = joblib.load('model.pkl')
scaler = joblib.load('scaler.pkl')
scaler_stop = joblib.load('stop/scaler.pkl')
is_stop = joblib.load('stop/model.pkl')
print('model loaded')


def send_command(result):
    if result == '0':
        arduino.write(b'forward_on\n')
        time.sleep(0.05)
        arduino.write(b'forward_off\n')
    if result == '1':
        arduino.write(b'left_on\n')
        time.sleep(0.5)
        arduino.write(b'forward_on\n')
        time.sleep(0.05)
        arduino.write(b'forward_off\n')
        time.sleep(0.5)
        arduino.write(b'left_off\n')
    if result == '2':
        arduino.write(b'right_on\n')
        time.sleep(0.5)
        arduino.write(b'forward_on\n')
        time.sleep(0.05)
        arduino.write(b'forward_off\n')
        time.sleep(0.5)
        arduino.write(b'right_off\n')


def drive():
    # loop instead of recursing so we don't hit Python's recursion limit
    while True:
        img = io.imread(CAMERA_URL)
        img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)  # skimage loads RGB; cv2 works in BGR
        img = cv2.blur(img, (5, 5))
        retval, img = cv2.threshold(img, 210, 255, cv2.THRESH_BINARY)
        img = cv2.resize(img, (24, 24))
        retval, img = cv2.threshold(img, 210, 255, cv2.THRESH_BINARY)
        image_as_array = np.ndarray.flatten(np.array(img))
        # apply the same scaling as during training
        result = clf.predict(scaler.transform([image_as_array]))[0]
        send_command(result)
        time.sleep(0.5)


print('start driving')

drive()

Run the script and see what happens!

I had many failures. Apparently this ML model didn’t like the camera.

Moving slowly but follows the tracks

Reduce the sleep time and move backward after each step to limit speed

You can play with the value of time.sleep to get smoother movements.

What’s next?

The car

The steering angle of this car is really bad and it needs a bigger room to perform significant turns.

Building a RC vehicle using the Zumo Chassis would be a fun project and give more control over the directions.

Self driving

There are many ways to solve a problem with machine learning. So far we have only used scikit-learn, but TensorFlow or other alternatives are worth exploring.

There are also many paths to explore in how the data is processed:

Fit a curve on the track and use it as data input instead of pixels

Add stop sign and traffic lights

I have already started to experiment with STOP signs, but I need to craft a proper one, and an Arduino with 3 LEDs should be enough for traffic lights.

The car should stop whenever it recognizes this sign

It should be fairly easy to build a traffic light with 3 LEDs and an Arduino

OpenCV has a lot of tutorials about object detection.

Acknowledgements

GreatScott has many videos about electronics. I highly recommend his playlist on Electronic Basics.

David Singleton made a great tutorial and talk on his own self driving RC car

Sarwech Shar who found this great project and built it with me

Miroslav Batchkarov suggested reducing the size of the images, allowing faster iterations.

Chris Charlton helped me understand the Neural Network part, and his previous experience led to the idea of building my own RC vehicle.

Thank you everyone for your help!

Any feedback or suggestions are welcome.