Jobless, aimless, eating noodles for dinner, how did I get here?

I’ve always been a developer: writing code for computers to eat up and spit out as useful things for people to use.

But over the last few years, my passion for software development (sitting down for eight hours a day and writing code) has diminished a little. I've always greatly enjoyed it: ever since I wrote a game on my Commodore 64 as a kid, I could see that if you could write code, you could make anything happen in the world of the screen.

So while it's fun setting up websites and writing iPhone apps, these things are now well understood, and they're reaching the limits of what effect they can have on the world through screens on desks and in our pockets. They're not as fun as they used to be, because they're no longer expanding what it's possible to make happen in the world with code.

So in 2015, while working at my job, I started to think again about the potential that code has to transform what exists in the world. Starting out as just pages on screens, the web, apps, and phones have since reached into the real world in small ways, transforming how a lot of people spend their time and make their living. This was good progress from the pages of the early web, but exciting new development in this area seems to be hitting the tail end of the S curve.

In other words, code has reached out as far as it can into the real world from its screen and browser world.

That's what I thought, until I read about a small group of dudes in Melbourne who were racing drones they had built themselves from parts. I remember reading the article late on a Friday night at the office, where I was just sitting and updating some website files, and thinking: I want to go to there.

Code flies free.

This is where code has the chance to break free of the screen and into the real world.

So over the next two years, 2016 and 2017, I became obsessed with building and flying drones. The idea: you could buy a few $10 motors, bolt them onto a small frame, and with the magic of some code to balance and control the drone in the air, create something that lets you become a bird, opening up a perspective on the world that humans have never had before. With the help of local maker spaces, and seeing the never-ending mischief reported alongside some genuinely promising and useful applications, it seemed clear that something was happening here: a chance for code to interact with the physical world in a way it hadn't before, expanding the environment in which it runs from the screens to the skies.
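That "magic of some code to balance and control" is, at its heart, a feedback loop. Here's a minimal sketch of a PID controller levelling a single axis; the gains and the toy physics are made up purely for illustration, not tuned for any real drone:

```python
# A PID controller: the essence of the code that keeps a drone level.
# All gains and the toy physics below are invented for illustration.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        # Skip the derivative on the very first sample to avoid a startup kick.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy simulation: bring the drone's roll angle back to level (0 degrees)
# from a 20-degree tilt, pretending torque maps directly to angular
# acceleration.
dt = 0.01
pid = PID(kp=4.0, ki=2.0, kd=2.0, dt=dt)
angle, rate = 20.0, 0.0
for _ in range(2000):  # 20 simulated seconds
    torque = pid.update(setpoint=0.0, measured=angle)
    rate += torque * dt
    angle += rate * dt

# angle has now settled close to 0 (level)
```

Multiply that by three axes plus throttle, run it hundreds of times a second on a tiny microcontroller, and a pile of $10 motors stays in the air.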

At the same time, I kept hearing about this crazy fringe group of developers raving about things with funny names like Torch, Caffe, CNNs, and AlexNet. They seemed quite keen on these technologies as surprisingly effective solutions to problems I had studied way back in the AI course I took at university, where we solved Wumpus World, which didn't seem like the most pressing issue in the world at the time, and honestly, doesn't even now. This was of course well before TensorFlow, Keras, ResNet, and YOLO (both the cultural idea to live your life by, and the real-time object detection neural network). Okay, computers are now better than humans (remember, that's us) at finding where Wally is in an image. That's interesting, somewhat scary even, but still: how does that change things?

Dining room. Understood.

With these two pieces coming together at the same time, it’s time for a new S curve to begin. Physical actuation meets physical scene perception equals a whole new, bigger screen for code to run on: the human world (and beyond).

So I decided to spend the entire year of 2017 not working, but studying self driving cars, robotics, computer vision, and machine learning.

(During this time I also heard an awful lot about Bitcoin and VR, but these things haven't really helped people that much, beyond a few fun games and reallocating money to and from a few lucky people.)

At the end of 2016, I happened to be hanging out at the Starbucks near the Google campus when I saw on Twitter that George Hotz was speaking at a nearby event about self driving cars. So I walked over, and found it was a welcome event for the first term of students enrolled in the Udacity Self Driving Car Nanodegree program. People kept asking me how I was enjoying the program so far; I had seen the program before and it looked interesting, so I decided to enrol.

Completing the three-term, year-long Udacity Self Driving Car Nanodegree was pretty eye-opening, and a great way to start enjoying coding again. Three terms and many projects later (learning how to control steering and speed, use simulators, process images, detect lane lines, and make sense of sensor data), I'm onto the last group project, where we run our code on a real car in California and hope it obeys traffic lights and doesn't hit anything.

The Udacity projects were the only clearly defined checklist items to complete for the year, but of course there was a lot more to do than that. So for 2017, I sat in an apartment in Melbourne city most days of the year, starting in January with an empty apartment and a laptop; by December, my robot lab ended up looking something like this: