Nathaniel Fairfield, Technical Lead at Google, delivered the keynote at last month’s Embedded Vision Summit West, speaking about self-driving cars. The Google Self-Driving Car project was created to rapidly advance autonomous driving technology based on laser sensors, cameras, and radar, coupled with a detailed, highly annotated, and constantly updated map of the world. Google's self-driving cars have now traveled nearly a million miles autonomously.

In his deeply informative, hour-long talk, Fairfield discusses the cars’ capabilities in detail; Google's overall approach to solving a huge number of diverse driving problems (weather, modeling the expectations of other drivers, lane-splitting motorcycles, occluding objects such as other vehicles, hidden pedestrians dashing into view, railroad crossings, getting permission to put a self-driving car on the road, etc.); and the challenges that remain to be resolved (such as squirrels, snow, and the lack of room on the car’s roof for a ski rack because of the rooftop laser). Fairfield also takes an enlightening detour into the challenges of machine vision, offering some interesting and novel revelations, such as the use of QR codes for instant locality identification.

Fairfield also discusses the recently announced Google autonomous electric vehicle, a direct result of this research. It has a soft nose to reduce pedestrian injuries, a speed governor set to 25 mph, and no steering wheel. It’s an entirely new take on solving urban transportation problems.

The video ends with 20 minutes of informative, not-to-be-missed Q&A. Unsurprisingly, the first question was about computational load: sensor fusion/analysis and vision processing are the biggest computational burdens.

Fairfield’s keynote was representative of the high-quality material presented at the recent Embedded Vision Summit West. You can see the presentation on the Embedded Vision Alliance page, or just watch it below.