The Rover gathers data from a 3-D camera, which, like the sensors in a Microsoft Kinect, tracks moving objects. A simple lidar sensor measures how far you are from those objects. The onboard computer fuses it all and gives you, through the headset, a series of multicolored lines that occasionally coalesce into recognizable shapes. It does its best to guess what they are—like a pedestrian or car—and even tells you, with a percentage, how confident it is in its guess. This is an artistic approximation of how AVs see the world, since the goal is to simulate the experience, not perfectly reproduce the computer's understanding of lidar and radar data.
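The fusion step the paragraph describes—pairing a camera detection (a label plus a confidence score) with a lidar range reading, and only committing to a guess when confidence is high enough—can be sketched roughly like this. All names and thresholds here are invented for illustration; the article does not describe the Rover's actual code.

```python
# Hypothetical sketch of the Rover's overlay logic: combine a camera
# classification (label + confidence) with a lidar distance reading
# into the caption a rider would see next to a coalesced shape.

def overlay_label(label: str, confidence: float, distance_m: float) -> str:
    """Build the caption shown beside a detected shape in the headset."""
    if confidence < 0.5:
        # Below an (assumed) threshold, don't commit to a guess.
        label = "unknown object"
    return f"{label} ({confidence:.0%}) at {distance_m:.1f} m"

print(overlay_label("pedestrian", 0.87, 4.2))  # pedestrian (87%) at 4.2 m
print(overlay_label("car", 0.30, 12.0))        # unknown object (30%) at 12.0 m
```

Showing the confidence rather than hiding it is the point of the exhibit: the rider sees not just what the system thinks it sees, but how sure it is.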

The Moovel team has taken the setup to exhibitions and conferences, and used it for informal interviews rather than rigorous experiments. They’re keen to get people thinking about some of the issues, and believe making them tangible makes them easier to discuss. They say most of the volunteers who have gone for a ride found it fun—eventually—and informative.


Rolling a mile in a soulless robot's tires may seem pointless, but the Moovel researchers see value in understanding, communication, and even empathy between people and driverless cars. With their plethora of cameras and other sensors, it's easy to assume that robocars will be all-seeing and all-knowing. But seeing and processing are two distinct steps. The intelligence that makes decisions has to register and react to an object that appears in front of a camera. And that AI is a black box, even to the developers who train it with hundreds of thousands of examples of what not to hit. Moovel believes everyone should try to pick up at least a basic understanding of how it works—and its potential limitations.

“One thing that we do want to raise is how many sensors is enough to be confident that your machine is able to see the things that are necessary,” says Lee. If you step out into the path of an AV, will it definitely spot you, recognize you as a person, and stop? If you’re riding in a driverless taxi and it starts snowing, do you know how much its view of the road ahead is impacted? The more answers we have, the better we'll all be able to live in peace.

The folks building real self-driving cars are tackling this communication gap, without the terrifying bit. Waymo and Uber have each developed interfaces that translate for human eyes what the car is doing, and how it sees the world. When in Autopilot mode, Tesla cars show a basic representation of what they see in the instrument cluster, an easy way for you to double-check that the car really has spotted that vehicle cutting in front of you.

Maybe one day, in the utopian future of crash-less computer drivers, none of this will be necessary. But for the foreseeable future, when AVs with their learner permits are sharing the roads with humans who’ve never encountered them before, a better two-way understanding, and even a little empathy, will keep everyone safer.

I'd Like to Buy the World Some Code