But all of that depends on cars being able to navigate the built environment. The cars now being tested by Google, BMW, Ford and others all see by way of a particular kind of scanning system called lidar (a portmanteau of "light" and "radar"). A lidar scanner sends out tiny bursts of illumination invisible to the human eye, almost a million every second, that bounce off every building, object and person in the area. This undetectable machine-flicker "captures" extremely detailed, millimeter-scale measurements of the surrounding environment, far more accurate than anything achievable by the human eye. The process resembles photography, but it operates volumetrically, producing a complete three-dimensional model of a scene. The extreme accuracy of lidar lends it an air of infallible objectivity; a clean scan of a stationary structure can be so precise that nonprofit organizations like CyArk have been using lidar as a tool for archaeological preservation in conflict zones, hoping to capture at-risk sites of historical significance before they are destroyed.
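The pulse-and-echo geometry described above reduces to simple time-of-flight arithmetic: a pulse's round-trip travel time gives the range to a surface, and the beam's pointing angles place that return in three-dimensional space. A minimal sketch of the idea (the function names are illustrative, not drawn from any real lidar system's software):

```python
import math

# Speed of light in meters per second.
SPEED_OF_LIGHT = 299_792_458.0

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to a surface from a lidar pulse's round-trip time.

    The pulse travels out and back, so the one-way distance is half
    the total path length: d = c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def point_from_return(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one return (range plus beam angles) to an (x, y, z) point.

    A scanner firing hundreds of thousands of pulses per second
    accumulates these points into the three-dimensional "point cloud"
    that serves as the car's model of its surroundings.
    """
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A pulse returning after roughly 66.7 nanoseconds hit something
# about 10 meters away.
distance = range_from_time_of_flight(66.7e-9)
```

The millimeter-scale precision the article mentions comes from timing these round trips to within a few picoseconds, which is why the resulting scans can be accurate enough for archaeological preservation.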

Lidar, however, has its own flaws and vulnerabilities. It can be thrown off by reflective surfaces or inclement weather, by mirrored glass or the raindrops of a morning thunderstorm. As the first wave of autonomous vehicles emerges, engineers are struggling with the complex, even absurd, circumstances that constitute everyday street life. Consider the cyclist in Austin, Tex., who found himself caught in a bizarre standoff with one of Google's self-driving cars. Having arrived at a four-way stop just seconds after the car, the cyclist ceded his right of way. Rather than coming to a complete halt, however, he performed a track stand, inching back and forth without putting his feet on the ground. Paralyzed with indecision, the car mirrored the cyclist's own movements, jerking forward and stopping, jerking forward and stopping, unsure if the cyclist was about to enter the intersection. As the cyclist later wrote in an online forum, "two guys inside were laughing and punching stuff into a laptop, I guess trying to modify some code to 'teach' the car something about how to deal with the situation."

Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University and author of the book "Robot Futures," uses the metaphor of the perfect storm to describe an event so strange that no amount of programming or image-recognition technology can be expected to understand it. Imagine someone wearing a T-shirt with a stop sign printed on it, he told me. "If they're outside walking, and the sun is at just the right glare level, and there's a mirrored truck stopped next to you, and the sun bounces off that truck and hits the guy so that you can't see his face anymore — well, now your car just sees a stop sign. The chances of all that happening are diminishingly small — it's very, very unlikely — but the problem is we will have millions of these cars. The very unlikely will happen all the time."