In fact, because the car reacts to the track as if it were controlled in real time by a human, a funny thing happens to passengers along for the ride. Initially, when the car accelerates to 115 miles per hour and then brakes just in time to make it around a curve, the person riding shotgun freaks out.

But a second lap looks very different. Passengers tend to relax, putting their faith in the automatically spinning wheel. "We might have a tendency to put too much confidence in it," cautioned Gerdes. "Watching people experience it, they'll say, 'Oh, that was flawless.'" Gerdes's reaction: "Wait, wait! This was developed by a crazy professor and graduate students!"

Ninety percent of accidents occur because of human error, and even a really smart algorithm isn't going to maneuver a car out of every dangerous situation. Gerdes spoke specifically about the problem of recognizing pedestrians. You can teach a car to recognize something with two arms and two legs as something to avoid, but "Go to [San Francisco's] Castro for Halloween, and the pedestrian system needs to recognize strange cases." Costumes or not, Gerdes believes that by teaching cars to operate at the level of our most skilled drivers, we're better equipping them to take care of us.

Gerdes can imagine a scenario where cars would assist drivers in difficult situations. Hit a nasty ice patch? All that skidding around Thunderhill Raceway has taught the cars -- and their programmers -- how best to deal with the slippery situation.

He also sees a future for networked learning between cars. A tricky merge or a ravaged road could be explained to the car before it approaches, giving the vehicle and its owner a better shot at safe passage.

Although autonomous cars may reach the point of being much safer than cars driven by humans, they won't be perfect. The next challenge is to get people comfortable with shifting the blame.
