You have doubtless heard the news. A robotic Uber car in Arizona struck and killed [Elaine Herzberg] as she crossed the street. Details are sketchy, but preliminary reports indicate that the accident was unavoidable as the woman crossed the street suddenly from the shadows at night.

If and when more technical details emerge, we’ll cover them. But you can bet this is going to spark a lot of conversation about autonomous vehicles. Given that Hackaday readers are at the top of the technical ladder, it is likely that your thoughts on the matter will influence your friends, coworkers, and even your politicians. So what do you think?

The Technology Problem?

Uber, Waymo, and other companies developing self-driving cars have a lot of technology. There have been a few hiccups. An Uber car ran a red light in California. Another — also in Arizona — was struck by another vehicle and rolled over, although police blamed the other (human) driver. Now we have the [Herzberg] case, and we don’t yet know whether technology played any part in the tragedy. But if it did, we can be confident the problem is solvable. The act of driving doesn’t seem like it should be that hard for a machine.

That doesn’t mean machines will drive in the same way as a human. Humans have intuition and some pretty awesome pattern matching capability. On the other hand, they also have limited attention spans and don’t always react as fast as they would like. Nor is the machine able to perform miracles. No matter if the driver uses silicon or protoplasm, it takes a certain amount of distance to stop a vehicle moving at a given speed. Vehicles will kill people no matter how smart the driving computers get.
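To put rough numbers on that, here is a back-of-the-envelope sketch. The reaction times and the friction coefficient are illustrative assumptions, not measured values, but the physics — reaction distance plus braking distance — holds no matter who, or what, is driving.

```python
# Back-of-the-envelope stopping distance: d = v*t_react + v^2 / (2*mu*g).
# All numbers are illustrative assumptions, not measured values.

G = 9.81   # gravitational acceleration, m/s^2
MU = 0.7   # assumed tire-road friction coefficient (dry asphalt)

def stopping_distance(speed_kmh, reaction_s):
    """Total distance in meters to stop from speed_kmh, given a reaction time."""
    v = speed_kmh / 3.6                  # km/h to m/s
    reaction = v * reaction_s            # distance covered before braking starts
    braking = v ** 2 / (2 * MU * G)      # distance under steady maximum braking
    return reaction + braking

# Assume a typical human needs ~1.5 s to react and a computer ~0.2 s.
for label, t_react in (("human", 1.5), ("computer", 0.2)):
    print(f"{label}: {stopping_distance(60, t_react):.1f} m from 60 km/h")
```

Even a hypothetical driver with zero reaction time still needs about 20 meters to stop from 60 km/h under these assumptions — a faster brain helps, but it can’t repeal physics.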

It Isn’t Technology

If the technology is there, why aren’t the highways saturated with robotic limos? As with most real-world engineering, the technology is only one part of the equation. We can make plenty of electricity using nuclear power plants, but risk aversion, regulation, and tax structures make it infeasible. I’m not saying whether that’s appropriate; I’m just saying that we know how to build atomic power plants, we just decided to stop. We have had the technology to go to the moon for a while. We’d have to recreate some of that capability if we wanted to go back, but we could do it. We just don’t today. Making a self-driving car isn’t a problem like sending a live human to the Andromeda galaxy.

Data from 2016 shows that just under 40,000 people a year die on United States roads — an average of 102 people a day. We don’t have much data for robotic vehicles yet, but intuitively you have to guess that well-designed autonomous vehicles ought to do better. If the majority of cars were under computer control, you’d expect much better. But it isn’t going to be zero fatalities.

Some of the resistance is probably an oddity of human behavior. In 2016, there were 325 deaths worldwide due to commercial air travel. Yet while very few people are afraid to drive, many people are afraid to fly. Behind the wheel, there is an illusion that we are in control. Maybe we think a car wreck is somehow our fault even when it isn’t. But on a commercial airplane, you feel like you are at the mercy of the pilot and — to some degree — of what many people consider mysterious technology.
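For perspective, it helps to put those two figures on the same per-day scale. This quick sketch uses only the numbers quoted above; the arithmetic is the only addition.

```python
# Comparing the article's road and air fatality figures per day.
ROAD_DEATHS_PER_DAY = 102   # 2016 US traffic fatalities, daily average
AIR_DEATHS_2016 = 325       # worldwide commercial air travel deaths

air_per_day = AIR_DEATHS_2016 / 365
print(f"air travel: {air_per_day:.2f} deaths per day, worldwide")
print(f"US road deaths outnumber them roughly "
      f"{ROAD_DEATHS_PER_DAY / air_per_day:.0f} to 1")
```

Roughly a hundred-fold difference — yet it is the smaller number that keeps people up at night.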

That doesn’t bode well for public sentiment toward self-driving cars. We are willing to dismiss 102 deaths a day much more readily than the fewer-than-one a day caused by air travel. The other issue is how companies are going to survive the inevitable onslaught of lawsuits and legal challenges. Don’t get me wrong; autonomous cars shouldn’t get a free pass. But they probably shouldn’t be held far more accountable than a human driver, either.

Hacker Activism

As a technically savvy community, we should influence people to have sensible positions about this and other technology policy issues. What should you say? That’s not really for me to tell you. Maybe you are against self-driving cars. Or maybe you are for them. Justify your position and carry it forward.

Me? I think the cars are coming. I think they will make us safer and have other benefits to the environment and even the economy. But I would like to see more regulations — something that I usually reject. However, as more companies enter the fray, there will eventually have to be safety standards just like there are on human-driven cars, airplanes, and other dangerous things we all come into contact with often. [Adam Fabio] suggested that companies should be required to share crash data with the industry, and I thought that was a good idea, too.

Countries that allow unreasonable lawsuits against these robotic cars are going to fall behind those that handle liability sensibly and establish practical guidelines for operation. Your opinion may differ. That’s OK. And it should make for lively comments.

Technology plays a bigger role in everyone’s lives every year. We’ve gone from a bizarre priesthood of nerds to the people who understand how the world works. Computers, cell phones, home assistants, and self-driving cars were the stuff of science fiction not long ago. Now they are advertised on prime-time television. If we expect ordinary people, community and business leaders, courts, and politicians to make rational decisions, we should be vocal and active in sharing what we know in a way that helps people see things the way we do.

We’ve been wrestling with the ethics of self-driving cars for years here at Hackaday. It isn’t always as clear-cut as you might think.