The crash of an Uber Volvo in Tempe, Arizona has dragged a regulatory spotlight back onto self-driving cars. The Uber car, in driverless mode, ended up on its side after being shunted by a Honda that was turning left. Such incidents bring the hype surrounding automotive autonomy bouncing back to earth. But they also remind us of the need for smart regulation.

The true believers at Wired magazine used the crash as another illustration of human incompetence and called for an acceleration of self-driving. As with almost all crashes involving self-driving cars, it appears that the humans were legally at fault. However, casting blame when computers mix with humans is not easy, and doing so can impede opportunities for social learning. (For Uber, the crash could also have been an opportunity to take responsibility and change its frat-boy narrative.)

After such bumps and scrapes, the normal response from self-driving car enthusiasts is to emphasise that, while the computers are still learning to drive, they are still far safer than their human counterparts. The solution to the dangers of driving, they argue, is more autonomy – giving cars and the companies that make them more control of their destinies.

The pace of progress since the first major public self-driving experiment, organised by the Defense Advanced Research Projects Agency (DARPA) in 2004, has been breathtaking. Self-driving cars are now able to handle most well-organised roads with ease. Carmakers release videos to show how their cars see and navigate the world without any help.

There is something heroic about going it alone but, as with Brexit, the pride that accompanies independence may come before a fall. While car companies trumpet their autonomy, they are less forthcoming about their connectivity. It is far easier to engineer cars to speak to each other and to the environment than it is to get them to drive on their own. However, this requires a degree of solidarity, which displeases start-ups.



The short history of self-driving cars has already been rewritten to suit the advocates of autonomy, many of whom want to see the car industry disrupted rather than augmented.

Jameson Wetmore from Arizona State University (in Tempe) describes how, before DARPA took things off-road, the history of self-driving cars was as much about automated roads as automated cars. The ability of a car to independently steer its way through a city, with all of the uncertainties that brings, is extraordinary, but it will never be either as safe or as efficient as a car that is communicating with other cars and its surroundings.

Historian David Mindell argues that ‘autonomy’ is a myth. Strictly speaking, there is no such thing as an autonomous system. We are all guilty of perpetuating the myth by, for example, calling unmanned aircraft ‘drones’ even though they are tightly controlled (for the time being) by humans. This myth is not just wishful thinking. It is also a set of political claims in disguise.

Techno-evangelist Kevin Kelly claims that technology has a mind of its own, but his is transparently an argument for libertarianism – letting the technologists do whatever they want in order to save the world. As Langdon Winner argued in the 1970s, just because technology looks as though it is out of control, that doesn’t mean that we should give up on trying to control it.

It is up to governments to resist the big lie of autonomy and reattach technologies to the real world. For self-driving cars to really work, they need to become interdependent. We need standards for accident investigation, data sharing and communications between vehicles and infrastructure such as roads and traffic signals. At the moment, each Tesla vehicle generates gigabytes of information, which is used to train its self-driving software, but the company is willing to divulge data only when it suits its interests. Companies like Tesla and Uber would rather hang onto their data (which they see as vital for competitive advantage), communicate only among their own vehicles, keep regulators at arm’s length and keep their machine learning private. Connectivity certainly creates new risks and new privacy concerns, but we should not let companies use these as an excuse for inscrutability.



Carmakers must be encouraged to share data not just between their own vehicles but also with their competitors and with regulators, so that each can learn the lessons of others. Had the Arizona accident happened in California, Uber would have been forced to share more of the relevant data with regulators. Arizona, desperate to win Uber’s affections, makes fewer demands on car companies.

Governments should resist a race to the bottom and instead supply their own visions of self-driving for the public good. Ironically, it may be Europe, with its heritage of public transport, that sees the first explosion of useful automated transport. One recent analysis suggests that, for all the noise they make, the self-driving upstarts may already be losing their lead to the big, well-regulated car companies.

If governments don’t play their part then companies will start drawing their own connections. They will start demanding that, for reasons of public safety, infrastructure must be upgraded, at huge public cost, in order to become machine-readable. Self-driving, particularly in built-up areas, should proceed on cities’ terms rather than carmakers’.



Update: this post was edited on 7 April to clarify that the Honda ran into the Volvo rather than the other way around.

