Last month, an Uber self-driving car struck and killed pedestrian Elaine Herzberg in Tempe, Arizona. The tragedy highlights the need for a fundamental rethink of the way the federal government regulates car safety.

The key issue is this: the current system is built around an assumption that cars will be purchased and owned by customers. But the pioneers of the driverless world—including Waymo, Cruise, and Uber—are not planning to sell cars to the public. Instead, they're planning to build driverless taxi services that customers will buy one ride at a time.

This has big implications for the way regulators approach their jobs. Federal car regulations focus on ensuring that a car is safe at the moment it rolls off the assembly line. But as last month's crash makes clear, the safety of a driverless taxi service depends on a lot more than just the physical features of the cars themselves.

For example, dash cam footage from last month's Uber crash showed the safety driver looking down at her lap for five agonizing seconds before the fatal crash. Should Uber have done more to train and supervise its safety drivers? Should Uber have continued to put two people in each car, rather than switching to a single driver? Not only are there no federal rules on these questions, but at the time of the crash the public was completely in the dark about how Uber and its competitors were dealing with the issue.

That was partly because the current administration has a philosophical commitment to minimal regulation. But it's also because the current legal framework—developed under both Democratic and Republican administrations—isn't designed to address this kind of issue.

Right now, Congress is considering legislation to exempt tens of thousands of self-driving cars from conventional car safety regulations. It's a reasonable idea. Those regulations really are a poor fit for fully autonomous vehicles, and the technology is changing so fast that any regulations written today are likely to be obsolete in a few years.

But in exchange for this regulatory relief, Congress should insist on a lot more scrutiny for the testing and deployment of self-driving cars. Driverless car advocates worry, correctly, that premature regulation could hamper the development of this potentially life-saving technology. But officials could do a lot more to promote transparency and provide oversight without hampering progress.

Why conventional regulations don't work for driverless cars

Federal car safety regulation has traditionally been based on a thick book of rules called the Federal Motor Vehicle Safety Standards (FMVSS). These regulations, developed over decades, establish detailed performance requirements for every safety-related part of a car: brakes, tires, headlights, mirrors, airbags, and a lot more.

Before a car can be introduced into the market, the manufacturer must certify that the vehicle meets all of the requirements in the current version of the FMVSS. A carmaker must certify that the brakes can stop the car within a certain number of feet, that airbags can deploy safely with passengers of various heights, that the tires can run for many hours without overheating, and so forth.

Federal regulations don't say much about how companies develop and test cars before bringing them to market. In the era of conventional cars, they didn't need to. Development and testing were generally conducted on private test tracks where they posed no danger to the public. Then car companies would provide the government with documentation that a car met the standards in the FMVSS before putting it on the market.

But that approach doesn't work for driverless cars. Companies can do some testing of driverless cars on a closed course, but it's impossible to reproduce a full range of real-world situations in a private facility. So at some point, carmakers need to put self-driving cars on public roads for testing purposes—before a manufacturer is able to clearly demonstrate that they're safe. In effect, this makes the public involuntary participants in a dangerous research project.

So far, the approach favored by most driverless car advocates has been for federal officials to simply throw up their hands at this problem. Legislation passed by the House last September, and companion legislation currently stalled in the Senate, would carve out broad exemptions from the FMVSS for driverless cars (the legislation would require manufacturers to submit a "safety report" explaining key safety features of fully self-driving vehicles).

Again, there's some logic to this. It's true that some FMVSS requirements don't make sense for fully driverless cars, and it will take years to update the rules.

But updating the FMVSS is neither necessary nor sufficient for effective regulation of driverless cars. It's perfectly possible to make an FMVSS-compliant driverless car by starting with a conventional car (which already meets all FMVSS requirements) and adding self-driving gear to it. In fact, Waymo is planning to do exactly that for its Phoenix taxi service with a fleet of Chrysler Pacifica minivans.

At the same time, there are many important aspects of running a driverless taxi service that aren't addressed at all by the FMVSS:

Protecting driverless cars from cyberattacks depends not only on the architecture of the cars themselves but also on the operational security of the systems used to update the cars' onboard software.

Driverless car safety will depend on the accuracy and timeliness of updates to cars' onboard maps.

Companies need a rigorous process for testing safety-critical components on cars in the field and replacing them when they fail.

Companies need a system for thoroughly investigating crashes and other anomalies and updating the cars' software to make sure problems aren't repeated.

During the testing phase, safety depends on the training and supervision of safety drivers.

Once the commercial service is launched, safety may depend on the competence of staffers overseeing cars from a remote operations center.

Driverless car companies need plans for dealing with emergency situations and interacting with first responders.

Most of these issues aren't covered by the FMVSS—and they probably shouldn't be. The FMVSS is supposed to focus on objective metrics, like stopping distance, that can be measured in a lab or on a test track. But no numerical measurement can capture how rigorous a company's cybersecurity policies are or how thoroughly a company performs post-crash investigations.

Moreover, the technology is so new that it would be a mistake to write detailed regulations on any of these topics now.

But what regulators could do is focus on transparency and oversight. If the public is going to share the road with potentially dangerous driverless cars, we should at least have timely and detailed information about how those vehicles are performing and what steps companies are taking to protect public safety.