Last Monday, the Obama Administration released a hundred-and-twelve-page policy tome, “Federal Automated Vehicles Policy,” which, despite its sleep-inducing title, found an eager readership. The document contained long-awaited regulatory guidance on self-driving cars—a concept that has gone from sci-fi fantasy to legitimate industry in just a few years. The official reaction from manufacturers has been muted; internally, though, the reaction was likely relief. Without federal recognition and regulatory authority, the autonomous-vehicle industry exists in legal limbo. As of Monday, there is a road forward.

Yet all is not settled. Delicate negotiations lie ahead before regulations are finalized. One key element of these negotiations will be the guidelines’ suggestion that automakers share data on driving incidents, including crashes, with other manufacturers. The wording in the guidelines is gentle, using prodding “shoulds” rather than commanding “shalls.” But the regulations nonetheless create the possibility that the government will eventually force carmakers to do something they don’t want to do: hand over their data, a key currency of the information age, to their competitors.

The purpose of sharing crash data is to make driverless cars safer, which is the primary objective for the Department of Transportation. Many in the general public question how safe these cars will actually be. Part of the discomfort with autonomous vehicles arises from a lack of familiarity. People find it hard to believe that a car can drive itself, though that feeling tends to disappear after just a few minutes in an autonomous car. There’s also a more durable concern: the “trolley problem,” which has become something of an obsession for policymakers and journalists alike. In its most common form, the trolley problem, a flexible philosophical construct created by the Oxford philosopher Philippa Foot, asks whether you would sacrifice one person’s life to save five people standing in the path of a runaway trolley. What fascinates people (including academic computer scientists and psychologists) is how to program autonomous vehicles to make complex ethical decisions when confronting no-win situations. (Should the vehicle protect passengers or pedestrians first, for instance?) The story underlying the trolley problem is vivid and easy to understand, but the likelihood of confronting a trolley-problem type of situation is small. How many times have you had to decide whether to hit a baby carriage or a group of pedestrians? For most of us, the answer is probably never. Regardless, the whole discussion assumes that human drivers make split-second decisions that are both right and ethical.

Whether or not computers can be programmed to make moral decisions (whatever those moral decisions may be), they can most certainly be trained and programmed for safety. And that’s where the importance of the crash data comes in. Sensor and other data from autonomous-vehicle crashes will improve the performance of autonomous-vehicle fleets.

A good example of how this might happen comes from a fatal Tesla Autopilot crash in the spring. A Tesla Model S operating in “self-driving” mode slammed into a tractor-trailer at high speed, instantly killing the driver, a forty-year-old Navy veteran. It appears that the vehicle’s sensors interpreted the white side of the trailer as the sky. Afterward, the data collected from video footage and other sensor information, such as radar and sonar logs, was examined closely by engineers. Using that data, Tesla has upgraded its software so that Autopilot-enabled cars won’t repeat the mistake. That software update was sent to every vehicle in the Tesla Autopilot fleet, new or old. Presumably, every car in the Tesla fleet can now distinguish between the white side of a truck and the sky. But, significantly, that information was not shared with Google, GM, Uber, or other companies experimenting with driverless cars. If the government has its way, that could change. Information would be shared, or, as the guidelines gingerly put it, “each entity should develop a plan for sharing its . . . data with other entities. Such shared data would help to accelerate knowledge and understanding of HAV performance, and could be used to enhance the safety of [autonomous-vehicle] systems.”

It’s not just catastrophic situations that lead to progress. The software that runs autonomous vehicles is constantly learning from real-world driving data, getting better at recognizing what’s around the vehicle (the road, other cars, nearby trees) and deciding how to respond (steering, braking). All of that information is transmitted to a centralized brain and used to make the fleet smarter. This sharing, or “fleet learning,” is incredibly powerful. It’s kind of like how, when your finger touches a hot stove, your foot, knee, and elbow also learn that it’s not a good idea to touch a hot stove.
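In rough outline, the process looks something like the sketch below, written in Python purely for illustration. The class names, the list-of-hazards stand-in for a real perception model, and the one-line “over-the-air update” are simplifying assumptions, not a description of any manufacturer’s actual pipeline.

```python
# A minimal, hypothetical sketch of "fleet learning": individual cars report
# incidents to a central service, which folds them into shared knowledge and
# pushes the update back to every vehicle in the fleet.
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    vehicle_id: str
    scenario: str          # e.g. "white trailer against a bright sky"
    sensor_snapshot: dict  # camera/radar readings around the event (illustrative)

@dataclass
class FleetBrain:
    known_hazards: set = field(default_factory=set)

    def ingest(self, report: IncidentReport) -> None:
        # One car's mistake becomes a lesson for the whole fleet.
        self.known_hazards.add(report.scenario)

    def publish_update(self) -> set:
        # In a real system this would be a retrained perception model,
        # distributed over the air; here it is just a set of known hazards.
        return set(self.known_hazards)

brain = FleetBrain()
brain.ingest(IncidentReport("car-001", "white trailer against a bright sky", {}))

# Every vehicle, new or old, receives the same update.
fleet = {vid: brain.publish_update() for vid in ("car-001", "car-002", "car-003")}
assert "white trailer against a bright sky" in fleet["car-002"]
```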

Human drivers don’t learn like this, to their detriment. That’s why there are countless intersections in America that are known to be “dangerous.” Different drivers make the same mistakes again and again. Unfortunately, there’s no easy way to help humans learn from the experience of other drivers (although there have been some interesting efforts using virtual reality). Every year, new drivers take to the roads and start from scratch. Older drivers lose their edge. Some drivers are just reckless. Fleet learning means that autonomous vehicles won’t suffer from any of these problems. (Our consulting firm advises robotics, automotive, energy, and artificial-intelligence companies on market and policy issues surrounding autonomous vehicles.)

The caveat to fleet learning is that engineers and machine-learning algorithms are limited by the data available to them. If that data remains accessible only to the manufacturer that collected it, then autonomous vehicles designed by Honda, Chrysler, BMW, and GM will each have to make the same mistake (crashing at the same intersection, say) before their own fleets learn to avoid it. Software updates to autonomous-driving systems will reach a company’s own vehicles, but not those of competitors.

This week’s announcement of new federal guidelines highlights the complexity of dealing with such issues. The extent of software testing, operating-system updates, and data collection required for autonomous vehicles is beyond the Department of Transportation’s previous experience. On Monday, Mark Rosekind, the administrator of the National Highway Traffic Safety Administration, said, “We’re looking at ways that that information could be shared so that you could provide a way for all autonomous vehicles to learn from those issues, as opposed to just one company.” Over the coming months, the government will be receiving public comments on proposed rules that would require manufacturers of driverless cars to share crash data with all other manufacturers. The resulting regulations may not be straightforward. Different autonomous vehicles use different sensors, so crash data from one manufacturer wouldn’t necessarily be an exact fit for another manufacturer’s software. (Tesla, for example, doesn’t use lidar, a laser-based sensing system that is common among other manufacturers.) But engineers can find ways to translate and adapt crash data to make it relevant to their own systems, and to extract valuable safety lessons from it.
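One hedged sketch of what that translation might involve: two hypothetical crash records, one from a camera-and-radar car and one from a lidar-equipped car, mapped onto a neutral schema that either company’s engineers could read. The field names and the schema itself are invented for illustration; nothing in the guidelines prescribes any particular format.

```python
# Hypothetical translation of manufacturer-specific crash records into a
# shared format. The sensor-suite labels and fields are assumptions.
from typing import Any

def to_shared_schema(record: dict[str, Any], source: str) -> dict[str, Any]:
    """Map a manufacturer-specific crash record onto a common format."""
    if source == "camera_radar":        # a camera-and-radar suite, no lidar
        obstacle = record["radar_track"]
    elif source == "lidar":             # a lidar-equipped vehicle
        obstacle = record["point_cloud_cluster"]
    else:
        raise ValueError(f"unknown sensor suite: {source}")
    return {
        "timestamp": record["time"],
        "speed_mph": record["speed"],
        "obstacle": obstacle,
        "outcome": record["outcome"],   # e.g. "collision" or "near miss"
    }

# Two very different recordings of a similar kind of event...
camera_radar_log = {"time": 0, "speed": 65, "radar_track": "trailer", "outcome": "collision"}
lidar_log = {"time": 0, "speed": 62, "point_cloud_cluster": "trailer", "outcome": "near miss"}

# ...become comparable once they share a schema, so one company's engineers
# can mine another's incidents for safety lessons.
shared = [to_shared_schema(camera_radar_log, "camera_radar"),
          to_shared_schema(lidar_log, "lidar")]
```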

Automakers are likely to oppose this regulation with the full force of their lobbying might. Safety will be seen as a key competitive advantage for leaders in autonomous-vehicle technology, and giving up crash data has downsides for both leaders and laggards. For leaders, it allows competitors to profit from their hard-won knowledge and, potentially, to catch up. For laggards, it exposes vulnerabilities. After all, who would want to buy an unsafe autonomous car? But the regulation would be more than just sensible; it would be essential to making driverless cars safe more quickly. Shared data should include all the sensor records collected by a vehicle before, during, and immediately after a crash. It doesn’t have to include operating software, but withholding the safety benefits of fleet learning from less-advanced manufacturers would be ethically and morally indefensible.

Unlike the trolley problem, the idea of fleet learning lacks emotional immediacy. But the ethics of fleet learning are already having an impact on the real-life driving experience of people on our roads. Tesla is gathering a million miles of data every ten hours or so, and safety incidents are beginning to pile up. Elon Musk would probably love to guard that data internally and use it for commercial advantage over competing systems, but autonomous-vehicle safety is where the N.H.T.S.A. should draw a vivid line. When profits are prioritized over safety, nothing good can result.