At about 10 pm on Sunday, a self-driving Uber struck and killed a woman crossing the street in Tempe, Arizona. The crash appears to be the first time a self-driving vehicle has killed someone—and it could alter the course of a scantily regulated, poorly understood technology that has the power to save lives and create fortunes.

The Tempe Police Department reports the Volvo XC90 SUV was in autonomous mode when the crash occurred, though the car had a human safety driver behind the wheel to monitor the technology and retake control in the case of an emergency or imminent crash. The woman, Elaine Herzberg, was transported to a local hospital, where she died from her injuries. The police department will complete its full report later today.

In response, Uber has pulled its self-driving vehicles off public roads in the Phoenix metro area (including Tempe), San Francisco, Toronto, and Pittsburgh (where the cars also pick up passengers). A spokesperson says the company is cooperating with local authorities. The National Transportation Safety Board and the National Highway Traffic Safety Administration are sending investigative teams to Tempe.

Few Rules

The deadly crash comes at a critical time for the nascent self-driving vehicle sector, which has spent billions on research and development for a technology it promises will be safer and more efficient than today’s human-driven cars—and which it hopes to deploy commercially in the next few months or years. But now is the in-between time, the moment when autonomous vehicles are less than perfect, even as they take to public streets in ever greater numbers. So how might this first fatal crash—which will not be the last—swing the safety vs. progress calculus?

Uber, Waymo, and other autonomous vehicle developers like Arizona not just for the sunny weather and calm conditions but for the near-total lack of restrictions on how they test: Self-driving vehicles don’t need any sort of special permit, just a standard vehicle registration. And their operators don’t have to share any information about what they’re doing with the authorities.

“Although other states have reporting requirements for autonomous vehicles being tested in their state, Arizona does not see a need to implement reporting requirements at this point,” a spokesperson for the Arizona Department of Transportation told WIRED last year.

Earlier this month, Arizona Governor Doug Ducey signed an updated executive order giving companies permission to test or operate fully driverless vehicles in the state. No wonder, then, that Waymo plans to launch a totally driverless taxi service in the Phoenix area this year. (The Google sister company did not respond to a request for comment.)

Thus far, only California demands that developers make public specific data on their operations, including descriptions of any crashes, how many miles they drive each year, and how often their human safety operators take control from the robot. Even those numbers are of limited use in understanding the pace of the companies’ work or just how well these vehicles really drive. The state will begin allowing the testing of totally driverless vehicles—without safety drivers as backup—on public roads next month.

Meanwhile, the companies in this space await legislation that would put the federal government firmly in charge of all autonomous vehicle design, construction, and performance, and allow even more testing—as many as 100,000 vehicles per manufacturer—all over the country. The bill, called the Self Drive Act, passed in the House this fall. But the companion Senate bill, the AV Start Act, has been held up by a few senators who wonder whether the young technology needs more aggressive oversight.

This crash will not help the companies’ arguments, as onlookers ask whether autonomous vehicles should be kept on a tighter leash as their developers iron out the considerable kinks in the tech.