Uber has discovered why one of the test cars in its fledgling self-driving fleet struck and killed a pedestrian earlier this year, according to The Information. While the company believes the car's suite of sensors spotted 49-year-old Elaine Herzberg as she crossed the road in front of the modified Volvo XC90 on March 18th, two sources tell the publication that the software was tuned in such a way that it "decided" it didn't need to take evasive action, and possibly flagged the detection as a "false positive."

The reason a system would do this, according to the report, is that there are many situations in which the computers that power an autonomous car might register something as a human or another obstacle when it isn't, a false positive. Uber reportedly set the threshold for reacting to these detections so low, though, that the system saw a person crossing the road with a bicycle and determined that immediate evasive action wasn't necessary. While Uber had an operator, or "safety driver," in the car who was supposed to be able to take control in the event of a failure like this, the employee was seen glancing down in the moments before the crash in footage released by the Tempe Police Department.

All of Uber's self-driving testing efforts have been suspended since the accident, and the company is still working with the National Transportation Safety Board, which has yet to issue a preliminary report on its investigation. When reached for comment, a spokesperson for Uber issued the same statement to The Verge that appears in The Information's story:

We’re actively cooperating with the NTSB in their investigation. Out of respect for that process and the trust we’ve built with NTSB, we can’t comment on the specifics of the incident. In the meantime, we have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture. Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.

In the wake of the crash, signs have emerged that Uber's self-driving program was fraught with risk. For one thing, Uber had reduced the number of "safety drivers" in its test cars from two to one, according to a New York Times report, which explains why the driver in the car that killed Herzberg was alone.

Then in late March, Reuters discovered that Uber had reduced the number of LIDAR sensors on its test cars. (LIDAR is considered by most to be critical hardware for autonomous driving.) All of this was happening with little government oversight in Arizona. Emails obtained by The Guardian in the weeks after the crash detailed a cozy relationship between Uber and Arizona Governor Doug Ducey, one that may have allowed the company's test cars to hit the road even earlier than previously thought.

Many of Uber's competitors, and even some of its partners, have spoken out since the accident as the company has tried to determine what went wrong. Nvidia, which supplies the GPUs that help power Uber's autonomous tech, distanced itself in late March, saying the fault must have been with Uber's software. Velodyne, which makes the LIDAR sensor Uber uses, said its tech shouldn't have been affected by the nighttime conditions. Intel's Mobileye division published a breakdown of how and why its own tech would have recognized Herzberg, though according to The Information's report, recognition doesn't appear to have been the problem.

Despite Herzberg's death, Uber CEO Dara Khosrowshahi, who The New York Times recently reported had considered ending the self-driving program when he came on board last August, said in an April interview with the Today show that the company is "absolutely committed to self-driving cars."