A new report from the New York Times details a bureaucratic mess surrounding the overhaul of an A.I. system, which it says is responsible for the two fatal Boeing 737 Max 8 crashes in October 2018 and March 2019.

Boeing allegedly introduced "aggressive and riskier" changes to an A.I. system built for safety. Those changes, plus siloed departments and a lack of pilot training and regulatory oversight, reportedly led to the deadly Boeing 737 Max crashes.

Governments around the world grounded Boeing 737 Max planes in March, after an Ethiopian Airlines flight crashed just after takeoff. That disaster came five months after a similar crash of a Lion Air flight, the same airplane model, shortly after takeoff in Indonesia.

Since the crashes, A.I. software called the Maneuvering Characteristics Augmentation System (MCAS) has emerged as part of the cause of both disasters. The Times' report, published Saturday, provides new information about the specific overhauls that made the system more vulnerable to malfunctioning. It also shows how a lack of understanding between departments, inadequate pilot training on the new system, and Boeing's downplaying of the changes to the Federal Aviation Administration created a perfect, deadly storm.

Boeing originally designed MCAS as an extremely limited A.I. system that would kick in to correct the plane's nose only when two separate sensor readings, angle of attack and G-force, both indicated extreme conditions. However, in 2012, Boeing began to make changes and expand MCAS.

As a Vox report details, Boeing needed to compete — and fast — with competitor planes that could fly faster and longer. Rather than design a new plane, it decided to put bigger engines on the older 737 model.

According to the Times, a 737 Max test pilot then noticed problems handling the frankenplane. Boeing's solution was to expand MCAS so that it would improve handling routinely, not just in emergencies. It dropped one of the two sensor inputs, so that a high angle-of-attack reading alone, without the G-force check, would trigger MCAS. It also gave the system more authority over the plane, increasing both the speed and the force with which it could push the plane's nose down.
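The reported change in trigger logic can be sketched in a few lines of Python. This is purely illustrative: the function names, thresholds, and values below are hypothetical stand-ins for explanation, not Boeing's actual flight-control code.

```python
# Illustrative sketch of the reported change in MCAS trigger logic.
# All names and threshold values are hypothetical, for explanation only.

def original_trigger(angle_of_attack: float, g_force: float) -> bool:
    """Original design: both a high angle-of-attack reading AND a high
    G-force reading were reportedly required before the system engaged."""
    HIGH_AOA = 15.0  # degrees (hypothetical threshold)
    HIGH_G = 1.5     # g (hypothetical threshold)
    return angle_of_attack > HIGH_AOA and g_force > HIGH_G

def revised_trigger(angle_of_attack: float) -> bool:
    """Revised design: a single angle-of-attack reading alone
    was reportedly enough to trigger the system."""
    HIGH_AOA = 15.0
    return angle_of_attack > HIGH_AOA

# A faulty sensor reporting a high angle of attack during normal flight:
faulty_aoa, normal_g = 20.0, 1.0
print(original_trigger(faulty_aoa, normal_g))  # the G-force condition blocks activation
print(revised_trigger(faulty_aoa))             # one bad reading is enough to activate
```

The point of the sketch is the `and`: under the original design, one spurious reading could not activate the system on its own.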

Then, Boeing allegedly did a few things that may have contributed to the deadly crashes.

Engineers and safety personnel were reportedly unaware that the system would rely on only one sensor, or that it would function more broadly and aggressively. Relying on a single sensor is considered risky because these sensors are vulnerable to damage from bird strikes, bumps, or mishandling. Investigators suspect that a damaged sensor may have played a role in the Ethiopia crash.

Additionally, Boeing allegedly "downplayed" the extent of the changes to MCAS when communicating with the FAA, so the safety regulator never actually approved the new version.

Boeing also requested permission from the FAA to leave MCAS out of pilot manuals and training (including it would have been expensive and a time-suck). As a result, pilots flying the 737 Max were allegedly unaware of how much control an A.I. system had over their plane, or what to do when it misfired.

Boeing has said that it is fixing the A.I. software. But the Times' report indicates that the problem is less a software bug than an over-reliance on, and under-regulation of, A.I. in the first place.