What happened the night of the crash is, at this point, a well-known story. About an hour and a half into the flight, the plane’s airspeed sensors stopped working because of ice formation. After the autopilot system transferred control back to the pilots, confusion and miscommunication led the plane to stall. While one of the pilots attempted to recover from the stall by pointing the plane’s nose down, the other, likely in a panic, raised the nose to continue climbing. But the system was designed for one pilot to be in control at all times, and it provided no signals or haptic feedback to indicate which pilot was actually in control or what the other was doing. Ultimately, the plane climbed at an angle so steep that the system deemed it invalid and stopped providing feedback entirely. The pilots, flying completely blind, continued to fumble until the plane plunged into the sea.

In a recent paper, Elish examined the aftermath of the tragedy and identified an important pattern in the way the public came to understand what happened. While a federal investigation concluded that a mix of poor systems design and insufficient pilot training had caused the catastrophic failure, the public quickly latched onto a narrative that placed the blame solely on the latter. Media portrayals, in particular, perpetuated the belief that the sophisticated autopilot system bore no fault in the matter, despite a significant body of human-factors research showing that humans have never been good at leaping into emergency situations at the last minute with a level head and a clear mind.


In other case studies, Elish found the same pattern: even when humans have limited control over a highly automated system’s behavior, they still bear most of the blame for its failures. Elish calls this phenomenon a “moral crumple zone.” “While the crumple zone in a car is meant to protect the human driver,” she writes in her paper, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator.” Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents, no matter how little control they had or how unintentional their involvement.

This pattern offers important insight into the troubling way we talk about the liability of modern AI systems. In the immediate aftermath of the Uber accident, headlines pointed fingers at Uber, but within days the narrative shifted to focus on the distraction of the driver.

“We need to start asking who bears the risk of [tech companies’] technological experiments,” says Elish. Safety drivers and other human operators often have little power or influence over the design of the technology platforms they interact with. Yet in the current regulatory vacuum, they will continue to pay the steepest cost.

Regulators should also have more nuanced conversations about what kind of framework would distribute liability fairly. “They need to think carefully about regulating sociotechnical systems and not just algorithmic black boxes,” Elish says. In other words, they should consider whether a system’s design works within the context it operates in, and whether it sets human operators up for success or failure. Self-driving cars, for example, should be regulated in a way that accounts for whether the role safety drivers are being asked to play is a reasonable one.

“At stake in the concept of the moral crumple zone is not only how accountability may be distributed in any robotic or autonomous system,” she writes, “but also how the value and potential of humans may be allowed to develop in the context of human-machine teams.”
