The cockpit of a Boeing 787 Dreamliner, March 13th. Photograph by Dhiraj Singh/Bloomberg via Getty

At 9:18 P.M. on February 12, 2009, Continental Connection Flight 3407, operated by Colgan Air, took off from Newark Liberty International Airport. Rebecca Shaw, the first officer, was feeling ill and already dreaming of the hotel room that awaited in Buffalo. The captain, Marvin Renslow, assured her that she’d feel just fine once they landed. As the plane climbed to its cruising altitude of sixteen thousand feet, the pair continued to chat amiably, exchanging stories about Shaw’s ears and Renslow’s Florida home.

The flight was a short one and, less than an hour after takeoff, the plane began its initial descent. At 10:06 P.M., it dropped below ten thousand feet. According to the F.A.A.’s “sterile cockpit” rule, all conversation from that point forward is supposed to be essential to the flight. “How’s the ears?” Renslow asked. “Stuffy and popping,” Shaw replied. Popping is good, he pointed out. “Yeah, I wanna make ’em pop,” she assured him. They laughed and began talking about how a different Colgan flight had reached Buffalo before theirs did.

As ground control cleared the flight to descend to twenty-three hundred feet, the pilots’ conversation continued, unabated. There was the captain’s own training, which was, when he first got hired, substantially less than Shaw’s. There were Shaw’s co-workers, complaining about not being promoted quickly enough. There was the ice outside. Renslow recalled his time flying in Charleston, West Virginia, and how, being a Florida man, the cold had caught him doubly off guard. As the plane lost altitude, it continued to decelerate.

At 10:16 P.M., the plane’s impending-stall alert system—the stick shaker—kicked in. “Jesus Christ,” Renslow said, alarmed. In his panicked confusion, he pulled the control column toward him instead of pushing it forward. Seventeen seconds later, he said, “We’re down,” and, two seconds after that, the plane crashed, killing everyone on board and one person on the ground.

In its report about Flight 3407, the National Transportation Safety Board (N.T.S.B.) concluded that the likely cause of the accident was “the captain’s inappropriate response to the activation of the stick shaker, which led to an aerodynamic stall from which the airplane did not recover.” The factors that the board said had contributed to Renslow’s response were, “(1) the flight crew’s failure to monitor airspeed in relation to the rising position of the low-speed cue, (2) the flight crew’s failure to adhere to sterile cockpit procedures, (3) the captain’s failure to effectively manage the flight, and (4) Colgan Air’s inadequate procedures for airspeed selection and management during approaches in icing conditions.” All but the fourth suggested a simple failure to pay attention.

In this respect, Flight 3407 followed a long-established trend. A 1994 N.T.S.B. review of thirty-seven major accidents between 1978 and 1990 that involved airline crews found that in thirty-one cases faulty or inadequate monitoring was partly to blame. Nothing had failed; the crew had just neglected to properly monitor the controls.

The period studied coincided with an era of increased cockpit automation, which was designed to save lives by eliminating the dangers related to human error. The supporting logic was the same in aviation as it was in other fields: humans are highly fallible; systems, much less so. Automation would prevent mistakes caused by inattention, fatigue, and other human shortcomings, and free people to think about big-picture issues and, therefore, make better strategic decisions. Yet, as automation has increased, human error has not gone away: it remains the leading cause of aviation accidents.

***

In 1977, the House Committee on Science and Technology identified automation as a major safety concern for the coming decade, and, three years later, the Senate Committee on Commerce, Science, and Transportation repeated the warning. Boeing, McDonnell Douglas, and other leading commercial-aviation companies were, at the time, developing new aircraft models with ever more sophisticated cockpits. With the move toward automation seemingly inevitable, Congress requested that NASA research the effects of the changes on pilots.

Leading the charge at NASA’s Ames Research Center was Earl Wiener, a pioneer of human-factors and automation research in aviation. Wiener had been studying flight records in the years since automation was first introduced into the cockpit. Beginning in the nineteen-seventies, he published a series of papers that analyzed the interplay among automation, pilot error, and accidents. By the early nineteen-eighties, he had concluded that a striking number of innovations designed to address the perceived risk of human error had, in fact, led to accidents. Among the most notorious examples he cited was the 1983 crash of Korean Air Lines Flight 007, which was shot down by the Soviet Union after veering three hundred miles off course. The official report cited the crew’s “lack of alertness” as the most plausible cause of the navigational error. Such inattention, the report went on to say, was far from unique in civilian-aircraft navigation.

By 1988, Wiener had added more cases to his list and had begun supplementing his research with extensive pilot interviews. He was well aware that automation could work wonders: computers had markedly improved navigation, for example, and their ability to control the airplanes’ every tiny wiggle via the yaw damper was helping to prevent potentially fatal Dutch rolls. But, as pilots were being freed of these responsibilities, they were becoming increasingly susceptible to boredom and complacency—problems that were all the more insidious for being difficult to identify and assess. As one pilot whom Wiener interviewed put it: “I know I’m not in the loop, but I’m not exactly out of the loop. It’s more like I’m flying alongside the loop.”

Wiener accused the aviation industry of succumbing to what he called the “let’s just add one more computer” phenomenon. Companies were introducing increasingly specialized automated functions to address particular errors without looking at their over-all effects, he said, when they should have been making slow and careful innovations calibrated to pilots’ abilities and needs. As it stood, increased automation hadn’t reduced human errors on the whole; it had simply changed their form.

***

It was against this backdrop, in 1990, that Stephen Casner arrived at Ames, armed with a doctorate in Intelligent Systems Design from the University of Pittsburgh. Casner had been studying automation, and although he didn’t have any particular experience with planes (he became a licensed pilot soon after), he brought a new perspective to the problem: that of human psychology. His adviser at Pitt had been a psychologist, and the field had deeply influenced his understanding of automation. He hoped to bring a new experimental rigor to the problem, by testing the effects of computerized systems on pilots.

Over the next two decades, Casner dedicated himself to systematically studying how, exactly, humans and computers were interacting in the cockpit, and how that interaction could be improved to minimize error and risk. How were the pilots solving complex problems as a flight progressed along its regular course? How well-suited were the displays and functions to the pilots’ preferences and behaviors?

Cockpit systems, he found, were not particularly well understood by the pilots who had to use them, and he concurred with Wiener that the forms of automation in use were not particularly well suited to the way pilots’ minds operated during a flight. In 2006, Casner attempted to remedy the first part of the problem by publishing a textbook on automation in the cockpit. Since then, he has focussed increasingly on the problem of inattention. Last year, he teamed up with the psychologist Jonathan Schooler, from the University of California, Santa Barbara, who studies attention and problem-solving ability, to see whether automation was genuinely responsible for the kinds of monitoring errors that Wiener had identified. If computerized systems performed as intended ninety-nine per cent of the time, Casner and Schooler asked, how would that affect a pilot’s ability to engage at a moment’s notice when something went wrong, as it had for Colgan Air?