Somewhere over northern New Mexico, a pilot and co-pilot felt a shudder as something struck the airplane. Immediately, the plane’s systems set to work testing out its various control surfaces, such as the flaps and rudder, to discover its new limitations. Then it mapped updated instructions to the pilots’ hand controls that would allow them to maneuver in spite of the damage.

Whatever it was had taken out one side of the tail. With damage to the horizontal stabilizer and elevator, it would be harder to control the altitude and descent rate. This meant that the pilots would have to land at a much higher speed than normal, and they needed to start planning that landing now.

The pilot called up the Emergency Landing Planner on the flight computer. It quickly assessed the nearby runways, possible flight paths, the weather, the risk that the plane might pose to people on the ground and how quickly help could arrive.

Within a few seconds, it had ranked many promising routes, with Clovis Municipal Airport in New Mexico at the top. Cannon Air Force Base, sporting a much longer runway, drew the pilots’ eyes. But the weather was too poor there — the wind was blowing across the runway, which spelled trouble for a plane with a damaged tail.
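The planner's trade-off, weighing runway length against crosswind, ground risk and rescue time, resembles a weighted multi-criteria ranking. The sketch below is purely illustrative: the candidate data, weights and scoring formula are invented, not the Emergency Landing Planner's actual algorithm.

```python
# Hypothetical multi-criteria ranking of landing sites. All numbers and
# weights are invented for illustration; lower score means a better option.
candidates = [
    {"name": "Clovis Municipal", "runway_ft": 7200,
     "crosswind_kt": 4, "ground_risk": 0.1, "help_min": 12},
    {"name": "Cannon AFB", "runway_ft": 10000,
     "crosswind_kt": 18, "ground_risk": 0.1, "help_min": 5},
]

def score(c):
    # Penalize short runways, crosswind (critical with a damaged tail),
    # risk to people on the ground, and slow emergency response.
    return (
        2.0 * max(0, (8000 - c["runway_ft"]) / 8000)  # runway shortfall
        + 3.0 * c["crosswind_kt"] / 20                # crosswind component
        + 1.0 * c["ground_risk"]                      # risk on the ground
        + 0.5 * c["help_min"] / 30                    # time for help to arrive
    )

ranked = sorted(candidates, key=score)
```

With these made-up weights, Cannon's strong crosswind outweighs its longer runway, so Clovis comes out on top, mirroring the choice described above.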

Clovis offered a headwind, allowing the plane to travel slower with respect to the ground as it came in to land. The pilots hoped it would be enough as they turned the plane toward the smaller airport.

With this choice, odds are good that these pilots would land the plane in relative safety, but in truth they were safe the whole time. This is one of the scenarios that five teams of professional airline pilots faced as they flew in a simulator at the NASA Ames Research Center at Moffett Field, Calif.

A group of researchers in the Intelligent Systems Division was testing their Emergency Landing Planner software — a type of software first developed by a researcher now at U-M. Although best known for space exploration, NASA has always played a strong role in atmospheric flight innovation.

The team, led by David Smith, hopes that the planner will help pilots find the best landing site and route to take, potentially saving lives. The test was encouraging, with the pilots saying that such a tool would be welcome in their cockpits, Smith reported.

“It’s allowing people to make faster decisions and take more information into account,” he said. “If you can make the right decision quickly, it helps a lot.”

In its current iteration, the planner only acts as a guide, a bit like a GPS route planner in a car. It doesn’t choose the route or fly the plane. Still, it demonstrates that a computer can assess a complex and unexpected situation, a skill that previously set human operators apart from machines.

Those who defend our reliance on pilots often point to unexpected emergency scenarios as the reason why we need humans in the cockpit. Sure, a drone can handle routine flight, but can it come up with a way to save the day when it encounters a problem that isn’t in the plan?

This kind of software challenges that view. Expanded to the point where it could choose the route and load it into the autopilot system, such an emergency lander would represent a computer capable of handling a midair crisis — a machine to call on in a “mayday” situation. And then, would we still need pilots?

View from the cockpit

Right now, with pilots at the controls, we are enjoying a period of unprecedented safety. In the United States, statistics from 2008 to 2012 put the odds of dying on any flight at one in 45 million. To give that number some perspective, if you flew on three commercial flights every day, you could expect to experience one fatal crash in 40,000 years.
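That 40,000-year figure is a round number, and the underlying arithmetic checks out as a back-of-envelope calculation:

```python
# Back-of-envelope check of the article's figure: odds of one in 45 million
# per flight, at three flights every day.
odds_per_flight = 45_000_000
flights_per_year = 3 * 365
years_per_expected_crash = odds_per_flight / flights_per_year
print(round(years_per_expected_crash))  # about 41,000 years
```

The exact quotient is roughly 41,000 years, which the article rounds down to 40,000.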

And yes, modern pilots deserve some credit. Patrick Smith, first officer for a commercial airline as well as an author and columnist on aviation, says the autopilot is overrated. “Millions of people out there think that planes are programmed to fly themselves and pilots are sitting back,” he said. “It’s one of the most misunderstood and exaggerated aspects of commercial aviation.”

State-of-the-art automation can handle all physical parts of routine flight, but pilots tell the plane what to do and handle any changes to the plan that may arise from weather, traffic at the airport or other circumstances.

Currently, pilots receive flight plans from a dispatcher for the airline, which the pilots review before the plane takes off. At the gate, the pilots fire up the plane’s electronics and automated systems — among these, the flight management system. They plug in an outline of the flight, including points that the plane will pass by on its route, the sequence of climbs that will take it to cruising altitude, the descent at the destination airport, and the winds and weather along the way.

Then, the pilots fly the plane through takeoff until they hand off control to the flight management system. Patrick Smith compares it to cruise control on a car.

“Cruise control frees the driver from certain tasks at certain times, but it can’t drive your car from L.A. to New York. Automation can’t fly a plane from L.A. to New York either,” said Smith. Even with the autopilot on, he added, both pilots often become completely occupied with tasks such as updating the route to avoid a storm or make changes ordered by air traffic control.

While planes are capable of auto-landing, Patrick Smith says it is rarely used. “More than 99 percent of landings are performed by hand,” he said. Unless he can’t see the runway, it’s easier to fly a successful landing than program one in.

An imperfect balance

The current safety record in aviation represents a substantial change from the 1970s, when over 30 passenger flights on U.S. carriers ended in fatal accidents. In the decade from 2004 to 2013, that number was just four.

Much of the credit for this improvement goes to computer-driven systems on airplanes, known collectively as flight deck automation, that handle aspects of flight for the pilots. These systems allow pilots to perform higher-level planning tasks, such as anticipating challenges like bad weather. Sensors and software that interprets the readings can alert pilots to issues such as a potential stall or mechanical problem. In some cases, the automation even handles the problem for the pilot.

However, automation has also introduced new challenges that played a role in a number of mishaps, most recently the Colgan Air and Air France crashes of 2009 and the ill-fated Asiana Airlines landing last year (see graphic). In all these cases, due to a combination of poor feedback, insufficient training and unusual conditions, the pilots lost awareness of the plane’s status with fatal results.

“The flight crews failed to either notice or understand what the systems were doing and why they were acting the way they did,” said Nadine Sarter, a professor of Industrial and Operations Engineering at U-M.

In most cases, pilot error is blamed for such accidents. But Sarter argues that plane crashes almost always result from a combination of factors involving the systems as well as the pilots. Also, she suggests that pilots get short shrift when it comes to accident reporting.

“Lo and behold, 75 percent of all aviation accidents are attributed to human error,” she said. “What we need to look at as well, however, is the figure that shows, for every year, the number of incidents that did not turn into an accident because a human got involved. That is the fair comparison.”

Firm numbers aren’t available, but the Aviation Safety Reporting System, an incident reporting system run by NASA, offers some insights. Anyone involved in aviation operations is encouraged to report safety-related incidents. The program is confidential and anonymous, and the reports can be used to identify and study safety concerns. On average, the database receives more than 6,700 incident reports per month, including submissions from private pilots.