
Shortly before the dreadful crash of Germanwings Flight 9525, I happened to be reading part of “The Second Machine Age,” a book by two academics at M.I.T., Erik Brynjolfsson and Andrew McAfee, about the coming automation of many professions previously thought of as impervious to technological change, such as those of drivers, doctors, market researchers, and soldiers. With the advances being made in robotics, data analysis, and artificial intelligence, Brynjolfsson and McAfee argue, we are on the cusp of a third industrial revolution.

From what I’ve read of their book, Brynjolfsson and McAfee don’t mention airline pilots, but they would appear to be more possible candidates for technological displacement. Already, some of the routine work in the cockpit, such as directing the plane to the next beacon on its route, is carried out by an autopilot system. In some ways, human pilots have become systems managers. They prepare the aircraft to depart, execute the takeoff and landing, and take the controls in an emergency. But for much of the time that a routine flight is in the air, a computer flies the plane.

The U.S. military appears to be moving in the direction of eliminating pilots, albeit tentatively. The Pentagon and the C.I.A. have long operated unmanned drones, including the Predator, which are used for reconnaissance and bombing missions. In 2013, the U.S. Air Force successfully tested the QF-16 fighter-bomber, which is practically identical to the F-16, except that it doesn’t have a pilot onboard. The plane is flown remotely. Earlier this year, Boeing, the manufacturer of the QF-16, delivered the first of what will be more than a hundred QF-16s to the Air Force. Initially, the planes will be used as flying targets for F-16 pilots to engage during training missions. But at least some military observers expect the QF-16 to end up being used in attack missions.

If it’s conceivable for one type of large-scale unmanned aerial vehicle that is flown remotely to engage in a dogfight over hostile terrain, couldn’t another type of U.A.V. carry passengers from points A to B? Technologically, there doesn’t seem to be any obvious barrier. But that wouldn’t rule out the possibility of ground-based flight operators bringing down a flight. Arguably, this would be more of a concern, because the flight operators’ own lives wouldn’t be at risk.

An unmanned aircraft that flies itself, in much the same way that Google’s prototype driverless cars drive themselves, would be another big leap forward technologically, one that many pilots would call a fantasy. “A plane is as able to fly itself about as much as the modern operating room can perform an operation by itself,” writes Patrick Smith, the author of the popular “Ask the Pilot” blog. Still, after the Germanwings tragedy, it is hard not to consider, at least, the possibility of unmanned passenger flights. Setting aside the technological challenges, the biggest issue is trust.

Until now, most executives in the airline industry have assumed that few people would be willing to book themselves and their families on unmanned flights—and they haven’t seriously considered turning commercial aircraft into drones or self-operating vehicles. By placing experienced fliers in the cockpit, the airlines signal to potential customers that their safety is of paramount importance—and not only because the crew members are skilled; their safety is at stake, too. In the language of game theory, this makes the airline’s commitment to safety more credible. Without a human flight crew, how could airlines send the same signal?

Clearly, this is a big issue. In other forms of transportation, however, it has already been demonstrated that the trust barrier isn’t necessarily insurmountable. Some mass-transit systems operate driverless trains; so do some freight-rail lines. Driverless cars, if they prove to be safe, may well accustom people to relying on computers in hazardous environments. But will they be completely driverless? So far, according to press reports, Google has included in its test vehicles a feature that allows a human to override the computer and take manual control.

The trust issue comes up in many other areas, too, such as medicine, finance, and threat assessment. Would you trust a computer to diagnose your heart condition, or a robot to carry out the surgery it recommended? Would you trust your life savings to an automated trading system? Would you trust your country’s air-defense system to a computer that relies on Bayesian statistics to distinguish real threats from phony ones?
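The air-defense question has a quantitative bite to it. When real threats are rare, even a highly accurate Bayesian classifier produces mostly false alarms—a result that follows directly from Bayes’ rule. The sketch below uses entirely hypothetical numbers (a one-in-ten-thousand base rate, ninety-nine-per-cent accuracy) to illustrate the point:

```python
# A toy illustration of the Bayesian threat-detection question above.
# All numbers are hypothetical: suppose real threats are rare (1 in 10,000
# radar contacts) and the classifier is 99% accurate in both directions.
def posterior_threat(prior, true_positive_rate, false_positive_rate):
    """P(real threat | alarm), computed with Bayes' rule."""
    p_alarm = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / p_alarm

p = posterior_threat(prior=1e-4, true_positive_rate=0.99, false_positive_rate=0.01)
print(f"P(real threat | alarm) = {p:.3f}")  # about 0.010: ~99% of alarms are false
```

Under these assumed numbers, roughly ninety-nine per cent of alarms would be false—which is precisely why a human remains in the loop.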

At this stage, I suspect, most people retain an underlying skepticism about technology. They trust it, but only up to a point. They are happy to have minor ailments diagnosed online, but if they get sicker they demand the opinion of a doctor. They invest in automated index funds, and some people, if they are very rich, invest in hedge funds that employ risky computer-generated trading strategies. But they also want to know that someone is keeping a close eye on the computers, and making sure they don’t destroy the portfolio. Airline passengers, even after several instances in which pilots are suspected of crashing flights deliberately—EgyptAir Flight 990, Malaysia Airlines Flight 370, and Germanwings Flight 9525—are relieved to have a human being, or preferably two, sitting up front along with the autopilot.

From my knowledge of artificial intelligence and robotics, which is admittedly limited, such caution is justified. Computers, although they can outperform humans at many rules-based tasks, still can’t match us in many other areas, especially those that involve thinking creatively, moving around in confined spaces, recognizing other people’s moods, and reacting to unexpected situations. In financial markets, for example, it’s well known that computer trading programs can amplify price swings, and, in some cases, generate “flash crashes.” That’s why the New York Stock Exchange has a circuit breaker in place: once the market goes down by a certain amount, all trading is halted for a time.
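The circuit-breaker mechanism itself is simple enough to sketch. The decline thresholds below (seven, thirteen, and twenty per cent from the prior close) follow the commonly cited market-wide scheme; the real exchange rules add halt durations and time-of-day windows that this toy version omits:

```python
# A minimal sketch of the market-wide circuit-breaker logic described above.
# Thresholds (7%, 13%, 20% declines from the prior close) follow the commonly
# cited scheme; actual exchange rules also specify halt durations and timing.
def circuit_breaker_level(prior_close, current_price):
    """Return the triggered halt level (1-3), or 0 if trading continues."""
    decline = (prior_close - current_price) / prior_close
    if decline >= 0.20:
        return 3  # severe drop: trading halted for the rest of the day
    if decline >= 0.13:
        return 2  # temporary market-wide halt
    if decline >= 0.07:
        return 1  # temporary market-wide halt
    return 0      # within normal bounds: trading continues

print(circuit_breaker_level(4000.0, 3680.0))  # 8% decline -> prints 1
```

The point of the automatic halt is the same one the article makes about cockpits: the computers trade, but a hard-coded brake—and the humans watching it—bounds how far they can run.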

But the point Brynjolfsson and McAfee make is that things are changing rapidly. Pattern-recognition technology is constantly advancing, and so, thanks to Moore’s Law, are overall processing speeds and storage capacity. Important progress has already been made on problems that were once considered intractable, such as voice and face recognition. The ongoing progress “will eventually yield a computer with more processing and storage capacity than the human brain,” the M.I.T. professors write. “Once this happens, things become highly unpredictable.” Conceivably, computers will become just as adept as, or perhaps more adept than, humans at dealing with the uncertain and the unexpected.

At what point will we trust computers more than we trust humans? As my colleague Philip Gourevitch pointed out in his excellent post on the Germanwings crash, the apparent actions of young Andreas Lubitz have left us with many questions that are tough to ponder, let alone answer. This is another one of them. But in the case of flying, at least, my guess is that we are still a good distance from crossing the trust threshold.