by Martin Peterson

Two Boeing 737 MAX 8 airplanes crashed shortly after takeoff: on October 29, 2018, near Jakarta, Indonesia, and on March 10, 2019, near Addis Ababa, Ethiopia. The disasters cost the lives of 346 passengers and crew. Black box data recovered from the two planes indicate that bad engineering practices and surprisingly simple design errors contributed to both calamities. The Boeing 737 MAX 8 had only recently gone into service, in May 2017.

The question I wish to raise is whether anyone at Boeing behaved unethically in approving the plane for sale. My tentative answer is yes. I believe that at least three ethical principles may have been violated by Boeing engineers and managers.

The Fundamental Canon

According to the first “fundamental canon” of the National Society of Professional Engineers (NSPE) Code of Ethics, engineers “shall hold paramount the safety, health, and welfare of the public.” According to preliminary findings from the ongoing investigations leaked to the New York Times, both disasters were caused by a single faulty sensor, which triggered a new automatic anti-stall system that repeatedly pushed the plane’s nose down. Several newspapers have reported that Boeing until recently charged extra for relatively simple and cheap cockpit warning displays that alert pilots to divergent sensor readings. If such displays had been installed on the two 737 MAX 8s that crashed, the pilots would have been more likely (though not certain) to diagnose the malfunctioning anti-stall system. An aircraft manufacturer that attempts to increase its profit by charging extra for relatively simple but vital safety devices does not “hold paramount the safety, health, and welfare of the public.”

Does it matter that the decision to charge extra for the displays was most likely made by managers in the sales department rather than by engineers? This is likely to depend on what opinions engineers expressed as the decision was made. The NSPE Code clearly states: “If engineers’ judgment is overruled under circumstances that endanger life or property, they shall notify their employer or client and such other authority as may be appropriate.”

There is, of course, a limit to how much money aircraft manufacturers can be asked to spend on making their products safe, but that does not seem to have been a relevant consideration in this case. Compare, for instance, the automobile industry. Consumers are permitted to buy cars that are less safe than the safest models on the market, but regulators do not permit manufacturers to offer cheap and simple safety systems merely as optional upgrades: seatbelts, antilock brakes, and airbags are mandatory equipment in all new cars sold in almost all countries. The plausible idea that engineers shall “hold paramount the safety … of the public” explains why this is so.

Informed Consent

Pilots were never informed that the new 737 MAX 8 had been equipped with a new automatic anti-stall system, nor that the system could be activated by a faulty reading from a single sensor. Because pilots did not know the system existed, they could not understand why the onboard computers repeatedly pushed the jet’s nose down. This can be construed as a violation of the principle of informed consent. Just as doctors are obliged to obtain patients’ informed consent prior to any medical intervention, aircraft manufacturers arguably have a similar obligation to ensure that the pilots responsible for the safe operation of their products are properly informed about all critical systems, and that they consent to using systems that take control away from the pilots who are ultimately responsible for passengers’ safety.

The principle of informed consent is widely accepted in medical ethics but arguably deserves more attention from engineering ethicists. It is, for instance, uncontroversial to demand that cell phone manufacturers ask customers for consent before their devices share the phone’s position with third parties. This moral requirement can be understood as an application of the principle of informed consent. That said, the principle is sometimes harder to apply in engineering contexts than in medical ethics. The doctor-patient relationship is more direct and predictable than the engineer-user relationship: engineers seldom interact directly with users, and technological devices are sometimes (mis)used in ways that engineers cannot reasonably foresee.

The Precautionary Principle

The third ethical principle violated by Boeing is the precautionary principle. After aviation authorities around the world grounded the 737 MAX 8, Boeing CEO Dennis Muilenburg called President Trump to assure him that there was no need to ground the model in the United States. It was still unclear what had caused the crashes, Muilenburg claimed. From this epistemic premise, he inferred that it was too early to take action. For several days, the Federal Aviation Administration agreed with this policy, stating that foreign civil-aviation authorities had not “provided data to us that would warrant action.”

According to a plausible formulation of the precautionary principle I defend in The Ethics of Technology, “reasonable precautionary measures” should be taken by engineers and others “to safeguard against uncertain but nonnegligible threats.” Few would dispute that grounding the 737 MAX 8 immediately after the second crash would have been a reasonable precautionary measure. If two brand-new airplanes of the same model crash within months of each other under what appear to be similar circumstances, regulators need not wait until they know for certain what caused the crashes before taking action. The second crash changed the epistemic situation enough to warrant action, even if it did not prove that the anti-stall system was to blame.

To avoid some of the objections raised against the precautionary principle, it is best to think of it as an epistemic principle rather than as a principle that directly guides our actions. In essence, it is better (from a moral point of view) to believe that something is unsafe when it is not than to believe that something is safe when it is not. Construed as a belief-guiding principle grounded in moral considerations, the precautionary principle is compatible with the principle of maximizing expected value: we first adjust our beliefs about the world by applying the precautionary principle, and then maximize expected value relative to those modified beliefs.

Addendum: In my recently published textbook Ethics for Engineers, I discuss all three principles mentioned in this post in greater detail.

Martin Peterson is Sue G. and Harry E. Bovay Jr. Professor of the History and Ethics of Professional Engineering in the Department of Philosophy at Texas A&M University. His most recent book is Ethics for Engineers (New York: Oxford University Press, 2019).