I asked my friend, Robert Dawson, to help me understand the numbers behind risk assessment and the 1% rule. Robert writes fantastic science fiction and poetry, but I contacted him in this case because he teaches mathematics at Saint Mary’s University in Nova Scotia and is absolutely excellent at explaining numbers to me.

Although he doesn’t have an aviation background, he immediately grasped the issues that I was grappling with and wrote me a detailed explanation. I’m thrilled that he was willing to expand on it so that I could share it with you. I think you’ll find his explanation as fascinating as I did.

Just about everything in life has risks attached. If you drive across North America, you have (according to various sources) about one chance in 14,000 of being killed in an accident during that trip. If you take the same trip in a commercial jet plane, the odds are about one in seven million – five hundred times less. Of course, these numbers are only averages. While everybody has about the same survival odds on the airplane, a very safe or very foolhardy driver would have different odds on the road trip. (Lists of risks like this need to be read with care. It’s easy to find lists online that compare lifetime risk of dying of heart disease with the risk of a fatal accident on a single air voyage.)

There are many things that can go wrong with a commercial air flight; the industry’s excellent safety record shows that they are all unlikely. Some of them – birds in the engine, for instance – may be hard to do much about. Others, like mechanical failure, can be made much more unlikely by foresight and careful maintenance. But incidents like the GermanWings murder-suicide remind us that one of the hardest factors to quantify is the mental and physical health of the people in the cockpit. Critical parts of the plane can be tested to destruction in a laboratory, and once their life cycle is determined they can be replaced before they start to fail. It’s more complicated with human pilots.

What risk is acceptable?

The ICAO Manual of Civil Aviation Medicine starts top-down by setting acceptable levels of risk. Their round (and hence probably somewhat arbitrary) starting figure, for all catastrophic flight failures, was one disaster in ten million flying hours. Is this standard reasonable? The numbers above suggest that the industry can and does meet it. A pilot is permitted to fly about a thousand hours per year, and few passengers fly more; so that represents a personal risk of at most one in ten thousand per year. Over a generous fifty years of flying, that would be a lifetime risk of one chance in 200 of dying in a plane crash for somebody who flew about as much as it’s possible for a human being to fly. That’s comparable to the risk of dying by any one of drowning, poisoning, or in a fire if you stay on the ground.
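That lifetime figure is easy to verify. A quick back-of-envelope check in Python, using only the rates and exposure figures quoted above (the variable names are mine):

```python
# Lifetime risk for someone who flies about as much as is humanly possible.
disaster_rate_per_hour = 1e-7   # ICAO target: one catastrophic failure per ten million flying hours
hours_per_year = 1000           # roughly the most a pilot (or passenger) can fly in a year
years_flying = 50               # a generous flying lifetime

lifetime_risk = disaster_rate_per_hour * hours_per_year * years_flying
print(lifetime_risk)            # 0.005 -- about one chance in 200
```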

The manual furthermore sets out that not more than 10% of failures should be due to a single system (e.g., pilot failure of all kinds), and not more than 10% of failures of a single system should be due to a single subsystem (in this case, medical incapacitation). The classification of systems and subsystems is of course somewhat arbitrary: if “systems” were defined coarsely enough that a plane had fewer than ten, or finely enough that some systems had fewer than ten subsystems, the requirement would be mathematically impossible to meet! But the underlying idea appears to be a good one: if more than 10% of the risk is concentrated in one system, a problem has been identified that can and should be fixed. In other words, this rule does not directly guarantee safety, but it may lead to the identification of critical points where safety can be improved. (Of course, as there is no really natural definition of what constitutes a “system” or “subsystem,” the rule could easily be gamed by regrouping and redefining systems. As a total outsider, I have no idea to what extent this takes place.) This means that only 10% of the acceptable risk of disaster can be due to pilot failure, and only 10% of that due to pilot medical incapacitation. This leads to an acceptable risk of disaster due to pilot medical incapacitation of 10⁻⁹ disasters per flying hour, or one disaster in a billion flying hours.

That sounds like a lot, but a 2014 ATAG report said that there were over 100,000 commercial flights per day. If flights average two hours, that’s about 73 million flying hours per year, so a billion flying hours is about 13 years’ worldwide commercial flying.
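The fleet-wide arithmetic works out like this (figures from the ATAG report as quoted; variable names are mine):

```python
# How long does it take the world's commercial fleet to log a billion flying hours?
flights_per_day = 100_000   # 2014 ATAG figure, as quoted above
hours_per_flight = 2        # assumed average flight length

fleet_hours_per_year = flights_per_day * hours_per_flight * 365   # 73,000,000 hours
years_for_a_billion_hours = 1e9 / fleet_hours_per_year            # about 13.7 years
```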

Is this goal being met?

Within the last fifty years there have been at least five commercial flights lost due to pilot incapacitation, mostly at the beginning of the period; the per-year rate appears to have dropped significantly, and the per-flying-hour rate still more so. Overall, the rate is about one disaster per decade; arguably the current rate is significantly less, though the fortunate shortage of data makes it hard to be certain if this change is an actual phenomenon.

There were three possible suicide-by-pilot disasters on commercial flights in the late 1990s, and two considered as definite in recent years. These are not getting rarer; they also appear to come in clusters, consistent with the idea of copycat suicides.

If “pilot mental health” and “pilot physical health” are considered as separate “subsystems” the goal is, approximately, being met. If they are lumped together as one “subsystem” it is probably not.

What is being done?

Pilot medical incapacitation is usually hard to predict: if a pilot knew she was going to have a heart attack on a particular day, she would be in the hospital ICU, not the cockpit, when it happened! To avoid disasters of this sort, little can be done but ensure that the pilot and copilot are healthy ahead of time. But what standard of health should be used? Air forces retire fighter pilots in their forties, as soon as reflexes start to slow; but, in the absence of an obvious safety argument, commercial airlines cannot (and should not) do this. Some sort of probabilistic standard has to be applied, but what?

There are, of course, many scenarios for disaster; but some – pilot and copilot having simultaneous heart attacks, for instance – are so unlikely that they may be ignored. During most phases of flight, the incapacitation or death of a single pilot, while tragic, would not lead to the loss of the plane. The most likely disaster scenario for physical incapacitation is as follows:

The pilot at the controls is incapacitated during a critical phase of flight; and the other pilot then fails to take control effectively.

The probability of this happening is found by multiplying the probability of the two events. Simulator studies have shown that in such cases the other pilot will fail to take control about one time in 400. Presumably to allow for the greater stress of a real-world situation, with a colleague’s life in danger, the ICAO replaces this by one chance in 100. (Note that this has nothing to do with the (1/10 x 1/10) above.) Thus the requirement is that

(1/100) × (risk of pilot incapacitation at a critical time) ≤ 10⁻⁹/hr (once in a billion hours)

so the incidence of pilot incapacitation at a critical time should be at most 10⁻⁷/hr (one in ten million hours).

Now, we must be careful here. This is not the incidence of pilot incapacitation; this is the incidence, taken over all flight hours, of events where the pilot is incapacitated and this happens at a critical phase of flight. The ICAO estimates that critical phases comprise at most 10% of flight time; thus the requirement is that the incidence of pilot incapacitation at any time during flight is at most

(10⁻⁷/hr) / (10%) = 10⁻⁶/hr

or one in a million hours.
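The whole chain of divisions can be written out as a sanity check (all figures are the ones quoted above; the variable names are mine):

```python
# From the 10%-of-10% medical target down to the per-hour incapacitation limit.
medical_target = 1e-9           # acceptable disasters per flying hour due to medical incapacitation
p_other_pilot_fails = 1 / 100   # ICAO's conservative figure for the second pilot failing to take over
critical_fraction = 0.10        # at most 10% of flight time is a critical phase

incap_at_critical_time = medical_target / p_other_pilot_fails   # 1e-7 per flying hour
incap_any_time = incap_at_critical_time / critical_fraction     # 1e-6 per flying hour
```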

Asking a doctor to make assessments like this would be like asking a pilot to estimate stall velocity in furlongs per fortnight, and just as disastrous in practice! To make the task easier, the ICAO assumes that the risk of sudden incapacitation is the same in the cockpit as elsewhere. (This is probably a conservative estimate, as heavy physical activity is a short-term risk factor.)

The doctor is asked to assess the probability that the pilot, in her current state of health, will suffer an incapacitating medical condition within the next year. There are 8760 hours in a year; rounding this to 10,000, the doctor is asked whether the risk is less than

(10⁻⁶/hr) × (10⁴ hr/yr) = 10⁻²/yr

that is, 1% per year. (Again, this 1% has nothing directly to do with the other two one-percents above! Because all numbers are rounded to powers of ten, we can expect coincidences.) If the risk is over 1%, the pilot is grounded.
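In code, the conversion from an hourly limit to the doctor’s annual 1% threshold is a single multiplication (figures as above; names are mine):

```python
# Converting the per-hour incapacitation limit into a per-year figure a doctor can assess.
hourly_risk_limit = 1e-6   # one incapacitation per million flying hours
hours_per_year = 10_000    # 8,760 hours in a year, rounded to a power of ten

annual_risk_limit = hourly_risk_limit * hours_per_year   # 0.01, i.e. the 1% rule
```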

Murder-suicide by the pilot

One weak link in this calculation is the assumption that the ability of the other pilot to take over is independent of the first pilot’s inability to fly the plane safely. In the case of a stroke or heart attack this is a good assumption. If they had both had the same lobster sandwiches from the Business Class menu, and the lobster was tainted, this assumption might not hold (you saw Airplane!, right?). More seriously, if one pilot actively wishes to crash the plane, the odds that the other pilot can somehow take control are anywhere from moderate to zero (if, for instance, he locks her out of the cockpit). We lose a factor of 100 in our safety calculations. The GermanWings accident report takes this into account, and suggests an acceptable risk of 10⁻⁴, or 0.01%.

Consequently, mental incapacitation should not be treated the same way as physical incapacitation because the risks they generate cannot be mitigated in the same way by the two-pilot operation principle. Therefore, the target of acceptable risk for non-detection of a mental disorder that may result in a voluntary attempt to put the aircraft into an unsafe condition should be more ambitious than the one usually accepted for “classical” physical incapacitation risk. If one follows the calculation methodology developed in ICAO’s Manual of Civil Aviation Medicine (Doc 8984) and described in paragraph 1.17.2, a quantitative target should be lower by at least two orders of magnitude, or 0.01%.

It may be argued that this is too optimistic. It does not appear to take into account that in this scenario, also, all phases of flight are equally critical. This loses a further factor of 10 in safety.

But the decision to crash the plane need not be made on the spur of the moment; a suicidal pilot may bide his time until his next flight. Thus, it may be argued that the pilot’s time cannot be broken up into flying and nonflying, critical and non-critical. If this is accepted, the criterion of one disaster of this type per billion flying hours translates simply – at 1000 flying hours per year – to one decision to commit murder-suicide per million pilot-years (10⁻⁶/yr), and this is the only relevant criterion.
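Under that reading, the relevant conversion is just rate times exposure (figures as above; the variable names are mine):

```python
# If every hour of a suicidal pilot's flying is equally "critical", the per-hour
# target converts directly into a per-pilot-year figure.
target_per_flying_hour = 1e-9    # one such disaster per billion flying hours
flying_hours_per_year = 1000     # roughly a full-time pilot's annual hours

per_pilot_year = target_per_flying_hour * flying_hours_per_year   # 1e-6 per pilot-year
```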

The quote above is somewhat misleading in that it compares “non-detection of a mental disorder that may result in a voluntary attempt to put the aircraft into an unsafe condition” with the risk that a detected physical disorder does incapacitate a working pilot. A doctor can reasonably estimate the second for an individual; it is not at all clear to me as a layman how the first should be assessed.

Most of the uncertainty is not in the “detection” (which, as I understand it, psychologists usually do fairly well) but in what “may result” (which is very hard to predict.) There is also the question as to how many nondomestic murder-suicides result from religious or political fanaticism rather than from any diagnosable mental disorder. Nondomestic murder-suicide is very rare among the mentally healthy, and almost as rare among those who are not. A protocol to deal with this problem will probably need to be more than a tweak to one designed for another type of illness.

Robert Dawson teaches mathematics at Saint Mary’s University in Halifax, Nova Scotia. He has written more than sixty academic articles, mostly on pure mathematics but including topics like locked gimbals, Charles Dodgson’s probability problems, and the statistics of lichen growth. In his spare time he fences, volunteers with a Scout troop, and writes science fiction: links to many of his stories can be found at http://cs.smu.ca/~dawson/Writing/.

I hope you found that as interesting as I did. Even though I looked into this in detail when I went over the GermanWings flight 9525 final report, and then again when I was analysing the Mozambique flight 470 report, I had never fully understood it. I was especially interested to realise that the risk percentage was based on the critical phases of flight (departure and approach), which hadn’t occurred to me.

Any questions? Leave them in the comments!