As many of you are aware, the launch of the Glory satellite was a failure. The mission would have studied solar irradiance, aerosols, and clouds — all of which are important data for climate studies. Alas, the satellite failed to deploy and the mission — if it happens at all — will have to wait. It’s surely a demoralizing blow to the Glory team, and a blow to climate science since it follows hard upon the launch failure of the OCO satellite two years ago. RealClimate has a post on the subject.



Spurred by two consecutive mission failures of important climate science satellites, reader comments at RealClimate have included some conspiracy theory speculation (but just a little, thank goodness!): the idea that those who don’t want to know the truth about global warming may have undertaken sabotage. I think such speculation is completely unjustified, best left to those who thrive on conspiracy theories. The fact is that launching a satellite into orbit is hard. Lots of things can go wrong, and even in this day and age the chance of mission failure is uncomfortably high.

Still, the question arises naturally, what’s the chance of two consecutive mission failures for a given type of science mission?

I’ll consider the probability of two consecutive failures for a launch vehicle with only a small number of flights on which to base estimation. This is not meant to be an analysis of the probability of the actual observed failures — I don’t even know whether OCO and Glory used the same launch vehicles — it’s just meant to illustrate the nontrivial probability of consecutive launch failures for the extremely difficult task of successful deployment of satellites into earth orbit.

In 2005 the FAA published the Guide to Probability of Failure Analysis for New Expendable Launch Vehicles. They give some rough statistics of failure probability for new launch systems:



… the worldwide flight history of ELVs from 1980 to 2002 reveals that launch operators who have never launched vehicles successfully before had 8 failures in 11 launch attempts. Worldwide flight history for “experienced launch vehicle developers” over the same period indicates 5 failures in 18 launch attempts. Many factors influence the level of experience of a launch vehicle developer. However, in the results of the recent CSWG investigation, the term “experienced launch vehicle developer” corresponded to developers who had produced at least one launch vehicle with a demonstrated probability of failure less than or equal to 33 percent. The probability of failure was based on the reference values in table A.



Table A (in the FAA guide) gives baseline failure probabilities based on the number of launch failures in (up to 10) preceding launches. They’re based on confidence limits for the binomial distribution, although they don’t specify exactly how the confidence limits are estimated (there are many methods, including Wald, Clopper-Pearson, and Wilson). My natural instinct would be to use a Bayesian estimate, which is also sanctioned in the FAA guide:
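The FAA guide doesn’t say which of those methods underlies Table A, but just to illustrate how much they can differ for small launch records, here’s a sketch (my own, not the FAA’s computation) of two of the methods mentioned above, applied to a small number of launches:

```python
# Illustration only: two common confidence limits on a failure probability
# after f failures in n launches. The FAA guide doesn't specify its method;
# these are just two of the candidates named in the text.
import math

def wald_interval(f, n, z=1.96):
    """Wald interval: the textbook formula, unreliable for small n."""
    p = f / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_interval(f, n, z=1.96):
    """Wilson score interval: much better behaved for small samples."""
    p = f / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# The "experienced developer" record quoted above: 5 failures in 18 attempts.
print(wald_interval(5, 18))    # roughly (0.07, 0.48)
print(wilson_interval(5, 18))  # roughly (0.13, 0.51)
```

Even at the same nominal 95% level, the two intervals disagree noticeably with only 18 launches of data, which is exactly why the estimation method matters here.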



The FAA may also consider other approaches. Once a launch vehicle completes at least two flights, the FAA will accept a Bayesian estimate based on a uniform prior distribution of one hypothetical failure in two hypothetical flights updated with the outcomes of all previous flights of the subject vehicle. The reference probability estimate will be the final estimate input to any launch risk analysis unless the FAA has a reason to make an adjustment away from the reference value.



When one observes $f$ failures in $n$ trials, the posterior mean probability of failure when using a uniform prior is $(f+1)/(n+2)$, which is what they mean by “one hypothetical failure in two hypothetical flights updated with the outcomes of all previous flights.” I’m delighted that they recommend the more conservative uniform prior, since I wouldn’t want to ride on an airplane whose safety depended on using the Jeffreys prior rather than a uniform prior.
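That posterior-mean rule (Laplace’s “rule of succession”) is simple enough to sketch in a few lines of Python; the function name here is my own invention:

```python
# Sketch of the FAA's Bayesian reference estimate: with a uniform prior,
# f observed failures in n flights yield a Beta(f+1, n-f+1) posterior,
# whose mean is (f+1)/(n+2).
from fractions import Fraction

def posterior_mean_failure(f, n):
    """Posterior mean failure probability under a uniform prior."""
    return Fraction(f + 1, n + 2)

# A brand-new vehicle with no flight history: equivalent to one
# hypothetical failure in two hypothetical flights, i.e. a 50% estimate.
print(posterior_mean_failure(0, 0))   # 1/2

# The FAA's "experienced developer" record: 5 failures in 18 attempts.
print(posterior_mean_failure(5, 18))  # 3/10
```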

But when estimating the probability of two consecutive failures, a proper Bayesian analysis doesn’t rely on any single failure probability estimate — it should incorporate all the information contained in the posterior distribution for the failure rate. If we’ve observed $f$ failures in $n$ missions, the posterior distribution for the failure rate $x$ using a uniform prior is

$$p(x) = \frac{(n+1)!}{f! \, (n-f)!} \, x^f (1-x)^{n-f} .$$

The probability of failure on the next launch can be estimated as the posterior mean $(f+1)/(n+2)$, as already mentioned. Ordinarily the probability of failure for the next two launches would be that quantity squared. But a full Bayesian estimate, using the full distribution, turns out to be a little different, namely

$$P(\text{two failures}) = \int_0^1 x^2 \, p(x) \, dx = \frac{(f+1)(f+2)}{(n+2)(n+3)} .$$

Now let’s plug in some numbers. Suppose the launch vehicle behaves as the FAA expects for a new system from an “experienced launch vehicle developer.” Suppose further that the system has been launched $n = 18$ times, with $f = 5$ failures. Then the probability of two consecutive failures is estimated as

$$\frac{(5+1)(5+2)}{(18+2)(18+3)} = \frac{42}{420} = 0.10 .$$

In other words, there’s a 10% chance of such a run of bad luck!
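As a sanity check (a sketch of my own, using the FAA’s “experienced developer” record of 5 failures in 18 attempts), we can compare the full Bayesian answer with the naive squared-mean estimate, and verify the arithmetic with a quick Monte Carlo simulation over the posterior:

```python
# Two-consecutive-failure probability from the Beta(f+1, n-f+1) posterior:
# the full Bayesian answer is E[x^2], slightly larger than the squared mean.
import random

def two_failure_prob(f, n):
    """E[x^2] under the posterior: (f+1)(f+2) / ((n+2)(n+3))."""
    return (f + 1) * (f + 2) / ((n + 2) * (n + 3))

def squared_mean(f, n):
    """The naive estimate: square of the posterior mean (f+1)/(n+2)."""
    return ((f + 1) / (n + 2)) ** 2

f, n = 5, 18
print(two_failure_prob(f, n))  # 0.1
print(squared_mean(f, n))      # about 0.09

# Monte Carlo check: draw a failure rate x from the posterior, then
# simulate two launches at that rate.
random.seed(1)
trials = 200_000
hits = sum(
    1
    for _ in range(trials)
    if (lambda x: random.random() < x and random.random() < x)(
        random.betavariate(f + 1, n - f + 1)
    )
)
print(hits / trials)  # close to 0.10
```

The gap between 0.10 and 0.09 is the extra contribution from uncertainty in the failure rate itself: averaging $x^2$ over the posterior weights the high-$x$ tail more heavily than squaring the average does.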

That’s pretty substantial. Even though this is not a direct analysis of the OCO and Glory systems, I’m guessing it gets us in the right ballpark. So: although the results of OCO and Glory are certainly lamentable, they shouldn’t be regarded as implausible. And they certainly shouldn’t be regarded as so implausible that we start entertaining conspiracy theories about sabotage.

Here’s hoping that both missions are tried again soon, with complete success.