Percentiles have become one of the primary service level indicators used to represent the performance of real systems in monitoring. When used correctly, they provide a robust metric that can serve as the basis of mission-critical service level objectives. However, there’s a reason for the “when used correctly” above.

For all their potential, percentiles do have subtle limitations that are very often overlooked by the people using them in analyses.

There’s no shortage of previous writings on this topic, most notably Baron Schwartz’s “Why Percentiles Don’t Work the Way You Think.” Here, I’ll focus on data, and why you should pay close attention to how percentiles are applied.

Right off the bat, the most misused technique is the aggregation of percentiles. You should almost never average percentiles, because even very fundamental aggregation tasks cannot be accomplished with percentile metrics. Because most telemetry systems make percentiles cheap to obtain, and good enough to use with little effort, it is often assumed that they are appropriate for aggregation and system-wide performance analysis. While this holds most of the time and for most systems, you lose the ability to determine when your data is lying to you: for example, when you have high (5% and greater) error rates that are hidden from you.

Those times when your systems are misbehaving the most?

That’s exactly when you don’t have the data to tell you where things are going wrong.

Check the Math

Let’s look at an example* of the request latencies of two web servers (W1 in blue, W2 in red). The p95 of the blue server is 220ms; the p95 of the red one is 650ms:

What’s the total p95 across both nodes (W1, W2)? (plot generated with matplotlib)

By aggregating the latency distributions of the two web servers, we find that the total p95 is 230ms. W2 barely served any requests, so adding its requests did not change the p95 of W1 by much. Naive averaging of the percentiles, however, would have given you (220 + 650) / 2 = 870 / 2 = 435ms, roughly 90% above the true total percentile (230ms).
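The gap between the merged percentile and the naive average is easy to reproduce. Here is a small sketch in Python, with made-up uniform latency data shaped like the scenario above (a busy healthy server and a slow, lightly loaded one); the exact figures will differ slightly from the 220/650/230ms in the example:

```python
import math
import random

def pctl(samples, p):
    """Nearest-rank percentile of a list of samples."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

random.seed(42)
# Hypothetical latencies: W1 is healthy and busy, W2 is slow but
# served very few requests.
w1 = [random.uniform(10, 230) for _ in range(10_000)]  # p95 near 220ms
w2 = [random.uniform(100, 680) for _ in range(200)]    # p95 near 650ms

p95_w1 = pctl(w1, 95)
p95_w2 = pctl(w2, 95)
p95_true = pctl(w1 + w2, 95)       # merge the raw samples, then compute
p95_naive = (p95_w1 + p95_w2) / 2  # the "never do this" average

print(f"true merged p95: {p95_true:.0f}ms, averaged p95s: {p95_naive:.0f}ms")
```

Because W2 contributes so few samples, the merged p95 lands close to W1's p95, while the average of the two p95s lands far above both it and reality.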

So, if your Service Level Indicator in this scenario is “95th percentile latency of requests over the past 5 minutes < 300ms,” and you averaged p95s instead of calculating from the merged distribution, you would be led to believe that you had exceeded your objective by roughly 45%. Folks would be getting paged even though they didn’t need to be, and might conclude that additional servers were needed (when in fact this scenario represents overprovisioning).
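The false page follows directly from the arithmetic; a trivial sketch using the numbers from the scenario above:

```python
SLO_MS = 300      # "p95 latency over the past 5 minutes < 300ms"
p95_true = 230    # from merging the raw latency distributions
p95_naive = 435   # from averaging the two per-server p95s

print("true:", "OK" if p95_true < SLO_MS else "PAGE")    # prints "true: OK"
print("naive:", "OK" if p95_naive < SLO_MS else "PAGE")  # prints "naive: PAGE"
```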

Incorrect math can result in tens of thousands of dollars of unneeded capacity,

not to mention the cost of the time of the humans in the loop.

*If you want to play with the numbers yourself to get a feel for how these scenarios can develop, there is a sample calculation, with a link to an online percentile calculator, in the appendix [1].

“Almost Never” Actually Means “Never Ever”

“But above, you said ‘almost never’ when saying that we shouldn’t ever average percentiles?”

That’s 100% correct. (No pun intended.)

You see, there are circumstances where you can average percentiles and get a result with low error: namely, when the distributions of your data sources are identical. The most obvious case is when the latencies come from two web servers that are (a) healthy and (b) serving very similar load.

Be aware that this supposition breaks down as soon as either of those conditions is violated! And those are exactly the cases where you are most interested in your monitoring data: when one of your servers starts misbehaving, or when you have a load balancing problem.

“But my servers usually have an even distribution of request latencies which are nearly identical, so that doesn’t affect me, right?”

Well, sure, if your web servers have nearly identical latency distributions, go ahead and calculate your total 95th percentile for the system by averaging the percentiles from each server. But when you have one server that decides to run into swap and slow down, you likely won’t notice a problem, since the data indicating it is effectively hidden.

So, still: you should never average percentiles, because you won’t be able to tell when the approach is hurting you, and it will hurt you at the worst possible time.
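To see the masking effect concretely, here is a sketch with a hypothetical ten-server fleet where one node has started swapping; all numbers are invented for illustration:

```python
import random

random.seed(7)

def p95(samples):
    """Nearest-rank 95th percentile."""
    s = sorted(samples)
    return s[int(0.95 * len(s)) - 1]

# Nine healthy servers and one that is swapping, all serving equal load.
healthy = [[random.uniform(10, 210) for _ in range(1_000)] for _ in range(9)]
sick = [random.uniform(1_800, 2_200) for _ in range(1_000)]

avg_of_p95s = (sum(p95(s) for s in healthy) + p95(sick)) / 10
merged = [x for s in healthy for x in s] + sick
true_p95 = p95(merged)

print(f"average of per-server p95s: {avg_of_p95s:.0f}ms")  # looks tolerable
print(f"true fleet-wide p95:        {true_p95:.0f}ms")     # the swap problem
```

The swapping node accounts for 10% of traffic, so the true fleet-wide p95 lands squarely inside its multi-second latencies, while the average of the per-server p95s still looks like a few hundred milliseconds.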

Averaging percentiles masks problems with nodes that would otherwise be apparent

Percentiles are aggregates, but they should not be aggregated. They should be calculated, not stored. A considerable number of operational time series monitoring systems, both open source and commercial, will happily store percentiles at 5 minute (or similar) intervals. If you then want to look at a year’s worth of data, you will encounter spike erosion: the stored percentiles are averaged down to fit the number of pixels available in the graph’s time window, and that averaged data is mathematically wrong.

Example of Spike Erosion: 2 week view on the left shows a max of ~22ms. 24 hour view on the right shows a max of ~70ms.
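Spike erosion is reproducible in a few lines. This sketch downsamples a hypothetical 60-point series by averaging, the way a graph renderer might fit points to pixels; the numbers are invented for illustration:

```python
# A flat series with one 70ms spike, at full (per-minute) resolution.
series = [20.0] * 60
series[31] = 70.0  # the spike

# Downsample by averaging groups of 5 points into one "pixel".
pixels = [sum(series[i:i + 5]) / 5 for i in range(0, 60, 5)]

print(max(series))  # 70.0  (visible at full resolution)
print(max(pixels))  # 30.0  (the spike has "eroded" in the wide view)
```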

Clever Hacks

“So, the solution is to store every single sample like you did in the example, right?”

Well, yes and no.

You can store each sample and generate correct percentiles from them, but at anything more than modest scale this becomes prohibitively expensive. Some open source time series databases and monitoring systems do this, but you give up either scalability of data ingest or length of data retention. One million 64-bit integer samples per second for a year occupies 229 TB of space. One week of this data is about 4 TB: doable with off-the-shelf hardware, but economically impractical for analysis, as well as wasteful.
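The storage arithmetic is easy to check directly (assuming 8 bytes per sample and binary terabytes, which is how the 229 TB figure works out):

```python
BYTES_PER_SAMPLE = 8           # one 64-bit integer
RATE = 1_000_000               # samples per second
SECONDS_PER_YEAR = 365 * 24 * 3600
SECONDS_PER_WEEK = 7 * 24 * 3600

year_bytes = BYTES_PER_SAMPLE * RATE * SECONDS_PER_YEAR
week_bytes = BYTES_PER_SAMPLE * RATE * SECONDS_PER_WEEK

print(f"one year: {year_bytes / 2**40:.0f} TiB")  # prints "one year: 229 TiB"
print(f"one week: {week_bytes / 2**40:.1f} TiB")  # prints "one week: 4.4 TiB"
```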

“Ah, but I’ve thought up a better solution. I can just store the number of requests that are under my desired objective, say 500 milliseconds, and the number of requests that are above it, and divide to calculate the correct percentage of requests meeting the objective!”

This is a valid approach, one that I have even implemented with a monitoring system that was not able to store full distributions. However, the limitation is subtle; if after some time I decide that my objective of 500ms was too aggressive and move it to 600ms, all of the historical data that I’ve collected is useless. I have to reset my counters and begin anew.
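A sketch of the counter approach and its limitation; the 500ms objective and the latency values are illustrative:

```python
# Hypothetical counter pair: requests under vs. over a fixed objective.
OBJECTIVE_MS = 500
under = total = 0

for latency_ms in (120, 480, 510, 90, 700, 230):
    total += 1
    if latency_ms <= OBJECTIVE_MS:
        under += 1

print(f"fraction under {OBJECTIVE_MS}ms: {under / total:.2%}")  # 66.67%
# If the objective later moves to 600ms, these counters cannot answer
# "what fraction was under 600ms?": the history must be collected anew.
```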

Store Distributions, Not Percentiles

A better approach than storing percentiles is to store the source sample data in a manner that is more efficient than storing single samples, but still able to produce statistically significant aggregates. The histogram, or distribution, is one such approach.

There are many types of histograms, but here at Circonus we use the log linear histogram. It provides a good mix of storage efficiency and statistical accuracy: worst-case errors at single digit sample sizes are about 5%, far better than the roughly 90% error we demonstrated above by averaging percentiles.
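To make the binning concrete, here is a simplified sketch of log-linear binning: linear bins of width one-tenth of a power of ten within each decade. This illustrates the idea only; it is not necessarily the exact bin layout of Circonus’s implementation:

```python
import math

def bin_of(x):
    """Map a positive sample to a (lower, upper) log-linear bin.

    Bins are linear within a decade, with width 10^k / 10, so the
    width grows 10x at each decade boundary and the relative error
    per bin stays bounded.
    """
    exp = math.floor(math.log10(x))
    width = 10 ** exp / 10
    lower = math.floor(x / width) * width
    return (lower, lower + width)

print(bin_of(101.3))  # (100.0, 110.0): width 10 in the 100..1000 decade
print(bin_of(1013))   # (1000.0, 1100.0): width 100 one decade up
```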

Log Linear histogram view of load balancer request latency. Note the increase in bin size by a factor of 10 at 1.0M (1 million)

Storage efficiency is also significantly better than storing individual samples: a year’s worth of 5 minute log linear histogram windows (10 bytes per bin, 300 bins per window) can be stored in roughly 300MB (sans compression). Reading that much data from disk in under a second is tractable on most physical (and virtualized) systems. The mergeability of histograms also allows precomputed cumulative histograms to be stored for analytically useful windows such as 1 minute and 3 hours, so large spans of time series telemetry can be rapidly assembled into views that are visually relevant to the end user (think one year of data with histogram windows of six hours each).
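Mergeability is the key property: combining histograms across windows or servers is just adding per-bin counts, so, unlike with percentiles, nothing is lost. A minimal sketch using a Python Counter keyed by bin bounds (the bins are invented for illustration):

```python
from collections import Counter

# Two histograms, e.g. from two servers or two 5-minute windows.
h1 = Counter({(100.0, 110.0): 42, (110.0, 120.0): 7})
h2 = Counter({(100.0, 110.0): 3, (650.0, 660.0): 1})

merged = h1 + h2  # Counter addition sums counts per bin
print(merged[(100.0, 110.0)])  # prints 45
```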

Using histograms for operational time series data may seem like a challenge at first, but there are a lot of resources out there to help you out. We have published open source libraries of our log linear histograms in C, Golang, and even JavaScript. The Envoy proxy is one project that has implemented the log linear histogram C implementation for operational statistics. The Istio service mesh uses the Golang version of the log linear histogram library via our open source gometrics package to record latency metrics as distributions.

In Conclusion

Percentiles are a staple tool of real systems monitoring, but their limitations should be understood. Because nearly every monitoring and observability toolset provides percentiles without restrictions on their usage, they are easy to apply to analyses without the operator understanding the consequences of *how* they are applied. Understanding the common scenarios where percentiles give wrong answers is just as important as understanding how they are generated from operational time series data.

If you use percentiles now, you are already capturing data as distributions to some degree through your toolset. Knowing how that toolset generates percentiles from that source telemetry will ensure that you can evaluate if your use of percentiles to answer business questions is mathematically correct.