Google has announced that it is retiring its widely used Octane JavaScript benchmark, saying that it's no longer a useful way for browser developers to determine how best to optimize their JavaScript engines.

Octane was developed for and by the developers of V8, the JavaScript engine used in Chrome. It was intended to address flaws in the earlier SunSpider benchmark, developed by Apple's Safari team. SunSpider's tests were all microbenchmarks, sometimes testing something as small as a single operation performed thousands of times. It wasn't very representative of real-world code, and it was arguably being gamed, with browser vendors introducing optimizations that were aimed primarily, albeit not exclusively, at boosting SunSpider scores. This was being done even when those optimizations were detrimental to real-world performance, because having a good score carried so much prestige.
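SunSpider-style microbenchmarks boil down to timing one tiny operation repeated in a tight loop. A minimal, hypothetical sketch of the pattern (the function name, operation, and iteration count are illustrative, not taken from SunSpider itself):

```javascript
// Hypothetical microbenchmark in the SunSpider mold: measure a single
// operation (string concatenation) performed many thousands of times.
// A score built from tests this narrow says little about real pages,
// but it is an easy target for engine-specific optimizations.
function microbench() {
  const start = Date.now();
  let s = "";
  for (let i = 0; i < 100000; i++) {
    s += "x"; // the one operation under test
  }
  return Date.now() - start; // elapsed milliseconds
}

console.log("string concat: " + microbench() + " ms");
```

An engine could, for example, special-case repeated concatenation of short strings and post a dramatically better number here without improving anything a real site does.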

Octane was introduced in 2012 and includes cut-down versions of somewhat realistic workloads, such as compiling the TypeScript compiler. But JavaScript coding styles have changed since then, and so has the language itself; ECMAScript 2015 (the standardized version of JavaScript) introduced a range of new features that weren't available in 2012 and, hence, aren't tested by Octane. All manner of new libraries and frameworks have emerged since, too.

At first, Octane provided useful focus for the engine developers, highlighting areas that needed improvement. But, just as with SunSpider before it, Google has found that optimizations have been developed to boost Octane scores even when they hurt other scenarios. Once again, the desire to get the highest score possible has come at the expense of building a better scripting engine. With all browsers now fast at Octane (Edge is a little ahead of Chrome, which is a little ahead of Firefox), Google has chosen to retire the benchmark.

This habit of gaming benchmarks into uselessness is as old as benchmarking itself. Some benchmarks, such as the SPEC CPU integer and floating point benchmarks, have rules for which compiler optimizations are permitted; they have to be applicable to "a class of problems [...] larger than the SPEC benchmarks themselves" in an attempt to prevent compiler vendors from including optimizations that are good for SPEC and nothing else. Even so, extremely specific optimizations have slipped through in the past. But browser benchmarks, which have no rules governing which scores are and aren't "official," lack even this kind of control.

The best protection against this kind of gaming is to develop better benchmarks. Google has built infrastructure that lets it repeatably and consistently time the loading of entire Web pages (thus covering not just JavaScript performance but also HTML and CSS performance), which it uses to measure real-world performance on 25 popular sites and guide development. The Safari developers created JetStream to replace SunSpider, with a wider mix of tasks and a greater proportion of real applications. The Speedometer benchmark, also from the WebKit/Safari developers, is artificial, but Google has found that it corresponds well to the real-world performance of popular sites.

In spite of the problems, the desire to benchmark and have repeatable, objective measures of performance won't go away. As new benchmarks are developed, we'd expect the cycle to repeat itself: at first each one provides a useful target for improved performance, but then it becomes the primary goal. Real-world testing of the kind Google performs acts as a useful backstop, discouraging the company from doing anything that's detrimental to the Web at large, but the incentive to skew things will never disappear completely.