Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

This quote from Donald Knuth is now so famous that the phrase “premature optimization” has become a well-known derogatory term in software development. But what does “premature optimization” actually mean, and why is it generally considered so bad?

Optimization is considered premature when it’s performed before measurements are conducted to determine whether an application has any performance issues. Only after measurements show that there are indeed performance issues, and which parts of the application code cause them, should optimization be performed. There are two main downsides to optimizing before measuring:

1. Wasted time and effort, because optimizing noncritical parts of a program will not have any noticeable impact on overall performance. And since noncritical parts usually outnumber the critical parts, this is the likely outcome.

2. Reduced maintainability, because optimized code is often more complex, and even more convoluted, than code which has not been optimized. Such code is generally more difficult to read and comprehend, and therefore to debug and maintain.

At this point I might have concluded this post by simply reiterating Donald Knuth’s advice, but it turns out that there is an exception to the rule: JavaScript. For JavaScript premature optimization can be a Good Thing. In fact, it can be beneficial to optimize the entire code of JavaScript applications! To understand why that is the case, I need to explain how JavaScript compilers work, and what I mean by “optimizing JavaScript code.”

How JavaScript Compilers and Optimizers Operate

When Donald Knuth wrote that famous statement, most programming languages used Ahead Of Time (AOT) compilation. AOT compilers transform an application’s source code from a high-level language into low-level machine language before it’s packaged for execution. JavaScript, as we know, is distributed in its original source code form, and is compiled right before it’s executed. This mode of operation is known as Just In Time (JIT) compilation.

Whereas AOT compilers can generally take their time to generate highly tuned, performant machine code from the source code, JIT compilers don’t have this luxury. JIT compilers must perform the code transformation as quickly as possible, otherwise users will experience a lengthy delay when applications start executing. Because of this, JavaScript JIT compilers generally don’t optimize the code at all. Instead, they use inefficient representations, which can be generated quickly.

Another reason that JavaScript JIT compilers don’t tune up the code is that they lack sufficient information to do so. Many programming languages utilize explicit type specifications to inform compilers about the types of variables and expressions. This enables compilers to generate optimal code for the expressions. For example, the code generated for the expression a + b will vary significantly depending on whether a and b are integers, floating-point numbers, or strings. JavaScript code doesn’t provide any explicit type information, forcing the compiler to generate code that checks the types at runtime, every time the expression is executed.
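This type dependence is visible at the language level itself. Here’s a minimal sketch (plain JavaScript semantics, not engine internals): the same function body performs three different operations depending on the runtime types of its arguments.

```javascript
// The same "+" expression means different things at runtime, so without
// type information the engine must check the argument types on every call.
function add(a, b) {
  return a + b;
}

console.log(add(1, 2));         // 3: integer addition
console.log(add(0.5, 0.25));    // 0.75: floating-point addition
console.log(add("foo", "bar")); // "foobar": string concatenation
```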

The fact that JavaScript JIT compilers don’t optimize code doesn’t mean that modern JavaScript engines completely forgo optimizations. Instead of optimizing before app execution, JavaScript engines optimize the code while it’s running. In a way, JavaScript engines follow Donald Knuth’s advice: instead of trying to optimize everything in advance, they monitor the code during execution, identify the performance-critical parts (“hot spots”), and optimize just those parts. In addition, this monitoring provides the engines with information about the types of values contained in variables, as they occur at runtime. For example, an engine can determine that the expression a + b is always invoked with both a and b containing integer values, and generate optimal code for that scenario, omitting the runtime type checks.
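As an illustrative sketch of what the engine observes (exactly when optimization kicks in is engine-specific, so treat this as a model rather than a guarantee): a function that is always called with numbers stays “monomorphic”, which is what lets the optimizer specialize it and drop the type checks.

```javascript
// add() is hot (called a million times) and always sees (number, number),
// so an optimizing engine can specialize it to plain numeric addition.
function add(a, b) {
  return a + b;
}

let total = 0;
for (let i = 0; i < 1_000_000; i++) {
  total = add(total, 1); // the argument types never change
}
console.log(total); // 1000000
```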

It’s important to understand that not all JavaScript code can be optimized by the engine: certain code constructs can prevent code from being optimized, even if it’s identified as “hot”. Because code optimized by JavaScript engines can run much faster than unoptimized code, it’s very important to know how to write code so that it can be optimized. Indeed, to a great extent, optimizing JavaScript code means writing code that can be optimized by the engine. It turns out that the best way to enable JavaScript engine optimizations is to write good, clean JavaScript code. In particular, you should follow these guidelines:

Use Small Functions

It’s considered a programming best practice to write small functions, because small functions are easier to comprehend, test and debug. It turns out that small functions are also more likely to be optimized by JavaScript engines. Functions are the basic unit of optimization for current engines, and it’s not surprising that smaller functions are easier for optimizers to handle than larger ones. As a result, JavaScript engines give smaller functions higher priority for optimization, and functions that are too large may not be optimized at all, even if they’re hot.
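As a sketch of this guideline (the function names are mine, purely for illustration): rather than one large function that parses, computes, and formats, each step lives in its own small function, and each one is an independent unit the optimizer can handle.

```javascript
// Each small, single-purpose function can be analyzed, inlined,
// and optimized on its own.
function parsePrice(text) {
  return Number.parseFloat(text);
}

function applyTax(price, rate) {
  return price * (1 + rate);
}

function formatPrice(price) {
  return `$${price.toFixed(2)}`;
}

console.log(formatPrice(applyTax(parsePrice("10.00"), 0.2))); // "$12.00"
```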

Reuse Common Code

Code reuse is another programming best practice. Encapsulating common functionality in objects or functions makes it possible to develop it once and use it repeatedly. This improves both development velocity and correctness. When common code is implemented as a single instance, it’s much more likely to be identified as critical by the JavaScript optimizer. On the other hand, when there are multiple copies of the same code, each individual copy becomes less “hot”, and thus less likely to be optimized. And when common code is optimized, every caller instantly benefits.
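A small illustration (the clamp helper is a hypothetical example, not from any particular codebase): a single shared function called from many sites concentrates the “heat” in one place, whereas copy-pasted versions of the same logic would each stay lukewarm.

```javascript
// One shared helper: every call site contributes to making it "hot",
// and once it's optimized, all callers benefit.
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

const volumes = [0.5, 1.7, -0.2].map((v) => clamp(v, 0, 1));
console.log(volumes); // [ 0.5, 1, 0 ]
```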

Use Descriptive Variable Names

JavaScript optimizers don’t actually care about variable names, but they care a lot about variable types. This is because, as I explained above, being able to ascertain the runtime type of variables and expressions is key to optimizing the code. In my experience, a very good way to ensure that JavaScript variables always have the same type is to use each variable for a single, specific purpose. And the best way to ensure that variables are always used for the same purpose is to give them descriptive names that match that purpose. This is particularly true for projects that involve larger teams of developers.
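A sketch of the difference (the variable names are illustrative): reusing one vague name for values of different types makes that variable polymorphic, while one descriptive name per purpose keeps each variable’s type stable.

```javascript
// Vague name, reused for different purposes: "data" is first a string,
// then a number, so every use of it must be prepared for either type.
let data = "42";
data = Number(data);

// Descriptive names, one purpose each: every variable keeps a single type.
const priceText = "42";          // always a string
const price = Number(priceText); // always a number
console.log(price + 1); // 43
```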

Conclusion

The performance gains provided by JavaScript engine optimizers can be significant, on the order of 10x or more, so making sure they kick in is a critical aspect of optimizing JavaScript applications. The fact that, for JavaScript, a good way to enable the engine to optimize your programs is to write clean, structured code provides an extra strong incentive for good development practices. This is in contrast to some other programming languages where, as I explained above, speeding up programs often entails writing code that is more complex, and even convoluted. Since the best practices I mentioned (code reuse, small functions, and descriptive names) should be applied everywhere, there’s no need to wait for performance measurements before implementing them. Instead, you should write all your code this way from the get-go. In other words, for JavaScript this important type of optimization should definitely be done “prematurely”.