This is post #14 of the series dedicated to exploring JavaScript and its building components. In the process of identifying and describing the core elements, we also share some rules of thumb we use when building SessionStack, a JavaScript application that needs to be robust and highly performant to help users see and reproduce their web app defects in real time.


Overview

We all know how messy things can get when everything ends up in one big blob of JavaScript. Not only does this piece of code need to be transferred over the network, but it also has to be parsed, compiled into bytecode, and finally executed. In the previous posts, we discussed topics such as the JS engine, the runtime, and the call stack, as well as the V8 engine that's mainly used by Google Chrome and Node.js. They all play a vital role in the whole JavaScript execution process. The topic we're introducing today is no less important: we'll see how most JavaScript engines parse text into something that's meaningful to the machine, what happens after that, and how we as web developers can turn this knowledge to our advantage.

How programming languages work

So let's take a step back and look at how programming languages work in the first place. No matter what programming language you use, you'll always need some piece of software that can take the source code and make the computer actually do something. This software can be either an interpreter or a compiler. Whether you're using an interpreted language (JavaScript, Python, Ruby) or a compiled one (C#, Java, Rust), there's always going to be one common part: parsing the source code as plain text into a data structure called an abstract syntax tree (AST). Not only do ASTs present the source code in a structured way, but they also play a critical role in semantic analysis, where the compiler validates the correctness and proper usage of the program and the language elements. Later on, the ASTs are used to generate the actual bytecode or machine code.

AST applications

ASTs are not used only in language interpreters and compilers. They have multiple applications in the computer world. One of the most common is static code analysis. Static analyzers don't execute the code they're given as input, yet they still need to understand its structure. For example, you may want to implement a tool that finds common code structures so that you can refactor them and reduce duplication. You might be able to do this with string comparison, but the implementation would be very basic and limited. Naturally, if you're interested in implementing such a tool, you don't need to write your own parser. There are many open-source implementations that are fully compatible with the ECMAScript specification; Esprima and Acorn, to name a couple. There are also many tools that can help with the output produced by the parser, namely the ASTs.

ASTs are also widely used in implementing code transpilers. For example, you might want to implement a transpiler that converts Python code to JavaScript. The basic idea is that you would use a Python parser to generate an AST, which you would then use to generate JavaScript code back. You might ask how this is even possible. The thing is that ASTs are just a different way of representing a language. Before parsing, the code is represented as text that follows the rules that make up the language. After parsing, it's represented as a tree structure containing exactly the same information as the input text. Therefore, we can always do the opposite step and go back to a textual representation.

JavaScript parsing

So let’s see how an AST gets built. We have a simple JavaScript function as an example:
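The original snippet didn't survive here, so assume a minimal, hypothetical function like this one:

```javascript
// A simple function the parser will turn into an AST.
function add(a, b) {
  return a + b;
}

console.log(add(2, 3)); // 5
```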

The parser will produce the following AST.
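As an illustration (the exact shape varies by parser), a simplified AST for a function like `function add(a, b) { return a + b; }` can be modeled as a plain object. This is a hand-written sketch, not real parser output; real output from Esprima or Acorn contains far more detail:

```javascript
// Simplified, hand-written sketch of an AST for:
//   function add(a, b) { return a + b; }
const ast = {
  type: "Program",
  body: [
    {
      type: "FunctionDeclaration",
      id: { type: "Identifier", name: "add" },
      params: [
        { type: "Identifier", name: "a" },
        { type: "Identifier", name: "b" }
      ],
      body: {
        type: "BlockStatement",
        body: [
          {
            type: "ReturnStatement",
            argument: {
              type: "BinaryExpression",
              operator: "+",
              left: { type: "Identifier", name: "a" },
              right: { type: "Identifier", name: "b" }
            }
          }
        ]
      }
    }
  ]
};

console.log(ast.body[0].id.name); // "add"
```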

Note that for visualization purposes, this is a simplified version of what the parser would produce. The actual AST is much more complex. The idea here, however, is to get a feel for the first thing that happens to the source code before it gets executed. If you want to see what the actual AST looks like, you can check AST Explorer. It's an online tool into which you paste some JavaScript, and it outputs the AST for that code.

Why do I need to know how the JavaScript parser works, you might ask? After all, it should be the browser's responsibility to make it work. And you are right, sort of. The graph below shows the total time allocation to the different steps in the JavaScript execution process. Take a close look and see if you find anything interesting.

Did you see it? Take a closer look. On average, a browser spends roughly 15% to 20% of the total execution time parsing the JavaScript. And I didn't just come up with these numbers: they are stats from real-world applications and websites that utilize JavaScript one way or another. Now, 15% might not seem like a lot to you, but trust me, it is. A typical SPA loads about 0.4MB of JavaScript, and it takes the browser approximately 370ms to parse it. Again, you might say, well, that's not much by itself. Bear in mind, though, that this is only the time needed to parse the JavaScript code into ASTs. It doesn't include the execution itself or any of the other processes taking place during a page load, such as CSS and HTML rendering. And all of this refers to desktop only. Once we get to mobile, things quickly get more complicated. Time spent on parsing can often be two to five times higher on phones than on desktop.

The above graph shows the parsing time of a 1MB bundle of JavaScript across mobile and desktop devices of different classes.

What's more, web apps are getting more complex by the minute as more business logic moves to the client side to provide a more native-like user experience. You can easily see how much this affects your app/website. All you need to do is open the browser dev tools and measure the amount of time spent on parsing, compiling, and everything else that happens in the browser until the page is fully loaded.

Unfortunately, there are no dev tools on mobile browsers. No worries, though: this doesn't mean there's nothing you can do about it. This is why tools like DeviceTiming exist. They can help you measure parsing and execution times for scripts in a controlled environment. DeviceTiming works by wrapping local scripts with instrumentation code so that each time your pages are hit from different devices, you can locally measure the parsing and execution times.

The good thing is that JavaScript engines do a lot to avoid redundant work and get more optimized. Here are a few things that engines do across major browsers.

V8, for example, does script streaming and code caching. Script streaming means that async and deferred scripts get parsed on a separate thread as soon as the download begins. This means that parsing finishes almost immediately after the script is downloaded, resulting in pages loading about 10% faster.

The JavaScript code is usually compiled into bytecode on each page visit. This bytecode, however, is then discarded once the user navigates to another page, because the compiled code depends a lot on the state and context of the machine at compilation time. This is where Chrome 42 introduced bytecode caching: a technique that stores the compiled code locally, so that when the user returns to the same page, steps such as downloading, parsing, and compiling can all be skipped. This allows Chrome to save about 40% on parsing and compilation time. Plus, it also saves battery life on mobile devices.

In Opera, the Carakan engine could reuse the compiler output of another program that was recently compiled. There was no requirement that the code come from the same page or even the same domain. This caching technique is actually very effective and can skip the compilation step entirely. It relies on typical user behavior and browsing scenarios: whenever the user follows a certain user journey in the app/website, the same JavaScript code gets loaded. However, the Carakan engine has long since been replaced by Google's V8.

The SpiderMonkey engine used by Firefox doesn't cache everything. It can transition into a monitoring stage where it counts how many times a given script is executed. Based on this count, it determines which parts of the code are hot and need to be optimized.

Obviously, some engines decide not to do anything. Maciej Stachowiak, the lead developer of Safari, states that Safari doesn't do any caching of the compiled bytecode. It's something they have considered but haven't implemented, since code generation is less than 2% of the total execution time.

These optimizations do not directly affect the parsing of the JavaScript source code but they definitely do their best to completely skip it. What can be a better optimization than not doing it altogether?

There are many things we can do to improve the initial loading time of our apps. We can minimize the amount of JavaScript we're shipping: less script means less parsing and less execution. To do this, we can deliver only the code required for a specific route instead of loading one big blob of everything. The PRPL pattern, for example, preaches this type of code delivery. Alternatively, we can check our dependencies and see if there's anything redundant that does nothing more than bloat our codebase. These things deserve a topic of their own, however.

The goal of this article, however, is to discuss what we as web developers can do to help the JavaScript parser do its job faster. And there are things we can do. Modern JavaScript parsers use heuristics to determine whether a certain piece of code is going to be executed immediately or whether its execution will be postponed until some time in the future. Based on these heuristics, the parser does either eager or lazy parsing. Eager parsing runs through the functions that need to be compiled immediately. It does three main things: it builds the AST, builds the scope hierarchy, and finds all syntax errors. Lazy parsing, on the other hand, is used only on functions that don't need to be compiled yet. It doesn't build an AST and doesn't find all syntax errors. It only builds the scope hierarchy, which saves about half the time compared to eager parsing.

Clearly, this is not a new concept. Even browsers like IE 9 support this type of optimization, albeit in a rather rudimentary way compared to how today's parsers work.

So let’s see an example of how this works. Say we have some JavaScript which has the following code snippet:
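The snippet itself isn't preserved here, but based on the description that follows, it would look like this:

```javascript
function foo(x) {
  return x + 10;
}

function bar(x, y) {
  return x + y;
}

console.log(bar(40, 2)); // 42
```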

Just like in the previous example, the code is fed into the parser which does syntactic analysis and outputs an AST. So we have something along the lines of:

Function declaration of foo which accepts one argument (x). It has one return statement. The function returns the result of the + operation over x and 10.

Function declaration of bar which accepts two arguments (x and y). It has one return statement. The function returns the result of the + operation over x and y.

Make a function call to bar with two arguments 40 and 2.

Make a function call to console.log with one argument the result of the previous function call.

So what just happened? The parser saw a declaration of the foo function, a declaration of the bar function, a call of the bar function, and a call of the console.log function. But wait a minute… there's some extra work done by the parser that's completely irrelevant: the parsing of the foo function. Why is it irrelevant? Because the function foo is never called (or at least not at that point in time). This is a simple example and might seem unusual, but in many real-world apps a large share of the declared functions are never called.

Here, instead of parsing the foo function, we can just note that it's declared, without specifying what it does. The actual parsing takes place when necessary, just before the function is executed. And yes, lazy parsing still needs to find the whole body of the function and make a declaration for it, but that's it. It doesn't need the syntax tree because the function isn't going to be processed yet. Plus, it doesn't allocate memory from the heap, which usually takes up a fair amount of system resources. In short, skipping these steps brings a big performance improvement.

So in the previous example, the parser would actually do something like the following.
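Sketched as comments on the same hypothetical snippet, the lazy approach would roughly be:

```javascript
// foo is never called at this point, so a lazy parser only records that it
// is declared and where its body starts and ends; no AST is built for it yet.
function foo(x) {
  return x + 10; // body skipped during the initial parse
}

// bar is called immediately below, so it gets parsed eagerly.
function bar(x, y) {
  return x + y;
}

console.log(bar(40, 2)); // 42
```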

Note that the foo function declaration is acknowledged but that’s it. Nothing more has been done to go into the body of the function itself. In this case, the function body was just a single return statement. However, as in most real-world applications, it can be much bigger, containing multiple return statements, conditionals, loops, variable declarations and even nested function declarations. And this all would be a complete waste of time and system resources since the function will never be called.

It's a fairly simple concept, but in reality the implementation is far from simple. The example we showed here is definitely not the only case. The same approach applies to functions, loops, conditionals, objects, etc. Basically, everything that needs to be parsed.

For example, here’s one pretty common pattern for implementing modules in JavaScript.
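The pattern in question isn't reproduced here, but the classic IIFE-based module looks roughly like this (the names are illustrative):

```javascript
// The "module" pattern: an immediately-invoked function expression (IIFE).
// The wrapping parentheses signal to the parser that the function will run
// right away, so it is parsed eagerly.
var counterModule = (function () {
  var count = 0; // private state, hidden inside the closure

  return {
    increment: function () { count += 1; },
    value: function () { return count; }
  };
})();

counterModule.increment();
console.log(counterModule.value()); // 1
```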

This pattern is recognized by most modern JavaScript parsers and is a signal that the code inside should be parsed eagerly.

So why don't parsers always parse lazily? If something is parsed lazily but then has to be executed immediately, this actually makes it slower: there's a single lazy parse followed by an eager parse right after it. That results in a 50% slowdown compared to just parsing the code eagerly in the first place.

Now that we have a basic understanding of what's happening backstage, it's time to think about what we can do to give the parser a hand. We can write our code in such a way that functions are parsed at the right time. There's one pattern recognized by most parsers: wrapping a function in parentheses. This is almost always a positive signal for the parser that the function is going to be executed immediately. If the parser sees an opening parenthesis immediately followed by a function declaration, it will eagerly parse the function. We can help the parser by explicitly declaring a function as one that is going to be executed immediately.

Let’s say we have a function named foo.
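For example (the body here is just a placeholder):

```javascript
function foo() {
  // some work that isn't needed right away
  return 42;
}
```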

Since there's no obvious sign that the function is going to be executed immediately, the browser is going to parse it lazily. However, if we know that the function will run right away, that's not what we want, so we can do two things.

First, we store the function in a variable:
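A hypothetical version of this, keeping the same placeholder body:

```javascript
var foo = function foo() {
  // some work that isn't needed right away
  return 42;
};
```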

Note that we kept the name of the function between the function keyword and the opening parenthesis before the function arguments. This is not necessary, but it's recommended, since in the case of a thrown exception the stack trace will contain the actual name of the function instead of just <anonymous>.

The parser is still going to do a lazy parse. This can be prevented by one small detail: wrapping the function in parentheses.
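With the wrapping parentheses added, the same placeholder becomes:

```javascript
var foo = (function foo() {
  // some work that isn't needed right away
  return 42;
});

console.log(foo()); // 42
```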

At this point, when the parser sees the opening parenthesis before the function keyword, it will immediately do an eager parse.

This can be rather difficult to manage manually, as we'd need to know in which cases the parser decides to parse the code lazily or eagerly. Also, we'd need to spend time thinking about whether a certain function will be invoked immediately or not. We surely don't want to do this. Last but not least, it would make our code harder to read and understand. Luckily, tools like Optimize.js come to the rescue. Their sole purpose is to optimize the initial loading time of the JavaScript source code. They do static analysis of your code and modify it in such a way that the functions that need to be executed first are wrapped in parentheses, so the browser can eagerly parse them and prepare them for execution.

So we’re coding as usual and there’s a piece of code that looks like this:
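The original snippet isn't preserved here; assume a simple immediately-invoked function expression along these lines:

```javascript
var greeting;

// The wrapping parentheses mark this function as immediately invoked,
// so the parser handles it eagerly.
(function () {
  greeting = "Hello, World!";
})();

console.log(greeting); // Hello, World!
```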

Everything seems fine, working as expected and it’s fast because there’s an opening parenthesis before the function declaration. Great. Of course, before going into production, we need to minify our code to save bytes. The following code is the output of the minifier:
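Assuming the input was a simple IIFE that sets a greeting variable (illustrative, not the article's original code), a minifier such as UglifyJS might emit:

```javascript
// Minified form: the parentheses around the function are gone, replaced by
// a leading "!" that keeps the statement a valid expression.
var greeting;!function(){greeting="Hello, World!"}();console.log(greeting);
```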

It seems OK. The code works as before. There's something missing, though. The minifier has removed the parentheses wrapping the function and instead placed a single exclamation mark before it. This means the parser will skip the function and do a lazy parse. On top of that, to be able to execute the function, it will do an eager parse right after the lazy one. This all makes our code run slower. Luckily, we have tools like Optimize.js that do the hard work for us. Passing the minified code through Optimize.js produces the following output:
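For the same illustrative snippet, Optimize.js restores the parentheses around the invoked function while keeping the code minified:

```javascript
// The parentheses are back around the function, so the parser eagerly
// parses it again, and the code stays minified.
var greeting;!(function(){greeting="Hello, World!"})();console.log(greeting);
```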

That’s more like it. Now we have the best of both worlds: the code is minified and the parser is properly identifying which functions need to be parsed eagerly and which lazily.

Precompilation

But why can't we do all this work on the server side? After all, it's much better to do it once and serve the results to the client than to force each client to do the job every time. Well, there's an ongoing discussion about whether engines should offer a way to execute precompiled scripts so that this time isn't wasted in the browser. In essence, the idea is to have a server-side tool that generates bytecode, which we'd only need to transfer over the wire and execute on the client side. Then we'd see some major differences in start-up time. It might sound tempting, but it's not that simple. It might even have the opposite effect, since the bytecode would be larger than the source and would most probably need to be signed and processed for security reasons. The V8 team, for example, is working internally on avoiding reparsing, so that precompiling might not actually be that beneficial.

Here are a few tips you can follow to serve your app to users as fast as possible:

Check your dependencies. Get rid of everything that’s not needed.

Split your code into smaller chunks instead of loading one big blob.

Defer the loading of JavaScript when possible. You can load only the required pieces of code based on the current route.

Use dev tools and DeviceTiming to find out where the bottleneck is.

Use tools like Optimize.js to help the parser to decide when to parse eagerly and when lazily.

SessionStack is a tool that recreates visually everything that happened to the end users at the time they experienced an issue while interacting with a web app. The tool doesn’t reproduce the session as an actual video but rather simulates all the events in a sandboxed environment in the browser. This brings some implications, for example in scenarios where the codebase of the currently loaded page becomes big and complex.

The above techniques are something we recently started to incorporate into SessionStack's development process. Such optimizations allow SessionStack to load faster. The sooner SessionStack can free up browser resources, the more seamless and natural the user experience will be when loading and watching user sessions.

There is a free plan if you’d like to give SessionStack a try.
