# Multithreading in Node.js

March 14th 2017

When I wrote one of my first projects for ZEIT somewhere in the middle of 2016, I was doing a lot of synchronous operations, although I already had a transpilation setup for `async` and `await` in place. The reason was that I simply didn't see a difference between the two.

Then, a few days later, when it was time to publish the package, rauchg wrote me on Slack saying that I should write more asynchronous code because I would otherwise be making "the concurrency of the process plummet".

Back then, I simply did what he told me and immediately noticed a slight performance boost. From there on, I never used any native synchronous functions (or packages) again and went completely asynchronous.

However, I didn't manage to ask him why it's like that. We were shipping a lot of stuff at that time and I simply forgot about it.

Now, nearly a year later, I came across this topic again because native support for both keywords has landed, and I spent a lot of time thinking about how we could take advantage of that at ZEIT. So I collected my thoughts and we had a detailed discussion about why everyone should `await` asynchronous functions rather than using synchronous ones (like `fs.statSync`).

The reason why I'm writing this post is that this newly acquired knowledge seems very valuable to me, since it brings me closer to understanding the backbone of Node.js and allows me to improve the performance of my code drastically.

Therefore, I thought making my learning progress public could help others who are in the same position profit from this knowledge as well. At the same time, it helps me strengthen my own understanding of it.

When I first heard that statement, I got a little confused, because I initially thought the two words ("concurrent" and "parallel") meant the same thing.

In the context of computing processes, however, I learned that this assumption is not always true:

While parallel operations both start at the same time and literally run simultaneously (which is only possible with multi-core CPUs), concurrent ones can each make progress regardless of the other, but cannot run at the same moment (which is suitable for single-core CPUs).

Let me clarify that with an example:

```js
setInterval(() => {
  console.log('Interval dispatched')
}, 1000)

loadDataSync()
console.log('Data downloaded')
```

As you can see, I'm handling two tasks: the first three lines introduce an interval whose callback runs every 1000 milliseconds (one second), and the last line calls an arbitrary function that does something synchronously.

Now the interesting part:

Although the code for starting the interval is executed before the synchronous function is called, the callback inside `setInterval()` won't run before `loadDataSync()` has returned something.

This is because of Node/JavaScript's concurrent nature: its backbone is a single-threaded event loop, so it doesn't allow operations to run in parallel out of the box.

Or as Panu from Byte Archer puts it:

> The event-loop repeatedly takes an event and fires any event handlers listening to that event, one at a time. No JavaScript code is ever executed in parallel.
>
> As long as the event handlers are small and frequently wait for yet more events themselves, all computations (for example fulfilling and serving a HTTP request) can be thought of as advancing one small step at a time - concurrently. This is beneficial in web applications where the majority of the time is spent waiting for I/O to complete. It allows a single Node.js process to handle huge amounts of requests.

So, technically, nothing guarantees that intervals in Node.js will always fire at the exact times you've defined. Instead, the execution of the callback gets enqueued at a certain point in time, but will only start once the thread isn't handling any other operation.

As an example, the `loadDataSync()` function call shown in the snippet above might take, let's say, five seconds to download some data. This would mean that the callback of `setInterval()` gets enqueued after 1000 milliseconds, but isn't actually executed until the five seconds have passed.
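If you want to try this yourself, here's a minimal stand-in for `loadDataSync()` (an assumption on my part, since the real function could be anything synchronous) that busy-waits instead of actually downloading:

```javascript
// Busy-wait for the given number of milliseconds, blocking the
// event loop the entire time
const busyWaitSync = ms => {
  const end = Date.now() + ms
  while (Date.now() < end) {
    // Spin: no timer callback can run on this thread meanwhile
  }
  return 'data'
}

// Simulates a five-second synchronous "download"
const loadDataSync = () => busyWaitSync(5000)
```

With this in place, "Data downloaded" gets logged after five seconds, and only then do the queued interval callbacks get a chance to run.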

Because 1000 milliseconds fit into five seconds (guess what) five times, the callback execution would get enqueued that often in our example. In turn, you'll get the message logged to the console five times, immediately after "the data was downloaded":

# `await` To the Rescue!

To solve this problem, we need to make the operation that pulls the data non-blocking. At the moment, it's still synchronous and is therefore making the process' performance plummet.

Here's how it looks with `await`:

```js
setInterval(() => {
  console.log('Interval dispatched')
}, 1000)

// Note: `await` is only valid inside a function marked as `async`,
// so imagine this snippet running inside one
await loadData()
console.log('Data downloaded')
```

To make this work, you would also have to rewrite your synchronous function into an asynchronous one (either a function that returns a Promise or one prefixed with `async`).

To make my point clear, I came up with an arrow function that simulates the case of loadData() taking 5000 milliseconds to finish:

```js
const loadData = () => new Promise(resolve => {
  setTimeout(resolve, 5000)
})
```

Now the data is being downloaded concurrently with the interval's callback getting executed every 1000 milliseconds. The operation can now be called "non-blocking". In turn, our script just got much faster:

As you can see, even though the function now acts asynchronously, the interval output never shows up after exactly 1000 milliseconds. It's always a slightly different number.

That's because the callback WILL get triggered after that time, but Node.js takes some time to actually execute the code inside it. This, however, is as close as we can get to raw performance using `async` and `await`.
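You can observe that drift directly by comparing the time a callback was scheduled for with the time it actually fires (a small sketch of my own):

```javascript
const delay = 1000
const scheduled = Date.now()

setTimeout(() => {
  const elapsed = Date.now() - scheduled
  // `elapsed` is never exactly 1000: the event loop needs a moment
  // to pick the callback up and run it
  console.log(`Fired after ${elapsed} ms (${elapsed - delay} ms late)`)
}, delay)
```

The busier the thread is with other work, the larger that difference gets.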

However, speeding up our code to the maximum isn't quite so easy. There's still a lot of room left for improvement!

Although we've fixed the problem of blocking the code by using asynchronous operations (a.k.a. "unblocking it"), part of it still runs concurrently.

To understand this, we need to dive a little deeper:

In our example, we're handling two operations: dispatching an interval every 1000 milliseconds and downloading data.

Now the tricky part:

The code I've shown you above introduces a function call of `loadData()` preceded by the `await` keyword. As indicated by the name, it could be used for loading some data from a certain origin (like the web).

This means that we're dealing with a special kind of operation. Why? Because it won't happen entirely inside that single thread we've talked about.

Instead, actions like fetching raw data and such are processed directly by the kernel (which can be thought of as a separate "thread" or "process", independent from the thread the interval is running in).

Only the remaining "sub-operations" required for loading the data (like parsing the JSON response, which is mostly blocking) are left to Node.js and therefore run in that single-threaded event loop.

In turn, part of our code still runs concurrently. Both the processing of the response received from the kernel and the interval share the same thread and are therefore not able to run truly in parallel. Instead, they're basically just taking turns (that's the essence of the term "concurrency").

A process can contain multiple threads. Each of these threads can only handle one operation at a time. As a consequence, running the two operations in parallel would require creating two threads: one for the interval and one for downloading the data. Right?

Yep, that's correct.

But sadly, a Node.js process only comes with a single thread out of the box (as mentioned before). This means we can't increase the number of threads and will therefore only ever be able to handle a single operation at a time.

As a result, we need to extend its default behavior if we want to run things truly in parallel. And that's where the native `cluster` module comes in:

Since we can only have one operation per thread (and therefore per process, in the case of Node.js), we need to create multiple processes to achieve our goal of parallelism. But that's not very hard.

Here's an example of how this could look:

```js
const cluster = require('cluster')

if (cluster.isMaster) {
  setInterval(() => {
    console.log('Interval dispatched')
  }, 1000)

  cluster.fork()
} else {
  // Wrapped in an async IIFE so that `await` is valid here
  (async () => {
    await loadData()
    console.log('Data downloaded')
  })()
}
```

Now we're taking advantage of `cluster`'s built-in `.fork()` method to make a copy of the current process. In addition, we're checking whether we're still on the main process or on a clone: if we are, we create the interval, and if we're not, we load the data.

The result of these few lines of code is two operations that actually run in parallel. They're not started at the exact same time, but they both run in separate processes. In turn, they can both make progress at the same time.

If adding that module to your project wasn't easy enough, we actually made multithreading even more straightforward by equipping `now` with a really neat scaling algorithm, which seamlessly spawns multiple copies of your project without you even having to touch any code.

Hence, you don't even need `cluster` if your project is running on our platform. Just ensure that you're applying this technique wherever possible.

By now, you should understand why `await` is a much better idea than synchronous operations, and what to do if that's not enough.

I hope this post helped you sharpen your mindset for choosing the best direction when it comes to achieving maximum performance in your future projects.

Big 🤗 to Olli and Guillermo for taking the time to clear up the confusion I had in my mind about this topic (+ proofreading this essay) and Matthias for the cute cover image!

I'm truly happy to have such amazing mentors!