The Reality of JavaScript Performance

Speed Demon Turtle!

If you wanna know how to make JavaScript faster, this article is here to explain how little that matters in the real world.

I’m specifically talking about the average JavaScript developer. If you’re a data scientist, huge swaths of data are your life; I get it.

JavaScript Development is Different

In typical web applications, you’re dealing with 3–20 items at a time, so the performance difference between approaches is negligible.

Do you know the performance differences between JavaScript’s data structures? I know I don’t, though I do have some ideas. Thing is, I don’t come across large datasets very often in JavaScript. The majority of the time, I find them in legacy applications. Even then, there are usually much larger performance bottlenecks.

It’s possible you’re working with large datasets. But even in Node.js, you’re usually paginating data and letting your database handle the heavy lifting. Despite how rare it is to see more than a few items, I’ve come across plenty of people who really, really care about performance in JavaScript.

This one guy I knew worked in embedded systems for years. I’ve worked in them too. Performance, memory, output code size, everything’s important in those scenarios. But he liked to bring that mindset into our frontend code. Because of his need for speed, he’d created a lot of unreadable code which was eventually gutted and rewritten.

Another situation was when I started writing functional code around 2016. I remember being in a code review and having people complain about using the map function twice in a row.

It was an array of 3 items and using two map statements made it easier to read, but there was concern in that review it’d slow down execution. The difference in speed was probably microseconds, but it was important to those devs.
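The code in question looked roughly like this (the data and transformation steps here are invented for illustration): on a three-item array, fusing the two passes into one saves microseconds at best, while costing you two small, nameable steps.

```javascript
// Hypothetical data — invented for illustration, not the actual review code.
const users = [
  { first: 'Ada', last: 'Lovelace' },
  { first: 'Grace', last: 'Hopper' },
  { first: 'Alan', last: 'Turing' },
];

// Two passes: each step has a name and is easy to read in isolation.
const fullName = (u) => `${u.first} ${u.last}`;
const shout = (s) => s.toUpperCase();
const readable = users.map(fullName).map(shout);

// One pass: marginally fewer iterations, but the steps are fused together.
const fused = users.map((u) => `${u.first} ${u.last}`.toUpperCase());
```

Both produce identical output; the only difference is how many times the three-element array gets walked.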

In almost all cases, I’m worried about code readability and maintainability. I care about how easy it is for any other developer to come in and figure out what’s going on. I care about 6 months down the road when I might have to refactor or fix a bug in this code.

In the JavaScript world, my opinion is that performance should be an afterthought. Get it working, make it maintainable, then worry about improving it. Because “improving” JavaScript could very well make the code unreadable.

Performance in the Moment

Performance in your application is heavily dependent on the JavaScript interpreter. For the most part, code you wrote in the past will get faster with every interpreter update.

But it could be the opposite as well. You could be writing your code in the janky method X because you heard it’s faster.

When V8 gets an upgrade, suddenly method Y is finally as fast as it should be, but now you have all this legacy code written in the janky way. Unless you really needed that performance, you’ve just made yourself a legacy application.

Don’t believe me? Without going into Node 10 vs Node 8’s performance numbers, let’s go into a real-world example that nabbed me years ago.

Let’s look at this StackOverflow answer from 2010:

https://stackoverflow.com/questions/2781686/javascript-string-concatenation-slow-performance-array-join/2781854#2781854

The takeaway there: across most browsers at the time, Array.join() was faster.

And this article also from 2010:

https://www.sitepoint.com/javascript-fast-string-concatenation/

Chrome 6.0: standard string appends are usually quicker than array joins, but both complete in less than 10ms.

Opera 10.6: again, standard appends are quicker, but the difference is marginal — often 15ms compared to 17ms for an array join.

Firefox 3.6: the browser normally takes around 30ms for either method. Array joins usually have the edge, but only by a few milliseconds.

IE 8.0: a standard append requires 30ms, whereas an array join is more than double — typically 70ms.

Safari 5.0.1: bizarrely, a standard append takes no more than 5ms but an array join is more than ten times slower at 55ms.

Is string concatenation really that slow compared to Array.prototype.join? Nope. Not anymore. But you might still see legacy code written that way because of articles like that one. Why?

IE7 and below use a concatenation handler that repeatedly copies strings and causes an exponential increase in time and memory usage. By comparison, the array join completes in under 200ms — it’s more than 800 times faster.
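For context, the two approaches being benchmarked look like this (the variable names are mine):

```javascript
const parts = ['one', 'two', 'three'];

// Plain string concatenation: in modern engines this is typically
// just as fast as the join approach, often faster.
let concatenated = '';
for (const part of parts) {
  concatenated += part + ' ';
}
concatenated = concatenated.trim();

// The Array.prototype.join() workaround the old advice recommended,
// built to dodge IE7's quadratic string-copy behavior.
const joined = parts.join(' ');
```

Same result either way; the old advice was purely about working around one browser’s string implementation.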

Years ago, I screwed up a code project because I chose the “faster” join method. My code wasn’t readable at all, and it ended up slower anyway.

Because of IE7, I used to have to do weird tricks to improve performance. But guess what? None of that performance mattered in the real world; at least, not with the apps I was working on.

Such a Hypocrite

I actually ran into a scenario recently where I needed a faster execution time. I used transducers for that purpose and wrote a few articles about JavaScript’s processing speed. That project had a list of 20K items and definitely caused a performance bottleneck.
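The core idea behind a transducer, very roughly sketched (this is a simplified illustration, not the code from that project): compose your transformation steps so a large list is walked once, instead of once per step.

```javascript
// Naive pipeline: three separate passes over the data,
// allocating an intermediate array at each step.
const naive = (items) =>
  items
    .filter((n) => n % 2 === 0)
    .map((n) => n * n)
    .filter((n) => n > 4);

// Transducer-style single pass: the same steps fused into one reduce,
// so 20K items are only iterated once with no intermediate arrays.
const singlePass = (items) =>
  items.reduce((acc, n) => {
    if (n % 2 !== 0) return acc; // filter: evens only
    const sq = n * n;            // map: square
    if (sq > 4) acc.push(sq);    // filter: keep values over 4
    return acc;
  }, []);
```

Both functions return the same output; the single-pass version just does less work per element on large lists.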

But didn’t I say you wouldn’t come across this kinda stuff?

Instead of solving it myself, the real solution to this one-of-a-kind situation was to get a pre-formatted JSON file.

That way, I didn’t have to process the data, and the company could statically host it. Even if it took a minute to create the file on their end, that’s a worker machine’s time and resources, not every client’s machine every time they open their browser.

I wrote my own solution as a temporary measure, but once that solution was proven, it made sense for them to take over the formatting.

So the real solution was to have someone else write the code. A bit of a cop-out, but it’s also what I’ve come across in the industry: “let the backend devs manage their own data”.

DOM Rendering

I think a lot of performance issues come from not addressing asynchronous code. That’s where I’ve found the real performance bottlenecks in my career.

Things like:

Clicking a button but not giving any indication something’s happening in the background (looking at you Steam signup process!).

Having a text input render, but not allowing the user to click and start typing as soon as it’s visible.

Allowing your application to run concurrently fixes user-perceived performance issues. For instance, React’s Async Mode, now called Concurrent Mode, allows rendering changes to certain user-interactive components at a higher priority.
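You can get a taste of the same idea in plain JavaScript by splitting heavy work into chunks and yielding between them, so clicks and typing stay responsive — a rough sketch with invented names, not React’s actual mechanism:

```javascript
// Split a list into fixed-size chunks so heavy work
// can be scheduled a piece at a time.
const chunk = (items, size) => {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
};

// In a browser, you'd then process one chunk per tick,
// yielding to the event loop in between, e.g.:
//
//   for (const piece of chunk(bigList, 100)) {
//     await new Promise((resolve) => setTimeout(resolve));
//     render(piece); // hypothetical render function
//   }
```

The point isn’t the helper itself — it’s that user-perceived speed comes from never blocking the main thread for long, not from shaving microseconds off a loop.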

Loading data faster doesn’t help since it’s a deeper, fundamental issue: the DOM is slow. In fact, any “print” command is going to be slow. Think outputting to the console or even a real paper printer.

Conclusion

Does processing speed matter? Only in very specific scenarios. Figure out where those are and work on those sections. Don’t make your entire codebase unreadable in an attempt at securing a few extra microseconds of execution time.

Remember, you have to hire other developers to work on these projects and you want them happy! Don’t waste your time worrying about performance when you can spend that time improving your user experience instead.

More Reads

If you wanna read more on JavaScript performance, check out these articles:

If you liked what you read, please check out my other articles on similar eye-opening topics: