Mon, 12 Mar 2018 15:16:25 GMT

Though JavaScript has, at this moment, more developers in its community than any other language on earth, the community is also full of misconceptions, shallow knowledge and bad assumptions.

In this article we have come up with a list of tips that can make your JavaScript application faster.

This article is not about dev-ops, so it won't discuss things like minifying your files, setting up Redis, or using Docker and Kubernetes to make your application performant. This article is about writing JavaScript in a way that performs better.

I am mostly discussing JavaScript in general, but a few of the points apply only to node.js, and a few only to client-side JavaScript. However, as the majority of JavaScript developers are full-stack these days, I assume you can follow all of them easily.

No global vars please

This is one of the easiest tips. Many people are already familiar with it, but few know the reason why.

You can imagine every single scope in your application as a node in a huge tree. Whenever you use a variable, JavaScript first searches for it in the immediate node (the function's own scope). If it is not found, it moves to the parent node in search of the same name. If it is not found there either, it moves to the parent of that node, and this process continues until it reaches the root, i.e. the global scope.

So whenever you access a global variable from deep inside your code, the engine traverses this whole scope chain to find it, which is costly.

The second point is that global variables are not cleared by the garbage collector for the lifetime of your program. So don't just keep adding more and more global variables.
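As a sketch of the workaround: if a hot piece of code must read a global, look it up once and cache it in a local variable (the names here are hypothetical):

```javascript
// Imagine this object lives at global scope.
var config = { factor: 2 };

function scaleAll(values) {
  // One walk up the scope chain, instead of one per iteration.
  var factor = config.factor;
  var out = [];
  for (var i = 0; i < values.length; i++) {
    out.push(values[i] * factor);
  }
  return out;
}

console.log(scaleAll([1, 2, 3])); // → [ 2, 4, 6 ]
```

The local `factor` is resolved in the function's own scope on every loop iteration, so the expensive lookup happens only once.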

Only homogeneous Array

You should not push elements of different kinds (different datatypes) into the same array unless it is absolutely necessary.

The reason for this has been well described in this in-depth article about arrays. In short, when you use a heterogeneous array, JavaScript (or rather V8) cannot keep using a proper array internally and degrades it to a dictionary-like structure, which makes element access a very costly operation compared to a homogeneous array.
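A minimal illustration of the difference (the variable names are just for the example):

```javascript
// Packed, homogeneous arrays let the engine keep a fast internal layout.
var fast = [1, 2, 3, 4];         // all small integers
var alsoFast = [1.5, 2.5, 3.5];  // all doubles

// Mixing datatypes forces a generic, dictionary-like representation.
var slow = [1, 'two', { three: 3 }, true];

// If you really need mixed records, an array of same-shaped objects
// is usually friendlier to the engine than one mixed array:
var records = [
  { id: 1, label: 'one' },
  { id: 2, label: 'two' }
];
```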

Put common codes in a function

Before saying anything else, let me tell you a bit about Just In Time compilers.

These compilers are used inside V8 and other JavaScript engines to optimise hot code. Now what is hot code? Hot code is a function or object which is used continuously. V8 stores a compiled binary version of such a function or object as long as its signature does not change. This gives you a huge performance boost.

Now, when you put your common code in a function and do not change its contract (the parameters and their types), V8 will compile it down and optimise it. So following this programming principle will give you a performance boost for sure.
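For example, keeping a hot function's argument types stable lets the JIT keep its optimised version (a sketch):

```javascript
// A small hot function: as long as it always receives numbers,
// the engine can keep a fast compiled version of it.
function add(a, b) {
  return a + b;
}

var sum = 0;
for (var i = 0; i < 10000; i++) {
  sum = add(sum, i); // monomorphic call site: always (number, number)
}

console.log(sum); // → 49995000

// Calling add('1', '2') somewhere in between would change the contract
// and force the engine to de-optimise this hot path.
```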

Avoid using delete

I am a guy from a C/C++ background, where I was taught to use malloc etc. and also to release memory whenever possible.

When I moved to JavaScript and found the delete operator for removing properties and variables, I started using it because I thought I was being smart. :D

But when I came to know about the JIT (just in time) compilers in JavaScript engines, I realised what a big mistake I was making by using the delete keyword.

Well, the previous point about putting common code in functions already talked about JIT compilers. Here the reason is the same.

When you use delete to remove a property of an object, something like delete obj.prop, it internally changes the hidden class of that object. Due to this, the compiled code becomes invalid and V8 has to treat the object separately, in a de-optimised way. This is certainly a performance hit.

So don’t use delete unless it is absolutely necessary and you know exactly what you are doing. You can set the property to null instead, which won’t change the hidden class or invalidate the compiled code.
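A small sketch of the two approaches:

```javascript
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var p = new Point(1, 2);

// Bad: removing the property changes the object's hidden class,
// de-optimising code that was compiled for Point instances.
// delete p.y;

// Better: keep the shape stable and just clear the value.
p.y = null;
```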

Closure & timer - a deadly combo

Imagine an example like the following.

var foo = {
  bar: function () {
    var self = this;
    var timer = setTimeout(function () {
      console.log('Timeout called');
      self.bar();
    }, 100);
  }
};

foo.bar();
foo = null;

There is a simple object foo which has a function bar in it, which does nothing but set a timer with setTimeout.

After calling bar, I am setting foo to null. Ideally, as the previous foo object now has no reference left to it, it should be garbage collected. But unfortunately the timer keeps the reference self alive so it can call bar again every time. That object will never be garbage collected, and this causes a memory leak.

So be cautious in scenarios like this.
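One way out in this particular sketch is to keep a handle to the timer and clear it once the object is no longer needed (stop() here is a hypothetical helper):

```javascript
var foo = {
  timer: null,
  bar: function () {
    var self = this;
    this.timer = setTimeout(function () {
      self.bar();
    }, 100);
  },
  stop: function () {
    // Dropping the callback also drops its closure over self,
    // so nothing keeps the object alive any more.
    clearTimeout(this.timer);
    this.timer = null;
  }
};

foo.bar();
foo.stop(); // break the timer reference first...
foo = null; // ...and now the object really can be garbage collected
```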

Create class for similar kind of objects

Here is another case where JIT compilation helps.

Whenever you are creating objects with the same signature (i.e. with the same set of properties), try to make a class or constructor for them.

Because this way all those instances share a single hidden class, which again gives the JIT a chance to optimise your code and execute JavaScript faster.

So instead of this:

var b = {
  p: 2,
  q: "something else",
  r: true
}
var c = {
  p: 10,
  q: "other things",
  r: false
}

Do this:

function Some(p, q, r){
  this.p = p;
  this.q = q;
  this.r = r;
}
var a = new Some(1, "something", false);
var b = new Some(2, "something else", true);
var c = new Some(10, "other things", false);

forEach vs for()

You may have heard that using built-in functions is always better. Now the question is: while iterating over an array, should you use a for loop or forEach?

Well, regardless of the ultra-small extra time it takes, you should prefer forEach; it is simply good practice. forEach also skips the indexes where no element is defined, which makes it smarter on sparse arrays. The following example should make things clear.

var arr = new Array(1000);
arr[0] = 20;
arr[434] = 200;

for (var i = 0; i < arr.length; i++) {
  console.log('I am for');
}

arr.forEach(function (elem) {
  console.log('I am forEach');
});

In the example above, the text I am for will be printed 1000 times; however I am forEach will be printed only twice.

Avoid for…in

Especially in the case of object cloning, I’ve seen people using the for...in loop. Unfortunately, for...in is designed in such a way that it can never be performant, because it has to walk the enumerable properties along the whole prototype chain. So avoid it as much as you can. You can check out other ways to clone objects in JavaScript.
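For instance, a shallow clone can be done without for...in at all:

```javascript
var source = { a: 1, b: 2 };

// Object.assign copies only own, enumerable properties:
var clone1 = Object.assign({}, source);

// Object.keys + forEach does the same thing explicitly,
// without ever touching the prototype chain:
var clone2 = {};
Object.keys(source).forEach(function (key) {
  clone2[key] = source[key];
});
```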

Array literals are better than push

If you use an array literal, V8 understands the structure up front, much better than when you start with an empty array and then push elements into it.

// good
var arr = [1, 2, 6, 2, 10, 3];

// bad
var arr = [];
arr.push(1);
arr.push(2);
arr.push(6);
arr.push(2);
arr.push(10);
arr.push(3);

Now you may say, who would choose the second one? Everyone goes with the first approach, right?

No, not exactly. Sometimes we unknowingly code like the second one. For example, suppose you are modifying each and every element of an array following some rule and want to create a new array out of that.

Often developers create an empty array first, iterate over the previous array using forEach or similar, and push the modified elements into the new array. The better way is to use the Array.map() method.
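The two styles side by side (the doubling transform is just an example):

```javascript
var nums = [10, 20, 30];

// The push-based version we write without noticing:
var doubled = [];
nums.forEach(function (n) {
  doubled.push(n * 2);
});

// The map-based version: the result array is created in one go,
// so the engine knows its structure up front.
var doubledBetter = nums.map(function (n) {
  return n * 2;
});
```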

Web-workers & Shared buffer

JavaScript is single threaded, and the same thread runs the event loop. So your request handling in node.js and your DOM rendering in the browser are all processed in a non-parallel way.

So, whenever you have a task which takes a lot of computing time, you should delegate it to a web worker. In the case of node.js there is no built-in worker, but you can use an npm module or spawn a new process for the same purpose.

One common problem when working with workers is how to sync with them (beyond posting a message when they finish). Well, SharedArrayBuffer can be a handy way to do that.

Note that SharedArrayBuffer has been disabled by default in browsers since 5th January 2018; but it (or something like it) is already at stage 4 as an ECMA proposal.

setImmediate over setTimeout(fn,0)

This is a point only for node.js developers. Many developers don’t use setImmediate or process.nextTick and go with setTimeout(fn, 0) to make part of their program asynchronous.

Well, some of our experiments comparing setImmediate vs setTimeout(fn, 0) suggest setImmediate can be up to 200 times (yes, times, not just percent) faster than setTimeout(fn, 0).

So use setImmediate in preference to setTimeout(fn, 0); but be cautious about using process.nextTick unless you understand how it works.
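A typical server-side use is chunking a long synchronous job with setImmediate, so that I/O callbacks can run between chunks (a sketch; processInChunks is a made-up helper):

```javascript
// Process a large array in chunks, yielding to the event loop in between.
function processInChunks(items, chunkSize, onDone) {
  var index = 0;
  function nextChunk() {
    var end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      // ...per-item work would go here...
    }
    if (index < items.length) {
      setImmediate(nextChunk); // let pending I/O callbacks run first
    } else {
      onDone();
    }
  }
  nextChunk();
}

processInChunks(new Array(10000).fill(0), 1000, function () {
  console.log('done'); // eventually logs 'done' without blocking the loop
});
```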

Avoid static file hosting

The recommendation is not to serve static files from your node.js server.

Why? Because node is not made for that. Node is best at serving your tcp/http requests, because it works completely asynchronously.

If you keep it busy reading static files (this operation is handled by libuv’s threadpool), performance will degrade, because libuv has a predefined number of threads (4 by default), which you can extend only up to 128. If there are more file-reading tasks than the threadpool size, the extra tasks start queueing up and effectively run synchronously.

Promise.all (async await)

Native Promises in ES6 definitely made our life easier. But what came as a real blessing is async await. Now we can write code that looks synchronous but works asynchronously.

But sometimes developers get too comfortable and unknowingly create unnecessarily sequential code. One example is as follows.

// bad
async function someAsyncFunc() {
  const user = await asyncGetUser();
  const categories = await asyncGetCategories();
  const mapping = await asyncMapUserWithCategory(user, categories);
}

// good
async function someAsyncFunc() {
  const [user, categories] = await Promise.all([
    asyncGetUser(),
    asyncGetCategories()
  ]);
  const mapping = await asyncMapUserWithCategory(user, categories);
}

It is a very common scenario that developers await multiple independent async calls one after another. However, when you combine them with Promise.all(), they can be resolved concurrently.

Remove event listeners

We often use event listeners, especially on the client side. And often, for various reasons, we remove the element the event listener is attached to.

But one thing worth noticing is that event listeners are not removed automatically when the element they are attached to is destroyed. Repeating this pattern can cause memory leaks.

Thus you should be aware of when elements are getting destroyed, and remove your event listeners along with them.

Use native instead of client libraries

Before the revolutionary node.js came into the picture, JavaScript was only used in client-side programming.

Unfortunately, on the client you don’t know which version of ECMAScript each user’s browser supports. This is why the libraries usually used on the client side ship polyfills for different versions, which makes them a bit heavier.

But when you are using node.js, you always know the exact version the server is running on. Thus, instead of using client-side libraries for promises, coloured console output or similar things, it is always preferable to use the native equivalents.

Binary modules

If you are developing big modules for your application and want to take the performance to the next level, you should compile and convert them into binary (native) modules.

LinkedIn is one of the companies which uses node.js as their backend and they follow this practice.

Avoid O(n), try O(1)

I have seen developers unnecessarily going with O(n) complexity rather than O(1).

In JavaScript, objects are loosely typed and you can literally attach anything (that can be converted to a string) as a key or property.

Now, suppose you have a list of N students, each with a roll number (which is unique) and totalMarks. When someone enters a roll number in a text field, you display the totalMarks of that particular student on the page.

The common approach is to create an array, push all your student objects into it, and when someone enters a roll number, iterate over the students array to find that particular student and display the marks.

This way you are at complexity O(n). But if you used an object instead of an array, with the roll number as the property name and the corresponding student object as its value, lookups would become O(1): whenever someone enters a roll number, you just show the studentsObj[rollNumber] object.
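The two lookups side by side (the sample data is made up):

```javascript
var students = [
  { roll: 101, totalMarks: 78 },
  { roll: 102, totalMarks: 91 }
];

// O(n): scan the whole array on every lookup.
function findMarksLinear(roll) {
  for (var i = 0; i < students.length; i++) {
    if (students[i].roll === roll) return students[i].totalMarks;
  }
}

// O(1): index by roll number once, then look up directly.
var studentsByRoll = {};
students.forEach(function (s) {
  studentsByRoll[s.roll] = s;
});

function findMarksConstant(roll) {
  return studentsByRoll[roll].totalMarks;
}

console.log(findMarksLinear(102));   // → 91
console.log(findMarksConstant(102)); // → 91
```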

This doesn’t mean you always should use an object instead of array. This is completely on the situation, problem statement and requirements.

Conclusion

Increasing your server’s configuration, scaling it out and distributing the services are some of the well-known ways to make your application performant.

But if your code is causing memory leaks or processing things sequentially, all those dev-ops steps will not be able to save your server from slowing down or even crashing.

Thus, while coding (irrespective of the language), you must be aware of all good practices, edge cases and performance points.

Happy coding :)