I’d like to share with you a few different ways of writing asynchronous code in JavaScript. Whether you’re writing for the browser or building complex Node.js apps, you might find some useful tips on how to improve the quality and readability of your code, either by using popular techniques or some more experimental solutions.

What do we want to achieve?

I’m a big fan of talking about code using live examples and not just theory, so this article will be based on a few snippets of code. They’ll all have essentially the same job, which I’ll describe later on, but will use different approaches to writing asynchronous JavaScript code to achieve the desired effect.

It’ll be some simple code to get data from an external API, for which I’ll use a fun little project that serves Ron Swanson quotes. Then, 2 seconds after receiving the data, it’ll print a message. Finally, another message will be printed to indicate that the previous one was displayed.

All of the implementations will share two functions:

Shared functions

```javascript
const fetch = require('node-fetch');
const _ = require('lodash');

function getQuote(cb) {
  return new Promise((resolve, reject) => {
    fetch('http://ron-swanson-quotes.herokuapp.com/v2/quotes')
      .then(response => response.json())
      .then(quote => {
        if (_.isFunction(cb)) {
          cb(quote);
        }
        resolve(quote);
      })
      .catch(reject);
  });
}

function printMessage(msg, cb) {
  return new Promise(resolve => {
    setTimeout(() => {
      console.log(msg);
      resolve();
      if (cb) {
        cb();
      }
    }, 2000);
  });
}
```

You’ll notice that they both expose two types of interface, promises and callbacks, so you can interact with them using either method. It’s similar to how most libraries that switched to a promise-based approach handle backwards compatibility with callbacks.
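To make the pattern concrete, here’s a minimal, self-contained sketch of that dual interface using a hypothetical `delay` helper (not part of the shared functions above): it both resolves a promise and invokes an optional callback.

```javascript
// Hypothetical `delay` helper illustrating the dual interface pattern:
// it resolves the returned promise AND invokes an optional callback.
function delay(ms, cb) {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve(ms);
      if (typeof cb === 'function') {
        cb(ms);
      }
    }, ms);
  });
}

// Promise-style consumer:
delay(10).then(ms => console.log(`resolved after ${ms}ms`));

// Callback-style consumer (the returned promise is simply ignored):
delay(10, ms => console.log(`callback after ${ms}ms`));
```

Consumers pick whichever style fits their codebase, and the library pays only a small price in implementation complexity.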

Yesterday: callbacks

Since you’re reading this article, you’re probably trying to avoid this approach as much as possible. The most difficult part of using callbacks is the inevitability of descending into what is commonly called ‘callback hell’. I’m sure most of you are familiar with the concept, but for those who aren’t, here’s a simple example to illustrate:

Callbacks

```javascript
fs.readFile(fileName, function (err, contents) {
  if (condition) {
    fs.writeFile(fileName, data, function (err) {
      fs.readFile(fileName, function (err, contents) {
        //...etc.
      });
    });
  }
});

function main() {
  console.log('Gonna get a quote');
  getQuote(quote => {
    console.log(quote);
    printMessage('test msg', () => {
      console.log('A message was printed');
    });
  });
}

main();
```

Today: Promises

In the last example you could already see callbacks starting to nest, and we know what that leads to. To remedy that, we started using Promises. In most cases they give you much cleaner code, which is not only more descriptive but also a lot easier to read.

The most important thing is that they allow you to keep your asynchronous code as ‘flat’ as possible, so you instantly know what follows what and in what sequence the code is executed.

Right now most modern browsers (with the exception of IE and Opera Mini) and Node.js implement native Promises, and for those that don’t, you can still use one of several other implementations such as Bluebird or Q.

But let’s get down to business, here’s the same example as before but using Promises:

Promises

```javascript
function main() {
  console.log('Gonna get a quote');
  getQuote()
    .then(console.log)
    .then(() => {
      return printMessage('test msg');
    })
    .then(() => {
      console.log('A message was printed');
    })
    .catch(console.error);
}

main();
```

Tomorrow: Async functions

This approach is based on a proposal that was initially submitted for inclusion in ES7 (2016). As you may know, eventually only two features were accepted as part of that release (the exponentiation operator and Array.prototype.includes()), following a decision to make smaller changes with a shorter release cycle. You can read more about that, and about how new features are integrated into JavaScript in general, here: http://www.2ality.com/2015/11/tc39-process.html.

async and await are basically two new keywords that allow you to write asynchronous code that looks synchronous. Let’s use an example:

Async functions

```javascript
async function main() {
  try {
    console.log('Gonna get a quote');
    const quote = await getQuote();
    console.log(quote);
    await printMessage('test msg');
    console.log('a message was printed after 2s');
  } catch (error) {
    console.error(error);
  }
}

main();
```

As you can see, the code is written in the manner we’re used to when writing synchronous functions, though there are two main differences. You probably noticed that the function is preceded with async, which informs the interpreter that it contains asynchronous code. Inside it we can use await, which makes the browser/Node app/etc. wait for the completion of the asynchronous operation in that expression, in our case: the promises responsible for API communication and printing a message.

To make it even simpler, here’s what it does in human words:

print ‘Gonna get a quote’

wait for the quote and, when you get it, assign it to quote

print the quote

wait for the ‘test msg’ message to be printed

print ‘a message was printed …’

Since this feature is still just a proposal, we need to use a transpiler like Babel or Traceur, which will convert the code to ES5 so it can be used.


In the meantime: co

co is a very clever and simple library that uses generator functions to achieve an effect similar to the async functions mentioned before.

For those not familiar with generators there are some simple examples on MDN that should tell you most of what you need to know: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Generator.

First, let’s look at some code:

co

```javascript
const co = require('co');

function *main() {
  console.log('Gonna get a quote');
  const a = yield getQuote();
  console.log(a);
  yield printMessage('test msg');
  console.log('A message was printed after 2s');
}

co(main)
  .catch(console.error);
```

co wraps a generator function and automatically performs two things:

it executes next() as long as there’s something left to do in the generator function

it waits on yield for the async objects to resolve

co has a concept of ‘yieldables’ which are async objects that can be used with yield and for which it’ll wait before continuing code execution. Those are (according to https://github.com/tj/co#yieldables):

promises

thunks (functions)

array (parallel execution)

objects (parallel execution)

generators (delegation)

generator functions (delegation)

Two are especially worth mentioning here: objects and arrays. They provide an extremely easy way to achieve parallel execution of async code, similar to what Promise.all allows you to do. Let’s say we want to get the quote and print the message as fast as possible, and we don’t care which comes first:

parallel execution using co

```javascript
const co = require('co');

function *main() {
  console.log('Gonna get a quote');
  const results = yield [
    getQuote(),
    printMessage('test msg')
  ];
  console.log(results[0]);
}

co(main)
  .catch(console.error);
```

With Promise.all the same thing would look like this:

parallel execution using Promise.all

```javascript
function main() {
  Promise
    .all([
      getQuote(),
      printMessage('test msg')
    ])
    .then(results => {
      console.log(results[0]);
    })
    .catch(console.error);
}

main();
```
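For object yieldables, co additionally preserves the keys: every value is resolved in parallel and you get back an object with the same shape. A rough promise-only sketch of that behavior (again, not co’s actual implementation) could look like this:

```javascript
// A rough sketch of what co does for object yieldables (not co's real code):
// resolve every value in parallel, then rebuild an object with the same keys.
function resolveObject(obj) {
  const keys = Object.keys(obj);
  return Promise.all(keys.map(key => Promise.resolve(obj[key])))
    .then(values => {
      const out = {};
      keys.forEach((key, i) => { out[key] = values[i]; });
      return out;
    });
}

resolveObject({ a: Promise.resolve(1), b: 2 })
  .then(result => console.log(result)); // logs { a: 1, b: 2 }
```

Non-promise values pass straight through, which is handy when mixing constants with async results.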

Conclusions

As always in JavaScript, there are many ways to achieve the same result. Most of them are actually based on Promises, and although we might need to wait some time before we can stop using polyfills, they are definitely the way forward.

If you prefer to be on the bleeding edge of things, you can use transpilers and jump on the async/await train before native async functions are implemented. Be wary though: it’s still a stage 3 candidate, so it _may_ change.

If you want the safe option, which gives you some advantages of the former and makes your code easy to read without needing experimental features, then co is a nice solution. Its biggest advantage is that you can use it straight away, and you don’t need to change your preexisting code too much. Since most of you probably use Promises already, it might just be a way of orchestrating them in a clear way. And once the day comes and async functions are implemented, you can migrate easily, because in most cases it’s just a matter of replacing yield with await and switching wrapped generators to async functions.

In any case, use what makes the most sense for you and your project, and consider the implications of using a particular solution. Next time I’ll try to write some more about the performance of all those solutions and how much overhead in your code you get from using them.

Further reading