This post was originally posted in November 2018 and updated in April 2020.

From web applications to servers and mobile apps, from small programs to big projects, JavaScript is everywhere. It's a natural choice for almost any project because, well, it's 2020 and JS is a mature language with an enormous community supporting it.

Writing Asynchronous JavaScript

In JavaScript, our code runs on a single thread, and the event loop executes the program sequentially in small chunks. Each iteration of the event loop is called a "tick", and the loop keeps running until its queue is empty: each chunk gets a "tick" to process, and after it completes, the next one starts. For small applications this is enough, but as we start doing heavier operations that require more time, like accessing a database or fetching data over the Internet, we need better mechanisms to handle them.
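A minimal sketch of this ordering: synchronous code always runs to completion first, and a queued callback only fires on a later tick of the event loop.

```javascript
const order = []

order.push('start')
setTimeout(() => order.push('timeout'), 0) // queued for a later tick
order.push('end')

// Only the synchronous pushes have happened at this point;
// the timeout callback runs on a later tick.
console.log(order) // ['start', 'end']
```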

JavaScript patterns for front-end development

Over the years, patterns and libraries emerged in the JS ecosystem to handle asynchronous programming, such as callbacks, events, promises, generators, async/await, web workers, and packages on the NPM registry like async, bluebird, co or RxJS.

As front-end developers, we may need different patterns to solve our everyday problems, depending on the framework we are working with. Knowing the available tools in the JavaScript world allows us to choose the best solution for each problem.

Is this guide only for web development? No: most of these patterns are used in every JavaScript environment and platform, so knowing how they work is valuable for any developer.

What is a Callback

In JavaScript, functions are first-class objects, and a callback is simply a function passed as an argument to another function (a function that accepts another function is known as a higher-order function). The callback is invoked whenever the asynchronous work is finished.

const fs = require('fs')

fs.readFile('./imaginary.txt', (err, result) => {
  if (err) {
    console.error('Error: ', err)
    return
  }
  console.log('Result: ', result)
})

Since callbacks are just functions, they are supported by every environment that runs JavaScript, from our browsers to servers running Node.js. Simple yet powerful, this pattern is fundamental to asynchrony. However, it also has its drawbacks.

When projects grow and the code becomes more complex, it gets harder to implement generic solutions, and our programs become harder to read and maintain. When this happens, we start seeing the pyramid shape of }) similar to what we can see in the following example.

const fs = require('fs')

fs.readFile('./imaginary.txt', (err, imaginaryResult) => {
  if (err) {
    console.error(`Error: ${err}`)
    return
  }
  fs.readFile('./cloud.txt', (err, cloudResult) => {
    if (err) {
      console.error(`Error: ${err}`)
      return
    }
    const data = imaginaryResult + cloudResult
    fs.writeFile('./imaginarycloud.txt', data, error => {
      if (error) {
        console.error(`Error: ${error}`)
        return
      }
      console.log('Success!')
    })
  })
})

This is usually known as "Callback Hell".

However, the worst problem with callbacks is inversion of control: we hand control of our program's flow over to another party, trusting it to invoke our callback correctly, which makes the code difficult (or even impossible!) to properly test.
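To make inversion of control concrete, here is a small sketch with a hypothetical third-party function (thirdPartyCharge is made up for illustration) that buggily calls our callback twice. Since we don't own that code, our program's correctness now depends on it:

```javascript
// Hypothetical third-party function we don't control:
// it has a bug and invokes the callback twice.
function thirdPartyCharge(amount, callback) {
  callback(amount)
  callback(amount) // bug in code we don't own
}

let charged = 0
thirdPartyCharge(10, amount => {
  charged += amount
})

console.log(charged) // 20: the customer was charged twice
```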

How do JavaScript Events work?

An event-driven architecture can also be used to write asynchronous JavaScript code. It consists of an event emitter with corresponding event listeners: the emitter sends events when the async code completes, and emitting different types of events allows a different callback for each listener. One basic example, and a really important part of front-end development, is requesting data over the Internet. To achieve that, we can use the XMLHttpRequest object, heavily used in AJAX programming.

const xhr = new XMLHttpRequest()

// Callback for request error
const onerror = () => {
  console.error('Request failed')
}

// Callback for request load
const onload = () => {
  if (xhr.status !== 200) {
    console.warn(`Request status ${xhr.status}: ${xhr.statusText}`)
  } else {
    console.log(`Response: ${xhr.responseText}`)
  }
}

// Register listeners (the event names are 'error' and 'load')
xhr.addEventListener('error', onerror)
xhr.addEventListener('load', onload)

// Execute request
xhr.open('GET', 'https://imaginary-amazing-data.json')
xhr.send()

The XMLHttpRequest object already defines the events needed to handle the request flow, so we just need to take advantage of them. But this pattern involves a lot of boilerplate code, as we have to add and remove listeners for each event type we need. This works perfectly on a small web page, but as soon as complexity and functionality grow, it becomes bloated and cumbersome to maintain, so better abstractions are needed!

What is a Promise

Promises are harder to master, but they address the inversion of control issue. They are a little slower than callbacks, but in return we get a lot more reliability.

A Promise is a "wrapper" around a value that may not exist yet, and it settles at most once: it is either resolved with a single value or rejected with a reason, and once settled it cannot change. This guarantee is what makes Promises a trustworthy mechanism that also helps to express async code more sequentially.

This is how they solve the inversion of control: not by removing callbacks, but by building a mechanism into the wrapper that handles this issue.
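A minimal sketch of the settle-at-most-once guarantee: any calls to resolve or reject after the first one are simply ignored.

```javascript
const promise = new Promise((resolve, reject) => {
  resolve('first')
  resolve('second')         // ignored: the promise has already settled
  reject(new Error('late')) // also ignored
})

promise.then(value => console.log(value)) // logs 'first'
```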

We can chain multiple Promises in our code without forcing a new level of indentation after each one, using .then().

const fs = require('fs')

const promise = new Promise((resolve, reject) => {
  fs.readFile('./imaginarycloud.txt', (err, result) => {
    if (err) {
      reject(err)
    } else {
      resolve(result)
    }
  })
})

promise
  .then(text => {
    console.log(text)
  })
  .catch(err => {
    console.error(`Error: ${err}`)
  })

Promises provide more functionality, for example Promise.all() and Promise.race(), plus the latest API additions Promise.allSettled() and Promise.any(). With more complex front-end web applications, we need more and better mechanisms.

const resolveSync = new Promise(resolve => resolve('hi'))
const rejectAsync = new Promise((_, reject) => setTimeout(() => reject(new Error()), 2500))
const resolveAsync = new Promise(resolve => setTimeout(() => resolve('imaginary'), 4000))

// Promise.all vs Promise.allSettled

// Resolves with an array of all resolved values, or rejects if any fails
Promise.all([resolveSync, rejectAsync, resolveAsync])
// This will reject after 2500ms

// Resolves with an array of objects with status and value/reason
Promise.allSettled([resolveSync, rejectAsync, resolveAsync])
/**
 * [
 *   { status: 'fulfilled', value: 'hi' },
 *   { status: 'rejected', reason: Error },
 *   { status: 'fulfilled', value: 'imaginary' },
 * ]
 **/

// Promise.race vs Promise.any

// Settles with the first settled value
Promise.race([rejectAsync, resolveAsync])
// Will reject, since rejectAsync rejects after 2500ms
// and resolveAsync only resolves after 4000ms

// Resolves with the first resolved value,
// rejects if no promise in the array resolves
Promise.any([rejectAsync, resolveAsync])
// Will resolve after 4000ms with the value 'imaginary'

This improves both the readability and the maintainability of the program, but not everything is perfect. Before Promises became part of the language, they lived at the library level, so different implementations could vary in behavior, and they still carry some overhead in time and memory compared to plain callbacks.

Generators in JavaScript

Generators were introduced in ECMAScript 2015. They are functions whose iteration we can control, meaning they can be paused and resumed at any time. This is a powerful tool for when we want to get each value only when we need it, instead of getting all of them at once. This is possible thanks to the addition of the yield keyword to JavaScript.

function* iterate(array) {
  for (let value of array) {
    yield value
  }
}

const it = iterate(['Imaginary', 'Cloud'])
it.next()
it.next()
it.next()

// RESULT:
// { value: 'Imaginary', done: false }
// { value: 'Cloud', done: false }
// { value: undefined, done: true }

We can see in this example that for each next() we receive an object with the value and a flag indicating whether the generator has finished. But generators can also be used to control async flows in conjunction with other libraries, like co or redux-saga, which I will talk about further ahead.
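To make that idea concrete, here is a simplified sketch of the core trick such libraries use: a runner that resolves each yielded Promise and feeds the value back into the generator (error propagation via it.throw() is omitted for brevity):

```javascript
// Drive a generator that yields Promises, one step at a time
function run(genFn) {
  const it = genFn()
  function step(value) {
    const result = it.next(value) // resume the generator with the value
    if (result.done) return Promise.resolve(result.value)
    // Wait for the yielded Promise, then resume with its value
    return Promise.resolve(result.value).then(step)
  }
  return step()
}

const total = run(function* () {
  const a = yield Promise.resolve(2) // pauses until the Promise resolves
  const b = yield Promise.resolve(3)
  return a + b
})

total.then(sum => console.log(sum)) // 5
```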

How to use an Async/Await pattern

Finally, ES2017 introduced asynchronous functions, making it much easier to write and read asynchronous code in JavaScript!

They are much cleaner than the previous patterns discussed, and the return value of an async function is a Promise! This is very powerful because we get the best of both worlds. As we've discussed before, Promises are the safe pick when dealing with complex async operations, but they are not as easy to read and master as async/await code.

One drawback is that, in older environments, async/await needs a transpilation tool like Babel, since it is syntactic sugar over Promise-based code.

Since the result is a Promise that can be resolved or rejected, it's important to wrap our await code in a try/catch. This way, we can properly handle errors in our async code.

async function getData() {
  try {
    const result = await fetch('https://imaginaryAPI')
    return result
  } catch (err) {
    console.error(`Error: ${err}`)
  }
}
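One detail worth knowing: each await pauses the function, so independent operations awaited one by one run sequentially, while combining async/await with Promise.all runs them concurrently. The delay() function here is a helper defined just for illustration:

```javascript
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms))

async function sequential() {
  const a = await delay(50, 1) // waits ~50ms
  const b = await delay(50, 2) // then waits ~50ms more
  return a + b                 // total: ~100ms
}

async function concurrent() {
  // Both timers start immediately and run in parallel: ~50ms total
  const [a, b] = await Promise.all([delay(50, 1), delay(50, 2)])
  return a + b
}

concurrent().then(sum => console.log(sum)) // 3
```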

Web-workers as Async Background tasks

Using web workers, it's possible to run scripts and functions on a different thread, as asynchronous background tasks. This doesn't affect the responsiveness of the user interface, and data can be exchanged between workers and the main thread.

Service workers, a special kind of worker supported by our browsers, are heavily used in progressive web applications: we register one for our website and decide which files should be cached, making the app faster to use, and some features remain available even when the user is offline. Regular web workers can also be used to perform heavy operations without freezing the UI or the main JS thread.

NPM Libraries

Several other libraries try to solve these issues, each using its own techniques. You can find some examples below:

Async: this library makes working with callbacks easier, solving some of the problems that come with them, including the callback hell problem! In recent versions, it's possible to use async/await code as well.

async.waterfall(
  [callback1, callback2],
  err => console.error(`Error: ${err}`)
)

Bluebird: a very performant implementation of Promises that also includes a lot of extra features, like cancellation, iteration, and promisification! The latter wraps callback-based functions so that they return Promises instead.

const Promise = require('bluebird')
const mod = require('imaginary-callback-module')

Promise.promisifyAll(mod)

// RESULT:
// Now we can call .then() on all module functions
// (exposed with an "Async" suffix, e.g. mod.doSomethingAsync()), yeaaah!

co: control async flows with generators. This library is a runner around generators, combining the yield keyword with Promises, executing the generator and returning its result as a Promise.

const co = require('co')

co(function* () {
  const auth = yield login(username, password)
  return auth
}).then(result => {
  console.log(result)
}, err => {
  console.error(err)
})

Redux-saga: a front-end library for the React/Redux stack. It's a Redux middleware that aims to make application side effects more efficient and easier to manage, as they can be started or cancelled by Redux actions. It makes heavy use of generators to fetch data over the Internet and apply the needed side effects to our website.

import { call, put } from 'redux-saga/effects'

function* loginSaga(username, password) {
  try {
    const auth = yield call(login, username, password)
    yield put(someActionToStore(auth))
  } catch (err) {
    console.error(`Error: ${err}`)
  }
}

RxJS: a library implementing the reactive pattern, commonly used in Angular apps. We create an observable that we can subscribe to, getting notified whenever it emits a change. With this pattern, it's possible, for instance, to cancel subscriptions and chain observables.

import { first } from 'rxjs/operators'

// observable$ is an Observable instance (e.g. exposed by an Angular service)
observable$.pipe(first()).subscribe(
  result => {
    console.log(`Result: ${result}`)
  },
  error => {
    console.error(`Error: ${error}`)
  }
)

Which Async patterns should we use?

For simple projects, callbacks are the simplest and easiest way to handle async flows. On bigger projects with a proper setup, I would choose the async/await pattern, as the asynchronicity is easy to read, error handling comes naturally, and there's no pyramid of doom.

This is the kind of syntactic sugar we need on our work, allowing us to write a more readable and maintainable program.

JavaScript continues to be the most used language on GitHub, along with its vibrant community.

This is our top pick for handling asynchronous flows, but there are more ways to achieve the same results beyond the ones this guide describes. All in all, it's up to you to choose which one best fits your needs.