Native Promises are among the biggest changes ES2015 makes to the JavaScript landscape. They eliminate some of the more substantial problems with callbacks, and allow us to write asynchronous code that follows synchronous logic more closely.

It's probably safe to say that promises, together with generators, represent the New Normal™ of async. Whether you use them or not, you've got to understand them.

Promises feature a fairly simple API, but come with a bit of a learning curve. They can be conceptually exotic if you've never seen them before, but all it takes to wrap your head around them is a gentle introduction and ample practice.

By the end of this article, you'll be able to:

Articulate why we have promises, and what problems they solve;

Explain what promises are, from the perspective both of their implementation and their usage; and

Reimplement common callback patterns using promises.

Oh, one note. The examples assume you're running Node. You can copy/paste the scripts manually, or clone my repo to save the trouble.

Just clone it down and checkout the Part_1 branch:

git clone https://github.com/Peleke/promises/
cd promises
git checkout Part_1-Basics

. . . And you're good to go. The following is our outline for this path of promises:

The Problem with Callbacks

Promises: Definitions w/ Notes from the A+ Spec

Promises & Un-inversion of Control

Control Flow with Promises

Grokking then, reject, & resolve

If you've spent any time at all with JavaScript, you've probably heard that it's fundamentally non-blocking, or asynchronous. But what does that mean, exactly?

Sync & Async

Synchronous code runs before any code that follows it. You'll also see the term blocking as a synonym for synchronous, since it blocks the rest of the program from running until it finishes.

```javascript
"use strict";

const filename = 'text.txt',
      fs = require('fs');

console.log('Reading file . . . ');
const file = fs.readFileSync(`${__dirname}/${filename}`);
console.log('Done reading file.');
console.log(`Contents: ${file.toString()}`);
```

Asynchronous code is just the opposite: It allows the rest of the program to execute while it handles long-running operations, such as I/O or network operations. This is also called non-blocking code. Here's the asynchronous analogue of the above snippet:

```javascript
"use strict";

const filename = 'text.txt',
      fs = require('fs'),
      getContents = function printContent (file) {
        try {
          return file.toString();
        } catch (TypeError) {
          return file;
        }
      };

console.log('Reading file . . . ');
console.log("=".repeat(76));

let file;
fs.readFile(`${__dirname}/${filename}`, function (err, contents) {
  file = contents;
  console.log(`Uh, actually, now I'm done. Contents are: ${getContents(file)}`);
});

console.log(`Done reading file. Contents are: ${getContents(file)}`);
console.log("=".repeat(76));
```

The major advantage to synchronous code is that it's easy to read and reason about: Synchronous programs execute from top to bottom, and line n finishes before line n + 1. Period.

The major disadvantage is that synchronous code is slow—often debilitatingly so. Freezing the browser for two seconds every time your user needs to hit the server makes for a lousy user experience.

And this, mes amis, is why JavaScript is non-blocking at the core.

The Challenge of Asynchronicity

Going async buys us speed, but costs us linearity. Even the trivial script above demonstrates this. Note that:

There's no way to know when file will be available, other than handing control to readFile and letting it notify us when it's ready; and

Our program no longer executes the way it reads, which makes it harder to reason about.

These problems alone are enough to occupy us for the rest of this article.

Let's strip our async readFile example down a bit.

```javascript
"use strict";

const filename = 'throwaway.txt',
      fs = require('fs');

let file, useless;

useless = fs.readFile(`${__dirname}/${filename}`, function callback (error, contents) {
  file = contents;
  console.log(`Got it. Contents are: ${contents}`);
  console.log(`. . . But useless is still ${useless}.`);
});

console.log(`File is ${useless}, but that'll change soon.`);
```

Since readFile is non-blocking, it must return immediately for the program to continue to execute. Since immediately isn't enough time to perform I/O, it returns undefined , and we execute as much as we can until readFile finishes . . . Well, reading the file.

The question is, how do we know when the read is complete?

Unfortunately, we can't. But readFile can. In the snippet above, we've passed readFile two arguments: A filename, and a function, called a callback, which we want to execute as soon as the read is finished.

In English, this reads something like: " readFile ; see what's inside of ${__dirname}/${filename} , and take your time. Once you know, run this callback with the contents , and let me know if there was an error ."

The important thing to take away is that we can't know when the file contents are ready: Only readFile can. That's why we hand it our callback, and trust it to do the right thing with it.

This is the pattern for dealing with asynchronous functions in general: Call it with parameters, and pass it a callback to run with the result.
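Here's that general shape in miniature. The addAsync function below is hypothetical (it's not part of Node or of our example), but it follows the same (error, result) callback convention that readFile does:

```javascript
"use strict";

// A hypothetical async function: computes a + b "later", and delivers
// the result via the (error, result) callback convention Node core uses.
function addAsync (a, b, callback) {
  setTimeout(function () {
    if (typeof a !== 'number' || typeof b !== 'number') {
      callback(new TypeError('addAsync wants numbers'));
    } else {
      callback(null, a + b);
    }
  }, 0);
}

// Call it with parameters, and pass a callback to run with the result.
addAsync(2, 3, function (err, sum) {
  if (err) console.log(`There was a/n ${err}.`);
  else console.log(`Sum is ${sum}.`);
});
```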

Callbacks are a solution, but they're not perfect. Two of the bigger problems are:

Inversion of control; and

Complicated error handling.

Inversion of Control

The first problem is one of trust.

When we pass readFile our callback, we trust that it will call it. There is absolutely no guarantee it actually will. Nor is there any guarantee that, if it does call it, it will do so with the right parameters, in the right order, or the right number of times.

In practice, this obviously hasn't been fatal: We've written callbacks for twenty years without breaking the Internet. And, in this case, we know that it's probably safe to hand control to core Node code.

But handing control over mission-critical aspects of your application to a third party should feel risky, and has been the source of many a hard-to-squash heisenbug in the past.
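To make the hazard concrete, here's a contrived, hypothetical "third-party" function that calls our callback twice. Nothing in the callback contract stops it:

```javascript
"use strict";

// A hypothetical, badly behaved third-party function.
// The callback pattern itself can't prevent this.
function sketchyThirdParty (callback) {
  callback(null, 'your data');
  callback(null, 'your data'); // Oops -- called a second time.
}

let charges = 0;
sketchyThirdParty(function (err, data) {
  charges += 1; // Imagine this line charged a credit card . . .
});

console.log(`Callback ran ${charges} time(s).`); // 2
```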

Implicit Error Handling

In synchronous code, we can use try / catch / finally to handle errors.

```javascript
"use strict";

const filename = 'text.txt',
      fs = require('fs');

console.log('Reading file . . . ');

let file;
try {
  file = fs.readFileSync(`${__dirname}/${filename + 'a'}`);
  console.log(`Got it. Contents are: '${file}'`);
} catch (err) {
  console.log(`There was a/n ${err}: file is ${file}`);
}

console.log('Catching errors, like a bo$$.');
```

Async code lovingly tosses that out the window.

```javascript
"use strict";

const filename = 'throwaway.txt',
      fs = require('fs');

console.log('Reading file . . . ');

let file;
try {
  fs.readFile(`${__dirname}/${filename + 'a'}`, function (err, contents) {
    file = contents;
  });
  console.log(`Got it. Contents are: '${file}'`);
} catch (err) {
  console.log(`There was a/n ${err}: file is ${file}`);
}
```

This doesn't work as expected. The try block wraps the call to readFile itself, which always returns successfully, with undefined. That means try will always complete without incident, even when the read fails.

The only way for readFile to notify us of errors is to pass them to our callback, where we handle them ourselves.

```javascript
"use strict";

const filename = 'throwaway.txt',
      fs = require('fs');

console.log('Reading file . . . ');

fs.readFile(`${__dirname}/${filename + 'a'}`, function (err, contents) {
  if (err) {
    console.log(`There was a/n ${err}.`);
  } else {
    console.log(`Got it. File contents are: '${contents}'`);
  }
});
```

This example isn't so bad, but propagating information about the error through large programs quickly becomes unwieldy.

Promises address both of these problems, and several others, by uninverting control, and "synchronizing" our asynchronous code so as to enable more familiar error handling.

Imagine you just ordered the entire You Don't Know JS catalog from O'Reilly. In exchange for your hard-earned cash, they send a receipt acknowledging that you'll receive a shiny new stack of books next Monday. Until then, you don't have that new stack of books. But you can trust that you will, because they promised to send it.

That promise is enough that, before they even arrive, you can plan to set aside time to read every day; agree to loan a few of the titles out to friends; and give your boss notice that you'll be too busy reading for a full week to come to the office. You don't need the books to make those plans—you just need to know you'll get them.

Of course, O'Reilly might tell you a few days later that they can't fill the order for whatever reason. At that point, you'll erase that block of daily reading time; let your friends know that you won't receive the books, after all; and tell your boss you actually will be reporting to work next week.

A promise is like that receipt. It's an object that stands in for a value that is not ready yet, but will be ready later—in other words, a future value. You treat the promise as if it were the value you're waiting for, and write your code as if you already had it.

In the event there's a hiccup, Promises handle the interrupted control flow internally, and allow you to use a special catch method to handle errors. It's a little different from the synchronous version, but nonetheless more familiar than coordinating multiple error handlers across otherwise uncoordinated callbacks.

And, since a promise hands you the value when it's ready, you decide what to do with it. This fixes the inversion of control problem: You handle your application logic directly, without having to hand control to third parties.

The Promise Life Cycle: A Brief Look at States

Imagine you've used a Promise to make an API call.

Since the server can't respond instantaneously, the Promise doesn't immediately contain its final value, nor will it be able to immediately report an error. Such a Promise is said to be pending. This is the case where you're waiting for your stack of books.

Once the server does respond, there are two possible outcomes.

The Promise gets the value it expected, in which case it is fulfilled. This is receiving your book order.

In the event there's an error somewhere along the pipeline, the Promise is said to be rejected. This is the notification that you won't get your order.

Together with pending, these make up the three possible states a Promise can be in. Once a Promise is either fulfilled or rejected, it has settled, and cannot transition to any other state.
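You can observe the one-way transition yourself. In this sketch, only the first call to resolve counts; later calls to resolve and reject are silently ignored. (It uses the then method, which we'll meet properly in a moment.)

```javascript
"use strict";

const p = new Promise(function (resolve, reject) {
  resolve('first value');
  resolve('second value');   // Ignored: the Promise is already fulfilled.
  reject(new Error('nope')); // Also ignored: no further state transitions.
});

p.then(function (value) {
  console.log(value); // 'first value'
});
```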

Now that the jargon is out of the way, let's see how we actually use these things.

To quote the Promises/A+ spec:

A promise represents the eventual result of an asynchronous operation. The primary way of interacting with a promise is through its then method, which registers callbacks to receive either a promise’s eventual value or the reason why the promise cannot be fulfilled.

This section will take a closer look at the basic usage of Promises:

Creating Promises with the constructor;

Handling success with resolve ;

Handling errors with reject ; and

Setting up control flow with then and catch .

In this example, we'll use Promises to clean up the fs.readFile code from above.

The most basic way to create a Promise is to use the constructor directly.

```javascript
'use strict';

const fs = require('fs');

const text = new Promise(function (resolve, reject) {

});
```

Note that we pass the Promise constructor a function as an argument. This is where we tell the Promise how to execute the asynchronous operation; what to do when we get the value we expect; and what to do if we get an error. In particular:

The resolve argument is also a function, and encapsulates what we want to do when we receive the expected value. When we get that expected value ( val ), we call resolve with it: resolve(val) .

The reject argument is also a function, and represents what we want to do when we receive an error. If we get an error ( err ), we call reject with it: reject(err) .

Finally, the function we pass to the Promise constructor handles the asynchronous code itself. If it returns as expected, we call resolve with the value we get back. If it throws an error, we call reject with the error.

Our running example is to wrap fs.readFile in a Promise. What should our resolve and reject look like?

In the event of success, we want to console.log the file contents. In the event of error, we'll do the same thing: console.log the error.

That nets us something like this.

```javascript
const resolve = console.log,
      reject = console.log;
```

Next, we need to fill out the function that we pass to the constructor. Remember, our task is to:

Read a file, and

If successful, resolve with the contents;

Else, reject with the error.

Thus:

```javascript
const text = new Promise(function (resolve, reject) {
  fs.readFile('text.txt', function (err, text) {
    if (err) reject(err);
    else resolve(text.toString());
  });
});
```

With that, we're technically done: This code creates a Promise that does exactly what we want it to. But, if you run the code, you'll notice that it executes without printing a result or an error.

The problem is that we wrote our resolve and reject functions, but didn't actually pass them to the Promise! For that, we need to introduce the basic function for setting up Promise-based control flow: then .

Every Promise has a method, called then , which accepts two functions as arguments: resolve , and reject , in that order. Calling then on a Promise registers these functions with it, so that the Promise can invoke the appropriate one once it settles: resolve with the value, or reject with the error.

```javascript
const text = new Promise(function (resolve, reject) {
  fs.readFile('text.txt', function (err, text) {
    if (err) reject(err);
    else resolve(text.toString());
  });
}).then(resolve, reject);
```

With that, our Promise reads the file, and calls the resolve method we wrote before upon success.

It's also crucial to remember that then always returns a Promise object. That means you can chain several then calls to create complex, synchronous-looking control flows over asynchronous operations. We'll dig into this in much more detail in the next installment, but the catch example in the next subsection gives a taste of what this looks like.
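As a quick taste of chaining, here's a minimal sketch. It uses Promise.resolve, a standard helper that creates an already-fulfilled Promise, so we don't need any file I/O:

```javascript
"use strict";

Promise.resolve(2)
  .then(function (n) {
    return n * 3; // The return value becomes the next then's input.
  })
  .then(function (n) {
    console.log(`Result: ${n}`); // Result: 6
  })
  .catch(function (err) {
    // A single handler covers errors thrown anywhere above it.
    console.log(`There was a/n ${err}.`);
  });
```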

We passed then two functions: resolve , which we call in the event of success; and reject , which we call in the event of error.

Promises also expose a function similar to then , called catch . It accepts a reject handler as its single argument.

Since then always returns a Promise, in the example above we could have passed then only a resolve handler, and chained a catch with our reject handler afterwards.

```javascript
const text = new Promise(function (resolve, reject) {
  fs.readFile('tex.txt', function (err, text) {
    if (err) reject(err);
    else resolve(text.toString());
  });
}).then(resolve).catch(reject);
```

Finally, it's worth pointing out that catch(reject) is just syntactic sugar for then(undefined, reject) . So, we could also write:

```javascript
const text = new Promise(function (resolve, reject) {
  fs.readFile('tex.txt', function (err, text) {
    if (err) reject(err);
    else resolve(text.toString());
  });
}).then(resolve).then(undefined, reject);
```

. . . But that's much less readable.

Promises are an indispensable tool in the async programming toolkit. They can be intimidating at first, but that's only because they're unfamiliar: Use them a few times, and they'll be as natural as if / else .

Next time, we'll get some practice by converting callback-based code to use Promises, and take a look at Q, a popular Promises library.

Until then, read Domenic Denicola's States and Fates to master the terminology, and read Kyle Simpson's chapter on Promises from the book series we ordered earlier.

As always, drop questions in the comments below, or shoot them to me on Twitter (@PelekeS). I promise to respond!

Like this article? Follow @PelekeS on Twitter