Ever since the dawn of Node.js, developers have complained about Callback Hell, and various solutions have been proposed with varying degrees of success. Generators are an exciting prospect because they finally allow us to write async code in a straight-line fashion. Although generators are not yet available in all browsers, you can make them work using the Babel compiler. They are gaining adoption in Node: the co and koa modules have been around for over two years, and first io.js and now Node have had out-of-the-box support since version 4.0. This post covers two modules that use generators as a coroutine mechanism to tame the async beast: co and Bluebird. Both make use of promises - Bluebird is in fact specifically a promise library - and the way you write coroutines with co vs Bluebird is quite similar.

Prerequisites

While you may be able to vaguely follow along otherwise, I do expect you to know about generators and promises. Here are some resources if you need to brush up on those topics:

The Big Idea

Both co and Bluebird let you do the following: yield a promise to get the underlying value that's wrapped by the promise. Since we can use promises to wrap async operations, yielding allows async operations to be written in a straight-line manner - something that has been sought after since the dawn of Node.js.
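To make the idea concrete, here is a minimal, hand-rolled runner - an illustration of the mechanism, not co's actual implementation (it omits details like forwarding rejections into the generator via gen.throw()):

```js
// Minimal coroutine runner: resumes the generator with each promise's value.
function run(genFn) {
  return new Promise(function (resolve, reject) {
    let gen = genFn();
    function step(value) {
      let result = gen.next(value);
      if (result.done) return resolve(result.value);
      // Wrap the yielded value so plain values work too, then recurse.
      Promise.resolve(result.value).then(step, reject);
    }
    step();
  });
}

run(function* () {
  let value = yield Promise.resolve(42); // yield a promise, get 42 back
  return value + 1;
}).then(function (v) {
  console.log(v); // 43
});
```

The key move is that each yielded promise's resolved value is fed back into the generator via next(value), which is exactly what makes the code inside the generator read straight-line.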

co

co is a module that exports just two functions:

co(genFn) - takes a generator function and returns a promise. It will execute the supplied generator function to completion and then resolve the promise.

co.wrap(genFn) - works like co except that it returns a function that returns a promise, which is useful when you want a reusable coroutine.

The following program reads a Markdown file from the filesystem, converts its contents to HTML, reads a Handlebars template file (again from the filesystem), injects the HTML into the template, and finally writes the resulting HTML to a file.

```js
'use strict';
const co = require('co');
const marked = require('marked');
const fs = require('fs-promise');
const handlebars = require('handlebars');

co(function* () {
  let md = yield fs.readFile('README.md');
  let html = marked(md.toString());
  let template = yield fs.readFile('layout.hbs');
  let output = handlebars.compile(template.toString())({
    title: 'README',
    contents: html
  });
  yield fs.writeFile('index.html', output);
}).catch(function (err) {
  console.error(err.stack);
});
```

Full Source

There are a few things to note about this example:

- Instead of the core fs module, I am using the fs-promise module, which promisifies all of the fs APIs.
- Whenever we need an async operation, we wrap it up inside of a promise and yield it to get back the underlying value. In this example, I use yield twice to read a file and once to write a file. In the case of writing the file, I didn't care about the resulting value.
- Error handling: since co() returns a promise, I can use catch() to handle any error that occurs within the coroutine - anything from file-not-found to variable-not-defined. If I hadn't set up an error handler this way, any error that occurred would have been silenced, which doesn't make for a good debugging experience.

Bluebird

Bluebird is primarily a promise library - perhaps the most popular one at this time - but it also features a coroutine function that works very similarly to co. Rewriting the previous example using Bluebird is a small change:

```js
'use strict';
const bluebird = require('bluebird');
const marked = require('marked');
const fs = require('fs-promise');
const handlebars = require('handlebars');

bluebird.coroutine(function* () {
  let md = yield fs.readFile('README.md');
  let html = marked(md.toString());
  let template = yield fs.readFile('layout.hbs');
  let output = handlebars.compile(template.toString())({
    title: 'README',
    contents: html
  });
  yield fs.writeFile('index.html', output);
})().catch(function (err) {
  console.error(err.stack);
});
```

Full Source

The only changes are:

- It uses the bluebird.coroutine() function instead of co().
- bluebird.coroutine() returns a function that returns a promise, as opposed to co(), which returns a promise directly - this is like the behavior of co.wrap() - which necessitated adding a () to invoke the resulting function.

Same Example Code in Promise-Land

For comparison, this is what the example code might have looked like written with promises but without the help of coroutines:

```js
'use strict';
const marked = require('marked');
const fs = require('fs-promise');
const handlebars = require('handlebars');

fs.readFile('README.md')
  .then(function (md) {
    let html = marked(md.toString());
    return [fs.readFile('layout.hbs'), html];
  })
  .spread(function (template, html) {
    let output = handlebars.compile(template.toString())({
      title: 'README',
      contents: html
    });
    return fs.writeFile('index.html', output);
  })
  .catch(function (err) {
    console.error(err.stack);
  });
```

Full Source

Note: spread() is a convenience provided by Bluebird, which fs-promise uses underneath.

Promisifying Async Operations

Using this coroutine paradigm means that you have to wrap any and all async APIs with promisified versions. You have two options:

- Use a promise library's helper utilities to wrap Node-style async functions as functions that return a promise. Bluebird provides promisify and promisifyAll; Q provides nfcall and nfapply.
- Use libraries whose sole purpose is to promisify an existing Node-style library. Some examples are fs-promise, request-promise, and child-process-promise.

Extracting Sub-Coroutines

Now that you are writing a lot of code inside of generator functions, you may at some point want to break them up into subroutines in order to reuse and better organize your code. Normally you'd extract a new function, but since you can't use yield inside of a regular function, you will need to extract generator functions instead.

Subroutines With co

Extracting subroutines will work slightly differently between co and Bluebird. I will start with co.

If I wanted to extract a routine called md2html to convert a Markdown file to HTML, I could write:

```js
function* md2html(filename) {
  let md = yield fs.readFile(filename + '.md');
  let html = marked(md.toString());
  let template = yield fs.readFile('layout.hbs');
  let output = handlebars.compile(template.toString())({
    title: filename,
    contents: html
  });
  yield fs.writeFile(filename + '.html', output);
  console.log(`Wrote ${filename}.html`);
}
```

The body of this generator function is mostly copy-n-pasted from my original, only now with the file name substituted via a variable. To call this generator function from another coroutine, you would simply yield it:

```js
yield md2html('README');
```

Full Source

Subroutines With A Return Value

If you want to extract a subroutine that returns a value, you can use the return statement within the extracted generator function - same as normal functions. For example, let's say we want to return the generated markup instead of writing it to a file:

```js
function* md2html(filename) {
  let md = yield fs.readFile(filename + '.md');
  let html = marked(md.toString());
  let template = yield fs.readFile('layout.hbs');
  let output = handlebars.compile(template.toString())({
    title: filename,
    contents: html
  });
  return output;
}
```

Now we can yield it to get its return value:

```js
let html = yield md2html('README');
```

Full Source

Subroutines With Bluebird

Bluebird's coroutine mechanism doesn't permit yielding generator objects directly. To extract a generator function with Bluebird, you can use the bluebird.coroutine function to wrap the extracted generator function:

```js
const md2html = bluebird.coroutine(function* md2html(filename) {
  let md = yield fs.readFile(filename + '.md');
  let html = marked(md.toString());
  let template = yield fs.readFile('layout.hbs');
  let output = handlebars.compile(template.toString())({
    title: filename,
    contents: html
  });
  return output;
});
```

And then invoke the coroutine the same way:

```js
let html = yield md2html('README');
```

Full Source

Parallelizing Tasks

Someone once mentioned to me that using generators in Node is not in the Node style because it removes the parallelization of IO operations that Node gives you for free. Well, yes and no, because this can be easily rectified.

Let's say you have 100 files you have to convert from Markdown to HTML. You might do this:

```js
let files = yield fs.readdir('markdown');
for (let i = 0; i < files.length; i++) {
  yield md2html('markdown/' + path.basename(files[i], '.md'));
}
```

Full Source

And this would convert the files serially. But converting them in parallel is pretty easy too! Just convert each task into a promise and then use Promise.all() or an equivalent to execute them in parallel. Promise.all() takes an array of promises and resolves to an array of the promises' resolved values once every one of them has resolved. Hint: you can convert a generator function into a promise using co or Bluebird.

```js
let files = yield fs.readdir('markdown');
let tasks = files.map(function (file) {
  return co(md2html('markdown/' + path.basename(file, '.md')));
});
yield Promise.all(tasks);
```

Full Source

If you want to limit the concurrency - the number of tasks being performed at a time - you can use the throat module.
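If you'd rather see the mechanics, a basic limiter can be hand-rolled. This is a simplified sketch of the same idea throat implements, not throat's actual code: limit(n, fn) returns a wrapped fn that runs at most n invocations at a time and queues the rest:

```js
// limit(n, fn): the wrapped function runs at most n invocations concurrently.
function limit(n, fn) {
  let active = 0;
  let queue = [];

  function next() {
    if (active >= n || queue.length === 0) return;
    active++;
    let job = queue.shift();
    Promise.resolve(fn.apply(null, job.args)).then(
      function (value) { active--; next(); job.resolve(value); },
      function (err) { active--; next(); job.reject(err); }
    );
  }

  return function () {
    let args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      queue.push({ args: args, resolve: resolve, reject: reject });
      next();
    });
  };
}

// Usage sketch: at most 5 conversions in flight at a time.
// let tasks = files.map(limit(5, function (file) { /* return a promise */ }));
```

Each completed task decrements the active count and pulls the next queued job, so the pool stays at the limit without any polling.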

Error Handling

Error handling within a coroutine is easy - you can use try/catch. What a concept!

```js
function* md2html(filename) {
  try {
    let md = yield fs.readFile(filename + '.md');
    let html = marked(md.toString());
    let template = yield fs.readFile('layout.hbs');
    let output = handlebars.compile(template.toString())({
      title: filename,
      contents: html
    });
    yield fs.writeFile(filename + '.html', output);
    console.log(`Wrote ${filename}.html`);
  } catch (e) {
    console.error(`Failed to convert ${filename}.md because ${e.message}`);
  }
}
```

Full Source

Gotchas

The most common gotcha for someone learning to use this paradigm is forgetting to yield a promise. Unfortunately, you often won't get a very illuminating error message in this case due to JavaScript's lack of type safety. For example, if I had forgotten to yield the promise that holds the contents of the file:

```js
let md = fs.readFile(filename + '.md');
let html = marked(md.toString());
```

Full Source

md would hold the promise object rather than the actual buffer containing the content of the file. Then on the next line, md.toString() will actually work and return the string "[object Promise]" because toString() is a method that all JavaScript objects have.
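You can see this for yourself without any of the file machinery - the object's default toString() kicks in silently (the variable name here is just for illustration):

```js
let md = Promise.resolve('# some markdown'); // imagine a forgotten yield
console.log(typeof md.toString); // 'function' - so no error is thrown
console.log(md.toString());      // '[object Promise]' - not your file contents
```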

If you forget to yield a generator function that doesn't return a value, or whose value you simply don't care about:

```js
md2html('README'); // nothing happens!
```

Full Source

That generator function never executes! Calling a generator function only creates a generator object - it doesn't run the function body.
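A stripped-down demonstration - the counter is only there to make the side effect observable:

```js
let started = 0;

function* task() {
  started++;                 // runs only when the generator is advanced
  yield Promise.resolve('done');
}

task();                      // creates a generator object; the body has NOT run
console.log(started);        // 0

task().next();               // .next() actually starts the body
console.log(started);        // 1
```

This is why yielding the generator object to co (which calls next() on your behalf) is essential.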

If you are hitting a JSON-based API and you forget to yield, for example:

```js
let result = request({
  url: 'https://api.github.com/repos/petkaantonov/bluebird',
  json: true,
  headers: { 'User-Agent': 'Script' }
});
let owner = result.owner.login;
```

Full Source

This will result in a TypeError: Cannot read property 'login' of undefined, because a promise object doesn't have the property structure you are expecting.

Alas, we are still coding JavaScript!

Are Coroutines Right For You?

I have been using this approach in Node.js for standalone command-line scripts, app servers, and end-to-end browser tests using Selenium. Not having to write the standard Node-style callback pattern and error-handling code every time I perform an IO operation has really helped me reduce mental and typing overhead.

I think this approach is particularly appealing if you primarily do server-side programming. You can also use this approach in the browser in a way that works cross-browser if you use the Babel compiler with the regenerator runtime - although I have not done this extensively.

One reason some people might shy away from adopting this technique now is the upcoming async/await feature - essentially a small layer of syntactic sugar on top of generators. async/await is an ES7 feature, but you can already use it today via the Babel compiler. I have not taken this route myself because I take comfort in staying close to the metal and prefer not to use code transpilers. In general, I feel more comfortable when there is a bit of distance between me and the bleeding edge.
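For a taste of the correspondence, here is the "big idea" expressed with async/await - await plays the role of yield, and async function plays the role of the coroutine wrapper (co or bluebird.coroutine):

```js
async function main() {
  let value = await Promise.resolve(42); // like `yield promise` in a coroutine
  return value + 1;
}

// An async function returns a promise, just like a co.wrap()-ed function does:
main().then(function (v) {
  console.log(v); // 43
});
```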

At the end of the day, try it and see.

Next Up

In the near future I will have more to say about server programming and testing in this paradigm.