Properly measuring request duration in a Node.js + Express application

Hello Node.js lovers, in this post I’ll describe how to properly measure request duration in a Node.js + Express application. This topic is not new, but after googling for a while I found that even accepted answers on StackOverflow are outdated and don’t measure request duration in an optimal way. By the end of this post, you’ll have a generic middleware that measures the duration of every request in your app, even a terminated one.

First of all, let’s make some assumptions. By measuring request duration I mean measuring only the time your app’s code needs to process a request. I ignore the time spent on DNS lookup and inside the TCP/IP stack, since you can’t really affect them. It’s better to direct your efforts at your app’s performance, which you can improve significantly.

In terms of the Express framework, we will measure the time of the request-response cycle. Let’s start with simple boilerplate code that has a fast endpoint ( /fast/ ) which responds immediately and a slow one ( /slow/ ) which responds after 10 seconds.

```javascript
const app = require('express')()

// send response immediately
app.get('/fast/', (req, res) => {
  res.sendStatus(200)
})

// mock heavy load, send response after 10 seconds
app.get('/slow/', (req, res) => {
  setTimeout(() => res.sendStatus(200), 10 * 1000)
})

app.listen(3000, () => {
  console.log('Server started')
})
```

With this boilerplate, we’re ready to dig into the matter of requests duration measuring.

The start point

The only legitimate way to capture every request in the Express framework is to implement a middleware function and register it before any other handlers, so it is the first to handle an incoming request. In other words, it will be our starting point for measuring a request’s duration. Let’s add a middleware function that logs the request URL and the HTTP verb.

```javascript
app.use((req, res, next) => {
  console.log(`${req.method} ${req.originalUrl}`)
  next()
})
```

And make a request to our test app:

```
> curl localhost:3000/fast/
GET /fast/
```

The end point

When does the request-response cycle end? The first and obvious answer is: when the response has been sent. Node.js emits an event on the response object when that happens, which you subscribe to with res.on('finish', ...) .

99% of software developers use only the “finish” event, but I should note that not every request reaches the “finish” state: in real life a client can close the connection before the response has been sent. In that case Node.js emits only the “close” event, via res.on('close', ...) .

Personally, one of my clients was constantly complaining that my API was not responding properly, while everything was fine for other clients. After a few hours of digging into the problem I found that he had misconfigured his timeouts, so he was closing his connections before the response was sent. If I had used the “close” hook, I would have nailed it in seconds, not hours.

Let’s add both hooks to our sample middleware and capture the “finish” and the “close” events:

```javascript
app.use((req, res, next) => {
  console.log(`${req.method} ${req.originalUrl} [STARTED]`)

  res.on('finish', () => {
    console.log(`${req.method} ${req.originalUrl} [FINISHED]`)
  })

  res.on('close', () => {
    console.log(`${req.method} ${req.originalUrl} [CLOSED]`)
  })

  next()
})
```

And make a few test requests:

```
> curl localhost:3000/fast/
GET /fast/ [STARTED]
GET /fast/ [FINISHED]
```

Notice that I added the -m 1 argument to the curl command to close the connection after 1 second:

```
> curl localhost:3000/slow/ -m 1
GET /slow/ [STARTED]
GET /slow/ [CLOSED]
```

Time measurement

Now that we have the start and end points of the request-response cycle, let’s measure the time between them.

The most popular approach, found in many StackOverflow answers, is to use the JavaScript Date class (e.g. Date.now() or new Date().getTime() ). However, it has several disadvantages: it is limited to millisecond resolution, which makes it hard to measure code that executes in a fraction of a millisecond, and it is vulnerable to clock drift, because new Date() relies on the system clock. For instance, if your system clock gets adjusted by an automatic clock-adjustment job (which happens regularly in operating systems; google for “NTP”), the elapsed time between two measurements may be incorrect.
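For contrast, here is a minimal sketch of that Date-based pattern (an illustration of what those answers do, not code to copy; doSomeWork is just a stand-in for a fast operation):

```javascript
// Naive timing with the Date class: integer milliseconds from the wall clock.
const doSomeWork = () => [...Array(1000).keys()].reduce((a, b) => a + b, 0)

const start = Date.now()
doSomeWork() // completes in well under a millisecond
const elapsedMs = Date.now() - start

// elapsedMs is an integer, so a sub-millisecond operation usually reads as 0.
// Worse, an NTP clock adjustment between the two Date.now() calls would skew
// (or even negate) the result, because Date uses the system clock.
console.log(`${elapsedMs} ms`)
```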

So, to measure time we will use Node.js’s high-resolution timer, process.hrtime() . Let’s capture the time at the start of the request-response cycle and calculate the difference at its end. I created a small helper function, getDurationInMilliseconds , to translate the high-resolution result into milliseconds. Check the modified middleware code below.

```javascript
const getDurationInMilliseconds = (start) => {
  const NS_PER_SEC = 1e9
  const NS_TO_MS = 1e6
  const diff = process.hrtime(start)

  return (diff[0] * NS_PER_SEC + diff[1]) / NS_TO_MS
}

app.use((req, res, next) => {
  console.log(`${req.method} ${req.originalUrl} [STARTED]`)
  const start = process.hrtime()

  res.on('finish', () => {
    const durationInMilliseconds = getDurationInMilliseconds(start)
    console.log(`${req.method} ${req.originalUrl} [FINISHED] ${durationInMilliseconds.toLocaleString()} ms`)
  })

  res.on('close', () => {
    const durationInMilliseconds = getDurationInMilliseconds(start)
    console.log(`${req.method} ${req.originalUrl} [CLOSED] ${durationInMilliseconds.toLocaleString()} ms`)
  })

  next()
})
```

The results:

```
> curl localhost:3000/fast
GET /fast [FINISHED] 6.951 ms
> curl localhost:3000/slow
GET /slow [FINISHED] 10,007.288 ms
```
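As a side note, on Node.js 10.7 and newer the same helper can be written with process.hrtime.bigint() , which returns a single monotonic nanosecond count as a BigInt instead of a [seconds, nanoseconds] tuple. A sketch of that variant:

```javascript
// BigInt variant of the helper: one subtraction instead of tuple arithmetic.
const getDurationInMilliseconds = (start) =>
  Number(process.hrtime.bigint() - start) / 1e6

// Usage inside the middleware would be identical, except the start point
// is captured with process.hrtime.bigint() instead of process.hrtime():
const start = process.hrtime.bigint()
// ... request handling happens here ...
const durationInMilliseconds = getDurationInMilliseconds(start)
```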

Sum up

That’s it. We created a middleware that captures all requests and logs the duration of every request with nanosecond precision. You can find the full code of this sample app on this GitHub page.