How Autocannon became fast.

If you’re interested in Node.js, the following paragraphs explain how this speed is achieved; otherwise, feel free to skip to the next section.

As previously mentioned, Autocannon is fast: 50K+ requests per second fast. While it is built on Node.js, it has been observed to be faster than wrk, a benchmarking tool written in C. This is down to a number of deliberate implementation decisions made to optimise Autocannon. Autocannon itself is built on a custom HTTP/S client, which is where most of these optimisations live. It is believed that the only route to optimise Autocannon further would be to work within the underlying networking stack of the OS.

One of the key optimisations in Autocannon is that everything is built within the JavaScript domain, meaning there are no native dependencies. While native dependencies can be very powerful for heavy compute tasks, they come with non-negligible overhead every time the runtime must cross from JavaScript land into native land. In an application making many such calls, this overhead stacks up significantly. Additionally, staying within the JavaScript domain enables the code paths to be optimised by the JS engine’s optimising compiler. This StackOverflow answer by a V8 developer covers this quite well. When Autocannon was created, there were two primary places where native dependencies could be a slowdown due to the number of calls between the JavaScript and native domains: the HTTP client/parser, and the library used for tracking the histogram of benchmarked values.

The HTTP parser was an issue because, for every request made, the response had to be transferred to the native dependency, parsed, and the parsed values then returned to JavaScript land. The HTTP parser that Node.js uses in its native ‘http’ library is a dependency created specifically for that use case. While it is powerful, it was decided to avoid it for the reason above, which meant writing a custom HTTP client around a JS-based HTTP parser. The parser in use is http-parser-js, a pure-JavaScript library created for exactly this use case.
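To illustrate why parsing in JavaScript keeps everything on the engine’s fast path, here is a deliberately minimal sketch of parsing an HTTP/1.1 response status line, headers, and body in plain JS. This is a toy for illustration only, not http-parser-js’s actual API or Autocannon’s client:

```javascript
// Toy HTTP/1.1 response parser, entirely in JavaScript: no buffer ever
// crosses into native code, so the engine can optimise the whole path.
// (Illustrative only; Autocannon relies on http-parser-js instead.)
function parseResponse(raw) {
  const headerEnd = raw.indexOf('\r\n\r\n');
  const head = raw.slice(0, headerEnd).split('\r\n');
  const [, statusCode] = head[0].split(' '); // e.g. "HTTP/1.1 200 OK"
  const headers = {};
  for (const line of head.slice(1)) {
    const i = line.indexOf(':');
    headers[line.slice(0, i).toLowerCase()] = line.slice(i + 1).trim();
  }
  return {
    statusCode: Number(statusCode),
    headers,
    body: raw.slice(headerEnd + 4),
  };
}
```

A real parser must additionally handle chunked encoding, partial buffers, and malformed input, which is exactly what http-parser-js provides while staying in JavaScript.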

The histogram tracking library was less of an issue, but it was still a consideration due to the number of calls made into it to track benchmarked data. Initially, Autocannon was built on top of native-hdr-histogram, which exposed the C library’s bindings to Node.js. After some time, the HDR Histogram community built and released a TypeScript-based version that compiles to plain JavaScript, so Autocannon was quickly migrated to HdrHistogramJS.
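On the hot path, a histogram’s job is essentially one increment per recorded latency. The following is a minimal fixed-bucket sketch of that idea, not HdrHistogramJS’s API (which uses logarithmic bucketing for far better precision/memory trade-offs):

```javascript
// Minimal fixed-bucket latency histogram (one bucket per unit of latency).
// Purely illustrative: HdrHistogramJS does this job properly, but the
// hot-path shape is the same — an increment with no native boundary crossing.
class SimpleHistogram {
  constructor(maxValue) {
    this.counts = new Uint32Array(maxValue + 1);
    this.total = 0;
  }
  // Called once per response: bounds-clamp and increment.
  record(value) {
    this.counts[Math.min(value, this.counts.length - 1)]++;
    this.total++;
  }
  // Walk the buckets until the cumulative count reaches the target rank.
  percentile(p) {
    let seen = 0;
    const target = Math.ceil((p / 100) * this.total);
    for (let v = 0; v < this.counts.length; v++) {
      seen += this.counts[v];
      if (seen >= target) return v;
    }
    return this.counts.length - 1;
  }
}
```

Because `record` is called once per completed request, keeping it as cheap, allocation-free JavaScript matters at 50K+ requests per second.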

Another key optimisation in Autocannon is its use of HTTP pipelining. This allows a single connection to carry multiple concurrent requests, so connections are not constantly dropped and reopened as they might otherwise be. Think HTTP keep-alive, but for multiple requests and responses in flight at once.
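The bookkeeping behind pipelining can be sketched in a few lines. This is a heavily simplified assumption-laden model, not Autocannon’s real client: it relies on the fact that HTTP/1.1 responses on one connection arrive in request order, so each parsed response completes the oldest in-flight request.

```javascript
// Sketch of the FIFO bookkeeping behind HTTP pipelining (simplified;
// a real client also handles socket writes, timeouts, and errors).
class PipelinedConnection {
  constructor(pipelining) {
    this.pipelining = pipelining; // max requests in flight at once
    this.inflight = [];           // callbacks awaiting responses, FIFO
  }
  request(callback) {
    if (this.inflight.length >= this.pipelining) return false; // saturated
    // A real client would write the serialised request to the socket here.
    this.inflight.push(callback);
    return true;
  }
  onResponse(response) {
    // Responses are matched to requests purely by arrival order.
    const callback = this.inflight.shift();
    callback(response);
  }
}
```

With `pipelining` set to 2, for example, two requests can be outstanding on the connection before any response arrives, and a third is refused until one of them completes.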