You don’t need anything fancy to load some data from a GraphQL server. All you need for a basic client is a POST request that sends a query down to the server and gets a result back. However, GraphQL queries include a lot of useful information that can be used to make your application faster and more efficient. This is why we have smart client implementations, such as Apollo Client and Relay, that take advantage of GraphQL’s query structure to do useful things.
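As a concrete illustration of how little a basic client needs, here is a minimal sketch that builds such a POST request. The `buildRequest` helper and the `/graphql` endpoint are illustrative assumptions, not part of any library's API:

```javascript
// A minimal GraphQL "client" is just an HTTP POST. This sketch builds
// the request options for fetch; buildRequest is an illustrative
// helper, not an existing API.
function buildRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // A standard GraphQL request body: the query string plus its variables.
    body: JSON.stringify({ query, variables }),
  };
}

// Usage, in an environment that provides fetch:
// fetch('/graphql', buildRequest('{ author { firstName } }', {}))
//   .then((res) => res.json())
//   .then((result) => console.log(result.data));
```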

Smart GraphQL clients can make interaction with the server more efficient by reducing the number of roundtrips to fetch data. A common way to reduce server roundtrips is through caching, because the fastest way to load something is to already have it. But it turns out that caching isn’t the only thing that can make server communication more efficient.

What is query batching?

GraphQL query batching means sending multiple queries to the server in one request, which can have some pretty significant benefits:

- Reducing the total number of server roundtrips, which can incur significant overhead in HTTP 1.x.
- When queries are sent in one request, you can use DataLoader to batch loading the actual data from your backend API or database.
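To make the second point concrete, here is a toy version of the batch-loading pattern that DataLoader implements. This is an illustrative sketch, not the `dataloader` package's actual API; a real loader would schedule `dispatch` automatically at the end of the current tick:

```javascript
// Individual load(key) calls are collected, then sent to one batch
// function together; each caller still gets its own promise back.
function createBatchLoader(batchFn) {
  let keys = [];
  let callbacks = [];

  return {
    // Collect a key and hand back a promise for its value.
    load(key) {
      return new Promise((resolve) => {
        keys.push(key);
        callbacks.push(resolve);
        // A real loader would schedule dispatch() here automatically,
        // e.g. via process.nextTick.
      });
    },
    // Send all collected keys to the batch function in one call, then
    // resolve each pending promise with its corresponding value.
    dispatch() {
      if (keys.length === 0) return;
      const batchKeys = keys;
      const batchCallbacks = callbacks;
      keys = [];
      callbacks = [];
      batchFn(batchKeys).then((values) =>
        batchCallbacks.forEach((resolve, i) => resolve(values[i]))
      );
    },
  };
}
```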

When you load your UI on the client, it might fire several queries in a short period of time to put together its initial state. One simple strategy to improve this without changing any of the UI code is to batch together requests made within a small time interval. This way, all of the data can be loaded in one roundtrip, without any extra effort.

While updating the UI for Meteor Galaxy to use Apollo Client, we realized how desirable it is to have query batching in a production app. So, we implemented automatic query batching in Apollo Client.

In this post, we’ll first take a look at how we can use batching and then we’ll open up the box and see how it is implemented. We’ll also consider some next steps we can take to make batching even more awesome in GraphQL.

Using batching

We wanted to ensure that an application developer using Apollo Client wouldn’t have to do anything extra to get batching to happen; you should get it “for free”. For example, say you have two UI components that each fire off a couple of GraphQL queries with Apollo Client:

// these queries may be loaded from two different
// places within your code.
client.query({ query: query1 })
client.query({ query: query2 })

Batching is turned off in Apollo Client by default. To turn it on, all you have to do is set the shouldBatch option in the constructor:

const client = new ApolloClient({
  // ... other options ...
  shouldBatch: true,
});

And that’s it! Now the two queries above will be sent in one request.

Batching using time intervals

The query batcher operates on “ticks” — it checks a queue every 10 milliseconds to see if there are any pending queries. If there are multiple queries in the queue, they are combined into one server request. So, as long as the two queries above are fired within the same 10-millisecond tick of the batcher, they’ll be sent as one request, like so:

Query batcher queue

Notice that your application code can remain completely oblivious to this batching. Other than providing an additional option to the ApolloClient constructor, you don’t have to change anything else about your application to start getting the benefits of batching. Check out the docs for a slightly closer look at the semantics of batching and how you can use it with custom network interface implementations.
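The tick-based behavior described above can be sketched as a small queue consumer. This is a simplified illustration, not Apollo Client's actual internals; `fetchBatch` stands in for a network interface that sends an array of requests in one roundtrip:

```javascript
// A minimal sketch of a tick-based query batcher: queries enqueued
// within the same interval are sent as one batch, and each caller's
// promise is resolved with its own slice of the batched result.
class QueryBatcher {
  constructor(fetchBatch, intervalMs = 10) {
    this.fetchBatch = fetchBatch;
    this.intervalMs = intervalMs;
    this.queue = [];
  }

  // Enqueue a query and hand back a promise for its individual result.
  enqueue(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
    });
  }

  // Drain the queue: send everything collected this tick as one batch,
  // then settle each query's promise from the array of results.
  consumeQueue() {
    if (this.queue.length === 0) return;
    const pending = this.queue;
    this.queue = [];
    this.fetchBatch(pending.map((item) => item.request)).then(
      (results) => results.forEach((result, i) => pending[i].resolve(result)),
      (error) => pending.forEach((item) => item.reject(error))
    );
  }

  // Check the queue on every tick.
  start() {
    this.timer = setInterval(() => this.consumeQueue(), this.intervalMs);
  }

  stop() {
    clearInterval(this.timer);
  }
}
```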

A look inside

Let’s take a look at how this works. Here’s how the pieces fit together at a high level:

Batching architecture

Let’s consider a case where we fetch two queries, one after another. Both of the queries will go through the QueryManager and will be placed into the QueryBatcher’s queue. Then, during the next tick of the QueryBatcher, these queries will be placed into an array and passed along to an instance of the BatchedNetworkInterface. Once the results for both of the queries have returned from the server, the QueryBatcher will resolve the promises issued for both of the queries.

The important bit here isn’t actually the QueryBatcher: that just acts as a consumer on a queue. Rather, it is the implementation of a BatchedNetworkInterface that batches together multiple requests in a way that works with any spec-compliant GraphQL server.

Batching with no special server support

All GraphQL servers expose a way for us to submit a single query and receive a single result in return. However, there is generally no built-in support for submitting multiple queries and receiving a result for each of them in a single roundtrip. So, we need a technique that turns multiple queries into a single query, submits it to the server, receives a single result, and then unpacks it into individual results for each of the queries originally submitted. This is exactly what the batched network interface in Apollo Client does for you.

It does this by using a pretty simple concept called query merging. It involves taking multiple queries and putting them all under a single root query. As a simple example, say we’re telling the BatchedNetworkInterface to batch these two queries:

query firstQuery {
  author {
    firstName
    lastName
  }
}

query secondQuery {
  fortuneCookie
}

Then, we can imagine a simple approach to merging that produces the following query:

query ___composed {
  author {
    firstName
    lastName
  }
  fortuneCookie
}

However, the devil is in the details. We’ve picked nice, non-conflicting names in our two example queries, but we have no guarantee of that being the case when we are attempting to merge arbitrary queries. So, we turn to a nice feature of GraphQL: aliasing. Basically, this allows us to refer to a field with a different name and the server will use this name when returning the result. For example, given the following query:

query ___composed {
  aliasName: author {
    firstName
    lastName
  }
}

The server will return a result that looks like the following:

{
  "aliasName": {
    "firstName": "John",
    "lastName": "Smith"
  }
}

Using this feature of GraphQL, we can step through the AST of a query and rename all of the top-level fields, inline fragments and named fragments in a way that makes sure that they will never conflict once merged. We also rename variables since two queries can definitely refer to a variable that takes on different values in each query (e.g. a variable like id). When we apply this query merging to the queries we just mentioned, we get the following composed query:

query ___composed {
  ___firstQuery___requestIndex_0___fieldIndex_0: author {
    firstName
    lastName
  }
  ___secondQuery___requestIndex_1___fieldIndex_0: fortuneCookie
}

This query is then sent down to the server and we get a result that looks like this:

{
  "___firstQuery___requestIndex_0___fieldIndex_0": {
    "firstName": "John",
    "lastName": "Smith"
  },
  "___secondQuery___requestIndex_1___fieldIndex_0": "No snowflake in an avalanche ever feels responsible"
}

This does look a bit ugly, but your frontend code will never see any of this. The network interface will automatically unpack this result into the results you’d expect for the queries you originally submitted:

{
  "author": {
    "firstName": "John",
    "lastName": "Smith"
  }
}

{
  "fortuneCookie": "No snowflake in an avalanche ever feels responsible"
}

The aliasing allows us to establish a one-to-one relationship between the fields returned by the server and the fields in the query originally submitted, allowing the network interface to unpack the result correctly.

This style of query merging can work with any GraphQL server since it only uses what is available in the GraphQL specification. The merged queries aren’t the prettiest, but Apollo Client handles this for you and allows your UI to render with a single roundtrip.
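The aliasing-and-unpacking scheme above can be sketched with a toy representation. Real query merging walks the GraphQL AST; here, purely for illustration, each query is a plain object with a name and a list of top-level field strings, and none of these helpers are Apollo Client's actual API:

```javascript
// Build the collision-proof alias for one top-level field.
function aliasFor(queryName, requestIndex, fieldIndex) {
  return `___${queryName}___requestIndex_${requestIndex}___fieldIndex_${fieldIndex}`;
}

// Merge several queries under a single root query, aliasing every
// top-level field so names can never conflict.
function mergeQueries(queries) {
  const fields = queries.flatMap((q, requestIndex) =>
    q.fields.map(
      (field, fieldIndex) =>
        `${aliasFor(q.name, requestIndex, fieldIndex)}: ${field}`
    )
  );
  return `query ___composed { ${fields.join(' ')} }`;
}

// Split the merged result back into one result object per query by
// reading the requestIndex and fieldIndex out of each aliased key.
function unpackResult(mergedResult, queries) {
  const results = queries.map(() => ({}));
  for (const [key, value] of Object.entries(mergedResult)) {
    const match = key.match(/^___(.+)___requestIndex_(\d+)___fieldIndex_(\d+)$/);
    const requestIndex = Number(match[2]);
    const fieldIndex = Number(match[3]);
    // Recover the original field name (the text before any selection set).
    const fieldName = queries[requestIndex].fields[fieldIndex].split(/[\s({]/)[0];
    results[requestIndex][fieldName] = value;
  }
  return results;
}
```

Running `mergeQueries` on the two example queries from earlier produces the same `___composed` query shown above, and `unpackResult` reverses the process on the server's response.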

Transport-level batching

Although query merging is very effective, it has a few downsides. Primarily, if you’re trying to debug stuff from the server’s point of view, you’ll see queries that look very different from what you initially fired on the client, which could make it harder to debug your queries. One potential solution is described by Lee Byron, one of the creators of GraphQL, in his talk about new GraphQL features: batching at the network transport level.

In most current GraphQL servers, requests are sent in the following form:

{
  "query": "< query string goes here >",
  "variables": { < variable values go here > }
}

The GraphQL server then resolves the query string and returns a single result.

Instead, imagine we submitted a request that looked like this:

[
  {
    query: < query 0 >,
    variables: < variables for query 0 >,
  },
  {
    query: < query 1 >,
    variables: < variables for query 1 >,
  },
  ...
  {
    query: < query n >,
    variables: < variables for query n >,
  }
]

And the server would return a response that looks like:

[
  <result for query 0>,
  <result for query 1>,
  ...
  <result for query n>
]

This is another way of accomplishing what we are currently doing with query merging: fetching the results for multiple queries in a single roundtrip. However, this is much better, since the queries you see on the server will look exactly as they do on the client.
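On the client side, transport-level batching amounts to packing pending requests into one array payload. Here is a minimal sketch under the assumption of a server that accepts such an array and replies with results in the same order; `buildBatchPayload` is an illustrative helper, not an existing API:

```javascript
// Pack several { query, variables } requests into one JSON array body,
// preserving order so results can be matched back to their queries.
function buildBatchPayload(requests) {
  return JSON.stringify(
    requests.map(({ query, variables }) => ({ query, variables }))
  );
}

// Usage, assuming a server that understands batched requests:
// fetch('/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildBatchPayload(pendingRequests),
// }).then((res) => res.json()); // an array: one result per query, in order
```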

Transport-level batching is easier to debug than query merging, but it requires additional server support. The server has to know how to process an array of queries and respond with an array of results. We will soon be implementing support for this in Apollo Server, and hopefully the GraphQL community can work together to come up with a standard approach that can be used with all GraphQL server implementations.

In the meantime, just passing an additional option to the Apollo Client constructor will let you load your UI in a single roundtrip rather than a dozen, without needing to put in any effort to manually merge queries using fragments. Then, once transport-level batching is built into most GraphQL servers, you’ll be able to reap these benefits while still getting nice-looking queries in your server logs.

Want to try this out? Check out Apollo Client. And follow us on Medium for more GraphQL and Apollo content!