Since I discovered GraphQL I have been in love with it, especially because of client libraries like apollo-client. However, as with every technology, there are trade-offs (see this article: https://dev-blog.apollodata.com/graphql-vs-rest-5d425123e34b). One of the biggest trade-offs, in my opinion, is the lack of CDN & HTTP caching. Fortunately, the apollo-client team came up with some solutions for that: Apollo Engine and Automatic Persisted Queries. In combination with Firebase Cloud Functions, they can solve all of the caching problems.

Drawbacks of Apollo Engine

As I am writing this article (27/06/2018), Apollo Engine is not really suited for use on a serverless platform like AWS Lambda or Google/Firebase Cloud Functions. The main drawback is that you need to host a proxy server, which introduces an extra node in the infrastructure and thus creates a single point of failure.

The problem lies in how apollo-server handles requests (at least apollo-server-express). Currently, whenever a request comes in from a client, a single Express middleware parses the query, resolves it, and finally sends the response back to the client. This leaves only one option to pre-process and post-process the request: yes, using a proxy…

Fortunately, we can split the apollo-server-express package into several Express middlewares, which gives us the power to pre- and post-process the request.

Let's write some code

It is time to code:

First, we need to modify the apollo-server-express package so that we can use our own custom Express middleware:
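The shape of that change can be sketched as follows. This is a minimal illustration, not the real apollo-server-express internals: `runQuery` below is a hypothetical stand-in for Apollo's query executor.

```javascript
// Hypothetical stand-in for Apollo's internal query executor,
// used only so this sketch is self-contained.
const runQuery = (schema, body) => ({ data: {} });

// Modified graphqlExpress: instead of sending the result to the client,
// stash it on `res` and hand control to the next middleware in the chain.
function graphqlExpress(options) {
  return (req, res, next) => {
    const gqlResponse = runQuery(options.schema, req.body);
    // store the result instead of calling res.send(gqlResponse)
    res['gqlResponse'] = gqlResponse;
    next();
  };
}
```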

The main difference is in these lines:

```javascript
gqlResponse => {
  res['gqlResponse'] = gqlResponse;
  next();
},
```

Instead of sending the gqlResponse to the client directly, we store it in the res['gqlResponse'] variable and move on to the next middleware in the chain.

Let's now create the final middleware, which is responsible for sending the response to the user:
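A minimal sketch of what that middleware could look like, assuming the previous middleware has stored a serializable result on res['gqlResponse']:

```javascript
// Final middleware in the chain: pick up the result stored by the
// GraphQL middleware and send it to the client.
function graphqlResponseHandler(req, res, next) {
  if (res['gqlResponse'] !== undefined) {
    res.setHeader('Content-Type', 'application/json');
    res.send(JSON.stringify(res['gqlResponse']));
  } else {
    // nothing was resolved; let the error handler deal with it
    next(new Error('no GraphQL response was produced'));
  }
}
```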

And replace the default setup (assuming you already have a body-parser middleware registered earlier):

```javascript
app.use('/graphql', graphqlExpress({ schema: myGraphQLSchema }));
```

With

```javascript
app.use(
  '/graphql',
  // resolve the request
  graphqlExpress({
    schema,
  }),
  // send the response
  graphqlResponseHandler
);
```

Done.

Let's unleash the beast

The best part of this design is that we can add middleware both before GraphQL processes the request and before the server sends the response: a perfect technique for implementing caching, tracing, tracking, etc.

In the end, the endpoint will chain these middlewares: getFromCacheIfAny, graphqlExpress, storeInCache, extensionsFilter, and graphqlResponseHandler.

storeCache is an LRU caching library, but it could also be a Redis database.

All the middlewares are defined here:

A quick explanation about caching "POST" requests

First, if you use POST requests, CDN caching won't work. What's left is response caching on the server.

The trick is to generate a unique hash key from the POST body. This is done by calling the hashPostBody function.

When a POST request arrives at the server, the getFromCacheIfAny middleware checks whether the query is cached (line 101). If it is, it sends the response from the cache. Otherwise, it continues to the next middleware, which is graphqlHandler. The GraphQL server resolves the query and stores the result in the res["gqlResponse"] variable, as defined above.

Then the storeInCache middleware kicks in: here we hash the POST body again, check the maxDuration of the whole query (which is explained here: https://github.com/apollographql/apollo-cache-control), and store the result in the cache for that duration.

Afterwards, we clean up the extensions with the extensionsFilter middleware (you can do some logging here too). And finally, we send the response back.
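extensionsFilter could be sketched like this, assuming res['gqlResponse'] is still a plain object at this point in the chain:

```javascript
// Strip the cache-control extension before the response goes out;
// this is also a convenient place to log the collected cache hints.
function extensionsFilter(req, res, next) {
  const response = res['gqlResponse'];
  if (response && response.extensions) {
    delete response.extensions.cacheControl;
    // drop the extensions key entirely if nothing is left in it
    if (Object.keys(response.extensions).length === 0) {
      delete response.extensions;
    }
  }
  next();
}
```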

Let's create a rocket

Now it is time to set up CDN caching. To use CDN caching, we need to send GET requests. In summary, apollo-client hashes the query (like we do for the POST requests) and sends the hash. The server checks whether the hash is in the cache; if so, it sends the cached data, otherwise it resolves the query and stores the result in the cache under the hashed key (which is also sent by apollo-client). An in-depth article and setup instructions can be found here: https://www.apollographql.com/docs/engine/auto-persisted-queries.html (just skip the Engine part).

Free CDN caching powered by Firebase

Firebase is a great platform for hosting serverless back-ends. And the best part: you get free CDN caching if you use Firebase Cloud Functions. See the details here: https://firebase.google.com/docs/hosting/functions#manage_cache_behavior

Enabling CDN caching on Firebase Cloud Functions takes only ONE line:

```javascript
// set CDN caching
// line 120
res.setHeader('Cache-Control', `public, max-age=${durationLeft}, s-maxage=${durationLeft}`);
```

Drawbacks of Firebase Cloud Functions: the cold starts…

However, you are not tied to Firebase or any other serverless service.

Conclusion

We have created a fully functional GraphQL server which runs on Firebase Cloud Functions and has built-in in-memory & CDN caching. And the best part: IT IS FREE (until you reach 2 million requests a month; CDN responses do not count).

The full code base can be found here: https://github.com/Rusfighter/firegraph

To run it locally:

```shell
cd functions && npm install && npm run watch
```

To deploy to Firebase, check out the Firebase Cloud Functions docs.

Final Words

As I said in the disclaimer, this is my first article on Medium, and my English writing skills are not as good as I want them to be, so if you have any remarks or need more explanation, go ahead and leave a comment :).

Bonus

To improve cold start performance, we should lazy-load only the modules a specific cloud function actually needs. To do this, we create a helper function:
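A minimal sketch of such a helper (the name lazyRequire is my own):

```javascript
// Cache of already-loaded modules, so each one is require()d at most once.
const loadedModules = {};

// Load a module on first use instead of at the top of the file, so a cold
// start only pays for the modules the triggered function actually needs.
function lazyRequire(moduleName) {
  if (!loadedModules[moduleName]) {
    loadedModules[moduleName] = require(moduleName);
  }
  return loadedModules[moduleName];
}

// Inside a specific cloud function handler:
// const graphql = lazyRequire('graphql');
```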

So when a function is triggered, we lazy-load only the modules it needs. This results in much better cold start performance, especially when you depend on a lot of modules. For more improvement tips, see: https://firebase.google.com/docs/functions/tips#performance