“Bridge it until you make it.”

Because GraphQL, and especially the Apollo client suite, is so convenient to use, you naturally want to develop your frontend apps with GraphQL.

But the server you have to communicate with is… REST-ish.

Ahhhhh…

Do not get stuck in endless discussions about adding a GraphQL server. There surely are (objective, subjective, or straight irrational) reasons why it cannot be done right now.

So instead,

go ahead and build your frontend with GraphQL anyway: use a GraphQL-to-REST bridge to make it happen elegantly, and show the benefits to lay the groundwork for your future GraphQL server.

That is the positive way to go, right?

Let’s do it.

We want to write the app as if we were facing a GraphQL server.

To make that happen, we need to write a layer that maps the GraphQL queries and mutations to the REST API.

— Intermezzo 1:

Why bother with GraphQL? This is subjective, but I’ll name some of my favorite points:

ask for what you want (and cascade deep down) in one request

no need to think about normalization, the store, and dispatching actions

enjoy a type system that helps you think about the app and its data rather than the implementation

If you are a manager not (so much) interested in making your programmers’ lives easier but chasing other benefits… really, are you? Hmmm… imagine: the frontend code will be much more future-proof. Once you realize that GraphQL is here to stay, and the requirement to actually implement a GraphQL server emerges, your frontend clients will be ready, and a lot of the work on the schema and resolvers will already be done. And that is a win!

So give your developers the green light to do it if they want to.

… end of intermezzo

Should I add some image? Ok, it’s here…

The key points here are:

You will develop your (web) app exactly the same way as if you were facing a GraphQL server.

The GraphQL schema is the important part. You develop against it, and you have an advantage: you get to define it. You are the architect. A lot of the work lives in the schema, and once the decision to build a GraphQL server is made, your schema is already there to start with.

The server will have the same resolvers, though they will of course act differently (resolving to DB queries or similar instead of REST calls). But even on the future GraphQL server, adoption may be gradual, reusing old code from the REST API.

The only thing you will have to change once you face a real GraphQL server is to swap one Apollo link for another and delete a lot of code from the frontend. A smaller and faster app, no rewrite needed… hooray!

How easy is it?

I’d say easy and fun, of course. A friend of mine says: it depends on how shitty the current server’s API is.

I made a small library (apollo-bridge-link) and put together a detailed working example with a REST API, authorization, and dataloaders.

To show you that it is possible. That it is not a hack and that it works pretty well.

In 3 minutes you can have the example working and hack the code the way you want.

Now we will go into technical detail with a lot of code. If you are not a developer, this may not interest you, but I hope you leave with the impression that a GraphQL frontend can be built even against a current REST-like API, and that this is usually a good choice.

Working example

The code is at https://github.com/dacz/apollo-bridge-link-example.

(updated Dec 29th 2017)

It has a working REST server (JSON based, no DB needed), a working frontend, an Apollo client with the bridge link, resolvers with dataloaders for batching REST queries, and token-based authorization.

I suppose you know what GraphQL is (at least a bit); otherwise you would probably not be reading this article.

What is it about?

No, it is not a ToDo app.

There are authors and posts. A post has one author; an author may have multiple posts. You can create a new post (by default, it is created as user u1).

Parts

schema

graphQL client (ApolloClient)

frontend (React & Recompose)

REST API server (json-server)

resolvers (to map queries to REST)

dataloader to batch REST queries

authentication

Schema

We start with the schema.

BTW: very often the frontend/app developer has the best understanding of the client’s needs (clients usually want the frontend because that is what is visible, right?). So you are in control and can design the schema around the way you want to get the data.

Nothing special here.
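For orientation, the schema might look roughly like this. This is only a sketch: the exact field names are my assumptions, see schemaPlain.js in the repo for the real one.

```javascript
// schemaPlain.js (sketch) -- field names here are assumptions, not the repo's exact schema
const typeDefs = `
  type User {
    id: ID!
    name: String
    posts: [Post]
  }

  type Post {
    id: ID!
    title: String
    author: User
  }

  type Query {
    user(id: ID!): User
    users: [User]
    posts: [Post]
  }

  type Mutation {
    addPost(title: String!): Post
  }
`;

module.exports = typeDefs;
```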

We prepare the queries and mutations using GraphQL fragments:
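A sketch of what that looks like (shown as plain strings here for simplicity; in the app they are wrapped with gql from graphql-tag, and the exact fields are assumptions):

```javascript
// queries.js (sketch) -- the repo wraps these strings with gql from graphql-tag
const postFields = `
  fragment PostFields on Post {
    id
    title
  }
`;

// the fragment is reused by interpolating it into each query/mutation
const getUserQuery = `
  query getUser($id: ID!) {
    user(id: $id) {
      id
      name
      posts {
        ...PostFields
      }
    }
  }
  ${postFields}
`;

module.exports = { postFields, getUserQuery };
```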

Apollo client

I use Apollo client 2.0 here. The setup is slightly different from the official how-to: we don’t use apollo-http-link but (my) apollo-bridge-link. The cache is the standard in-memory cache.

apollo-bridge-link intercepts your queries and processes them in resolvers, the same way a GraphQL server would.

The important part: the createBridgeLink parameters are:

schema

resolvers: we will discuss them later

mock: true/false if you want to mock resolvers and data
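Put together, the client wiring might look roughly like this (a sketch: the import paths, export names, and file names are my assumptions based on the article; check apolloClient.js in the repo for the real wiring):

```javascript
// apolloClient.js (sketch) -- names and paths are assumptions
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { createBridgeLink } from 'apollo-bridge-link';
import { schema } from './schema';
import { resolvers } from './resolvers';

export const client = new ApolloClient({
  // the bridge link replaces apollo-http-link and runs the resolvers client-side
  link: createBridgeLink({
    schema,
    resolvers,
    mock: false, // flip to true to serve mocked data from the schema
  }),
  cache: new InMemoryCache(),
});
```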

Let’s talk about the “context” thing here.

The context is an object that is available while the GraphQL request is being processed. If you are familiar with GraphQL on the server, this is the same context.

We use the Apollo context library and pass into it a standard object (which looks useless but is not) and the dataloaders. We can (and will) modify the context object during the app’s lifecycle to pass down the authorization token (committing a deadly sin in the functional heaven: ‘Thou shalt not mutate objects’; but even the recommended usage, where the token is read from local storage, is not pure).

We will talk about dataloaders later; in the context we simply create new ones for each request.

Frontend (React and Recompose)

This post is not about React; you write your app exactly the same way as if you were talking to a GraphQL server.

Side note: I prefer using functional components and Recompose. The day I found it I started to like React even more. For me, it is easier to think about managing state (and transforming props etc.) outside the presentation components. But YMMV.

I hope you will understand what it does here (and if not, this article is not about Recompose after all; it works, and you know Apollo’s graphql HoC function, which is all you need to know).

You’ll see [React Router](https://reacttraining.com/react-router/) there. The mutation is done with Recompose ( PostNewHoc.js ), therefore slightly different from what you will read in the Apollo docs. I usually want the UI to react to a “request in flight” during a mutation (to disable the submit button, for example), but mutate doesn’t expose this the way the graphql function does. With Recompose it is easy.

Another note: do not judge me on usability, design, or CSS, please; these were a low priority for this demo. And for the sake of simplicity, I did not want to process CSS with webpack (otherwise I’d use my beloved CSS Modules and cssnext), so no autoprefixer etc., just plain CSS that works in Chrome.

Fully mocked mode

The example app can run in a fully mocked mode. Copy the file apolloClient-fullymocked.js to apolloClient.js (don’t worry about overwriting it; a copy of the full Apollo client is in apolloClient-bridge.js ). Then run

npm run dev

and the app will run against the given schema with mocked data. No backend needed.

Don’t forget to copy apolloClient-bridge.js back to apolloClient.js before you go on.

REST API server

I use the super simple (but amazingly powerful) json-server (more libraries like this one, please). You define the data as a JSON file, and the server will even update that file as you call PUT, POST, or DELETE on its REST API. It was easier than easy to set up.

Copy db.json.dist to db.json and start the server with

npm run server-unauthorized

It’s up instantly. If you want to test it, try this in your terminal, for example:

curl http://localhost:3000/posts

(you can find more curl examples in curl.md file).

I added a simple middleware to the server that displays the authorization header for each request, so we can debug the authorization later.

Resolvers

This is the core. If you are familiar with resolver functions on the GraphQL server, you will be home here. This is exactly how it works.

The resolvers object reflects the GraphQL schema. It simply specifies the function that should return a given field of a given type.

It looks like:

type: {
  field: function
}

Query and Mutation are types too, therefore it is clear that, for example, Query->user will invoke the function rest.getUser with all arguments we get from the GraphQL query, and it is supposed to return data for the User type. The User type has the anticipated fields (see type User in schemaPlain.js ).

But then there is the field posts .

— Intermezzo 2

I realized that for some people REST may mean any API: a mix of REST, extended fields, and RPC calls. That’s OK, just don’t call it REST. I have often seen the approach where, in the case of our user.posts example, the “REST” server returns even the array of post objects inline. Simple and straightforward, right? Especially when the backend is backed by MongoDB, it’s easy. But IMHO this is not REST. At least not when there is another endpoint where you can ask for posts.

— end of intermezzo

So our service is RESTful enough that it doesn’t return posts with the user. Not even their IDs.

We simply have to ask for the posts on the posts endpoint and send it the user ID or IDs we want. To put it another way: for each user record we need to query the REST API’s posts endpoint with the parameter userId, and it returns the array of posts of that particular user.

Here is the beauty of GraphQL. We do not paste the logic for obtaining posts into the function that resolves the User type. No no. We simply define a new resolver at the path User(type)->posts(field) and expect it to obtain the data for user.posts .

But how does this resolver know the userId?

A resolver function takes three parameters: root , args , and context .

root is the important part here. When resolving posts for a particular user , the root is the already resolved user .

args are any arguments we send to GraphQL query/mutation (usually variables) — like id when querying getUser for example.

We discussed context earlier (and will discuss later, too).

So the flow is:

We fire the users query. The resolver Query->users is activated and returns an array of users. GraphQL is clever enough to know that we defined a User->posts resolver, and it calls this resolver with each User as the root argument. Therefore the resolver function knows the user’s id and can request the post data.

Simple as that. Clever GraphQL.

“Wait! If there are 20 users, it will fire 1 query for the users and then 20 queries for the posts? Insane!”

Yes… but not necessarily. In fact, in this demo it will fire only 2 requests, thanks to the amazing dataloader. We will praise dataloader later in this article.

Keep in mind that if you were facing a real GraphQL server, the client would fire only one request and get all the data. The server does all the hard work of obtaining the data. It may store the data as nested objects, call DB queries with multiple joins, or use the same dataloader technique described here. And that is usually the most transparent and maintainable way to do it.

Back to our resolvers.

I usually structure my resolvers just as a dispatcher to the appropriate resolver functions and pass all GraphQL arguments to them.
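A sketch of that dispatcher shape (the rest.* functions live in rest.js in the example; here they are stubbed inline, as marked, so the shape is self-contained):

```javascript
// resolvers.js (sketch) -- the dispatcher shape; rest.* is stubbed inline here,
// in the example these functions live in rest.js
const rest = {
  getUser: (root, args, context) => ({ id: args.id }),      // stub
  getUsers: (root, args, context) => [],                    // stub
  getPosts: (root, args, context) =>
    root ? [{ userId: root.id }] : [],                      // stub
  addPost: (root, args, context) => ({ ...args }),          // stub
};

const resolvers = {
  Query: {
    user: (root, args, context) => rest.getUser(root, args, context),
    users: (root, args, context) => rest.getUsers(root, args, context),
    posts: (root, args, context) => rest.getPosts(root, args, context),
  },
  Mutation: {
    addPost: (root, args, context) => rest.addPost(root, args, context),
  },
  User: {
    // called once per resolved user, with that user as root
    posts: (root, args, context) => rest.getPosts(root, args, context),
  },
};

module.exports = resolvers;
```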

And the functions:

All resolver functions (in this example) reside in rest.js . Functions like getUser or addPost are straightforward. We have a simple fetcher tool ( fetcher.js ) that takes the URL, optionally data, and the context. It uses these to call the REST API with fetch and parse the response.
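A minimal sketch of such a fetcher (assuming a global fetch, as in browsers or Node 18+; the real fetcher.js may differ in details):

```javascript
// fetcher.js (sketch) -- the real helper in the repo may differ
const fetcher = async ({ url, method = 'GET', data, context = {} }) => {
  const response = await fetch(url, {
    method,
    headers: {
      'content-type': 'application/json',
      ...(context.headers || {}), // e.g. the authorization header from the context
    },
    body: data ? JSON.stringify(data) : undefined,
  });
  if (!response.ok) throw new Error(`REST call failed: ${response.status}`);
  return response.json();
};

module.exports = fetcher;
```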

getPosts shows that we handle two cases differently. The first case is when we send the posts query (querying posts directly), where no root is defined.

The second case is when GraphQL wants to resolve a user’s posts (where root is the user).

This is a case for dataloader . In general, it collects everything requested by dataloader.load calls (in one tick) and then makes just one call. In our case: when we display the list of users, we get a couple of users, and the GraphQL resolver then calls getPosts for each user. Dataloader collects all the requirements (here, the userId s whose posts we want), builds just one query from them, and after it receives the data, transforms it so that it “looks like” all the queries were done separately. This is totally transparent to the GraphQL resolver. A-ma-zing!

I recommend reading the dataloader docs to understand exactly what’s going on. The important thing here is that the batch function has to return the array of results in the same order as the keys it was asked for. In our case, this means the array of results has to correspond to the array of userIds we give the dataloader to get their posts. This is done with the code userIds.map(id =>… (line 15 or 24 in rest.js ).
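To make the ordering requirement concrete, here is a sketch of the batch function shape. fetchPosts stands in for the real REST call and is passed in only to keep the sketch self-contained; it is assumed to return one flat array of posts for all requested userIds.

```javascript
// Batch function shape for DataLoader: one REST call, then the results are
// regrouped into the same order as the requested userIds.
async function batchPostsByUser(userIds, fetchPosts) {
  // fetchPosts (an assumption) fetches posts for all userIds in one request
  const allPosts = await fetchPosts(userIds);
  // DataLoader requires results[i] to belong to keys[i]
  return userIds.map(id => allPosts.filter(post => post.userId === id));
}

module.exports = batchPostsByUser;
```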

We could probably make the dataloader here a little more complex to handle the case where you ask for a nonexistent userId (which would be weird, because that ID was returned by the server in a previous call), but this example is a demo and should be clear enough.

Dataloader in the context…?

You may notice that we do not call a locally created dataloader; we call the one residing in the context. Why?

In fact, you could call a dataloader created directly in the module with the resolver functions. But then you would not be able to pass the context to the fetcher function inside the dataloader, and without the context you could not use the authorization header, for example.

You could decide to create a new dataloader within the resolver function. But then the dataloader’s main functionality (the above-mentioned collecting of queries into one, i.e., batching) would be lost, because the resolver function is called multiple times during one query.

The fact is that you need to create the dataloaders once per operation, i.e., per GraphQL query or mutation. And this is done with the help of apollo-link-context in the apolloClient.js file ( contextLink ).

It’s a simple thing: you pass it a function (or an array of functions) that is called with the arguments context and operation and is supposed to return an object that will be shallow merged into the operation’s context. So the dataLoadersFactory function is 1) called during each operation, 2) creates the dataloaders, and 3) has them merged into the context. These dataloaders are then shared across the multiple resolver-function calls fulfilling one operation. That’s how the dataloader can batch the requests.
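The wiring can be sketched like this (the factory and link names follow the article; the import paths and the batch function are assumptions):

```javascript
// apolloClient.js (sketch) -- names follow the article, details are assumptions
import { setContext } from 'apollo-link-context';
import DataLoader from 'dataloader';
import { batchPostsByUser } from './rest';

// Called once per GraphQL operation: fresh loaders mean batching works
// within one operation, while nothing is cached across operations.
const dataLoadersFactory = () => ({
  dataloaders: {
    postsByUser: new DataLoader(userIds => batchPostsByUser(userIds)),
  },
});

// The returned object is shallow merged into the operation's context.
export const contextLink = setContext(() => dataLoadersFactory());
```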

Another side note (again): dataloader offers a kind of caching. On a server it is recommended to generate the dataloaders per request (to limit caching), because each request may come from a different user, and different users may see different data for the same queries. But why do we need to re-create the dataloaders for each request here, when the user is always the same?

There is one straightforward reason: Apollo client has its own excellent cache. You may ask Apollo client to refetch, and you want to be sure that the request is actually fired to the server and not resolved from yet another cache layer.

Authorization

Usually your app gets a token after the user logs in or registers, and you want to send this token with each request to the server in the authorization header. Same here, but I did not implement login, because I think it was not the purpose of this example.

Once your login/registration mutation succeeds, you call addTokenToMiddleware (from apolloClient.js ) with your token, and from then on it will be available in all your GraphQL queries, and therefore in the context, where the fetcher will use it when fetching data from the REST server. Comment out or uncomment the last line in apolloClient.js and watch the authorization header in the json-server log (written to the console).
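The handover can be sketched like this (the object shape and the “Bearer” prefix are assumptions; the key point is the deliberate mutation of a shared object, as discussed above):

```javascript
// sketch of the mutable token handover described above
const authContext = { headers: {} }; // shared with the fetcher via the GraphQL context

// call this after a successful login/registration mutation
function addTokenToMiddleware(token) {
  // deliberately mutates the shared object so every later REST call sees the header
  authContext.headers.authorization = `Bearer ${token}`;
}

module.exports = { authContext, addTokenToMiddleware };
```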

Potential disadvantages of this bridge

bundle size for the app can be higher (or not)

the app may be slower (or faster)

No, I’m not joking. Depending on how you solve the REST fetching, cache (store) normalization, event dispatching (for loading and receiving data), and other aspects of managing app state, you may even end up with a smaller bundle size and a faster app when using GraphQL with Apollo. Both libraries are highly optimized (and well debugged), and that is the important thing here.

Wrap up

The biggest advantage I see in using GraphQL for frontend development is that you can focus much better on UX and functionality and “design the data you need for it”.

Maybe it’s even easier to start with the bridge, because nobody from the server team will dictate the schema. No offense intended here (I’m more of a backend developer than a frontend one), but backend people often think the SQL (or NoSQL) way, and that does not always help in reaching an agreement about the API structure.

GraphQL is great and brilliantly designed. It will not go away anytime soon; quite the contrary. Even AWS started supporting GraphQL with AppSync.

Apollo makes it really easy and fun to work with GraphQL. It is flexible enough to build anything you can imagine (and beyond), powerful, open, and has a great community behind it… thank you, Apollo.

You can start building a frontend app (web or mobile) with GraphQL without waiting for a GraphQL server. REST is not a blocker here.

The example (mentioned and referenced all over this article) is at https://github.com/dacz/apollo-bridge-link-example (again).

My motivation

I implemented the bridge link because I needed it (to do jobs for my clients). I want to help other developers with the adoption of GraphQL, because other developers and their work have helped me greatly.

I’m sure a lot of things may be done better.

Feedback welcome, help (with your GraphQL) available.

Similar/related projects

apollo-link-rest by Apollo team (interesting concept!)

AWS appSync (how to start super fast GraphQL server with concept of GraphQL proxy)

If you liked this piece but you do not want to clap, no problem. You can go and smile at the first person you meet. Maybe it will make your day.