By Shashank Mehra

As we scaled up our backend at Grofers, we moved from a monolithic service to a microservice architecture. Our monolith handled tasks like authentication and rate limiting as part of its own code base. When we started setting up various microservices, we found ourselves replicating these functionalities in each one. This was not manageable, especially for a task like authentication, which needs to be performed for every microservice and would end up being performed multiple times when microservices called each other.

We needed an API gateway layer that sits between the HTTP clients and the microservices. It could perform tasks that are common across all the services, like authentication, rate limiting, logging, caching, and request or response transformation. Without this layer, each microservice would end up implementing these tasks in its own framework/language. With an API gateway layer, microservice teams can concentrate on their core functionality and not get involved in these common tasks.

Using OpenResty

Our initial requirement was fairly simple: authenticate an API request against a user session stored in a Redis cache. We chose OpenResty for this task. OpenResty is a software bundle that includes Nginx along with Lua modules and Lua libraries, providing a framework for extending Nginx's functionality. It is capable enough that entire web frameworks, like Lapis, have been built on top of it. A gateway server can quickly become a bottleneck as you scale up, and OpenResty, being lightweight and strikingly fast, fit our requirements. Although Lua has a bit of a learning curve, once you get the hang of it, tasks like authentication are fairly easy to write in a few lines of code.

The Nginx Lua module adds many Nginx directives into which you can hook your Lua code. One of these is access_by_lua_file, which lets you control Nginx's processing flow during its access phase. It is during this phase that Nginx performs access-control checks before deciding on the response to return for the request. Using access_by_lua_file you can hook up a Lua script which, if it returns cleanly, allows access to the underlying content.

```nginx
location /api {
    access_by_lua_file ./lua/token_check.lua;
    proxy_pass $upstream_api;
    ...
}
```

token_check.lua (pseudo code):

```lua
-- check_key() stands in for our session-validation logic
if not check_key() then
    -- Make Nginx block the request in the access phase
    ngx.status = ngx.HTTP_UNAUTHORIZED
    ngx.exit(ngx.HTTP_UNAUTHORIZED)
end
-- Nginx allows the request if the script returns cleanly
```

OpenResty provides Lua client libraries for various data stores such as Redis, Postgres, and MongoDB. These libraries had to be written specifically for OpenResty so that they use Nginx's cosocket API. This is necessary because we can't make blocking database calls inside Nginx; otherwise all the Nginx workers would sit busy waiting for a reply from the database instead of processing new requests. The cosocket API is a TCP stack for Nginx Lua that is non-blocking out of the box: cosockets function like lightweight threads (or greenlets in Python).
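To make this concrete, a session check against Redis using the lua-resty-redis library (bundled with OpenResty) might look roughly like the sketch below. The header name, key scheme, and timeout values are our own illustrative assumptions, not part of the original setup:

```lua
-- token_check.lua: validate the request's session token against Redis.
-- Uses cosockets, so the worker is never blocked waiting on Redis.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(100)  -- milliseconds

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- Hypothetical scheme: the session token arrives in a request header
local token = ngx.req.get_headers()["X-Session-Token"]
local user_id = token and red:get("session:" .. token)

-- Return the connection to the cosocket keepalive pool for reuse
red:set_keepalive(10000, 100)

if not user_id or user_id == ngx.null then
    ngx.status = ngx.HTTP_UNAUTHORIZED
    ngx.exit(ngx.HTTP_UNAUTHORIZED)
end
```

Because the connection goes back into a keepalive pool rather than being closed, each worker reuses a small set of Redis connections across many requests.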

Switching to Kong

After plugging a Lua script into our existing Nginx installation, we were ready to offload the authentication task to this layer. Authentication logic would be provided by a Lua script called in the access phase of Nginx's request processing, and routing would be configured using Nginx's location sections.

This worked for us for a while, until we started to get more feature requests from teams working on different microservices. There was a requirement for an organization-wide standard for CORS. There was also a requirement for rate limiting which, although possible in a plain Nginx installation, further complicated an Nginx configuration already filled with location sections for various microservices. This was becoming unmanageable, both at the Nginx configuration level and at the Lua code level.

Kong solves these issues. Kong is a plugin-oriented open source API gateway built on top of OpenResty. It is essentially a router with a plugin framework that uses various OpenResty hooks to execute Lua code on each request. It was developed by Mashape to manage thousands of microservices for its API marketplace. It has plugins not only for rate limiting and CORS but also for various other tasks commonly performed at the gateway layer, which opened us up to possibilities like cookie and session management, IP restriction, bot detection, and metrics collection. Kong's rate-limiting plugin had an added advantage over Nginx's native zone-based rate limiting: it allowed us to rate limit per logged-in user, falling back to IP-based rate limiting only if the user was not logged in. You can also easily integrate your own custom plugins and keep your code well organized.
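For instance, enabling rate limiting is a single call to Kong's Admin API (which listens on port 8001 by default); the API name and limit below are made up for illustration:

```
# Enable Kong's rate-limiting plugin for a hypothetical API named "orders",
# allowing 100 requests per minute per consumer (per IP when anonymous).
curl -X POST http://localhost:8001/apis/orders/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=100"
```

Compare this with plain Nginx, where the same change would mean editing a limit_req_zone and the relevant location blocks, then reloading the server.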

(Figure: high-level overview of our Kong infrastructure.)

Kong stores all its configuration (routes and plugin settings) in a Postgres or Cassandra database for persistence. It also has a caching layer on top of its database bindings, which prevents Kong from hitting the database on each request. (We did face an issue with null values not being cached properly in some cases, which hurt performance, but that was an easy fix we submitted to Kong: https://github.com/Mashape/kong/pull/1841)
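Registering a microservice works the same way: one Admin API call, and Kong persists the resulting route in the datastore and caches it in each node. The names and URLs here are illustrative, and the exact field names varied across early Kong versions:

```
# Register a hypothetical "orders" microservice with Kong; the route is
# stored in Postgres/Cassandra, not in a config file on disk.
curl -X POST http://localhost:8001/apis/ \
  --data "name=orders" \
  --data "uris=/api/orders" \
  --data "upstream_url=http://orders.internal:8000"
```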

Switching to this method of configuration has had both pros and cons. While Kong's configuration is cleaner than specifying all the Lua bindings in the Nginx configuration (which was starting to become repetitive), it lacks version control, and it doesn't fit perfectly into our Ansible-based deployment. We are currently using the kong-dashboard project to manage our configuration. It gives us a good UI, but the lack of version control is a problem we are still looking into. Since these configurations rarely change drastically over time, the trade-off seems worth it for now.