The current state of service development is full of words like "API", "Microservices", "DevOps", and "REST". Also, it’s changing at the speed of light. Actually, it may be changing even faster than that. While you were reading this paragraph, three new architectural styles and five new system languages were just announced, reached their zenith of popularity, and then were abandoned.

The point is, writing services today means writing lots of small services. If your service does more than one thing, then you should resign and take up work as a mega-store greeter or something. Oh yeah, and those small services? They are changing all the time, too. There are new demands put on these services, to be faster, to be visible, to be MORE.

Whew, life as a service programmer can be harrowing. But, we are a hardy bunch, and so we have come up with patterns to handle all those microservices and APIs and changes to APIs. We’ll put something between our suite of services and the clients that consume them. It’ll act like, um, a gateway or something. YES! A GATEWAY. That’s brilliant! The gateway will handle routing and versioning and authentication and transformation and…wait a sec…I thought doing more than one thing was bad?

Well, it’s really all in how you define one thing. In this case, the gateway can be the container that holds items that each do one thing we need to make service development tenable. That should work, right? Let’s go with that for now.

This tidal wave in the service development world has hit a few of our recent projects, so we’ve had to come up with a solution for maintainable and responsive microservice development. The "gateway" I mention above is (now) a well-known pattern called the API Gateway Pattern. Oh, it’s worth noting that a lot of what the API Gateway does is proxying calls, so you’ll hear the word "proxy" in the same vein as API Gateway.

Allow me to briefly run down the major tenets of the API Gateway Pattern via an example. First, the gateway sits between the clients of a set of services and the services themselves. If we take the well-used example of an Ecommerce application, a client may want to know:

The details of a product (how it works, for instance) (Product Service)

If the product is in stock (Inventory Service)

Is the product affordable? (Pricing Service)

Do previous buyers like this product? (Review Service)

Has the client already purchased it? (Order Service)

Each of these bullet points could come from a different service on the backend, as my super-useful parenthesized items above detail. But, the client doesn’t want to make FIVE API calls. Also, the developers don’t want to write authentication into EVERY service, nor do they want to have to maintain backwards compatibility for all clients if they want to release new (and breaking) changes for newer clients. Heck, some clients may not want all the data other clients want, so do we handle that per service?

It looks like we need the gateway to:

Translate and transform data from all the services into a single exposed call.

Authenticate the client when it makes sense

Version the endpoints as new (and breaking) functionality is released

Offer different endpoints to different clients that have specific data needs from our services

We had this very use case (large Ecommerce service-based infrastructure) on one project. On another, there’s a need to update a SOAP-based API to a REST-based API. The gateway can do that, too. As you might imagine, the use cases for an API Gateway are myriad. As such, there are now 1,453,765 (and counting) offerings for software that does some or all of the API Gateway Pattern tasks.

Now, let me tell you about our search for software that could serve the purposes of our gateway.

The Search

There are quite a few pieces of software out there that can do the job of an API Gateway. Some are open source, some are closed source, and some are "freemium". There are also companies that have Software-as-a-Service (SaaS) offerings around API Gateways/proxies. We looked at a LOT of them. I don’t think it’s fair to name names, as there are quite a few great options. I’ll just focus on the criteria we used:

Primary Criteria:

It has to be fast. Very fast. This is hard to know out of the gate, but you can make some assumptions based on how the gateway is implemented and the language it uses.

It has to be easily configurable. I don’t want to bring down the entire application infrastructure to add a new endpoint.

It has to handle versioning pragmatically.

It has to be extendable. If we needed the gateway to do more than just proxy requests, like perform JWT authentication, then we wanted to be able to add that.

Did I mention it has to be fast?

After a long and arduous search, our needs were met by Vulcand Proxy from the fine folks at Mailgun.

Vulcand

Vulcand Proxy is a "reverse proxy for HTTP API management and microservices". It is an open source application written in Go. Choosing Go as the language for a proxy is smart, as Go has an excellent story when it comes to concurrency. I am not sure there are many applications that require concurrency more than an API gateway. Requests come crashing in all the time, so serving them concurrently is the only option.

Vulcand uses etcd from the excellent people at CoreOS (we really like open source here at Skookum!). etcd is a distributed key-value store, and Vulcand stores its configuration there. By the way, if you ever want to see how spinning up a cluster of nodes for a software package should work, install etcd. It’s a good experience in an ocean of bad ones.

Components

Vulcand offers components to provide the functionality required by a gateway. Remember, the gateway needs to do things like routing, transformation, load-balancing, and custom tasks. The components that make up Vulcand are tailored to these items.

Briefly, Vulcand uses a "Server" to hold the location of a backend service. A Server is, really, just a protocol and a location, like https://mycompany.com/myservice . Vulcand will eventually forward a request to a Server.

Since it’s likely that you’ll have multiple physical servers or URLs for a given service, Vulcand has the concept of a "Backend". A Backend is a collection of Servers. As you probably guessed, a Backend will do things like load balance across all the Servers registered with the Backend.
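To illustrate the load-balancing idea behind a Backend, here’s a toy round-robin rotation over the Servers registered with it. This is just the concept sketched in Go; it is not Vulcand’s actual implementation, which is considerably more sophisticated.

```go
package main

import (
	"fmt"
	"sync"
)

// Backend holds a set of server URLs and hands them out round-robin.
type Backend struct {
	mu      sync.Mutex
	servers []string
	next    int
}

// Pick returns the next Server in rotation, safe for concurrent use.
func (b *Backend) Pick() string {
	b.mu.Lock()
	defer b.mu.Unlock()
	s := b.servers[b.next%len(b.servers)]
	b.next++
	return s
}

func main() {
	b := &Backend{servers: []string{
		"https://service-a.example.com", // hypothetical Server locations
		"https://service-b.example.com",
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(b.Pick()) // alternates a, b, a, b
	}
}
```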

The clients of your services need endpoints to call, a job that falls to a "Frontend" in Vulcand. A Frontend is associated with a Backend and provides routing based on various routing rules. The routing rules you can define are endless: path based ( /services/endpoint ), host based ( service.company.com ), method based ( GET | POST | PUT ), header based, or take any of these options and use a regular expression to match incoming requests to a Frontend. The routing language really is very nice. So, Frontends handle the routing.

The last Vulcand item we’ll cover is Middleware. A Middleware performs a particular task (i.e., that one thing we talked about) on a request. The folks at Mailgun have written some Middlewares that ship with Vulcand, such as Circuit Breaker, Rate Limiting, and Trace Logging. One or more Middleware can be registered with a Frontend, allowing you to apply certain tasks to certain requests. This is how Vulcand is extendable.

Let’s see how Vulcand handles our Primary Criteria from above.

Is It Fast?

As I mentioned, it’s written in Go. Go is a compiled language that, again, has a really nice story when it comes to concurrency. Go is fast, as Dave Cheney will tell you. This has proven more than true in production.

Is It Configurable?

Yup, it is. In fact, Vulcand has three different ways you can configure it: HTTP, etcdctl (etcd’s command-line interface), and a Vulcand-provided binary called vctl. We opted to use the vctl binary, as we think it makes configuring the gateway easy. It’s really just writing a bash script that adds all the resources you need to manage your services. Also, you can change the configuration on-the-fly, with no need to shut down Vulcand. I’ll show you some examples when we talk about the bits that make up Vulcand.

Is It Extendable?

The Middleware concept brings the extendability that rocks the party. Each Middleware is simply an HTTP handler that conforms to the Go HTTP Handler interface. It’s very nice that Mailgun did not make up their own way to handle HTTP requests, so now existing handlers written in Go can be used as Vulcand Middleware.

For one of our projects, we wrote (and open sourced) Middleware to handle CORS and JWT Token Authentication. This allows the services to be unburdened by authentication and focus on their respective functionality. Another API Gateway task effectively checked off by Vulcand.

Example

Let’s close up this post with an example of configuring Vulcand. This example is contrived for simplicity’s sake, but I think it does enough to get your wheels turning.

First off, though, you need an easy way to run Vulcand locally. As I mentioned, Vulcand has a major dependency on etcd, which means you’d have to install CoreOS which could mean VMs and…blech…too much setup. But, there’s a better way: Docker.

If you don’t know what Docker is, then I weep for you. In a nutshell, Docker is like a VM for a single app or service. These "app VMs" are called "containers" and they’re very lightweight, easy to spin up, and make life as a service developer worth living. Seriously, Docker is great and I am going to presume you know how to get it up and running. If not, the example will show enough for you to get an idea of what’s happening, and then you can go learn about Docker when we’re done here, K? (HMMM…maybe a post on Docker should be forthcoming…)

Getting back to our example, I am using Docker Compose (a handy utility that allows you to spin up multiple Docker containers from one configuration file) to create a local Vulcand service. That file and some instructions can be found in this GitHub repository. Here’s the docker-compose.yml file:

version: "2"

networks:
  vulcand:
    driver: bridge
    ipam:
      config:
        - subnet: 172.24.0.0/24

services:
  etcd0:
    image: quay.io/coreos/etcd:latest
    networks:
      vulcand:
        ipv4_address: 172.24.0.4
    ports:
      - 4001:4001
      - 2379:2379
      - 2380:2380
    volumes:
      - ./etcd/etcd0:/data
    environment:
      - ETCD_NAME=etcd0
      - ETCD_ADVERTISE_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001
      - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379,http://0.0.0.0:4001
      - ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.24.0.4:2380
      - ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
      - ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
      - ETCD_INITIAL_CLUSTER=etcd0=http://172.24.0.4:2380
      - ETCD_INITIAL_CLUSTER_STATE=new
      - ETCD_DATA_DIR=/data
  vulcand:
    image: mailgun/vulcand:v0.8.0-beta.3
    links:
      - etcd0
      - test_server
    networks:
      vulcand:
        ipv4_address: 172.24.0.2
    ports:
      - 8181:8181
      - 8182:8182
    command: ["/go/bin/vulcand", "--etcd", "http://etcd0:4001", "-sealKey", "e2eebb2f2eb02180c391ac53f5d71edbf2bf49c7112f4a54e7c4797fb1eb92c4"]
  test_server:
    image: jamesdbloom/mockserver
    networks:
      vulcand:
        ipv4_address: 172.24.0.3
    ports:
      - 1080:1080
      - 9090:9090

Now, before you glaze over and run screaming from this page, try to read through that file. It’s pretty self-explanatory. We create a network for our containers, which are three services: etcd0, vulcand, and test_server. The rest of it is just specific to each service, such as the ports it exposes and the IP address of each. The test_server is a simple HTTP service that will log out all requests to it so we can see what’s happening.

Basically, the example starts up Vulcand and a test server. We are going to proxy a URL path to this test server. The GitHub repository also has a configuration script, which looks like:

#!/bin/bash
docker-compose exec vulcand vctl backend upsert -id our-backend --vulcan=http://172.24.0.2:8182
docker-compose exec vulcand vctl server upsert -id our-server1 -b our-backend -url http://test_server:1080 --vulcan=http://172.24.0.2:8182
docker-compose exec vulcand vctl frontend upsert -id our-frontend -b our-backend -route='Path("/dagoogs")' --vulcan=http://172.24.0.2:8182
docker-compose exec vulcand vctl rewrite upsert -f our-frontend -id r1 --regexp="dagoogs" --replacement='$1'
docker-compose exec vulcand vctl frontend ls --vulcan=http://172.24.0.2:8182
docker-compose exec vulcand vctl frontend show -id our-frontend --vulcan=http://172.24.0.2:8182

OK, first, you can safely ignore the docker-compose exec vulcand at the start of each line as all that does is run the commands in the context of our Docker containers. Each configuration command starts with vctl , which is a command-line tool that ships with Vulcand for this very purpose.

From there, the configuration is straightforward:

Create a Backend called our-backend .

Register a Server with the Backend that points to our test server.

Create a Frontend called our-frontend that routes the path /dagoogs to our-backend .

Add a Middleware to our-frontend called r1 that will rewrite the path by removing dagoogs from the path that is forwarded to our-backend . The Rewrite Middleware ships with Vulcand.

List all the frontends.

Show the details of our-frontend .

Notice that the verb upsert is used for the configuration. This makes the commands idempotent: I can run them over and over and I’ll always get the same result.

Now, with my Docker containers running ( docker-compose up ), I can run the configuration script. The output looks like:

$ ./vulcand-config.sh
OK: Backend(id=our-backend) upserted
OK: server upserted
OK: frontend upserted
OK: rewrite upserted

[Frontends]
Id            Route             Backend      Type
our-frontend  Path("/dagoogs")  our-backend  http

[Frontend]
Id            Route             Backend      Type
our-frontend  Path("/dagoogs")  our-backend  http

[Middlewares]
Id  Priority  Type     Settings
r1  1         rewrite  regexp=dagoogs, replacement=$1, rewriteBody=false, redirect=false

Everything is in place. Now I can call our /dagoogs path and see what happens:

$ curl -i http://localhost:8181/dagoogs?q=vulcand

In the output window of our Docker containers, you’ll see the test server tells us what it received:

test_server_1 | 2016-11-03 21:16:49,817 INFO o.m.m.MockServerHandler returning response:
test_server_1 |
test_server_1 | {
test_server_1 |   "statusCode" : 404
test_server_1 | }
test_server_1 |
test_server_1 | for request:
test_server_1 |
test_server_1 | {
test_server_1 |   "method" : "GET",
test_server_1 |   "path" : "/",
test_server_1 |   "queryStringParameters" : [ {
test_server_1 |     "name" : "q",
test_server_1 |     "values" : [ "vulcand" ]
test_server_1 |   } ],
test_server_1 |   "headers" : [ {
test_server_1 |     "name" : "Host",
test_server_1 |     "values" : [ "localhost:8181" ]
test_server_1 |   }, {
test_server_1 |     "name" : "User-Agent",
test_server_1 |     "values" : [ "curl/7.49.1" ]
test_server_1 |   }, {
test_server_1 |     "name" : "Accept",
test_server_1 |     "values" : [ "*/*" ]
test_server_1 |   }, {
test_server_1 |     "name" : "Content-Type",
test_server_1 |     "values" : [ "application/json" ]
test_server_1 |   }, {
test_server_1 |     "name" : "X-Forwarded-For",
test_server_1 |     "values" : [ "172.24.0.1" ]
test_server_1 |   }, {
test_server_1 |     "name" : "X-Forwarded-Host",
test_server_1 |     "values" : [ "localhost:8181" ]
test_server_1 |   }, {
test_server_1 |     "name" : "X-Forwarded-Proto",
test_server_1 |     "values" : [ "http" ]
test_server_1 |   }, {
test_server_1 |     "name" : "X-Forwarded-Server",
test_server_1 |     "values" : [ "" ]
test_server_1 |   }, {
test_server_1 |     "name" : "Accept-Encoding",
test_server_1 |     "values" : [ "gzip" ]
test_server_1 |   }, {
test_server_1 |     "name" : "Content-Length",
test_server_1 |     "values" : [ "0" ]
test_server_1 |   } ],
test_server_1 |   "keepAlive" : true,
test_server_1 |   "secure" : false
test_server_1 | }

You can see the path was / , meaning the rewrite happened, and the querystring was forwarded along. Incidentally, the 404 is simply saying that the test server has nothing configured to respond at that path, which we don’t really care about for this example. We’re more concerned with seeing Vulcand in action. This example is exceedingly simple, but the core concepts are the same for more complex scenarios. Define URLs (Frontends) for your clients, and write Middleware to transform/rewrite/rate-limit/authenticate requests on those Frontends; the requests are then forwarded to Backends that front one or more Servers. Vulcand does have other components that provide some (really cool) peripheral functionality, but what we’ve seen here is most of what we’ve used.

Conclusion

If you find yourself needing a service-based infrastructure, it’s very likely that the API Gateway Pattern will make its way into your architectural conversations. When it does, I hope you’ll take a look at Vulcand Proxy. It’s free, fast, easy to configure, easy to extend, and we’ve used it to great effect in more than one instance. Live Long and Proxy!