While gRPC is getting a lot of attention lately, RPC (the concept behind it) has been around for ages (~40 years).

As is often the case, yesterday’s solutions resurface to solve today’s issues.

gRPC and protocol buffers (more on that below) have been on my list of communication frameworks to explore for a while now, alongside Apache Thrift and Finagle from Twitter.

After a summary of what gRPC is and how it can help your engineering team, I will share the results of a comparison with a classic REST approach.

What’s gRPC?

Originally created (and later open-sourced) by Google to address performance and scalability issues in high-throughput, QPS-heavy (queries per second) environments, gRPC shines where performance really matters.

It isn’t exactly new, but recently it has gained enough maturity to be seriously considered as an alternative to REST.

gRPC is a protocol-agnostic client-server communication framework that uses binary serialisation by default.

This means that where REST typically passes JSON-formatted payloads, gRPC transmits data in binary form. We will come back to this later.

Interface Definition Language

gRPC relies completely on interfaces to define the services as well as the messages exchanged between client and server. The default IDL is protobuf (protocol buffers). However, because the framework is well designed, it also supports JSON, Wire from Square, Apache Avro and even Microsoft Bond.

Layer 4 Agnostic

As mentioned above, gRPC is also protocol agnostic. Depending on the use case, there is no need to stick with TCP: the framework can also run over QUIC (built on UDP) when peak performance is the goal.

Benefits

gRPC is an attractive alternative to REST as it offers multiple advantages: better performance and easier implementation.

Canonical

With a classic REST+JSON setup, the client-side implementation has to match the (hopefully up-to-date) API documentation. More often than not, you won't get it right on the first try: you may miss a parameter or a field, or get the version wrong. It happens to all of us.

Because gRPC forces the implementation to be defined in a client interface (called a “stub”), you are less likely to get things wrong. That's a great way to ensure consistency between the definition of the service and its client-side implementation.

The main issue with REST is that it leaves a lot of room for interpretation. Hyphens in the paths? snake_case vs. camelCase?

These aren’t a problem with gRPC as the client libraries are generated for the consumer, based on the definition.

Performance

Performance is the argument of choice when it comes to gRPC, and this is for two reasons.

Because the data format is binary (as opposed to a JSON payload), it is much lighter. Indeed, it is not rare for half of a JSON payload's size to be nothing but JSON syntax: braces, quotes and field names.

The other reason is that gRPC uses HTTP/2. The main advantage of this new version is request multiplexing, which allows multiple requests to be made over the same connection.

This great post by Ilya Grigorik describes how that works in more detail.

Backward Compatibility

As is the case with RESTful APIs, gRPC supports API versioning.

Let’s use a simple example to illustrate this. Assume the snippet below is “version 1” of our service.

service Ledger {
  rpc GetTransaction (TransactionRequest) returns (TransactionResponse) {}
}

message TransactionRequest {
  string id = 1;
}

message TransactionResponse {
  string id = 1;
  string type = 2;
}

Now, let’s imagine that we want to add a field in the response to the client and ship “version 2”:

...

message TransactionResponse {
  string id = 1;
  string type = 2;
  string description = 3;
}

It’s that simple. If your server implements “version 2” but your client isn’t up-to-date yet, this field will simply be ignored. Obviously, it also works the other way around.

The number on the right side of each field definition is mandatory: it is the field number, and it identifies the field in the binary encoding.

A couple of rules regarding field numbers:

They must be unique per message

They can't be changed afterwards, as that would break backward compatibility
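The field numbers are what make this skipping behaviour possible at the wire level: a decoder walks tag/length/value records and simply ignores numbers it doesn't recognise. The sketch below is a hand-rolled illustration (length-delimited string fields only, not real protobuf code) of a “version 1” client decoding a “version 2” payload:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeField appends one length-delimited field: a one-byte tag
// (field number << 3 | wire type 2), a varint length, then the bytes.
func encodeField(buf []byte, num int, s string) []byte {
	buf = append(buf, byte(num<<3|2))
	var l [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(l[:], uint64(len(s)))
	buf = append(buf, l[:n]...)
	return append(buf, s...)
}

// decode keeps the field numbers it knows about and silently skips
// the rest -- which is exactly why adding a field stays backward
// compatible.
func decode(buf []byte, known map[int]string) map[string]string {
	out := map[string]string{}
	for len(buf) > 0 {
		num := int(buf[0] >> 3)
		l, n := binary.Uvarint(buf[1:])
		val := buf[1+n : 1+n+int(l)]
		buf = buf[1+n+int(l):]
		if name, ok := known[num]; ok {
			out[name] = string(val)
		}
	}
	return out
}

func main() {
	// "Version 2" response: id, type and the new description field.
	var payload []byte
	payload = encodeField(payload, 1, "tx-42")
	payload = encodeField(payload, 2, "debit")
	payload = encodeField(payload, 3, "coffee")

	// A "version 1" client only knows fields 1 and 2; field 3 is ignored.
	v1 := map[int]string{1: "id", 2: "type"}
	fmt.Println(decode(payload, v1)) // map[id:tx-42 type:debit]
}
```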

Polyglot

gRPC is polyglot: the client stub can be generated in a wide range of languages (Python, Go, Java, Dart, …).

Multi-platform support is very likely to be your number one requirement when crafting consumable APIs.

How It Works

Define your service description (requests, responses and endpoints) in a *.proto file, then compile it for the target language; for instance, with Go:

protoc -I <SOURCE> <SOURCE>/<FILE>.proto --go_out=plugins=grpc:pb

The command above generates a file named *.pb.go, which contains all the generated types and the client/server stubs. This is the file that we need to import both client-side and server-side.
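To show how the generated code is consumed, here is a minimal sketch of a server and client for the Ledger service defined earlier, in a single process for brevity. The import path example.com/ledger/pb is hypothetical, and identifiers such as RegisterLedgerServer, NewLedgerClient and TransactionRequest are the names protoc generates from that proto definition; a real project would use its own module path:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	// Hypothetical import path for the generated *.pb.go package.
	pb "example.com/ledger/pb"
)

// server implements the Ledger service from the proto definition.
type server struct{}

// GetTransaction echoes the requested id back; a real implementation
// would look the transaction up in a datastore.
func (s *server) GetTransaction(ctx context.Context, req *pb.TransactionRequest) (*pb.TransactionResponse, error) {
	return &pb.TransactionResponse{Id: req.Id, Type: "debit"}, nil
}

func main() {
	// Server side: register the implementation and start serving.
	lis, err := net.Listen("tcp", "127.0.0.1:50051")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	pb.RegisterLedgerServer(s, &server{})
	go s.Serve(lis)

	// Client side (normally a separate binary): dial and use the stub.
	conn, err := grpc.Dial("127.0.0.1:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewLedgerClient(conn)
	resp, err := client.GetTransaction(context.Background(),
		&pb.TransactionRequest{Id: "tx-42"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("id=%s type=%s", resp.Id, resp.Type)
}
```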

Google Cloud Endpoints

Cloud Endpoints provides a highly scalable API gateway for the backend. It can be used with App Engine, Compute Engine, Kubernetes Engine and even on-premise hosts.

Scalability

Cloud Endpoints creates an Extensible Service Proxy (ESP). This proxy, based on NGINX, runs in a container within the Kubernetes pod, alongside your application container.

This is what makes Cloud Endpoints highly scalable: the proxy automatically scales with your application.

Configuration

Configuring the ESP is pretty straightforward:

Deploy the API definition (an OpenAPI document for REST, or a generated service descriptor for protobuf) to Google Service Management, then add the ESP to the GKE template as below:

containers:
  - name: my-esp
    image: gcr.io/endpoints-release/endpoints-runtime:1
    args: [
      "--http_port", "9000",
      "--backend", "grpc://127.0.0.1:50051",
      "--service", "<NAME>.endpoints.<PROJECT_ID>.cloud.goog",
      "--rollout_strategy", "managed",
    ]
    ports:
      - containerPort: 9000
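The first step, deploying the API definition to Service Management, is typically done with the gcloud CLI. For gRPC, you first generate a service descriptor with protoc and then deploy it together with an Endpoints service config; the file names below are illustrative:

```shell
# Generate a service descriptor from the proto definition.
protoc --include_imports --include_source_info \
    ledger.proto --descriptor_set_out=api_descriptor.pb

# Deploy the descriptor plus the Endpoints service config
# to Google Service Management.
gcloud endpoints services deploy api_descriptor.pb api_config.yaml
```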

Authentication

Cloud Endpoints offers built-in authentication via different mechanisms (Auth0, Firebase, JWT and API Keys).

For the sake of this project I chose to keep it simple and only use API Keys.

Monitoring

I must admit I was astonished by how rich Cloud Endpoints is, particularly when it comes to logging and monitoring. A few years ago, Google acquired Stackdriver and it’s now part of the Google Cloud ecosystem.

In a nutshell, Stackdriver is a multi-cloud logging and monitoring solution.

Most of the metrics that you’d want from an API gateway are available in Stackdriver using Cloud Endpoints. It even monitors specific details about gRPC streams.

Note: all the numbers below come from Stackdriver.