What is gRPC?

gRPC is an open source RPC framework by Google. It is designed to be highly performant and implemented across many environments and languages.

The major selling point of gRPC is its speed and the ability to implement polyglot services in a microservice style architecture. You can also use it to connect mobile and browser clients to your servers. Other benefits include bi-directional streaming through HTTP/2 and pluggable load balancing, auth and monitoring.

gRPC Motivation and Design Principles

Like most major tech companies, Google has long used a microservices architecture to run most of its services. As a result, they built a common RPC framework called Stubby to connect their microservices distributed across data centers. Having all services implement a common framework allowed for things like easy monitoring, faster iteration, security, and reliability, among others.

However, Stubby was not general purpose and there was too much of Google in it (pun intended 😛). Feeling the need to evolve Stubby, and considering its tight coupling with Google's infrastructure, the team decided to spin off a new project aimed at doing everything Stubby could do while keeping the design open and general. They also took inspiration from widely adopted open standards such as SPDY and HTTP/2.

For more on the design principles behind gRPC read the official page.

So why should you care?

So far we’ve discussed the pros and cons of REST. Now let’s see what gRPC is packing for the face-off.

Statically typed and versioned interface — gRPC uses an IDL (Interface Definition Language) called Protocol Buffers to define RPC services and their request and response models. Protocol Buffers come with a type system, letting us structure our messages and catch type mismatches at compile time. Versioning an API also becomes easy: adding or deleting keys in our request and response models does not break existing client implementations. In the sections to come, we'll discuss Protocol Buffers in much more detail.

Efficiency and stability — Protocol Buffers are designed to be efficient on the wire by default, which allows for faster data transfers between services. The streaming model improves on REST's call-and-wait model, and the typed, backward-compatible message format makes systems more stable.

Polyglot services — With an IDL and client/server stubs available in many languages, gRPC users can go polyglot and choose different languages for different microservices.

Timeouts and request cancellation — gRPC provides out-of-the-box support for request cancellation and timeouts. Each call can carry a timeout, which is used to compute new deadlines as the request propagates down the microservice stack.

Logging and metrics — gRPC offers easy extension points for plugging in logging and monitoring systems such as Zipkin.

Bi-directional streaming — With HTTP/2 support we get streams, which allow a non-blocking I/O model for requests and responses. This is not the case in the REST world, where a request blocks resources until the data for the response is aggregated. gRPC also gives us bi-directional streaming, a better model for transferring data than chatty RESTful APIs.
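To make these points concrete, here is a minimal sketch of what such a service definition looks like in the Protocol Buffers IDL; the service and message names are hypothetical, invented purely for illustration:

```proto
syntax = "proto3";

package inventory;

// Hypothetical service showing a unary and a bi-directional streaming RPC.
service Inventory {
  // Classic request/response call, comparable to a REST GET.
  rpc GetItem (GetItemRequest) returns (Item);

  // Bi-directional streaming: client and server exchange messages
  // over a single HTTP/2 connection without blocking each other.
  rpc SyncItems (stream Item) returns (stream Item);
}

message GetItemRequest {
  string id = 1;
}

message Item {
  string id = 1;
  string name = 2;
  int64 quantity = 3;
}
```

Running the protoc compiler over a file like this generates client stubs and server skeletons in each target language, which is what enables the polyglot setup described above.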

What won’t it solve for you?

Well, gRPC won’t magically make your API designs better. You can still write badly designed gRPC endpoints and get away with it. In the gRPC world, the design of an API is considered an implementation detail and is thus left to the implementors. So think twice before writing your APIs.

What are Protocol Buffers?

Protocol Buffers is a high-performance, open source binary serialization protocol that allows for easy definition of services and automatic generation of client libraries. As with gRPC, Protocol Buffer compilers are available for many popular languages and platforms.

Compared to other data exchange formats such as XML and JSON, Protocol Buffers are much faster, more lightweight, simpler, and clearer. They allow you to generate code for many languages from a single definition and to update structures without breaking the old compiled structures.

Protocol Buffers example

When writing structures in the Protocol Buffer format, we define messages, which are the units of encapsulation here. The messages are then serialized over the wire and de-serialized at the other end. Consider the sample below -
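A minimal, illustrative message definition, written in proto2 syntax since the field rules discussed next (required, optional, repeated) come from proto2; the Person message and its fields are hypothetical:

```proto
syntax = "proto2";

// A hypothetical message; each field has a type, a field rule
// (required/optional/repeated), and a unique numbered key.
message Person {
  required string name  = 1;  // must be set on every message
  optional int32  id    = 2;  // may be omitted
  repeated string email = 3;  // zero or more values
}
```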

Some things to notice about the proto messages are -

Each field in a message has a type (string, int32, bool, or even other message types). This allows the type system to enforce static type checking during compilation.

Each field also has a unique numbered key associated with it. The keys are an integral part of the encoding scheme of messages.

Fields can also be marked as required, optional or repeated.

You can run the compiler to get language-specific accessor/stub classes. Through them, you get getter and setter methods to read and write messages.

Updating (adding or removing) fields in a message does not break old binaries. The new fields are simply ignored by older binaries. Win-win!
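As a sketch of that backward-compatibility point, here is a hypothetical before/after of a message evolving across two revisions (file names and fields are illustrative):

```proto
// person_v1.proto — version 1, compiled into older binaries.
message Person {
  required string name = 1;
  optional int32  id   = 2;
}

// person_v2.proto — version 2 adds a new optional field with a
// fresh key (3). Older binaries simply skip the unknown field when
// decoding, so both versions can interoperate on the wire.
message Person {
  required string name     = 1;
  optional int32  id       = 2;
  optional string nickname = 3;
}
```

The key rule is that existing field numbers are never reused or renumbered; new fields get new numbers, which is what keeps old and new binaries compatible.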

For more on the Protocol Buffers language specification check the proto3 spec.

Further Reading