Today, we'd like to announce the availability of our latest open source project: gruf, a Ruby framework for building gRPC services.

Over the past year at BigCommerce, we've begun using gRPC for our internal services. We believe gRPC's standardization and performance will greatly benefit our organization, and we want to share some news about our adoption of gRPC at BigCommerce, as well as how we're releasing some of that work back to the larger open source community.

What is gRPC?

For those new to gRPC, it's an RPC framework that uses protocol buffers for serialization. Protocol buffers are a binary format developed by Google for serializing data over the wire. Protobuf payloads are 3-10 times smaller than equivalent XML or JSON payloads, decode much faster than JSON, and are less ambiguous than traditional REST conventions; code generation also produces clients automatically in every language we support at BigCommerce. This means far less time writing and maintaining client and server code.

gRPC also uses HTTP/2 by default, which multiplexes multiple simultaneous requests over the same TCP connection, vastly improving a server's throughput and efficiency. You also get bi-directional streaming, flow control, binary framing, and header compression out of the box with gRPC.

Adopting gRPC

Around January this year, BigCommerce Engineering started adopting gRPC for our internal services, as a way of improving performance, streamlining development, and increasing standardization. As we were developing Ruby gRPC services, we noticed we were adding a lot of boilerplate functionality to gRPC's core Ruby libraries.

For one, the Ruby library as-is did not offer any kind of interceptor support; this meant that if we, say, wanted to authorize gRPC requests with anything but TLS, we had to call a custom authorize! method at the start of every call. This got cumbersome, fast. We wanted a way to automatically intercept every incoming server method and execute logic before, after, and around those calls.
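To illustrate the problem, the pre-interceptor pattern looked something like the sketch below. The class, the authorize! helper, and the 'api-key' header are all hypothetical, not real gruf or gRPC APIs:

```ruby
# A sketch of the boilerplate we wanted to avoid: every single RPC handler
# has to remember to call authorize! itself. All names here are illustrative.
class JobService
  class AuthorizationError < StandardError; end

  def get_job(req, call)
    authorize!(call) # repeated at the top of every RPC method
    { id: req }      # placeholder for the real handler logic
  end

  private

  # Toy check against a request metadata header
  def authorize!(call)
    raise AuthorizationError unless call.metadata['api-key'] == 'secret'
  end
end
```

With an interceptor, that check lives in one place and runs around every call automatically.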

While gRPC's request/response message format via protobuf is wonderful for defining APIs, we also wanted to handle error messages seamlessly, beyond just returning status codes. For example, field validation errors for situations like "Please provide a valid zip code" became an issue: rather than including a custom error message in every response message, which was cumbersome and unnecessary, could we instead push it into the metadata? We adopted this approach: by serializing an Error proto message into the trailing metadata of the response, our clients - in any language - can deserialize it and properly handle errors, field-level validation, and debug logging, automatically and implicitly, for any response message.
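The idea can be sketched in plain Ruby. JSON stands in here for the serialized Error proto, and the metadata key is an assumption, not necessarily what gruf uses internally:

```ruby
require 'json'

# Server side: pack structured error detail into the response's trailing
# metadata instead of stuffing it into every response message.
def attach_error_metadata(metadata, code:, message:, field_errors: [])
  metadata['error-internal-bin'] = JSON.generate(
    code: code, message: message, field_errors: field_errors
  )
  metadata
end

# Client side: pull the structured error back out. Any language that can
# deserialize the payload gets the same error handling for free.
def extract_error(metadata)
  raw = metadata['error-internal-bin']
  raw && JSON.parse(raw, symbolize_names: true)
end
```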

Instrumentation was also an issue: getting insight into gRPC requests was fairly opaque out of the box. We use statsd and Zipkin at BigCommerce, and wanted greater insight into our services during their request call flow. The core Ruby libraries didn't have this interoperability.

Furthermore, gRPC requires a significant amount of boilerplate to initialize a server. We wanted a more Rails-like setup for configuring and running a server on a Ruby service, and to make the interfaces as easy and standardized as possible.

We won't hide it: gRPC is fantastic, and has worked very well for us at BigCommerce. But at its core it is only client and server libraries; to make it usable and repeatable at scale in a service-oriented architecture, we needed something more robust that offered more of a framework in which to build gRPC-backed APIs for our Ruby services. Amongst all of this, we quickly found ourselves building a gem that wrapped the gRPC libraries.

Introducing Gruf

This gem eventually came to be named "gruf", for gRPC Ruby Framework. Gruf provides an abstracted server and client for gRPC services, along with other tools to help get gRPC services in Ruby up and running quickly and efficiently at scale:

- Abstracted server endpoints with full interceptor support on both server and client
- Robust client error handling and metadata transport abilities
- Server authentication support, with basic auth with multiple key support built in
- TLS support for client-server auth, though we recommend using LinkerD for TLS authentication and SSL termination instead
- Error data serialization in output metadata to allow fine-grained error handling in the transport while still preserving gRPC BadStatus codes
- Server and client execution timings in responses

Gruf supports Ruby 2.2-2.5, and works with any Ruby framework: Rails, Grape, etc.

An Example: Gruf in a Rails App

Setting up gruf to have gRPC endpoints in a Rails application, for example, is extremely easy. We built gruf to be framework-agnostic - but still easy to integrate - so that someone could just drop it in and run with it.

Running a server is simple. Let's say we have a proto file like so:

```proto
syntax = "proto3";

package demo;

service Jobs {
  rpc GetJob(GetJobReq) returns (GetJobResp) { }
}

message GetJobReq {
  uint64 id = 1;
}

message GetJobResp {
  uint64 id = 1;
  string name = 2;
}
```

We'll generate the Ruby code for it using gRPC's protoc tool. Once that's done, we'll create a new directory under app/rpc/, where all of our gRPC services will live. Let's add a controller like so in app/rpc/demo/job_controller.rb:

```ruby
module Demo
  class JobController < Gruf::Controllers::Base
    bind Demo::Jobs::Service

    ##
    # @return [Demo::GetJobResp] Our response object
    #
    def get_job
      job = Job.find(request.message.id)
      Demo::GetJobResp.new(
        id: job.id,
        name: job.name
      )
    rescue ActiveRecord::RecordNotFound
      fail!(:not_found, :job_not_found, "Failed to find Job with ID: #{request.message.id}")
    end
  end
end
```

Gruf will automatically mount this controller to its registry. Next, we'll want to set up some configuration in a standard Rails initializer, config/initializers/grpc.rb:

```ruby
Gruf.configure do |c|
  c.server_binding_url = '0.0.0.0:50051'
end
```

This binds our server to port 50051. From there, it's as simple as starting the gRPC server:

```shell
bundle exec gruf
```

And we're good to go! Gruf automatically set up the server, loaded the appropriate service, and initialized everything. You can also see some syntactic sugar in there with the fail! method, which sends back an appropriate GRPC::BadStatus code and a serialized error payload for you. You can customize those serializers - gruf defaults to JSON, but at BigCommerce we use a custom protobuf message tailored for our services. Furthermore, gruf automatically detects if you're running a Rails app and autoloads the environment for you - so all of your classes will be available for use in your gruf services.
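As a sketch of what that serializer customization might look like: the subclass below and the MyCompany::Errors::Error proto message are hypothetical, and you should check gruf's README for the exact serializer interface before relying on it.

```ruby
# config/initializers/grpc.rb
# Hypothetical custom error serializer: render gruf errors as our own
# proto message instead of the default JSON payload.
class ProtoErrorSerializer < Gruf::Serializers::Errors::Base
  def serialize
    # `error` is the error object being rendered into trailing metadata
    MyCompany::Errors::Error.new(
      code: error.app_code.to_s,
      message: error.message.to_s
    ).to_proto
  end

  def deserialize
    MyCompany::Errors::Error.decode(error)
  end
end

Gruf.configure do |c|
  c.error_serializer = ProtoErrorSerializer
end
```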

It's important to note that gRPC servers run as a separate process from your normal HTTP/1 frameworks such as Rails; they can easily share code, but you'll need to manage their processes separately (Docker with Kubernetes or Nomad - or Foreman for more traditional deployments - can make this easy for you). A simple Rails Foreman Procfile, for instance, might look like this:

```
web: bundle exec rails server -b 0.0.0.0 -p $PORT
grpc: bundle exec gruf
```

Middleware and Hooks

Gruf has a fairly extensive middleware and hook system. These hooks allow you to inject various functionality into a gRPC server without having to modify the underlying gRPC stubs or framework code. The pluggable interface allows for modularity in what functionality you need per-service, and you can customize servers to your systems' needs.

Authentication

First off, it provides an interceptor-based approach to authentication, allowing you to write simple classes that can provide whatever mechanism you want for authenticating your gruf-backed services. This is separate from the TLS-backed auth provided by the core gRPC libraries. It comes packaged with basic authentication support, but one could easily write an LDAP or Hawk-based interceptor for it.

For example, utilizing basic auth is as simple as adding these lines in an initializer:

```ruby
Gruf.configure do |c|
  c.interceptors.use(
    Gruf::Interceptors::Authentication::Basic,
    credentials: [{
      username: 'admin',
      password: 'mypass'
    }]
  )
end
```

This will require all gruf servers to provide basic authentication with the specified credentials in the metadata headers of the gRPC request. You can also specify a list of accepted credentials (for example, to enable zero-downtime credential rotation).
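On the wire this is just a standard HTTP basic `authorization` header carried in the request metadata. A client can construct it like the plain-Ruby sketch below; the exact header key gruf's interceptor reads is an assumption here, and gruf's own client can build it for you:

```ruby
require 'base64'

# Build the request metadata a basic-auth-protected gruf server expects:
# a base64-encoded "username:password" pair in an authorization header.
def basic_auth_metadata(username, password)
  { 'authorization' => "Basic #{Base64.strict_encode64("#{username}:#{password}")}" }
end
```

The server-side interceptor decodes this header and compares it against the configured credential list, rejecting the call if no entry matches.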

Instrumentation

Instrumentation is done similarly; gruf provides StatsD support out of the box, and you can use the gruf-zipkin gem to integrate with Zipkin for distributed tracing of your requests as well. The instrumentation system uses the same interceptor system as authentication, so classes can easily be written to support other systems such as Fluentd or Datadog.

For example, let's install the gruf-zipkin gem in our app, and then set up its configuration in an initializer:

```ruby
require 'zipkin-tracer'
require 'gruf/zipkin'

# Set it in the Rails config, or alternatively make this just a hash
# if not using Rails
Rails.application.config.zipkin_tracer = {
  service_name: 'job-service',
  service_port: 1234,
  json_api_host: 'zipkin.mydomain.com',
  sampled_as_boolean: false,
  sample_rate: 0.1 # 0.0 to 1.0, where 1.0 => 100% of requests
}

Gruf.configure do |c|
  c.interceptors.use(Gruf::Zipkin::Interceptor, Rails.application.config.zipkin_tracer)
end
```

And then in our config.ru file:

```ruby
use ZipkinTracer::RackHandler, Rails.application.config.zipkin_tracer
```

Then we restart our server, and there we are: distributed tracing, automatically supported.

What's neat about this is that it carries across all of your gruf services. So if you're making delegated requests out to other services as your transaction completes, you'll see a fully distributed trace (including infrastructure) with as much detail as you choose to measure.

Interceptors

The most powerful part of gruf lies in its interceptor system: anyone can inject functionality into the request lifecycle by writing a simple class. Adding an interceptor is easy:

```ruby
class MyInterceptor < Gruf::Interceptors::ServerInterceptor
  def call
    # do my thing before the call. Calling `fail!` here will prevent the call from happening.
    result = yield
    # do my thing after the call
    result
  end
end

Gruf.configure do |c|
  c.interceptors.use(MyInterceptor)
end
```

And you're done! You can imagine quite a few things you can do with interceptors - for example, parameter validation, entity marshalling, and delegated permission authorization all become quite easy with access to the request object and metadata headers.

Utilizing a Gruf Client

Because of the separation of channel, method, request, and response, using a gRPC client can be fairly verbose. The client also does most of its error handling through exceptions raised as GRPC::BadStatus codes, which leaves a bit to be desired for specificity, debugging, or field-level validation responses in your API. With gruf, we attempted to clean that up by wrapping responses in a Response object and providing extra functionality on top of the gRPC core.

For example, a typical gRPC client request:

```ruby
stub = Demo::Jobs::Stub.new('localhost:50051', :this_channel_is_insecure)
req = Demo::GetJobReq.new(id: 123)
message = stub.get_job(req).message
p "Job: #{message}"
```

In gruf, it's more straightforward:

```ruby
client = ::Gruf::Client.new(service: ::Demo::Jobs)
response = client.call(:GetJob, id: 123)
p response.message.inspect
```

Gruf automatically looks up the RPC descriptor and translates the parameters for you in the call method. It can also automatically deserialize any error messages the server has attached. For example, if an error is returned:

```ruby
begin
  response = client.call(:GetJob, id: 0)
rescue Gruf::Client::Error => e
  p e.error.inspect
  p e.error.app_code
  p e.error.message
  e.error.field_errors.each do |f|
    p "#{f.field_name}: #{f.message}"
  end
end
```

Voilà - fine-grained error messaging!

Summary

We've open sourced gruf under the MIT license to help drive adoption of gRPC in the Ruby community, which we see as transformational for inter-service communication. We've built a few plugins for it already, such as the Zipkin support in gruf-zipkin for distributed tracing mentioned above, a circuit breaker plugin, and a request profiler. We hope these libraries will be as useful to others as they have been to us at BigCommerce, and welcome contributions and collaboration!

You can find more information about gruf at the GitHub repository and its README.

Thanks, and enjoy!