The source for this code can be found on GitHub.

There was an interesting talk by Chad Fowler, see it here, where he spoke about a system where you rewrote the individual components but that didn't change the system as a whole. He used the metaphor of a body having its cells replaced while still remaining the same body. It is a great talk about system architecture, but how do you actually do this?

The components, or cells to use Chad's metaphor, are called "microservices". There are thousands of things that need to be assessed before you adopt a microservice-based architecture, but one thing it does allow you to do is use the right tool, language or framework for the job at hand.

Using multiple languages may be useful for a number of reasons: augmenting existing legacy code, catering to the skills available on your engineering team, or using a manufacturer-provided SDK. I am sure you can think of a few more. So I thought I would go through the basics of serving an HTTP request in two languages running in two separate processes, connected together via gRPC.

The RPC

gRPC is Google's flavour of RPC. It is based on their internal RPC tool, Stubby, but has been specifically written to be suitable for general use. The nice thing about gRPC is that it gives you a clean point of decoupling between services: a defined protocol that forms a contract between the two. The full set of features and benefits is beyond the scope of this article, but for now you need to understand the following about gRPC, in no particular order:

It is based on protocol buffers, a binary format which carries the message over the wire. Compared to XML, this format is 3–10 times smaller and 20–100 times faster.

It gives you explicit message structures that define a contract between your services.

It allows you to add versions to the interface between your services, but that is beyond the scope of what we are looking at today.

api.proto — The protocol buffers are defined in a proto file; the full specification can be found here. Our proto file is as follows:

syntax = "proto3";

service Handler {
    rpc HandleGeneric (Request) returns (Response) {}
}

message Request {
    string name = 1;
}

message Response {
    string message = 1;
}

Our proto file has three main features.

First is the syntax definition, which is used to manage versions of the protocol buffer specification. Next we define a service; our service defines the remote procedure call. It is worth noting that at this point we are not defining the RPC protocol, just the call signature. Our Handler service defines a single method called HandleGeneric, which accepts a Request and returns a Response. Finally we define the Request and the Response, each a message containing a single string. The fields of our messages are numbered, with the = 1; this number identifies the field on the wire, which is what allows you to evolve a message across versions without breaking existing clients.
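To make that last point concrete, here is a sketch of how a later revision of Request could grow. The locale field is hypothetical, purely for illustration; the key is that it takes a fresh field number, so peers that only know about field 1 simply ignore it:

```protobuf
message Request {
  string name = 1;
  // Hypothetical new field: it uses a new number, so existing
  // clients and servers that only know field 1 keep working.
  string locale = 2;
}
```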

Now that we have this very rudimentary protocol buffer file, we need to compile it. First we install the protoc compiler; full instructions are here, but on macOS I used brew:

brew install protobuf

In order to compile for Go we also need the plugin that generates our Go code, which can be installed by running go get -u github.com/golang/protobuf/protoc-gen-go; the gRPC library itself comes from go get -u google.golang.org/grpc.

Now we can generate our Go file.

protoc -I api/ api/api.proto --go_out=plugins=grpc:api

We will now have a file called api.pb.go with Go-specific methods to handle the requests. You can read the full file in the repository, but for now we will move on to writing our server.

The server

The server is required to run an HTTP server and delegate each request to a secondary process which will handle it. The response from the RPC call (you know, like an ATM machine) will then be issued to the client as the HTTP response.

Go has a handy-dandy web server built in, which can be started up with just:

func main() {
    http.HandleFunc("/", handler)
    if err := http.ListenAndServe(":8080", nil); err != nil {
        panic(err)
    }
}

Our handler is a function which accepts the request and writes the returned bytes back to the HTTP requester. For our use case, the handler splits the path off the incoming request and sends it through as the name field in our message.

To handle our remote procedure call, we write a second function which we call to handle the request.

func sendToServer(name string) string {
    conn, err := grpc.Dial(address, grpc.WithInsecure())
    ...
    c := api.NewHandlerClient(conn)
    ctx, cancel := context.WithTimeout(
        context.Background(),
        time.Second,
    )
    ...
    r, err := c.HandleGeneric(ctx, &api.Request{Name: name})
    return r.Message
}

If you copy and paste this it won't compile. My bad: I have removed the error handling and some defer statements for easier reading; the full code is available on GitHub.

This code creates a new client and connects to the gRPC server. After this we create a context with a one-second timeout, and then a Request object. The Request type comes from the api package that was generated from the proto file.

Finally, our server returns the Message from the gRPC server's response.

It is worth noting that this implementation is not optimised at all: it dials a new connection for every request, so it is not suitable for a situation where you might have many requests. You would probably want to look at reusing the connection and a worker pool.
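As a rough illustration of the worker-pool idea (the channel names are my own and the string concatenation stands in for the real RPC call, neither is from the repository): a fixed set of goroutines shares one jobs channel, which bounds how many RPCs are in flight at once.

```go
package main

import "fmt"

// worker drains names from jobs and pushes replies onto results.
// The concatenation here stands in for the real gRPC call.
func worker(jobs <-chan string, results chan<- string) {
	for name := range jobs {
		results <- "Hello " + name
	}
}

func main() {
	jobs := make(chan string, 3)
	results := make(chan string, 3)

	// Two workers mean at most two "RPCs" run concurrently.
	for i := 0; i < 2; i++ {
		go worker(jobs, results)
	}

	for _, n := range []string{"a", "b", "c"} {
		jobs <- n
	}
	close(jobs)

	// Collect all replies; completion order may vary.
	for i := 0; i < 3; i++ {
		fmt.Println(<-results)
	}
}
```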

The Handler

Time for language number two: say a warm welcome to Node. For simplicity we will be using the dynamically loading gRPC implementation. If there is some interest I will look at the statically generated gRPC implementation for Node as well.

Firstly we start with a blank Node project and install the following two packages via npm: grpc and @grpc/proto-loader.

Our service is defined by the following

const packageDefinition = protoLoader.loadSync(
  PROTO_PATH,
  {
    keepCase: true,
    longs: String,
    enums: String,
    defaults: true,
    oneofs: true
  }
)
const api = grpc.loadPackageDefinition(packageDefinition)

We pass in the path to the proto file from before, as well as some configuration parameters.

As we saw in the Go part of this project, we have an API with a service called Handler. In turn, the Handler service has a function called HandleGeneric.

We now need to write our handler which will handle our requests.

function HandleGeneric(call, callback) {
  console.log(`Handling request ${call.request.name}.`)
  callback(null, {message: 'Hello ' + call.request.name})
}

When this handler is invoked it is passed a call object whose request has the shape defined by the proto file. In our case we are expecting a string, and the library will actually convert other types (arrays, booleans, etcetera) to the correct type for you behind the scenes. However, you should not rely on this, as I will point out in the discussion below.

We now write our gRPC server and tie the whole lot together. We create a new grpc.Server and add the service to it.

function main() {
  const server = new grpc.Server()
  server.addService(api.Handler.service, {HandleGeneric})
  server.bind(
    '0.0.0.0:50051',
    grpc.ServerCredentials.createInsecure(),
  )
  server.start()
}

We bind it to all interfaces on port 50051, and we pass in an object containing the handler function that satisfies the interface defined in the proto file.

Running the whole thing together

Start up two terminal sessions side by side. Run go run server/main.go in the first; in the other, navigate to the handler's node directory and run node index.js.

You have now started the Go HTTP web server in the first terminal. It listens for requests on port 8080, then connects to the Node gRPC server and sends it the URL path.

The gRPC server sits and waits, much like the HTTP server, ready to serve the response.

Finally jump into your browser and navigate to http://localhost:8080/reader, changing the route to your name.

Watching the logs, you will see your request hit the Go server, then the Node server, and then follow the same path back to your browser.

Discussion

Firstly a couple of questions

What is with all this insecure stuff?

A lot of work is being done to make sure developers use HTTPS wherever possible. Part of that is the guilt trip that you, as a diligent developer, feel when you type (because you would never copy and paste) Insecure into your editor. It just feels wrong. What it means here is that you are not using transport layer security on your requests. I plan on adding TLS to this idea in the future, so stay tuned.

Why not JSON over HTTP?

That was an awful lot of kerfuffle for a simple "Hello World!" application, and you could just as easily have run a web server in Node via the http module and achieved the same result.

The reason for me is the strict interface between the services. While there are performance gains, these will be negligible for most of our applications. What is a huge boon is the strict interface: we define the contract that both services have to adhere to in a single file shared by multiple services. These files can also be versioned, allowing a more graceful upgrade path than waiting for other teams/projects/clients/whoever to upgrade to the newer API.

Is it worthwhile?

For an application like this, definitely not. But as a way of breaking an application into separate parts, where each part is written in the language and tools best suited to that part of the problem, I think it has some merit.

The nicely defined interface and support across many languages mean that you can divide and conquer the work, confident that it will connect together in the end. Through this little project I have grown quite fond of gRPC, especially in light of the problems my team often encounters when two services meet.