RPC, ES6, and Proxies, Oh My!

Today we’re kicking off the first in a series of posts that go into more technical detail than we have in the past. Not everyone will enjoy or even understand these posts, but we hope they’ll give those of you familiar with JavaScript some neat ideas.

Our experience with TagPro made us wary of large monolithic servers. The DDoS problems grew out of control, and there was no great way around them: as long as we have large servers, we’re vulnerable to simple DDoS attacks. While thinking about the design of Next, we realized that many of our problems could be alleviated by switching to microservices.

We have a microservice for database access, one for player management, one for stat management, the joiner, and more. This gives us great flexibility and a lot more resilience than the current model. However, anyone who has tried using microservices before knows that communication is problem number one. There are lots of prerolled solutions out there, but all of them have issues in one way or another.

All we wanted was a dead simple RPC library, and no active library we could find would support what we wanted without significant hacking. So, we decided to build our own. We call it Intercom.

Before delving into any of the code, some background. We are using ES6 extensively throughout this project; most of it is transpiled via babel (formerly 6to5), but there are a couple of features of ES6 that are not transpilable. The one we care most about for the purposes of this discussion is the Proxy object. There’s not a huge amount of documentation for it out there, and its syntax is fairly wonky, but it’s immensely powerful.

A good comparison is __getattribute__ in Python. Any time you attempt to look up a property on the object, the proxy intercepts the lookup and lets you override it. So if I call x.blah(), the proxy (x) can decide to return whatever it wants for the “blah” member, which is then invoked. It’s also a great way to accidentally get into infinite recursion.
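To make that concrete, here’s a minimal sketch of property interception. It uses the standardized `new Proxy(target, handler)` form rather than the older `Proxy.create` API that the code later in this post relies on, but the idea is the same:

```javascript
// A proxy whose get trap passes real properties through and fabricates
// a function for anything else -- the same trick Intercom uses for RPC.
let x = new Proxy({ real: 1 }, {
  get: (target, name) => {
    // Real properties pass through untouched...
    if (name in target) {
      return target[name];
    }
    // ...anything else gets a function we make up on the fly.
    return () => "you called " + String(name);
  }
});

console.log(x.real);   // 1
console.log(x.blah()); // "you called blah"
```

Note that if the trap itself looked up a property on `x` (instead of `target`), it would re-enter the trap: that’s the infinite recursion mentioned above.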

Why does this matter? It allows us incredibly clean syntax, as you’ll see shortly.

There’s one more thing you should know. We’re using the wonderful ‘co’ library to make our async code much cleaner. As a tl;dr, it uses ES6 generators to simulate the async/await feature coming in ES7. So, for example, it turns this:

```javascript
x().then(function(res) {
  return y.something(res);
}, function(err) {
  return z.someErrorHandler(err);
});
```

Into this:

```javascript
try {
  let res = yield x();
  return y.something(res);
} catch (err) {
  return z.someErrorHandler(err);
}
```

While it may not seem that amazing in such a small example, when your results depend on 8 different callbacks it makes all the difference in the world.
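Under the hood, co is essentially a generator runner: it steps the generator, waits for each yielded promise, and feeds the result (or throws the error) back in. This stripped-down sketch illustrates the core mechanism; it is not co’s actual implementation:

```javascript
// Step a generator, resolving each yielded promise before resuming it.
// Errors from rejected promises are thrown back into the generator so
// that try/catch inside it works as expected.
function run(genFn) {
  return new Promise((resolve, reject) => {
    let gen = genFn();
    function step(method, value) {
      let result;
      try {
        result = gen[method](value); // resume with a value, or throw into it
      } catch (err) {
        return reject(err);
      }
      if (result.done) {
        return resolve(result.value);
      }
      Promise.resolve(result.value).then(
        (v) => step("next", v),
        (e) => step("throw", e)
      );
    }
    step("next", undefined);
  });
}

run(function* () {
  let res = yield Promise.resolve(40);
  return res + 2;
}).then((v) => console.log(v)); // 42
```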

So, with that out of the way, let me show you how this library is actually used. It’s gone through a lot of iterations, but I think we’re rapidly approaching what the final syntax will look like. Here’s an example of its use from the tests:

```javascript
it("should accept multiple arguments in an rpc call", function* () {
  let res = yield client.rpc.add(99, 98);
  expect(res).to.equal(197);
});
```

Now, this is with an already-connected RPC client, but if you have any experience with JS RPC libraries you can see the big difference: you call the actual method name as a method. Before we implemented the proxies, here’s what it looked like:

```javascript
it("should accept multiple arguments in an rpc call", function* () {
  let res = yield client.rpc("add", [99, 98]);
  expect(res).to.equal(197);
});
```

While still usable, it certainly wasn’t clean. We also had a version that didn’t require the brackets, and another that used objects instead, but none of them were satisfying to me. I knew we could do better.

So, how does this magic proxy work? I’ll deconstruct the code below:

```javascript
let proxy = function (self) {
  return Proxy.create({
    get: (o, name) => {
      if (self.hasOwnProperty(name)) {
        return self[name];
      }
      return function (...args) {
        return new Promise((resolve, reject) => {
          let timeout = setTimeout(() => {
            reject(new Error("A response was not received"));
          }, settings.requestTimeout);
          let rpcOptions = {
            command: name,
            args: args
          };
          self.emit("rpc", rpcOptions, (response) => {
            if (response && response.error) {
              reject(new Error(response.error));
            } else {
              clearTimeout(timeout);
              resolve(response);
            }
          });
        });
      };
    }
  });
};
```

It’s a formidable block of code, but we’ll take it one step at a time.

```javascript
let proxy = function (self) {
```

Pretty simple start. We’re just creating a proxy to be used later, and requiring a scope argument. Because the proxy is an object and this can’t be rebound inside it, we need to provide an explicit outer scope, and that’s what self is.

```javascript
return Proxy.create({
```

This is where the magic begins. The Proxy object is an ES6 built-in; it’s only available behind an experimental flag on io.js, but it’s worth it.

```javascript
get: (o, name) => {
```

This is called whenever a lookup is done on the object in question. Spoiler alert: we assign this proxy to .rpc, which means that if we call yield client.rpc.add(99, 98), then o will be the actual object (not actually used) and name will be "add".

```javascript
if (self.hasOwnProperty(name)) {
  return self[name];
}
```

This just checks if the object actually has the property we want, and if it does, return it.

```javascript
return function (...args) {
```

This uses the new ES6 rest parameter syntax, which allows a variable number of arguments. Basically, we’re creating a decorator function that accepts any number of arguments, passes them along over the network, and handles everything else.
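If you haven’t seen rest parameters before, here’s a quick self-contained illustration of what `...args` does:

```javascript
// Rest parameters gather however many arguments were passed into a real
// array, so no arguments-object juggling is needed.
let describe = (...args) => args.length + " args: [" + args.join(", ") + "]";

console.log(describe(99, 98)); // "2 args: [99, 98]"
console.log(describe());       // "0 args: []"
```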

```javascript
return new Promise((resolve, reject) => {
```

Creates a promise, pretty standard stuff.

```javascript
let timeout = setTimeout(() => {
  reject(new Error("A response was not received"));
}, settings.requestTimeout);
```

This allows us flexibility over how long we want a request to hang for. Due to the microservice nature of Next there’s always the potential for a server to be down or hanging, and we want to be able to deal with that if that’s the case.
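Here’s the timeout pattern in isolation. `doCall` is a hypothetical stand-in for the actual socket emit; it receives the callback to invoke when a response arrives:

```javascript
// Race a pending response against a timer: if nothing comes back within
// requestTimeout ms, reject; otherwise cancel the timer and resolve.
function callWithTimeout(doCall, requestTimeout) {
  return new Promise((resolve, reject) => {
    let timeout = setTimeout(() => {
      reject(new Error("A response was not received"));
    }, requestTimeout);
    doCall((response) => {
      clearTimeout(timeout);
      resolve(response);
    });
  });
}

// A fast "server" answers in time; a hung one never answers and we reject.
callWithTimeout((cb) => cb("pong"), 50)
  .then((res) => console.log(res)); // "pong"
callWithTimeout(() => { /* never calls back */ }, 50)
  .catch((err) => console.log(err.message)); // "A response was not received"
```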

```javascript
let rpcOptions = {
  command: name,
  args: args
};
```

We’re just grouping together the options to be received by the Intercom library on the other side. Nothing fancy, except that args is the arguments array I mentioned before.

```javascript
self.emit("rpc", rpcOptions, (response) => {
  if (response && response.error) {
    reject(new Error(response.error));
  } else {
    clearTimeout(timeout);
    resolve(response);
  }
});
```

This uses socket.io (for now; we may change to something more low-level eventually) to emit the RPC call to the server, rejecting the promise if the response had an error and resolving it if it didn’t.

So, that’s the client side code. The server side code is quite a bit more complex!

```javascript
server.on("connection", socket => {
  if (api) {
    for (let key in api) {
      let fn = api[key];
      let wrapper = (args, cb) => {
        if (!args) {
          args = [];
        }
        args.push(cb);
        if (isGeneratorFunction(fn)) {
          co(function* () {
            return yield fn.apply(socket, args);
          }).then((res) => cb(res), (err) => cb({error: err}));
        } else {
          let res = fn.apply(socket, args);
          if (res) {
            cb(res);
          }
        }
      };
      listeners.set(key, wrapper);
    }
    socket.on("rpc", (data, cb) => {
      if (data) {
        if (listeners.has(data.command)) {
          listeners.get(data.command)(data.args, cb);
        } else {
          cb({error: `RPC command ${data.command} not found.`});
        }
      }
    });
  }
});
```

Boy, that’s a lot less clear than the client side code.

```javascript
server.on("connection", socket => {
  if (api) {
    for (let key in api) {
      let fn = api[key];
```

All this is doing is handling connections, and once connected it checks if we’ve defined a server api. Let me show you what a server API looks like so you can understand what it’s doing from here on out:

```javascript
server = Intercom.createServer(9090, {
  getSomething: () => {
    return "something";
  },
  echo: (phrase) => {
    return phrase;
  },
  add: (a, b) => {
    return a + b;
  },
  slowEcho: function* (phrase) {
    yield sleep(10); // No need for a long pause.
    return phrase;
  },
  errorOut: function* () {
    throw "threw this just for funsies";
  }
});
```

All this does is create a server on port 9090 and put those functions into the API. So the previous code is just looping through those functions.

```javascript
let wrapper = (args, cb) => {
  if (!args) {
    args = [];
  }
  args.push(cb);
```

So now we set up a wrapper. We accept any args and a callback. If there are no arguments, we make an empty array. We push the callback onto the end so if a function wants to handle the callback manually it can.
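Here’s a sketch of that contract in isolation (`makeWrapper` is my own illustrative name, not part of Intercom): a handler that returns a value is answered for automatically, while one that returns nothing is trusted to call the callback itself.

```javascript
// Wrap a handler so it receives its args plus the reply callback.
// If the handler returns a value, reply with it; otherwise assume the
// handler used the appended callback to reply manually.
function makeWrapper(fn) {
  return (args, cb) => {
    if (!args) {
      args = [];
    }
    args.push(cb);
    let res = fn.apply(null, args);
    if (res) {
      cb(res);
    }
  };
}

let auto = makeWrapper((x) => x + 1);              // answers via return value
let manual = makeWrapper((x, cb) => { cb(x * 2); }); // answers via the callback

auto([41], (v) => console.log(v));   // 42
manual([21], (v) => console.log(v)); // 42
```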

```javascript
if (isGeneratorFunction(fn)) {
  co(function* () {
    return yield fn.apply(socket, args);
  }).then((res) => cb(res), (err) => cb({error: err}));
} else {
  let res = fn.apply(socket, args);
  if (res) {
    cb(res);
  }
}
```

Then we check whether it’s a generator function. If it is, we want to run it in co before doing anything else (like slowEcho above), so we apply it with socket as this and then handle the callbacks. If it’s not, we run the function and check whether it returned anything. If it did, we complete the callback with that return data. If it isn’t a generator and doesn’t return anything, we assume the function is handling the callback itself.
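The post doesn’t show isGeneratorFunction itself. One common way to implement the check is via the constructor name; this is an assumption on my part, not necessarily how Intercom does it:

```javascript
// Detect generator functions by checking whether the function was made
// by the GeneratorFunction constructor (an assumed implementation).
function isGeneratorFunction(fn) {
  return typeof fn === "function" &&
         fn.constructor &&
         fn.constructor.name === "GeneratorFunction";
}

console.log(isGeneratorFunction(function* () {})); // true
console.log(isGeneratorFunction(() => {}));        // false
```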

```javascript
listeners.set(key, wrapper);
```

Then we just store that wrapper as what should be called whenever that name is invoked. listeners is a Map, a new ES6 data structure very similar to dictionaries in other languages.
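A quick illustration of the Map API, using the same shape as the dispatch table above:

```javascript
// Map offers has/get/set lookups with arbitrary keys, much like a dict.
let listeners = new Map();
listeners.set("add", (a, b) => a + b);

console.log(listeners.has("add"));         // true
console.log(listeners.get("add")(99, 98)); // 197
console.log(listeners.has("subtract"));    // false
```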

```javascript
socket.on("rpc", (data, cb) => {
  if (data) {
    if (listeners.has(data.command)) {
      listeners.get(data.command)(data.args, cb);
    } else {
      cb({error: `RPC command ${data.command} not found.`});
    }
```

Finally, we set up the listeners. If we get any data on the ‘rpc’ channel, check if it has a command. If it has a command, check if we have it. If we don’t, error that back to the caller. If we do, run that wrapper function!

It’s a bit of behind-the-scenes setup, but it allows us to talk to microservices very easily without the usual syntactic overhead. A simple connection and we’re calling dot methods without ever having to ask the original server for a list of them. This is also useful because it allows us to modify the API on the fly without the client needing to be notified.

As an aside, this library will be open-sourced as we feel it has a lot of utility outside of TagPro.

That’s all for this week. If you have any questions about this code, please do feel free to ask!

Discuss this post on Reddit