Last Saturday MIKAMAI hosted an interesting meetup, the Open Source Saturday Milano.

The format for the OSS is simple, people form groups and work on contributing to existing open source projects, or start new ones.

I couldn’t resist and pitched my idea of adding an optional caching layer to ruby-lol, an open source wrapper for the Riot Games API that I helped write.

I got just one person interested in the project, but we were ready to go and nothing would stop us: ruby-lol needed a cache, and boy, did we want to cache those calls!

A caching layer for an API wrapper is usually a good idea, and something most people using the wrapper end up implementing anyway. Being in a love relationship with Redis, we decided to go that way.

Now, everyone does caching in Redis, but everyone does it their own way, so feel free to complain about the way we did it, and feel even freer to contribute better solutions to our problem!

Our starting point was this:

```ruby
client = Lol::Client.new "my_api_key"
```

We wanted to change it like this:

```ruby
client = Lol::Client.new "my_api_key", :redis => "redis://whatever.local:6379", :ttl => 900
```

Our initialization already supported an option hash, so we just had to delegate redis initialization to a new method:

```ruby
def initialize api_key, options = {}
  @api_key = api_key
  @region = options.delete(:region) || "euw"
  set_up_cache(options.delete(:redis), options.delete(:ttl))
end

def set_up_cache(redis_url, ttl)
  return @cached = false unless redis_url

  @ttl = ttl || 900
  @cached = true
  @redis = Redis.new :url => redis_url
end
```

Don’t be scared by all those instance variables, they are all backed by properly written accessor methods!
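For reference, those accessors can be as simple as a few `attr_reader`s plus a predicate method (a hypothetical sketch — the actual gem may define them differently):

```ruby
class Client
  # plain readers for the cache-related state
  attr_reader :api_key, :region, :redis, :ttl

  # predicate-style reader, since @cached holds a boolean
  def cached?
    @cached
  end
end
```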

After having changed the initialization we hit our first problem. You call the API methods like this:

```ruby
summoner = client.summoner.by_name "intinig"
team = client.team.get summoner_id
```

client.summoner and client.team are respectively instances of SummonerRequest and TeamRequest (both subclasses of the more general Request) and they have no access to client. So we had to add the capability, when instantiating said classes, to pass them caching data.

To do that we did this:

```ruby
# Returns an options hash with cache keys
# @return [Hash]
def cache_store
  {
    redis: @redis,
    ttl: @ttl,
    cached: @cached,
  }
end

# Now we can get a new Request like this
SummonerRequest.new(api_key, region, cache_store)
```

With all the supporting work done we could move on to the caching itself. All FooRequest classes do the dirty work through Request#perform_request, the method that gets the real data from the API with the help of HTTParty.

This is the pre-cache version of Request#perform_request:

```ruby
# Calls the API via HTTParty and handles errors
# @param url [String] the url to call
# @return [String] raw response of the call
def perform_request url
  response = self.class.get(url)
  raise NotFound.new("404 Not Found") if response.respond_to?(:code) && response.not_found?
  raise InvalidAPIResponse.new(response["status"]["message"]) if response.is_a?(Hash) && response["status"]

  response
end
```

The cached version just adds two if blocks that handle the caching logic. Please don’t bash my usage of if here 🙂

```ruby
# Calls the API via HTTParty and handles errors
# @param url [String] the url to call
# @return [String] raw response of the call
def perform_request url
  if cached? && (result = store.get(clean_url(url)))
    return JSON.parse(result)
  end

  response = self.class.get(url)
  raise NotFound.new("404 Not Found") if response.respond_to?(:code) && response.not_found?
  raise InvalidAPIResponse.new(response["status"]["message"]) if response.is_a?(Hash) && response["status"]

  if cached?
    store.set clean_url(url), response.to_json
    store.expire clean_url(url), ttl
  end

  response
end
```

If you’re smart, and I am pretty sure you are, you will have some questions: what is store? Why are you passing through JSON?

Store is just a method that returns the Redis instance passed in during initialization. It’s called store because in the future we might add support for more cache stores 🙂
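Concretely, `store` and its companions can just read from the options hash the client passes down. This is a sketch under the assumption that `Request` keeps that hash around as `@cache_store` — the names match the article, but the exact bodies are mine:

```ruby
class Request
  def initialize(api_key, region, cache_store = {})
    @api_key = api_key
    @region = region
    @cache_store = cache_store
  end

  # The backing store — today a Redis instance, tomorrow maybe others
  def store
    @cache_store[:redis]
  end

  def ttl
    @cache_store[:ttl]
  end

  def cached?
    !!@cache_store[:cached]
  end
end
```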

JSON came in handy when we hit our second problem: HTTParty returns a hash, and we didn’t want to destructure it into several Redis keys. The easiest way forward was serializing it with JSON.
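To see why JSON is enough here, note that the parsed response survives the round trip through a single string. A self-contained illustration, using a plain Hash standing in for Redis (the key name is made up for the example):

```ruby
require "json"

# a plain Hash standing in for Redis: it stores string values under keys
fake_store = {}

response = { "name" => "intinig", "summonerLevel" => 30 }

# what perform_request does on a cache miss: serialize the whole hash
fake_store["summoner/by-name/intinig"] = response.to_json

# ...and what it does on a cache hit: deserialize it back
cached = JSON.parse(fake_store["summoner/by-name/intinig"])

cached == response # => true
```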

WOAH! This whole wall of text for fewer than ten new lines of code? You bet! The point of the article is that implementing a Redis cache on top of existing code is really easy 🙂

Last, but not least: TL;DR: Implementing a Redis cache is easy, do it, do it now!

PS. Check the whole (awesomely specced) code at https://github.com/mikamai/ruby-lol