The 500 Millisecond Rails Partial

Click… and 15 seconds later: “Something went wrong.” So that happened. I’d written a web-based email client and it was taking too long to load. If a request reached 15 seconds it would quit because of the server’s timeout setting. Here’s my journey into finding out what went down.

First things first, I needed to know what was going on. At first I wasn’t sure it even had anything to do with the email; other pages were slow as well. So I needed to get the “behind the scenes” information.

To help me on my quest I looked into New Relic. New Relic has a free sign-up and it is, by design, a behind-the-scenes informant. I found setting up New Relic to be as easy as 1, 2, 3: add the gem to the Gemfile, download the config file to the config directory, and change the website name in the config file. That’s it. Now just deploy and wait.
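For reference, the three steps above amount to something like the following (the gem name newrelic_rpm is the New Relic Ruby agent; the config filename and app-name key shown are the agent’s usual conventions, not something specific to my app):

```ruby
# Gemfile — step 1: add the New Relic agent gem
gem 'newrelic_rpm'

# Step 2: place the downloaded newrelic.yml into config/
# Step 3: in config/newrelic.yml, set the app_name entry to your site's name
```

After a deploy, the agent starts reporting request timings to the dashboard on its own.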

Once the New Relic gem was installed I navigated around the website, which generated information within the New Relic dashboard. The dashboard graphs how much time each part of the application takes. Mine showed that the time was being eaten up in Ruby itself. So I clicked on that spike and it showed me which methods, or files, seemed to be consuming the most time. Out of everything, the Rails partials I was using all seemed a little unreasonable. And my partial for email title summaries was at 500 milliseconds!

Thinking I had the answer and that Rails partials were to blame, I posted on Twitter that my email partial was taking 500ms per request, and with 37 emails that came to about 15 seconds. I immediately got a reply that that was unusual, and was asked, “What’s going on in that partial?” *Light bulb moment* So I looked into it. It turns out that on each call I was decrypting data twice. I tweeted my thanks at the mention that triggered my follow-through. Honestly, I should have thought of it. But sometimes it’s the simple things we overlook.

In the code

I’d written the email model to encrypt data to protect the user’s data. That’s how it should be. But now I’d found the pain point of security: security and convenience are always at odds with each other. You can very rarely have it both ways. But that doesn’t change the need for efficiency.

Since the two pieces being decrypted came from the same object, I knew I could cut the time in half by loading the object first before getting the two encrypted parameters. But I didn’t want that to run for every email while the view loaded. So I stepped back further to where the collective request was being made and decided to map the summary headers, decrypted, into one parsed data set.

Since the model decrypts the data when the method for that data is called, I could map those same methods into a Hash.

@result = current_user.emails.map { |i| { id: i.id, email: i.email, created_at: i.created_at } }

This converted the encrypted email data into Hashes. The email method on the model is where decryption is performed, so calling i.email here returns the decrypted email parameters. Now this is cool, but the collection doesn’t load in the view the same way. For example, in the view I can’t call email.id on an item from the returned Array; I would have to use email[:id]. I didn’t want to rewrite the views for this, and I knew Ruby has a way around such trivial matters. So I looked into it.
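To make the mismatch concrete, here is a tiny sketch of the problem with a hash built the way the map above builds them (the values are placeholders, not real email data):

```ruby
# A plain Hash answers to [:key] lookups but not to dot-method calls:
email = { id: 1, email: "hello", created_at: "2015-01-16" }

email[:id]             # bracket access works and returns 1
email.respond_to?(:id) # => false — a plain Hash has no #id method
```

So a view written for ActiveRecord-style `email.id` access breaks against these hashes.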

Part way through my online search for an answer I had a moment of brilliance. So I stopped looking for the answer and I wrote a little meta-programming to create singleton methods.

@result.each { |hash| hash.keys.each { |key| hash.define_singleton_method(key) { hash[key] } } }

For each Hash in @result, we go through its keys and define a singleton method on the Hash that reads that key. So now email.id will call email[:id] on the Hash.
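The trick on a single hash can be demonstrated in isolation (again with placeholder values):

```ruby
email = { id: 1, email: "hello" }

# Define a singleton method per key; each method just reads its key.
email.keys.each { |key| email.define_singleton_method(key) { email[key] } }

email.id    # => 1 — now reads like an attribute
email.email # => "hello"
```

The methods live only on this one object, which is exactly why they later collide with serialization.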

Still too much

The decryption gets called every time the page loads, and it’s still an expensive process to run. Emails don’t come in that often, so we should be able to save them from being decrypted over and over. I thought about memoization, but that would only load the emails once and wouldn’t allow loading new ones.
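To show why memoization alone falls short, here is a toy sketch (the class and its fake “decryption” are made up for illustration): the expensive work runs once and then never again, even if new mail has arrived.

```ruby
class EmailBox
  attr_reader :loads

  def initialize
    @loads = 0
  end

  # Memoized accessor: the expensive body runs only on the first call.
  def emails
    @emails ||= begin
      @loads += 1          # stand-in for the expensive decryption
      ["first email"]
    end
  end
end

box = EmailBox.new
box.emails
box.emails
box.loads # => 1 — later calls never re-run the body, so new emails are never picked up
```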

So I looked into where I could store session-relevant data in a Hash, and I came across the session object. It seemed like a decent place to keep stuff.

What to Store in the Session

“Deciding what to store in the session hash does not have to be difficult if you simply commit to storing as little as possible in it. Generally speaking, integers (for key values) and short-string messages are OK. Objects are not.” — The Rails 4 Way

Now to think of the way to accomplish this. I came up with something like:

def emails_helper
  session["email_count"] ||= 0
  if session["email_count"] != current_user.emails.count
    session["emails"] = @result # code from earlier
    session["email_count"] = current_user.emails.count
  end
  session["emails"]
end

I didn’t really like doing it this way… and it didn’t work. It raised a “singleton can’t be dumped” TypeError: the session gets serialized, and objects with singleton methods can’t be marshaled. So I figured I could move the singleton declarations out of the session["emails"] assignment and onto the return value. But I grew more uncomfortable with this: it iterates over the emails twice as much, AND the singleton setup runs every time the method is called. Still faster than decryption, but not good by design.

So I went back to Google

to look for that original answer on calling Hash keys with dot-method syntax. And I found that if I hadn’t had my brilliant-idea moment earlier, the answer was right in front of me. I had previously opened a StackOverflow question, and now I saw Avdi Grimm’s answer:

“What you’re looking for is called OpenStruct. It’s part of the standard library.” — Avdi Grimm, Nov 18 ’09

So I plugged OpenStruct in and that did the trick. No more “singleton can’t be dumped” errors. Just OpenStruct.new(id: i.id, email: i.email, created_at: i.created_at), and now my Array of decrypted objects functions just like query result objects for my view.
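A quick sketch of why OpenStruct fits here (placeholder values again): it gives the dot access the views expect, and unlike a Hash decorated with singleton methods it serializes cleanly with Marshal, which is what the “cannot be dumped” error was about.

```ruby
require "ostruct"

email = OpenStruct.new(id: 1, email: "hello", created_at: "2015-01-16")

email.id    # => 1 — dot access with no singleton methods defined
email.email # => "hello"

# Round-trips through Marshal without raising:
restored = Marshal.load(Marshal.dump(email))
restored.id # => 1
```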

At this time I was also looking into Rails caching, since various websites recommended it as a way to increase performance. But it was a snippet from <emoticode/> that caught my attention as a relevant solution for my problem at hand.

“…The following is an example of how to cache some data for 7.days, using a dynamic cache key to obtain automatic cache invalidation.”

@results = Rails.cache.fetch "put_a_cache_key_here", :expires_in => 7.days do
  # put your heavy query here
end

— evilsocket

This looked to be exactly the kind of solution I needed for caching things per session without my somewhat painful session Object usage. I experimented with Rails.cache.fetch in the rails c console and read up a bit on using Rails.cache from here: Advanced caching in Rails.

The way Rails.cache.fetch is designed is really brilliant. In this example:

Rails.cache.fetch "a" do
  Time.now
end
# => 2015-01-16 23:02:13 -0500

Rails.cache.fetch "a" do
  Time.now
end
# => 2015-01-16 23:02:13 -0500

Rails.cache.fetch "a" do
  Time.now
end
# => 2015-01-16 23:02:13 -0500

As long as the key doesn’t change, and the expiry doesn’t run out, the block returns the value from when it was first called. With this I can cache the email content for the user and only update it when the email count changes, by using the count in the key. Like so:
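The fetch behavior itself is simple enough to sketch in plain Ruby (this toy class is an illustration of the semantics, not Rails’ implementation, which also handles expiry and pluggable stores):

```ruby
# A toy fetch-style cache: run the block only on a cache miss.
class TinyCache
  def initialize
    @store = {}
  end

  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache  = TinyCache.new
first  = cache.fetch("a") { Object.new } # miss: block runs, value stored
second = cache.fetch("a") { Object.new } # hit: stored value returned, block skipped
first.equal?(second) # => true — the block ran exactly once
```

Changing the key is what forces the block to run again, which is the hook the email-count key exploits below.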

def email_count
  Rails.cache.fetch [current_user.id, "email_count"], :expires_in => 15.minutes do
    current_user.emails.count
  end
end

def email_helper
  Rails.cache.fetch [current_user.id, "email", email_count] do
    current_user.emails.map { |i| OpenStruct.new(id: i.id, email: i.email, created_at: i.created_at) }
  end
end

I’m using strings only to make the intent clear in this post: "email", as opposed to the better symbol :email, reveals that it’s not referring to an external method call but is just a unique identifier.



Now the email count pings the database at most once every 15 minutes. And when the email count changes, the key within the email_helper block changes, and that key change triggers a fresh data request to our DB. The only thing left to add is deleting the cached count if an email gets deleted.

Rails.cache.delete([current_user.id, "email_count"])

And that’s taken care of.

Still more to do

Loading all of the emails at once isn’t good for performance, especially with the decryption running for each of them. So from here it’s best to have your DB query fetch only, say, 10 email records at a time. Each call for emails would then include which “page” of ten you’re requesting. This hugely cuts back on waste and increases overall speed and efficiency.
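The paging idea can be illustrated in plain Ruby with the 37-email case from earlier (in Rails you would push this into the query itself, e.g. with limit and offset, rather than slicing in Ruby):

```ruby
# 37 pretend email ids, paged in groups of 10; page numbering starts at 1.
email_ids = (1..37).to_a
per_page  = 10
page      = 2

email_ids.each_slice(per_page).to_a[page - 1]
# => [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
```

The Active Record equivalent would be along the lines of `current_user.emails.limit(per_page).offset((page - 1) * per_page)`, so only ten records are loaded and decrypted per request.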

There are always more ways things can be improved. Most often what we need is to be pointed in the right direction and to have the right tools at our disposal. From there it’s a journey of betterment. I’d like to thank New Relic for revealing where my area of concern was. Once you know where to go, it’s just one foot after the other.

As always I hope my writing was both informational and enjoyable. Please comment, share, subscribe to my RSS Feed, and follow me on twitter @6ftdan!

God Bless!

-Daniel P. Clark

PS If you have counters on your site that show up on every page, consider how frequently they need to be updated and whether they’re expensive to compute. If so, cache them for a bit. (I had originally counted the emails after decryption and displayed that counter on every page… that was what was making the whole site drag. I’ve since moved the email counter to pre-decryption and cached its value for a few minutes. Works wonders!)

Image by Steven Depolo via the Creative Commons Attribution 2.0 Generic License.