
With so many services available these days, it’s almost impossible to find or build an application that doesn’t rely on a third-party service. Most developers who have dealt with billing systems within the past few years have likely heard of Stripe. Stripe is, by far, the most developer-friendly billing service I’ve implemented.

While Stripe provides a number of features and plugins that make updating a credit card or signing up for a service simple, there are occasions when data needs to be fetched from Stripe in real time. For these cases, it’s great to be able to fetch and cache this data beforehand, and only expire it when you know there’s been a change.

Combining Sucker Punch with Rails cache allows you to cache Stripe customer data so that billing pages are just as snappy as the rest of the application.

The Pain

Even though Stripe is generally pretty fast, retrieving customer data on the fly can be expensive. In order to optimize page load times, we can look to cache this data before it’s actually used.

If you’re familiar with the Stripe gem, you’ve probably seen something like this:

customer = Stripe::Customer.retrieve(user.stripe_id)

With the customer response, we can query further customer data using the following methods:

invoices = customer.invoices
upcoming_invoice = customer.upcoming_invoice

If we make all 3 of these method calls on page load, we’d have 3 separate lookups from Stripe. This is pretty common for the typical billing page where you might want to show the customer’s current credit card on file, their past invoices, and charges they can expect for the next invoice.

Three lookups like this could potentially add another second or so to page load, which is not ideal.
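To make that cost concrete, here’s a toy sketch using a hypothetical FakeStripeClient that simply counts round trips, standing in for real network calls:

```ruby
# FakeStripeClient is a hypothetical stand-in for the real Stripe gem;
# it just counts how many "network" round trips the page would make.
class FakeStripeClient
  attr_reader :calls

  def initialize
    @calls = 0
  end

  def retrieve_customer(id)
    @calls += 1        # one round trip: fetch the customer
    { id: id }
  end

  def invoices(_customer)
    @calls += 1        # another round trip: past invoices
    []
  end

  def upcoming_invoice(_customer)
    @calls += 1        # and another: the upcoming invoice
    {}
  end
end

client = FakeStripeClient.new
customer = client.retrieve_customer("cus_123")
client.invoices(customer)
client.upcoming_invoice(customer)
puts client.calls # => 3
```

Every page load pays for all three round trips unless we cache the results.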

So how can we improve this?

The Solution

First, we can move the code that fetches the relevant Stripe data into a class of its own, which wraps the notion of caching around the data retrieval.

class StripeCache
  def initialize(user)
    @user = user
  end

  def refresh
    purge_all
    cache_all
    self
  end

  def customer
    return @customer if @customer
    @customer = Rails.cache.fetch(cache_key("customer"), expires_in: 15.minutes) do
      Stripe::Customer.retrieve(user.stripe_id)
    end
  end

  def invoices
    Rails.cache.fetch(cache_key("invoices"), expires_in: 15.minutes) do
      customer.invoices
    end
  end

  def upcoming_invoice
    Rails.cache.fetch(cache_key("upcoming_invoice"), expires_in: 15.minutes) do
      customer.upcoming_invoice
    end
  end

  private

  attr_reader :user

  def cache_all
    customer
    invoices
    upcoming_invoice
  end

  def purge_all
    # Match every key under this user's Stripe namespace
    Rails.cache.delete_matched("user/#{user.id}/stripe/*")
  end

  def cache_key(item)
    "user/#{user.id}/stripe/#{item}"
  end
end

To use this on a billing page, we could do:

stripe = StripeCache.new(current_user).refresh

And from the response of that class, we can access the customer, invoices, and upcoming invoice respectively:

@customer = stripe.customer
@invoices = stripe.invoices
@upcoming_invoice = stripe.upcoming_invoice

This is great! All future calls to this customer’s Stripe data will be fast — for 15 minutes, of course.
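Under the hood, Rails.cache.fetch gives us read-through semantics: return the cached value if it’s present and fresh, otherwise run the block, store the result with a TTL, and return it. Here’s a minimal pure-Ruby stand-in (TinyCache is hypothetical, not the Rails implementation) showing why the second call is cheap:

```ruby
# TinyCache: a hypothetical, minimal stand-in for Rails.cache.fetch,
# illustrating read-through caching with an expiry window.
class TinyCache
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  # Return the cached value if fresh; otherwise run the block,
  # store its result with a TTL, and return it.
  def fetch(key, expires_in:)
    entry = @store[key]
    return entry.value if entry && Time.now < entry.expires_at

    value = yield
    @store[key] = Entry.new(value, Time.now + expires_in)
    value
  end
end

cache  = TinyCache.new
misses = 0
fetch_customer = lambda do
  cache.fetch("user/1/stripe/customer", expires_in: 900) do
    misses += 1          # only runs on a cache miss
    "cus_123"            # pretend this came from Stripe
  end
end

fetch_customer.call # cold: runs the block
fetch_customer.call # warm: served from memory
puts misses # => 1
```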

However, the first time the page is loaded, the user is still burdened with the initial fetch of the data. So the approach above works for every request to the billing page after the first.

But let’s be honest: how many users visit the billing page multiple times during a session? Probably not many. So we still need to fix the initial load somehow.

This is where Sucker Punch comes in. Like other Ruby background processing libraries, Sucker Punch allows you to move the processing of code to the background. However, unlike the others, Sucker Punch doesn’t require additional infrastructure like Redis, and doesn’t require a separate worker process to monitor and execute enqueued jobs. Because of this, the time it takes to extract code to a Sucker Punch job and have it incorporated with your application code is much lower.
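Conceptually, Sucker Punch hands jobs to a thread pool living inside the Rails process rather than serializing them to an external store. Here’s a rough sketch of that idea using plain Ruby threads (an illustration only, not Sucker Punch’s actual implementation, which is built on concurrent-ruby):

```ruby
# A rough in-process job runner: jobs are pushed onto a queue and
# executed by a worker thread inside the same process -- no Redis,
# no separate worker process.
class InProcessRunner
  def initialize
    @queue = Queue.new
    @worker = Thread.new do
      while (job = @queue.pop)  # nil is our stop sentinel
        job.call
      end
    end
  end

  # Enqueue a block to run in the background thread.
  def async(&job)
    @queue << job
  end

  # Stop the worker and wait for pending jobs to finish.
  def shutdown
    @queue << nil
    @worker.join
  end
end

runner  = InProcessRunner.new
results = Queue.new
runner.async { results << "cache warmed" } # e.g. StripeCache.new(user).refresh
runner.shutdown
puts results.pop # => "cache warmed"
```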

In this case, rather than send a transactional email or perform some database calculation, we can write a job whose only responsibility is to run the Stripe caching code.

class StripeCacheJob
  include SuckerPunch::Job

  def perform(user)
    StripeCache.new(user).refresh
  end
end

The next question is, when do you run this?

Well, I chose to run it on user login, but you could run it anywhere you think would give you a head start before the user visits the billing page. In my case, running it on login meant that if they didn’t go to the billing page at all, the data would simply expire from the cache after 15 minutes anyway, so no harm done.

But if the user did navigate to the billing page during that session, they would have the latest Stripe customer and invoice data waiting for them, all without a request to Stripe on page load.

One other thing to keep in mind is that there may be times when we’d want to invalidate the cached data. One example is when the user’s card information is updated. In that case, we can slip in another call to the Stripe cache job, which invalidates the previous cache and re-requests the customer’s billing information:

module Accounts
  class CardsController < ApplicationController
    before_action :require_authentication

    def create
      cust = StripeCache.new(current_user).customer
      cust.save(card: params[:stripeToken])
      StripeCacheJob.new.async.perform(current_user)
      redirect_to account_path, notice: t("card.update.success")
    end
  end
end

Summary

Using Sucker Punch in combination with Rails cache feels like a great way to optimize third-party data requests. This article focused on using it to fetch Stripe data, but it could be used with another service just as easily.

The beauty of Sucker Punch is that it doesn’t require a separate worker process to be running in the background. On a platform like Heroku, this saves the cost of an additional dyno.

Sucker Punch excels at background jobs that are relatively fast and if missed, wouldn’t be critical to the operation. In this case, if a cache job is lost, it’s not the end of the world. At worst, the user’s Stripe data would be requested on the fly and the page would be slower than usual. But the majority of the time, the request is fast because the data’s been cached beforehand.

What other jobs have you used Sucker Punch for?