So there you are: a backend developer with a few years’ experience developing Ruby on Rails applications. Lucky for you, Ruby on Rails has been versatile enough to solve pretty much any problem you’ve encountered along the way. How good life is!

Then, one day, as you’re about to start working on an online ticketing marketplace app, you get the feeling that it’s going to be an all-around different type of problem. You start having these nightmares where hundreds of thousands of Elton John fans are hitting your app the moment tickets for the tour go on sale and everything just goes down. (I know it may just be my nightmare, but stick with me here!)

Suddenly, in the back of your head, you start hearing this whisper, telling you Rails might not be able to handle the huge traffic load without heavy caching. The same voice keeps on saying that business requirements like these might be difficult to meet with a pure Rails implementation. Is it finally time to abandon ship? No! It’s time to try something new.

So in this article I’ll tell you a short story about the romance between Rails and Phoenix. And with a happy ending, to boot (sorry for the spoiler).

The problem has found its solution

We wanted to base our app’s architecture and tech stack on solid facts and numbers rather than gut feeling, so we decided to attack the problem from different sides. We prepared a few implementations of proof-of-concept apps in various technologies like Rails, Rails + Event Sourcing, Node.js, Phoenix, RODA + Sequel. Each app included exactly the same business logic and exposed the same API, which allowed us to perform load tests on them using the Gatling tool.

I wouldn’t want to focus on describing the process of performing these tests along with their detailed results here, as we’ll be making that the subject of an upcoming post (stay tuned!). What matters here is the outcome: we ultimately picked Phoenix, as it offered the best performance of the whole bunch.

All clear, let’s go, Phoenix!

But the claim above ain’t exactly the full truth, either—there were some limitations that stopped us from fully implementing the backend part of the app in Phoenix. Phoenix and Elixir were still rather new to us and we didn’t have sufficient experience with deploying Phoenix on such a scale. As the client insisted on delivering an MVP as soon as possible, we simply didn’t have the time to spend on polishing our Elixir and Phoenix skills.

Taking into account the dev team’s experience and the business requirements, we ultimately decided to develop two separate apps simultaneously: one based on Rails to handle the admin panel and process booking requests, and the other, based on Phoenix, to serve public API requests.

Responsibilities

Implementing two separate apps allowed us to split their responsibilities, so that we could make the most of the capabilities offered by each language/framework. We also needed to keep the team’s skillset in mind and adjust responsibilities accordingly:

RoR app:

processing booking-related logic (encapsulated in background jobs)

admin panel

Phoenix app:

scheduling booking-related jobs

forwarding requests to the RoR app (described in detail in later sections)

public API

As processing booking-related logic is the core and the most complex piece of logic in the app, we decided to have it implemented in the language we are more experienced with. RoR also seemed like an obvious choice for the admin panel due to a variety of existing gems that make admin panels work almost out of the box.

Phoenix uses Elixir, a compiled language that runs on the Erlang virtual machine:

All Elixir code runs inside lightweight threads of execution (called processes) that are isolated and exchange information via messages. Due to their lightweight nature, it is not uncommon to have hundreds of thousands of processes running concurrently in the same machine.

Remember the thousands of Elton John fans hitting our app simultaneously within a short timeframe? Now you can see why Elixir was the perfect choice for us! We decided to serve the public API in the Phoenix app (booking-related requests, resource data requests, etc.) and forward requests not handled by the Phoenix app to the RoR app (as there are still a handful of public API endpoints that need to be handled by the RoR app).

Let us be async

The greatest tip we learned while designing the app’s architecture was to handle the booking requests asynchronously. Consider two scenarios:

Synchronous processing:

Server receives a booking request
Server processes the request inline (it may take a while)
Server responds to the client

Asynchronous processing:

Server receives a booking request
Server schedules a background job with the booking request
Server immediately responds to the client with a unique booking identifier
Client polls for booking status using the unique identifier
Booking request is processed in the background
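The asynchronous steps above can be sketched roughly like this. All names here are illustrative (not from our actual codebase), and an in-memory array stands in for Redis plus a Sidekiq/Exq worker:

```ruby
require "securerandom"

# Minimal sketch of the async booking flow under the assumptions above.
class BookingService
  Job = Struct.new(:booking_id, :request)

  def initialize
    @statuses = {} # booking_id => :pending / :confirmed / :unknown
    @queue = []    # stands in for Redis + a background job library
  end

  # Steps 2-3: schedule a job, respond immediately with an identifier.
  def create_booking(request)
    booking_id = SecureRandom.uuid
    @statuses[booking_id] = :pending
    @queue << Job.new(booking_id, request)
    booking_id
  end

  # Step 4: the client polls using the identifier it received.
  def status(booking_id)
    @statuses.fetch(booking_id, :unknown)
  end

  # Step 5: a worker later picks the job up and processes it.
  def process_next_job
    job = @queue.shift or return
    @statuses[job.booking_id] = :confirmed # real booking logic goes here
  end
end
```

The client never waits on the slow booking logic: it gets an identifier back right away and checks the status at its own pace.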

One of the biggest advantages of async processing is taking the load off the HTTP layer by immediately responding to the client. Keeping in mind that processing booking requests is a time-consuming operation, we reduce the requests queue on the HTTP layer by moving the processing to the background. Such an approach makes the app much less prone to timeouts.

We are also able to quite easily limit the number of served jobs and thus increase queue clearance time by immediately dropping jobs that have been enqueued for too long. Once traffic on the website decreases, a scheduled job will be instantly picked up for processing, but if Elton John’s tickets have just gone on sale… well, it might be some time before the job is processed.

A scheduled job contains a queuing timestamp. We assume that every job in the queue has a certain lifespan (e.g. 10 seconds); exceeding it means there must have been heavy traffic in the queue, preventing the server from getting to this job in time. In that case we assume that, in all probability, the desired resource has changed its state in the meantime, making it more sensible to inform the client about the failure than to process the job.

So the first thing to do after picking a job from the queue is to check whether it has exceeded its lifespan. If not, we continue processing the booking request; otherwise, we immediately respond to the client with information about the failure. This helps keep our queue a little tidier.
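A sketch of that lifespan check, in the shape of a worker. The names (`LIFESPAN`, `enqueued_at`) and the 10-second value are illustrative assumptions, not our production code:

```ruby
# Hypothetical worker that drops jobs which sat in the queue too long.
class BookingWorker
  LIFESPAN = 10 # seconds a job may wait in the queue before being dropped

  # enqueued_at: Unix timestamp stored when the job was scheduled.
  # `now` is injectable to make the check easy to exercise in isolation.
  def perform(booking_id, enqueued_at, now: Time.now.to_i)
    if now - enqueued_at > LIFESPAN
      # The queue was congested; the resource state has likely changed,
      # so report failure instead of processing a stale request.
      report_failure(booking_id)
    else
      process_booking(booking_id)
    end
  end

  def report_failure(_booking_id)
    :failed # would notify the client via the polled booking status
  end

  def process_booking(_booking_id)
    :processed # real booking logic lives here
  end
end
```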

Communication

Both apps share the same Redis instance for storing background processing jobs. We’re using Exq in Phoenix, an Elixir job-processing library compatible with Sidekiq which is used in the RoR app. Both apps can schedule jobs in Redis, as well as pick up jobs from queues and process them. The setup is very easy and a sample implementation can be found here.

In fact, both apps employ only one-way communication with Redis: the Phoenix app queues the job after receiving a booking request through the API, and the RoR app picks up booking-related jobs from Redis and processes them. It’s as simple as that (sample code):

# Phoenix controller
defmodule PhoenixAppWeb.GreeterController do
  use PhoenixAppWeb, :controller

  alias Exq.Enqueuer

  def create(conn, %{"name" => name}) do
    {:ok, _ack} = Enqueuer.enqueue(Enqueuer, "default", "GreeterJob", [name])
    send_resp(conn, 201, "")
  end
end

# Rails worker
class GreeterJob
  include Sidekiq::Worker

  def perform(name)
    logger.info "Hello, #{name}"
  end
end

Storage

Both apps need to access the same database: the admin provides the data through the admin panel in the RoR app, while the Phoenix app reads all the data and returns it through the API. We can already see that the database permissions for each app look something like in the picture below:

As we’re dealing with a shared database, we need to keep database structure files in sync between apps. We decided to make the RoR app responsible for database structure modification by defining and running migrations with ActiveRecord. The Phoenix app does not define or run the migrations itself—it only reads the structure.sql content to load the database structure with Ecto tasks. Ecto is an Elixir database wrapper and query generator (you can think of it as a kind-of equivalent to ActiveRecord in RoR).

Just like ActiveRecord, Ecto is capable of creating migrations and defining schemas, but we’re not taking advantage of these features in our project as we delegate these responsibilities to the RoR app. The results of ActiveRecord migrations are reflected in the db/structure.sql file, which needs to be kept in sync with priv/repo/structure.sql (the destination where Ecto saves the structure generated by its own migrations as well).
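One way to keep the two files in sync is a small copy step after every migration run. This is purely an illustrative sketch (the helper name and the relative path to the Phoenix app are my own assumptions, not the project’s actual setup):

```ruby
require "fileutils"

# Hypothetical paths: Rails writes db/structure.sql, and the Phoenix app
# (assumed to live in a sibling directory) reads priv/repo/structure.sql.
RAILS_STRUCTURE   = "db/structure.sql"
PHOENIX_STRUCTURE = "../phoenix_app/priv/repo/structure.sql"

# Copy the Rails-generated structure file over to the Phoenix app.
# Could run as a Rake task hooked after db:migrate, or as a CI step.
def sync_structure!(source = RAILS_STRUCTURE, dest = PHOENIX_STRUCTURE)
  FileUtils.mkdir_p(File.dirname(dest))
  FileUtils.cp(source, dest)
  dest
end
```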

The fact that the file generated by RoR needs to be moved to Phoenix forces us to keep the schema as a structure.sql file, rather than schema.rb, which the Phoenix app would not be capable of interpreting. One last thing that needs to be adjusted is telling Phoenix to load the database structure from the existing structure.sql file rather than running migrations (which do not exist in the Phoenix app):

defp aliases do
  [
    "ecto.setup": ["ecto.create", "ecto.load", "run priv/repo/seeds.exs"],
    ...
  ]
end

Request forwarding

Remember how I mentioned that one of the Phoenix app’s responsibilities is to forward requests to the RoR app? Let’s take a look at the diagram that describes public API request handling in our system (local addresses are used for demonstration purposes):

Most of the public API endpoints are served by the Phoenix app, but there are still a few endpoints that need to be handled by the RoR app. Rather than hitting two separate apps depending on the request, the client always hits the Phoenix app, which either handles the request itself or forwards it to the RoR app.

The Terraform Elixir library allows us to define the destinations to which particular requests should be forwarded. Any request that is not matched by the Phoenix router is handled by the Terraform plug and then forwarded in accordance with the rules defined within that plug (a plug is an abstraction layer between the initial request and the final response: it accepts a connection, applies modifications to it, and returns the updated connection).

# /lib/phoenix_app_web/router.ex
defmodule PhoenixApp.Router do
  use Terraform, terraformer: PhoenixApp.Terraformers.MainApp
  use PhoenixApp, :router

  # Define all routes handled by Phoenix app
  ...
end

# /lib/phoenix_app_web/terraformers/main_app.ex
defmodule PhoenixApp.Terraformers.MainApp do
  use Plug.Router

  plug(:match)
  plug(:dispatch)

  match _ do
    %{status_code: status_code, body: body} = forward_request(conn)

    conn
    |> send_resp(status_code, body)
  end

  defp forward_request(conn) do
    # Forward request to RoR application with HTTPoison
  end

  ...
end

The rule we defined forwards all requests that are not handled by the Phoenix app’s router to the RoR app (by using the match _ clause), but you’re free to define whatever rules you want, e.g. forward only GET requests or forward particular requests by matching the exact path. Terraform is also a very useful tool for moving an existing API into Phoenix: it allows you to rewrite the API endpoint by endpoint, while forwarding as-yet-unfinished endpoints to the existing implementation.

If you prefer a server-level configuration, you could implement a reverse proxy in nginx, which would more or less mirror the request-forwarding setup we built with Terraform.
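For illustration only, a minimal nginx sketch of that idea might look like the following. The upstream names, ports, and the 404-fallback rule are assumptions for demonstration, not our production config:

```nginx
# Try the Phoenix app first; send anything it can't route to Rails.
upstream phoenix_app { server 127.0.0.1:4000; }
upstream rails_app   { server 127.0.0.1:3000; }

server {
    listen 80;

    location / {
        proxy_pass http://phoenix_app;
        # Let nginx intercept Phoenix's 404 and retry against Rails.
        proxy_intercept_errors on;
        error_page 404 = @rails;
    }

    location @rails {
        proxy_pass http://rails_app;
    }
}
```

Note that this variant keys the fallback on Phoenix returning a 404, whereas Terraform catches unmatched routes inside the app itself, so the two approaches are only roughly equivalent.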

We’re live and…

…we love it! All of us have agreed that the integration went smoother than expected. We feel we did our best to make the product capable of handling even the heaviest traffic, especially given the fact we were very short on time and it was our first Phoenix app on such a scale.

It’s hard to say whether a pure Rails implementation would have taken us less time. What we are certain of, however, is that the architecture we built is flexible enough that we could easily move the entire logic to the Phoenix app anytime we wanted to.

We are happy to say we encountered very few bugs and issues in the course of development and we learned a lot. And most importantly, at least from my personal perspective, I fell in love with Elixir and feel that this is just the beginning of many great adventures!

P.S. Kudos to Kacper Pucek and Rafał Skorupa for putting so much heart into making it happen!