In this article I will walk you, step by step, through how we improved the performance of our personal feeds service, and compare the Rails and Phoenix solutions. Spoiler: Phoenix won the battle.

Step 1. Explore JSON structure

Before concentrating on more complex things, we should audit the JSON structure and remove all vestigial data from it. This lets us reduce both the number of objects’ attributes and the objects themselves, as well as send less data over the network. After that step we still had a structure containing heavy objects, which we serialized into JSON and paginated at 10 items per page.

After auditing the structure, we arrived at the following result:

Feeds include objects: photos, last_like, last_comment, user

Photos include objects: photo and many versions of images

LastLikes include objects: user, avatar

LastComments include objects: user, avatar

Response JSON Feeds structure:
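The screenshot of the response structure is not reproduced here; as an illustration, a trimmed feed item might look roughly like this (field names and values are assumptions for the sketch, not the exact production schema):

```json
{
  "feeds": [
    {
      "id": 123,
      "user": { "id": 1, "name": "alice", "avatar": "https://cdn.example.com/a/1.jpg" },
      "photos": [
        {
          "id": 10,
          "versions": {
            "thumb": "https://cdn.example.com/p/10_t.jpg",
            "medium": "https://cdn.example.com/p/10_m.jpg",
            "large": "https://cdn.example.com/p/10_l.jpg"
          }
        }
      ],
      "last_like": {
        "user": { "id": 2, "name": "bob", "avatar": "https://cdn.example.com/a/2.jpg" }
      },
      "last_comment": {
        "text": "Nice shot!",
        "user": { "id": 3, "name": "carol", "avatar": "https://cdn.example.com/a/3.jpg" }
      }
    }
  ],
  "meta": { "page": 1, "per_page": 10 }
}
```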

Having done this, we had our minimal JSON response structure and could move on to the next performance problem.

Step 2. Optimize JSON serializer and Ruby code

We looked for slow Ruby code and ran benchmark tests to compare different methods and gems, for example:
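As a sketch of such a benchmark (the array and target value here are made up; the real comparison ran against production data):

```ruby
require 'benchmark'

# A sorted array — a precondition for bsearch
array  = (1..1_000_000).to_a
target = 999_999

Benchmark.bm(8) do |x|
  # Linear scan: O(n)
  x.report('find')    { array.find { |n| n == target } }
  # Binary search: O(log n), find-any mode via the spaceship operator
  x.report('bsearch') { array.bsearch { |n| target <=> n } }
end
```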

As you can see, bsearch is much faster than find. However, keep in mind that the array must be sorted when using bsearch.

We used RailsPanel (a Rails development Chrome extension) to see how execution time split between ActiveRecord and rendering:

RailsPanel Breakdown

RailsPanel ActiveRecord

Then we compared alternative JSON implementation gems and finally opted for Oj:
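The gem comparison chart is not reproduced here; for reference, wiring Oj into a Rails app is typically a one-line initializer (a sketch — check the Oj documentation for your versions):

```ruby
# config/initializers/oj.rb
require 'oj'

# Replace the default Rails JSON encoder/decoder with Oj's
# optimized implementations (available in Oj 3.x).
Oj.optimize_rails
```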

Step 3. Analyze SQL queries

We looked for slow queries whose response time exceeded 150 ms and collected them in a file for further analysis. Alternatively, you can use the NewRelic service for this kind of task. We used the Postgres log_duration setting to log all our slow queries.

To turn on log_duration, set the following in postgresql.conf:

log_duration = on # Turn on logging

log_min_duration_statement = 150 # Set minimum duration (milliseconds) for logged queries

Then restart the postgresql service.

Using the EXPLAIN ANALYZE tool, we added the necessary indexes. I’d like to stress that you should run EXPLAIN ANALYZE only on real data in order to get accurate query plan results.

Here is the minimal query plan information needed to decide whether to add an index or edit the query itself:

Seq Scan — bad, since it scans the entire table.

Index Scan — good, because it uses index.

Bitmap Index Scan — good, as it uses index, besides, it’s very effective for a large number of rows.

Index Only Scan — the best, the fastest one, it reads only from index.
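For illustration, a hypothetical check of a feed query (the table and column names here are made up):

```sql
EXPLAIN ANALYZE
SELECT * FROM feeds WHERE user_id = 42 ORDER BY created_at DESC LIMIT 10;

-- If the plan shows a Seq Scan on feeds, an index like this usually helps:
CREATE INDEX index_feeds_on_user_id_and_created_at
  ON feeds (user_id, created_at DESC);
```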

For more details on how to read a query plan, I’d recommend this article by the great folks at thoughtbot.

Step 4. Choose the right production configuration

It’s highly important to set up the right production configuration for your project; if you get it wrong, it will spoil all three steps above. Configuration always depends on your goals and on your server’s parameters.

Database

To configure a Postgres database, you can use the online service PGTune. This site lets you enter your system’s parameters and get recommended PostgreSQL settings.

Application service

We use Puma as our Rails application server, and as mentioned earlier, the settings in its config file depend on your system parameters. There is a great article by Nate Berkopec about application server performance and Puma.
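As a sketch, a config/puma.rb matching the setup used in the benchmark below might look like this (the values are examples only and should be tuned to your CPU core count and memory):

```ruby
# config/puma.rb — example values, tune for your hardware
workers Integer(ENV.fetch('PUMA_WORKERS', 8))  # one worker per core is a common start
threads Integer(ENV.fetch('PUMA_MIN_THREADS', 8)),
        Integer(ENV.fetch('PUMA_MAX_THREADS', 16))

preload_app!  # load the app before forking for copy-on-write memory savings

on_worker_boot do
  # Re-establish DB connections in each forked worker
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```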

After finishing all of the above, we realized we were stuck on serializer performance.

Rails benchmark

Testing configuration

Database: PostgreSQL 9.5.4 RDS db.m4.xlarge

Instance: one db.m4.xlarge ubuntu

Server: Nginx

Application server: Puma (puma_workers = 8, puma_min_threads = 8, puma_max_threads = 16)

Application: Ruby 2.4.1, Rails 5.0.3, ActiveModelSerializer 0.10.6, oj gem

For load-testing we used JMeter with the following configuration:

15 threads

ramp-up period: 1 second

loop count: 6000

request the 6 biggest users’ feeds and scroll through pages 1 to 10

JMeter Rails load-testing

NewRelic Rails load-testing

Based on the results of Rails load-testing, we got:

AVG 602 RPM because of slow responses and network traffic.

AVG response time: 1,210 ms in the application and 1,499 ms in JMeter (response + network traffic).

We spent 46.3% of the time on data serialization.

Phoenix benchmark

Testing configuration

Database: PostgreSQL 9.5.4 RDS db.m4.xlarge

Instance: one db.m4.xlarge ubuntu

Application server: Cowboy

Application: Erlang/OTP 20, elixir 1.5.1, phoenix 1.3.0

For load-testing we used JMeter with the following configuration:

15 threads

ramp-up period: 1 second

loop count: 6000

request the 6 biggest users’ feeds and scroll through pages 1 to 10

We rewrote the feeds service in Phoenix and obtained the following results:

JMeter Phoenix load-testing

DataDog Phoenix load-testing

Based on the results of Phoenix load-testing, we got:

AVG 2.55K RPM in the same network.

AVG response time: 273 ms in the application and 356 ms in JMeter (response + network traffic).

Conclusion: Phoenix can serve 4 times more RPM and respond 5 times faster than the Rails solution.

Trying to repeat the same performance on Rails

For this purpose, we launched 8 db.m4.xlarge instances.

AWS: 8 Rails application instances

JMeter Rails load-testing

NewRelic Rails load-testing

Based on the results of Rails load-testing, we got:

AVG 1,330 RPM.

AVG response time: 373 ms in the application and 656 ms in JMeter (response + network traffic).

Conclusion: we’re nearly there…

We spent 69.2% of the time on data serialization.

Conclusion

Before deciding to rewrite the feeds service in Phoenix (Elixir), we did a lot of work to find the reason for the poor serialization performance. Only by conducting this research were we able to take the right step in dealing with the problem.

To sum up, it’s really important to understand how your application works and what tools you can use to find your system’s bottlenecks. I don’t suggest rewriting every single service in Elixir, but it handles its tasks quite well and drastically reduces server expenses.