LinkedIn replaced its Ruby on Rails back-end mobile infrastructure with Node.js some time ago for performance and scalability reasons. A former member of LinkedIn’s mobile team has responded, explaining what, in his opinion, went wrong.

Kiran Prasad, Director of Mobile Engineering at LinkedIn, told Ars Technica that LinkedIn had to reconsider the back-end infrastructure providing mobile services to its clients: although only 7-8% of users accessed LinkedIn through the mobile application, the Ruby on Rails back end suffered from severe scalability problems.

LinkedIn evaluated three possible solutions: Rails/Event Machine, Python/Twisted, and Node.js. According to Prasad, Node.js was eventually chosen because it provided a number of benefits:

- Better performance, Node.js being up to 20x faster than Rails for certain scenarios
- Using only 3 servers instead of 30, leaving room for a 10x traffic growth
- Front-end JavaScript engineers could be used for back-end code, and the two teams were actually merged into one

LinkedIn’s story of dumping Rails for scalability reasons triggered a number of reactions around the web. Ikai Lan, a former member of the mobile team at LinkedIn, shared his side of the story, describing the technology chosen and the problems encountered:

The stack we chose was Ruby on Rails 1.2, and the deployment technology was Mongrel. Remember, this is 2008. Mongrel was cutting edge Ruby technology. Phusion Passenger wasn’t released yet (more on this later), and Mongrel was light-years ahead of FastCGI. The problem with Mongrel? It’s single-threaded. It was deemed that the cost of shipping fast was more important than CPU efficiency, a choice I agreed with. … We deployed using Capistrano, and were the first ones to use nginx. … [Later] we upgraded to Rails 2.x+ … Oh, and we also decided to use OAuth for authenticating the iPhone client. Three legged OAuth, so we also turned those Rails servers into OAuth providers. Why did we use 3-legged OAuth? Simple: we had no idea what we were doing. I’LL ADMIT IT.

The servers designated for mobile services were hosted by Joyent, so when a mobile application needed some information, the request had to travel to Joyent and then make another trip to LinkedIn’s data center, where the main API service was located, according to Lan:

That’s a cross data center request, guys. Running on single-threaded Rails servers (every request blocked the entire process), running Mongrel, leaking memory like a sieve (this was mostly the fault of gettext). The Rails server did some stuff, like translations, and transformation of XML to JSON, and we tested out some new mobile-only features on it, but beyond that it didn’t do a lot. It was a little more than a proxy. A proxy with a maximum concurrency factor dependent on how many single-threaded Mongrel servers we were running. The Mongrel(s), we affectionately referred to them, often bloated up to 300mb of RAM each, so we couldn’t run many of them.
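The capacity limit Lan describes follows from simple arithmetic: a single-threaded worker that blocks for the full duration of a cross-data-center round trip can serve only one request at a time, so fleet throughput is bounded by worker count divided by per-request latency, and worker count is in turn bounded by memory. A back-of-the-envelope sketch, where the 300 MB per-Mongrel footprint comes from Lan’s account but the latency and host-memory figures are illustrative assumptions:

```python
# Back-of-the-envelope model of a fleet of single-threaded, blocking workers.
# The ~300 MB per-Mongrel footprint is from Lan's account; the 100 ms latency
# and 8 GB host size are illustrative assumptions, not LinkedIn's numbers.

def max_throughput(workers: int, request_latency_s: float) -> float:
    """Each blocking worker handles one request at a time, so the fleet
    serves at most workers / latency requests per second."""
    return workers / request_latency_s

# Assume a ~100 ms cross-data-center round trip dominates each request.
latency_s = 0.100

# Each Mongrel bloats to ~300 MB; assume an 8 GB host keeps 2 GB for
# the OS and other processes, capping how many Mongrels fit.
workers_per_host = (8_000 - 2_000) // 300   # 20 Mongrels per host

print(max_throughput(workers_per_host, latency_s))  # -> 200.0 requests/s per host
```

A non-blocking event loop, by contrast, can keep thousands of in-flight upstream requests open in a single process, which is consistent with the 30-to-3 server reduction Prasad cites.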

After pinpointing some of the problems, Lan acknowledged that “v8 is freaking fast” but added: “Don’t assume that you must build your next technology using node.js. It was definitely a better fit than Ruby on Rails for what the mobile server ended up doing, but it is not a performance panacea. You’re comparing a lower level server to a full stack web framework.”

Hacker News has a long thread of reactions to LinkedIn’s decision to use Node.js over Rails.