Injecting Failure at Netflix, Staying Reliable for 40+ Million Customers



March 4, 2014

by Vivian Au

Corey Bertram, Site Reliability Engineer at Netflix, recently spoke to a DevOps Meetup group at PagerDuty HQ about injecting failure at Netflix. Corey wanted to show people what can go wrong, because anything that can go wrong, will. Promoting chaos and injecting failure has been a great way to keep Netflix up and running for its 40+ million customers.

Tasked with Netflix’s uptime and reliability, Corey said,

“I spend a lot of time thinking about how to break Netflix”

According to Corey, by injecting failure into their production systems, Netflix has been able to significantly improve how they react to failure.

A rarity, especially among large companies, is Netflix's culture of Freedom and Responsibility. Every developer at Netflix is free to do whatever they think is best for Netflix. Their estimated 1,000 engineers are encouraged to be bold and solve problems, which allows everything at Netflix to happen organically. It's for this reason that Netflix does not have an operations team; instead, every engineer is responsible for their own services from conception through production.

Corey admits this creates a hostile environment for engineers, where every incident is unique and no single person knows how it all works. But when their engineers are told to go wild, they do. They don't shy away from challenges, and they find solutions to problems no other company has ever experienced.

Netflix has hundreds of databases and hundreds of services in the production path, which makes frequently injecting failure in their system necessary for their continued success and growth.

“No one knows how Netflix works. I say that in the most sincerest way possible. No one understands how end-to-end this thing works anymore. It’s massive.”

Taking a Different Approach to Failure and Reliability

Deploys happen 24/7 at Netflix, so anything can happen across their tens of thousands of instances at any time. Because of this, they have decided to focus on clusters rather than individual incidents. According to Corey, it's easier to roll back a thousand services than just one, because at that scale you can spot trends.

Corey admits that Netflix doesn't test in the traditional sense: it's impossible to mimic what has been built in production in a testing environment. That doesn't mean no testing is done, but their testing environments cover only a small fraction of what occurs in production. When services are deployed, they face an entirely new environment shaped by the unique conditions of production.

“From a reliability standpoint, we are kind of just along for the ride.”

In light of not having a representative test environment, Netflix has automated everything and created the Simian Army.

Inject Failure… But Don’t Break Netflix

Reliability is secured at Netflix by continuously automating the testing of production systems. By purposely poking at their systems, they can see whether those systems can really stand up in a fight. But instilling the need for reliability meant that the concepts had to be sold internally. To do this, Netflix decided to brand, promote and incentivize the use of their process, the Simian Army.

Start Small. Find your easy wins and keep it simple by going after your low-hanging fruit. According to Corey, it's these easy wins that will bite you if they are ignored. Don't get bogged down creating hundreds of test scenarios.
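A first failure-injection run really can be that small: pick one random instance in a cluster and kill it. The sketch below illustrates the idea in plain Python; the function and field names are hypothetical, not Netflix's actual Simian Army code, and the "termination" is simulated rather than a real cloud API call.

```python
import random

def pick_victim(instances, opted_out=frozenset(), rng=random):
    """Pick one instance to 'terminate', skipping opted-out services.

    A minimal 'start small' sketch: one random kill per run, not
    hundreds of scripted scenarios. All names here are illustrative.
    """
    candidates = [i for i in instances if i["service"] not in opted_out]
    if not candidates:
        return None  # every service opted out; nothing to kill
    return rng.choice(candidates)

# Example fleet: three instances across two services; "billing" opts out.
fleet = [
    {"id": "i-001", "service": "api"},
    {"id": "i-002", "service": "api"},
    {"id": "i-003", "service": "billing"},
]
victim = pick_victim(fleet, opted_out={"billing"})
print(victim["id"], victim["service"])  # one of the two "api" instances
```

In a real run the final step would call your cloud provider's terminate API instead of printing, but the selection logic is the whole trick: random, frequent, and easy to reason about.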

Log Everything. Some people say Netflix is a logging company that happens to stream video, because they log every customer action to gain insight into what's working and what's not. You can't be successful without insight, so log everything: all of your metrics, graphs, alerts, everything. You will want to invest heavily in your insight infrastructure in order to scale.
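"Log every customer action" is easiest when each action becomes one machine-readable line. Here is a minimal structured-logging sketch in Python; the field names (`customer`, `action`, and so on) are made up for illustration, not Netflix's actual schema.

```python
import json
import logging
import sys
import time

# Emit one JSON object per line so downstream tooling can aggregate
# metrics and drive alerts off the same stream.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("events")

def log_event(customer_id, action, **fields):
    """Log a customer action as a single JSON line and return the record."""
    record = {"ts": time.time(), "customer": customer_id, "action": action}
    record.update(fields)  # arbitrary extra context (device, title, ...)
    log.info(json.dumps(record, sort_keys=True))
    return record  # returned so callers can inspect what was logged

log_event("c-42", "play_start", title_id="t-1007", device="tv")
```

One JSON line per event is a deliberately boring format: it is trivial to ship, grep, and aggregate, which is what matters once the volume grows.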

Scale to Zone Reliability Testing. Zone reliability testing is a great way to see how you will handle a zone outage. Netflix builds everything in threes, so they should be able to withstand losing a zone. For Netflix, Chaos Gorilla automates the relocation of traffic, scales traffic, then nukes everything.

Tip: If you are on Amazon, Corey recommends using asymmetrical load balancing to avoid throwing a ton of traffic into one zone that is still standing after an outage.
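The intuition behind that tip can be sketched as a weight calculation: when a zone dies, spread its traffic share across the survivors in proportion to their existing weights instead of dumping it all into one zone. This is an illustrative sketch of the idea, not Chaos Gorilla's actual traffic logic.

```python
def rebalance(weights, failed_zone):
    """Redistribute a failed zone's traffic share across surviving zones.

    Each survivor absorbs displaced traffic in proportion to its own
    current weight, so no single zone takes the whole hit.
    """
    displaced = weights[failed_zone]
    survivors = {z: w for z, w in weights.items() if z != failed_zone}
    total = sum(survivors.values())
    return {z: w + displaced * (w / total) for z, w in survivors.items()}

# "Build everything in threes": three zones, equal traffic shares.
before = {"us-east-1a": 1 / 3, "us-east-1b": 1 / 3, "us-east-1c": 1 / 3}
after = rebalance(before, "us-east-1c")
# Each survivor ends up carrying half the traffic, instead of one
# zone absorbing the failed zone's entire share.
```

With equal starting weights the result is symmetric, but the same formula handles unequal zones, which is exactly when proportional (asymmetrical) spreading matters.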

Allow Opt-Outs, But Encourage Opt-Ins. You may not want all of your services to experience failure, as this could cause delays or the loss of weeks' worth of work. You want to build relationships with your developers, not burn them by destroying their work.

Get a War Room (They're Critical). When running failure automation, it's essential to have a rep from every team present. You don't know how your system may react to the failure, and having everyone together to monitor the services they are responsible for makes it easy to react and address what you have learned.

Repeat. Often. Currently, Netflix runs their failure automations quarterly, and they are in the process of moving to a bi-weekly cadence. This isn't simple or easy, but it's necessary if you want to scale and stay reliable.

Corey sums up that if you are looking to increase reliability, it is not a task you can take on alone, or you will fail. And while you will always need to balance reliability against cost and innovation, he reminds us that it's even more essential to keep it simple.