Performance is a complicated beast. Each step in the chain of solving a performance issue - identification, diagnosis, rectification, and verification - poses unique challenges. This post deals with themes from the first two steps, and is aimeded at teams who are already conducting some form of performance testing (if you don't currently have an automated load testing solution, then Gatling is a great way to get started).

Even if you do have tests set up, without a decent logging and analytics solution you're losing most, if not all, of their value. It is vital to see historic trends alongside daily results in order to identify potential issues. Here you'll learn how to set up Gatling with the Elasticsearch, Logstash, and Kibana (ELK) stack for effective performance monitoring.
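To make the pipeline concrete, here is a rough sketch of a Logstash configuration that tails Gatling's simulation.log files and ships request records to Elasticsearch. The file path, column names, index name, and the tab-separated layout are all assumptions for illustration - the exact log format differs between Gatling versions, so check yours before using anything like this:

```conf
# Sketch only: field layout and paths are assumptions, not Gatling's
# canonical format. The csv separator below is a literal tab character.
input {
  file {
    path => "/path/to/gatling/results/*/simulation.log"
    start_position => "beginning"
  }
}
filter {
  csv {
    separator => "	"
    columns => ["record_type", "scenario", "request_name",
                "start_ts", "end_ts", "status"]
  }
  # Keep only per-request records; drop RUN/USER bookkeeping lines.
  if [record_type] != "REQUEST" { drop { } }
  mutate {
    convert => { "start_ts" => "integer" "end_ts" => "integer" }
  }
  # Derive a response time field for Kibana visualisations.
  ruby {
    code => "event.set('response_time_ms', event.get('end_ts') - event.get('start_ts'))"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "gatling-%{+YYYY.MM.dd}"
  }
}
```

With daily indices like this, a simple Kibana line chart of `response_time_ms` over time, split by `request_name`, gives you the historic trend view described above.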

Setup

The two pieces of software specific to this guide are Gatling and the ELK stack. The former is a well-developed, feature-packed load testing tool. With Scala and Akka under the hood, Gatling has first-class concurrency support to really stretch the capabilities of your applications and the web servers they run on. If you're not familiar with Gatling, I recommend taking a quick look at its documentation to get a feel for how it works. It's more likely that you've heard of, or even currently use, the ELK stack. Here I assume you have it running, or are thinking about deploying it in your environments. If you would like to know how, the official docs are a good start; for something more guided, check out this tutorial.

A Quick Word on Gatling

By default Gatling produces visually rich and informative HTML reports, based on the simulations in your load testing project. You might have a CreateUserSimulation, for example, where you define the requests and payloads required to create a user in your app. When this simulation is run, Gatling creates a sub-directory for it containing the simulation log and the HTML reports. So with N simulations, Gatling produces N folders of results: each simulation executes independently and serializes its results into its own sub-directory.
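The simulation log inside each of those sub-directories is just a tab-separated text file, which is what makes it such a good candidate for shipping into Elasticsearch. As a minimal sketch, here is how one might parse a single request record in Scala - note that the column positions and the sample line are assumptions for illustration, since the actual layout varies between Gatling versions:

```scala
// Sketch: parse one hypothetical simulation.log REQUEST line.
// The field positions here are an assumption, not Gatling's
// canonical format - verify against your own simulation.log.
object SimulationLogParser {
  case class RequestRecord(scenario: String, requestName: String,
                           start: Long, end: Long, status: String) {
    def responseTimeMs: Long = end - start
  }

  def parseLine(line: String): Option[RequestRecord] = {
    val cols = line.split("\t")
    if (cols.length >= 6 && cols(0) == "REQUEST")
      Some(RequestRecord(cols(1), cols(2), cols(3).toLong, cols(4).toLong, cols(5)))
    else
      None // RUN/USER bookkeeping lines, or an unexpected layout
  }

  def main(args: Array[String]): Unit = {
    val sample = "REQUEST\tCreateUserSimulation\tcreate user\t1500000000000\t1500000000250\tOK"
    parseLine(sample).foreach { r =>
      println(s"${r.requestName} took ${r.responseTimeMs} ms (${r.status})")
    }
    // prints: create user took 250 ms (OK)
  }
}
```

In practice you would let Logstash do this parsing rather than writing your own, but seeing the record structure spelled out helps when debugging why a line did or did not make it into Elasticsearch.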