Engineering for performance is hard, because “performance” is rarely the priority for an engineer. Engineers are typically tasked with building or maintaining software that achieves some concrete, testable purpose.

Jim, we need to build a web application that enables our users to view their account balances, deposit checks, and facilitate transfers between accounts.

From an engineering perspective, Jim knows the purpose of the application: to allow users to “view banking information, deposit checks, and facilitate transfers between accounts.” The purpose is not to “be fast.” Even if it were, how would you know how fast the application should be? Fast is not a binary goal. Fast is not objective. Fast is subjective; it is relative. Fast must always be defined within the context of a specific situation.

For example, a fast turtle and a fast rabbit move at two very different speeds. This is the challenge with performance: in business we tend to prioritize objective problems over subjective ones, and in our increasingly binary world, performance has no static definition.

Adding context

So how do we solve this? How do we make performance an objective problem?

Monitoring technologies have attempted this by contextualizing performance. Companies like New Relic, AppDynamics, Dynatrace, and even Rigor have tried to collect enough data to provide the historical context needed to make speed a priority.

These tools simply collect data, trend it over time, and leave the analysis and troubleshooting to the end user. Instead of using software to solve the problem, we’ve spawned a new one.

Before, the problem was “how can I turn performance into an objective problem that can be solved?” Now the problem is “how can I sift through all of this data to make performance objective?”

Engineers or operations teams using these tools receive an alert when there is a clear (often catastrophic) problem and then have to trudge through performance data. Monitoring systems are great at alerting on systematic failures, such as when your servers or a third party’s service fail, causing excessive latency or downtime. However, these tools lack the ability to create urgency around performance problems that aren’t actively “crashing” your site. As a result, monitoring technologies have fostered what I view as a grossly reactive approach to performance.

Creating a catalyst for performance

At Rigor, we see the value of active, continuous monitoring in helping DevOps and performance teams identify downtime and track trends over time. However, we acknowledge that this methodology leaves large gaps when approaching performance. Outside of system-wide failures, there is rarely enough urgency to compel users to dig into the data and begin prioritizing performance.

We began to search for a way to make performance a more objective problem and to lower the barrier to entry both inside and outside of engineering (for marketers, designers, and other business users).

Turning data into actions

To create urgency, we needed a way to convert traditional “monitoring data” into actions. As a first step, we integrated a system that converts performance data into an objective, binary list of performance defects. We wanted to flip the conversation from a subjective argument about the hypothetical impact of an “x”-second decrease in load time into an objective discussion about fixing a series of “bugs” or “defects” on the site.
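To make the idea concrete, here is a minimal sketch of turning raw measurements into a pass/fail defect list. The rule names, metric keys, and thresholds are illustrative assumptions, not Rigor's actual implementation:

```python
# Hypothetical sketch: convert raw performance measurements into a
# binary list of "defects" (each rule either fires or it doesn't),
# instead of leaving the reader to interpret raw numbers.
# Rule names and thresholds are assumptions for illustration only.

DEFECT_RULES = {
    # rule name: (metric key, threshold, suggested fix)
    "oversized-image": ("image_bytes", 200_000, "Compress or resize the image"),
    "slow-ttfb": ("ttfb_ms", 600, "Cache responses or move the origin closer"),
}

def find_defects(measurements):
    """Return (rule, measured value, suggested fix) for every rule
    whose measured value exceeds its threshold."""
    defects = []
    for rule, (key, threshold, fix) in DEFECT_RULES.items():
        value = measurements.get(key, 0)
        if value > threshold:
            defects.append((rule, value, fix))
    return defects

page = {"image_bytes": 850_000, "ttfb_ms": 420}
for rule, value, fix in find_defects(page):
    print(f"{rule}: measured {value} -> {fix}")
```

The point of the design is that each rule produces a binary outcome with a concrete remediation attached, which is what turns "the page feels slow" into a fixable bug report.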

When a designer uploads an unoptimized or improperly formatted image to a production or pre-production site, we flag the change in our system and show why we flagged the issue, how it can be resolved, the potential savings, and a link to download the optimized file. We even offer a slider that shows the image before and after optimization, so the designer can review the optimized image quality.

In the end, we believe the future of performance lies in the ability to integrate performance testing into an organization’s development and content creation processes. If organizations test early and often for performance defects as part of their internal process, those defects will never degrade performance in production.
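The savings calculation behind such a flag can be sketched as follows. This is a simplified stand-in, not the product's actual logic; a real check would re-encode the image and compare file sizes, and the 10% flagging threshold is an assumed example value:

```python
def optimization_savings(original_bytes, optimized_bytes):
    """Potential savings from an optimized image, as (bytes, percent).
    Simplified illustration: takes the two file sizes as given."""
    saved = max(original_bytes - optimized_bytes, 0)
    percent = round(100 * saved / original_bytes, 1) if original_bytes else 0.0
    return saved, percent

def should_flag(original_bytes, optimized_bytes, min_percent=10.0):
    """Flag the upload as a defect only when savings are meaningful,
    so trivial differences don't generate noise."""
    _, percent = optimization_savings(original_bytes, optimized_bytes)
    return percent >= min_percent

saved, pct = optimization_savings(1_200_000, 300_000)
print(f"Saves {saved} bytes ({pct}%)")  # Saves 900000 bytes (75.0%)
```

Reporting savings as both bytes and a percentage is what lets the tool present "potential savings" alongside the flag, rather than a bare pass/fail.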

To fully reduce the prevalence of performance defects on the web, performance tools should look beyond collecting and distributing data, and take an objective approach that compels action.

Interested in making performance a priority for your organization?