I’ve been spending more time reading forums, groups and Quora about Agile. I’ve noticed a type of question coming up over and over again. People seem obsessed with finding the answer to the misguided question “what are the best metrics and KPIs we can use for measuring our teams in Agile?”. I’ve talked before about how most Agile metrics range from the unimportant to the downright dangerous. This point needs to be re-emphasised.

The flaws with Agile Metrics thinking

There are two main flaws with this line of thinking. The wrong people are doing the measuring, and the wrong things are being measured.

The wrong people are doing the measuring

This question usually assumes that there are some people who need to do a bunch of measuring and look at these measurements. And these people are “managers”. Because it’s the manager’s job to measure the “workers” and inspect their “productivity”. So when people get told they will be doing Agile, the first thing they want to know is “What things do managers measure in Agile?”. We all know what managers measure in Waterfall projects: whether the work matches the Gantt chart, how many thousands of defects got found in System Integration Testing, how many hours the developers have to stay back to fix those defects.

But what about in Agile? There are no more Gantt charts, no big System Integration phases, no death marches (because we empower teams and let them manage their sprint backlogs, right? Right.) So what are the managers supposed to measure now?

It turns out, managers aren’t supposed to be measuring much. For starters, if they want to know what’s going on with the project, they should get off their bums and go do some Gemba. That is, go for a walk: go see for themselves. They could look at some big PowerPoint deck with charts and graphs about “employee engagement”, or they could actually talk to people and ask them how they’re going. They could scratch their heads trying to figure out what defect density really means, or they could go look at some team boards and chat with some developers about the health of the latest build.

But what if our managers don’t want to do those things? What if they want to sit in their corner offices looking at reports? Then they shouldn’t do Agile. Because Agile is not for them. And if they’re unwilling or unable to change, they don’t belong here. They are welcome to go back to Waterfall project management: good luck with that. Their bureaucratic command-and-control mindset is simply incompatible with the Agile philosophy.

So who should do the measuring?

Instead, the team should be measuring things. They should be inspecting their code, their features, their progress against their goals. They are full-stack cross-functional teams (you do have cross-functional teams, right?), and they own their product, or slice of the product, from concept to cash. So they don’t need anyone peering over their shoulder every day, asking why that number is going down instead of up.

The managers should trust the team to own their work. If they don’t trust them, they shouldn’t have hired them in the first place. And if they do trust them, they should find better things to do than constantly “inspecting” them by “measuring” their productivity.

Then what do the managers do?

Good question! There are other, more productive things for managers to do; measuring people is not one of them. Some firms, like Spotify, are adopting a more radical model that minimises administration and bureaucracy and is on the way to doing away with most managers altogether.

The wrong things are being measured

The other problem here is that the wrong things are being measured. Most of the time, when people talk about measurements, metrics and KPIs, they are talking about team performance, productivity and efficiency. They are focused on outputs, not outcomes. This is a huge mistake, and many organisations are guilty of it.

What you should measure instead

People do not build software for its own sake. They build it for the purposes of achieving some business value. Selling something, converting leads, reducing calls, delighting customers, reducing friction, enabling integration between platforms. Or any of a hundred other things. If you are going to measure anything – and again, this should ideally be the team doing the measuring – then measure your outcomes, not your outputs.

If a team can achieve the same business outcomes with fewer software outputs, that’s a good thing. That means less code, less risk, less work, less technical debt. So the team with the most output (i.e. the highest “velocity” or, even worse, “function points”) isn’t necessarily the winner; they might be the loser. Teams should decide what business outcomes they are trying to achieve, then track against those.

If you are going to measure anything to do with outputs, then the only two worth worrying about are cycle time and story count.

Cycle Time and Story Count

Cycle Time is the average time it takes for a unit of work (generally a user story) to move from In Progress to Done – whatever those states mean for you. Lower cycle time means you are breaking your stories down smaller and achieving flow through your system, and that is a good thing. However, I wouldn’t get obsessed with it. (If you’re not sure what the difference between Cycle Time and Lead Time is, I explained that here).
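To make that concrete, here’s a minimal sketch of how a team could calculate its own average cycle time from the dates each story entered In Progress and reached Done. The story records here are hypothetical; in practice you’d pull these dates from your team board or tracking tool.

```python
from datetime import date

# Hypothetical story records: (started, done) dates pulled from a team board.
stories = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 2), date(2024, 3, 8)),
    (date(2024, 3, 5), date(2024, 3, 7)),
]

# Cycle time per story: days from In Progress to Done.
cycle_times = [(done - started).days for started, done in stories]
average_cycle_time = sum(cycle_times) / len(cycle_times)

print(f"Cycle times (days): {cycle_times}")                   # [3, 6, 2]
print(f"Average cycle time: {average_cycle_time:.1f} days")   # 3.7 days
```

A wide spread between the shortest and longest cycle times is often more informative than the average itself: it suggests some stories aren’t being broken down small enough.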

Story count is how many user stories a team completes in an iteration (whatever that might be).

This can help with planning and forecasting when a set of stories or features might be ready for release to customers. Note that there is nothing here about points or estimates. Those are frequently wrong and can be misused. If we assume that user stories will average out to be the same size across a sufficiently large sample set, you don’t need points or estimates on them at all.
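Here’s a rough sketch of that kind of throughput-based forecast, using nothing but story counts – no points, no estimates. All the numbers are invented for illustration.

```python
import math

# Hypothetical inputs: stories completed in recent iterations,
# and the number of stories left in the backlog for the release.
completed_per_iteration = [6, 8, 5, 7]
remaining_stories = 23
iteration_length_weeks = 2

# Throughput-based forecast: assumes story sizes average out
# across a sufficiently large sample, so raw counts are enough.
average_throughput = sum(completed_per_iteration) / len(completed_per_iteration)
iterations_remaining = math.ceil(remaining_stories / average_throughput)

print(f"Average throughput: {average_throughput:.1f} stories/iteration")
print(f"Forecast: ~{iterations_remaining} iterations "
      f"({iterations_remaining * iteration_length_weeks} weeks)")
```

With these numbers the team averages 6.5 stories per iteration, so 23 remaining stories forecast out to roughly 4 more iterations, or 8 weeks. The team can turn that into the forecasted date stakeholders ask for, without ever sharing the underlying figures.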

And remember that these measurements are for the team to do internally, for their own planning. They do not need to be shared with anybody outside the team. If managers or stakeholders want to know when the feature will be ready, they can just ask the team for a forecasted date. They don’t need to know how that figure came about.

What customer outcomes should I measure?

This is, unfortunately, a complex question that depends on your product and business context. There is no one simple solution. That said, a lot of people these days are focusing on what are known as the Pirate Metrics: A.A.R.R.R. (like something a pirate would say, “AARRR!”). These stand for:

Acquisition (a customer gets in touch or registers for a product)

Activation (a customer starts using your product)

Retention (a customer becomes a long-term user of the product)

Referral (a customer refers your product to their friends), and

Revenue (a customer pays for your product).

A company should engineer their analytics to support these metrics. They provide solid evidence for the current and future financial performance of a software company. If you’re going to measure anything, put on your best pirate hat and measure those!
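As a sketch of what “engineering your analytics to support these metrics” might produce at its simplest, here’s a hypothetical AARRR funnel snapshot, with each stage expressed as a share of acquired customers. The counts are made up, and real analytics tooling would of course feed these numbers in automatically.

```python
# Hypothetical monthly counts, as pulled from a product analytics tool.
funnel = {
    "acquisition": 10_000,  # registered or got in touch
    "activation":   4_000,  # started using the product
    "retention":    1_500,  # still active after 90 days
    "referral":       300,  # referred at least one friend
    "revenue":        600,  # paid for the product
}

# Express each stage as a share of acquired customers.
base = funnel["acquisition"]
rates = {stage: count / base for stage, count in funnel.items()}

for stage, rate in rates.items():
    print(f"{stage:>11}: {funnel[stage]:>6,} ({rate:.0%} of acquired)")
```

Note that revenue is computed against acquisition rather than the stage above it, since paying customers aren’t necessarily a subset of referrers.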

Another good business metric is NPS, or Net Promoter Score. This number tells you how likely your customers are to recommend your product or service to others. It has become widely adopted and is strongly correlated with business success.
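For reference, NPS is calculated from the 0–10 answers to the “how likely are you to recommend us?” question: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6), with passives (7–8) counting only toward the total. A minimal sketch with made-up responses:

```python
# Hypothetical survey responses to "How likely are you to recommend us?" (0-10)
responses = [10, 9, 9, 8, 7, 10, 6, 3, 9, 8, 10, 5]

promoters = sum(1 for r in responses if r >= 9)   # scores 9-10
detractors = sum(1 for r in responses if r <= 6)  # scores 0-6
# scores 7-8 are "passives": they count toward the total but not the score

nps = (promoters - detractors) / len(responses) * 100
print(f"NPS: {nps:.0f}")  # NPS: 25
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters).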