I have released a course on Pluralsight called Agile Fundamentals that talks about Agile Software Development in detail.

I was listening to an episode of the DotNetRocks podcast about Agile Metrics. There was an interview with Michael ‘Doc’ Norton about his experiences figuring out the right metrics to measure a development team’s productivity. The basic issue discussed was that Velocity is a dangerous metric to rely on as a goal or target.

Velocity is a measure of units over time; in an agile iteration or sprint, that is the number of story points completed in the iteration. It is a dangerous metric because it can mislead management. One sprint, your team may complete 10 story points. Management may then say,

“Well, that’s great, if you can better that to say 12, we might finish early.”

The team then starts their next sprint aiming to complete 12 points, but they end up completing only 5. This is like a red rag to a bull for management, but it could be a perfectly valid scenario. The velocity of 10 from the previous sprint may have been achieved because all the development tasks were contained within the development team. If, as part of the next sprint, you need input from other teams or departments, then this could affect your ability to get work done as planned. This is just one example of an external influence affecting velocity; people can also go off sick, be on holiday, or anything else can happen that is out of the team’s control.

It is because of these external influences that the velocity metric becomes a bad metric to rely on. There is just too much variability in the numbers from sprint to sprint. This doesn’t mean you should ignore velocity completely, but managers should not ask teams to hit targets based on velocity.
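The point about variability is easy to see if you treat velocity as a distribution rather than a single number. A minimal sketch (the velocity figures below are invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical story points completed over the last six sprints.
velocities = [10, 12, 5, 9, 11, 6]

average = mean(velocities)   # a rough guide for planning, not a target
spread = stdev(velocities)   # sprint-to-sprint variability

print(f"average velocity: {average:.1f} +/- {spread:.1f} points")
```

When the spread is a large fraction of the average, as it often is, committing the team to any single sprint’s figure is little better than guessing.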

In the original interview, on the podcast mentioned above, Mr Norton talks about three laws that influenced him while he was looking into agile metrics. These are:

The Hawthorne Effect – Whatever is measured will improve, but at a cost.

Goodhart’s Law – When a measure becomes a target, it ceases to be a good measure.

Friedman’s Thermostat – Correlation is not causation.

Going back to the idea of velocity tracking: in order to deliver more points and meet the target, the team will sacrifice system quality, which slows the team down in the long run and introduces technical debt. You are better off focusing on the quality of the system you are building, the processes around it (continuous integration, continuous delivery, etc.), and the people building the system.

As stated above, you can still use velocity as a rough guide; just don’t rely on it. Some other good metrics to use for your system development could be based around code quality metrics, like the following available in Visual Studio.

Maintainability Index: The Maintainability Index calculates an index value between 0 and 100 that represents the relative ease of maintaining the code. A high value means better maintainability. Colour-coded ratings can be used to quickly identify trouble spots in your code. A green rating is between 20 and 100 and indicates that the code has good maintainability. A yellow rating is between 10 and 19 and indicates that the code is moderately maintainable. A red rating is between 0 and 9 and indicates low maintainability.
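The rating bands above are simple to encode. A small sketch, assuming the 0–100 index value has already been produced by your tooling (`maintainability_rating` is a made-up helper name, not a real Visual Studio API):

```python
def maintainability_rating(index: float) -> str:
    """Map a Maintainability Index (0-100) to the colour bands
    described above: green (20-100), yellow (10-19), red (0-9)."""
    if not 0 <= index <= 100:
        raise ValueError("Maintainability Index must be between 0 and 100")
    if index >= 20:
        return "green"   # good maintainability
    if index >= 10:
        return "yellow"  # moderately maintainable
    return "red"         # low maintainability
```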

Cyclomatic Complexity: Cyclomatic complexity (or conditional complexity) is a software measurement metric that is used to indicate the complexity of a program. It directly measures the number of linearly independent paths through a program’s source code. Cyclomatic complexity may also be applied to individual functions, modules, methods or classes within a program. A higher number is bad. I generally direct my team to keep this value below 7. If the number creeps up higher, it means your method is starting to get complex and could do with refactoring, generally by extracting code into separate, well-named methods. This will also increase the readability of your code.

Depth of Inheritance: Depth of inheritance, also called depth of inheritance tree (DIT), is defined as “the maximum length from the node to the root of the tree”. A low value implies less complexity, but also less opportunity for code reuse through inheritance; a high value indicates greater potential for code reuse, but also a greater potential for errors. Due to a lack of sufficient data, there is no currently accepted standard for DIT values. I find keeping this value below 5 is a good measure.
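In a language with introspection, the metric is simple to compute. A sketch using Python’s class hierarchy, counting `object` as the root at depth 0:

```python
def depth_of_inheritance(cls: type) -> int:
    """Longest path from the class up to the root of the inheritance
    tree, per the DIT definition above."""
    if cls is object:
        return 0
    return 1 + max(depth_of_inheritance(base) for base in cls.__bases__)

class Vehicle: pass          # depth 1
class Car(Vehicle): pass     # depth 2
class SportsCar(Car): pass   # depth 3

print(depth_of_inheritance(SportsCar))  # prints 3
```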

Class Coupling: Class coupling is a measure of how many classes a single class uses. A high number is bad and a low number is generally good with this metric. Class coupling has been shown to be an accurate predictor of software failure and recent studies have shown that an upper-limit value of 9 is the most efficient.

Lines of Code (LOC): Indicates the approximate number of lines in the code. The count is based on the IL code and is therefore not the exact number of lines in the source code file. A very high count might indicate that a type or method is trying to do too much work and should be split up. It might also indicate that the type or method might be hard to maintain. Please do not ever use Lines of Code as a productivity measure! Only use it to highlight potential complexity in a class or method.

Another great tool for dashboarding these metrics is NDepend. You may also want to use test coverage as a metric for overall quality. Would you really want to trust any code that doesn’t have a good level of test coverage? Low test coverage has always made me nervous, and as Michael Feathers once said, a system is classed as legacy if it is not covered by tests. This means that even if you are working on a brand-new greenfield project, your code is already classed as legacy if the test coverage is very low.

The metrics above are focused more on system quality, but there are other Agile Metrics that you can use, like:

Burndown: Burndown reports show the progression of a team through a set of work. The reports display a series of snapshots of the remaining and completed work. The remaining work appears to ‘burn down’ as the team completes more and more over the period viewed. The goal of the team is to complete all planned work by the end of the period so that the final snapshot shows nothing left to do. The ideal line on a burndown chart shows the straight path from the highest point on the very first day of the displayed period down to zero, assuming equal progress on each and every day.
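The ideal line described above is just an even division of the starting scope across the period. A minimal sketch:

```python
def ideal_burndown(total_points: float, days: int) -> list[float]:
    """Ideal burndown line: start at the full scope on day 0 and burn
    an equal amount each day, reaching zero on the final day."""
    return [total_points * (days - day) / days for day in range(days + 1)]

print(ideal_burndown(30, 5))  # [30.0, 24.0, 18.0, 12.0, 6.0, 0.0]
```

Plotting the team’s actual remaining work against this line is what makes the chart useful: the gap between the two shows whether the sprint is ahead of or behind plan.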

Cumulative Flow: Shows the progress of backlog items by status over time. This allows you to see how the backlog items and tasks of a sprint are progressing over time when plotted on an area graph.

Estimate Trend: The Estimate Trend shows changes in the total estimate, the amount completed, and the amount remaining on the backlog over time. The total amount remaining should trend downward over time, while the amount completed should trend upward. Changes to the total estimate may be due to new items being added to the list, items being removed from the list, or changes to the estimates of items already on the list.

There are many more metric reports you can use, and which ones are available depends on the agile planning tool you are using, such as VersionOne or Jira, but the important thing to remember is not to turn the metrics into targets.