At Grip we use software development data to predict the success of an application and suggest improvements to increase the chance of success.

We divide the key factors that contribute to success into three goals. These goals align with the objectives of delivering maximum value in the shortest possible time at the lowest possible cost.

The three goals are:

User Satisfaction, Velocity, and Costs

Scoreboard

Each of these goals incorporates measurable indicators that have an impact on an organization’s results.

We refer to these indicators as goal measurements.

By applying sophisticated analytics, machine learning and simulations to the data that we collect from the development process, we predict the outcomes for the goals and goal measurements.

In this post we’ll discuss the accuracy of our predictions for some of our goal measurements.

We’re going to compare the predicted results with the actual scores from our own development activities.

For this discussion, we will examine three goal measurements: Defect Removal Efficiency (DRE), Story Points Closed per Collaborator (SPC), and Requirements Closed per Collaborator (REQ). We'll examine the actual results from our work over the twenty weeks before February 9 to determine how accurately Grip predicted whether each indicator improved or worsened.

We selected this period and these goal measurements because we have complete, fully analyzed data for all three during this period.

The Grip instance with our own data is public, so feel free to check it out here.

Defect Removal Efficiency

DRE compares the defects found and removed prior to release with the defects found after release, and is an important indicator of quality (see, for instance, the work of Capers Jones).
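As a sketch, the usual way this metric is computed (the function name here is our own, and the formula assumes the common definition of DRE as the share of all known defects removed before release, expressed as a percentage):

```python
def defect_removal_efficiency(pre_release_defects, post_release_defects):
    """DRE as a percentage: the share of all known defects
    that were found and removed before release."""
    total = pre_release_defects + post_release_defects
    if total == 0:
        return None  # no defects observed, so DRE is undefined
    return 100.0 * pre_release_defects / total

# e.g. 90 defects removed pre-release, 10 reported after release
print(defect_removal_efficiency(90, 10))  # → 90.0
```

A higher DRE means more of the total defect load was caught before users ever saw it.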

In the graph below we plotted our predicted score against our actual score for the past 20 weeks:

Defect Removal Efficiency

Let’s look at the Direction (whether the DRE was increasing, decreasing or remaining static):

DRE and Predicted DRE

Out of the 19 weeks for which we have a direction, the predicted direction was correct 11 times.
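The directional comparison we use can be sketched as follows (the data values here are made up for illustration; the real scores come from our tracked development data):

```python
def directions(series):
    """Week-over-week direction: +1 rising, -1 falling, 0 static."""
    return [(b > a) - (b < a) for a, b in zip(series, series[1:])]

def directional_accuracy(actual, predicted):
    """Count the weeks where the predicted direction matches the actual one."""
    hits = sum(a == p for a, p in zip(directions(actual), directions(predicted)))
    return hits, len(actual) - 1  # n weekly scores give n-1 directions

# hypothetical weekly DRE scores
actual    = [80, 85, 85, 82, 88]
predicted = [81, 84, 86, 83, 87]
hits, weeks = directional_accuracy(actual, predicted)  # 3 of 4 directions match
```

Note that 20 weekly scores yield only 19 week-over-week directions, which is why the counts in this post are out of 19.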

Story Points Closed per Collaborator

SPC is an indicator of the velocity at which the team is performing and is important in agile planning sessions. It quantifies how much a team can accomplish per unit of time.

Story Points Closed per Collaborator

Let’s look at the Direction:

SPC and Predicted SPC

Over the 19 weeks, Grip correctly predicted whether our performance was improving, staying the same, or declining 13 times.

Requirements Closed per Collaborator

Requirements Closed per Collaborator

Similar to the situation for story points closed per collaborator, the REQ results provide insight into how much value the team can create in a given unit of time. This sub-goal helps determine whether the team is generating the highest possible output from the available resources.

REQ and Predicted REQ

Here we correctly predicted the direction 14 times.

Conclusion – the Accuracy of our Predictions

We are pleased to observe the directional accuracy of Grip’s predictions. While the magnitude of our predictions still requires refinement, the trends highlighted by Grip are valuable. The insight gained from knowing which way a team’s performance is trending enables the development organization to make far better decisions, more quickly.

For this study, the best possible result would have been 57 correct directional predictions (3 sub-goals across 19 weeks). Of those 57 possible results, we made accurate directional predictions in 38 instances, so Grip was correct exactly two-thirds of the time. This should be more than enough for a development team to verify their own "gut feelings".
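The arithmetic behind that summary, using the per-measurement counts reported above:

```python
# Correct directional predictions per goal measurement, from the sections above
correct = {"DRE": 11, "SPC": 13, "REQ": 14}
weeks_with_direction = 19

total_possible = 3 * weeks_with_direction   # 3 sub-goals x 19 weeks = 57
total_correct = sum(correct.values())       # 11 + 13 + 14 = 38
accuracy = total_correct / total_possible   # exactly two-thirds
```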

We expect the prediction magnitudes to match actual performance more closely, and the directional predictions to become even more accurate, as we train the system with more data, both from our own efforts over time and from analysis of other projects.

Interested in getting predictions for your own development process?

Join our beta: http://grip.qa/beta