I recently got this email from my friend Carol:

I need a fairly scientific way to estimate testing time. Today, I know how long my test cases take to run individually, I know there will be some number of bugs, I know the fixes will take some period of time. I know I will need to rerun tests, etc. Is there a formula that helps with estimating this? I realize it will not be exact, but something that other companies do to make estimating more of a science than a feeling. I hope you have an exact answer for this question. My boss is going to ask me for this information on Monday, so no pressure but HELP!

Carol asks an important question. Management tends to think of software development as an investment, like buying a car or a house. Like those big-ticket purchases, there are plenty of other options to choose from, so management tends to like to know the benefits, the time to build, and the cost.

Those seem like reasonable requests, at least at first. Then we run into Carol’s questions, which make things more challenging. Sadly, the reality is that our guesses at how long tests take to run are often wrong, we likely don't get to put in forty hours of productive testing in any given week, we are often waiting for new builds and fixes, and the rate of failure means we’ll need to rerun tests, often more than once.

This introduces a terrible amount of uncertainty into the estimating process. Add a new team, technology, or process, and the uncertainty goes over the edge; coming up with a schedule estimate for testing starts to feel less like science and more like an irresponsible guess.

The following factors significantly influence our ability to estimate testing time well, but with a little effort, you can tighten up the process.

In her email, Carol indicates her lack of knowledge about her situation. She states that fixes will take “some period of time” and there will be “some number of bugs.” Like Carol, most organizations lack sufficient historical data to build estimates from. Without that experiential data, it is difficult to create accurate estimates.

So start gathering data! Over the next two weeks or so, figure out what percentage of your time is spent on rework. If it’s 30 percent, then only 70 percent of your time goes to planned testing, so multiply the planned test effort by 100/70, or about 1.43, to find the real effort.
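That adjustment is simple enough to capture in a few lines. Here is a minimal sketch (the function name and the 30 percent figure are illustrative, not from any particular tool):

```python
def real_effort(planned_hours, rework_fraction):
    """Inflate a planned test effort to account for time lost to rework.

    rework_fraction is the share of time spent on rework (e.g. 0.30
    if 30 percent of your time goes to rerunning and re-verifying).
    Only (1 - rework_fraction) of each hour advances the plan, so the
    real effort is planned_hours / (1 - rework_fraction).
    """
    if not 0 <= rework_fraction < 1:
        raise ValueError("rework_fraction must be in [0, 1)")
    return planned_hours / (1 - rework_fraction)

# 40 planned hours at a 30 percent rework rate:
print(round(real_effort(40, 0.30), 1))  # → 57.1
```

The point is less the arithmetic than the habit: once you track your own rework percentage, the multiplier comes from your data instead of a feeling.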

The next key factor is the test team itself. How large is the team? What is each member’s personal level of skill and experience? Do they have a well-defined testing process that everyone understands and can select from? How stable is the team? Do members come and go randomly, or do they have a cohesive history? How much time can the team focus on testing tasks without interruption? And what are the individuals’ interaction skills? These answers are all vital to the team’s performance and, thus, to the estimates for testing time, but we have no good ways of measuring these characteristics. Lacking measurements, ask yourself how much final schedules differ from the planned ones and how that gap is trending. If it’s getting worse, you need more time. If it varies widely, lean on the historical rework data you’ve been gathering.

Another factor in good estimates is the stability of the requirements. We no longer “freeze the requirements” like we used to. In today’s agile world we welcome change, and with those changes to requirements will come changes in testing—and in the estimates. Product owners flex scope to hit dates; consider flexing test coverage the same way to hit deadlines.

System size, complexity, and risk are also key factors that influence the amount of testing that “should” be performed. And again, we have no effective ways of measuring these factors. In his book The Principles of Product Development Flow, Donald Reinertsen says larger projects slip not only by larger amounts, but also by larger percentages. When you look at how far your estimates are off, look at projects that looked to be of similar size at the beginning.

A key factor in estimating the testing effort (and other unknowns) is the defect density in the requirements, design, and code. Buggy requirements and design will result in buggy code. How bad will it be? How many defects will be delivered to testers? That factor has a substantial impact on the amount of time testing will require. Again, looking at similar projects can help here.

In her email, Carol mentioned that developer fixes will require some time. Another influencing factor is the developer “screw-up rate” when fixing defects. The general feeling is that about 5 percent of the “fixes” either will not fix the original problem or will break something else in the product. But what is the ratio at Carol’s organization? We don’t know, but it would help to find out.
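If you do find out your organization’s screw-up rate, it feeds directly into the rerun estimate: when a fraction of fixes fail, each defect needs on average 1/(1 − r) fix-and-retest cycles (a geometric series, since some re-fixes also fail). A small sketch of that arithmetic, with illustrative numbers:

```python
def expected_fix_cycles(defect_count, screwup_rate):
    """Expected total fix-and-retest cycles when a fraction of
    "fixes" either don't fix the problem or break something else.

    Each defect needs 1 + r + r^2 + ... = 1 / (1 - r) cycles on
    average, where r is the screw-up rate.
    """
    if not 0 <= screwup_rate < 1:
        raise ValueError("screwup_rate must be in [0, 1)")
    return defect_count / (1 - screwup_rate)

# 100 defects at the often-quoted 5 percent screw-up rate:
print(round(expected_fix_cycles(100, 0.05), 1))  # → 105.3
```

At 5 percent the effect is modest, but at a 20 percent rate those same 100 defects become 125 retest cycles, which is worth knowing before Monday.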

Another factor that must be taken into account in test estimation is the required thoroughness, or coverage, of the testing. Is this a cribbage game app in which minor errors might be acceptable, or a drug infusion system where errors can be deadly? Does the system have zillions of paths that each require a unique test, or does the system have myriad combinations of data it must process correctly each time? This can be incredibly hard to calculate, but here’s an idea: Get a handle on what management expects for a schedule, what you can cover in that time, and what would be left uncovered. See if they find that acceptable—or if that inspires them to give you more time. (Another way to do it: Explain what kind of coverage you could generate while still keeping up with the programmers.)

The availability and reuse of previous test assets and environments can significantly change the time required to test. Unfortunately, there are no generally accepted ways to measure test reusability to factor it into the estimation process. If reuse is low, remember that test design and brainstorming aren't free. Even teams that do session-based test management and try to push design into the work need to come up with the charters for the work.

Lastly, good test estimation is just plain hard work. Software developer Joel Spolsky’s evidence-based scheduling method has four steps: (1) Break the planned testing tasks down into small chunks (without omitting any important ones), (2) Track the actual elapsed time, (3) Simulate the future using the Monte Carlo method, and (4) Manage your project actively. He claims good success with test estimation using this method, but who really wants to go to all that work? Not many companies I know of.
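For the curious, step 3 can be surprisingly little code. This is a minimal sketch, not Spolsky’s actual tool: it assumes you have recorded, per his method, each tester’s historical velocities (estimate divided by actual) and a list of remaining task estimates, then simulates many possible futures by dividing each estimate by a randomly drawn velocity.

```python
import random

def monte_carlo_finish(estimates, velocities, trials=10_000):
    """Simulate total schedule outcomes in the style of
    evidence-based scheduling: each task's simulated actual time is
    its estimate divided by a velocity (estimate / actual) drawn at
    random from the tester's history. Returns sorted totals so you
    can read off confidence levels."""
    totals = []
    for _ in range(trials):
        total = sum(est / random.choice(velocities) for est in estimates)
        totals.append(total)
    totals.sort()
    return totals

# Hypothetical data: remaining task estimates (hours) and one
# tester's past estimate/actual ratios (below 1.0 = ran over).
estimates = [4, 8, 2, 6, 3]
velocities = [1.0, 0.8, 0.5, 1.2, 0.7]

totals = monte_carlo_finish(estimates, velocities)
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"50% chance of finishing within {p50:.0f} hours")
print(f"90% chance of finishing within {p90:.0f} hours")
```

Instead of one number, you hand your boss a curve: “We have a 50 percent chance of finishing by X and a 90 percent chance by Y,” which is a far more honest answer to Carol’s Monday question.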

It’s no wonder that test estimation is so difficult. There are so many important factors that elude our measurement. And even if we knew most of them, a single unknown could skew the chart. In many cases, the deadline is given to us. It makes me wonder if perhaps we would be better off investing the time we would spend estimating in doing actual testing instead.