I don’t imagine it’s a stunning revelation to say testing has a point of diminishing returns (which isn’t to say testing is not vital, even central, to software development). This is what I’ve come up with based on my experience; hopefully someone out there can point me to more official research:

Note that there is a geometric progression here. The first few defects do not take very long to find. Then it gets harder and harder to find more defects. Bugs 1-10 might take you an hour. The 100th bug by itself might take you 10 hours.
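That curve can be sketched as a toy model. The starting time and growth ratio below are invented to line up with the numbers above; they are an illustration of geometric growth, not measured data:

```python
# Toy cost model: assume the time to find the nth defect grows
# geometrically, t(n) = t1 * r**(n - 1).
# t1 and r are invented values chosen to match the example numbers.
t1, r = 0.08, 1.05  # hours to find the 1st bug; growth ratio per bug

def hours_for_bug(n):
    return t1 * r ** (n - 1)

first_ten = sum(hours_for_bug(n) for n in range(1, 11))
hundredth = hours_for_bug(100)

print(f"Bugs 1-10 together: {first_ten:.1f} hours")  # about 1 hour
print(f"Bug 100 by itself:  {hundredth:.1f} hours")  # about 10 hours
```

Under this model the first ten defects cost about an hour in total, while the hundredth defect alone costs about ten hours, which is the shape of the curve in question.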

Where exactly the point of diminishing returns lands on this curve depends on a bunch of things of course. First and foremost would be if there are any lives at stake. For example, if you’re testing body armor for the US military, you would think it would be quite a ways up and to the right. (Evidently though – it’s not.) In software, medical applications that could result in someone’s life being saved might qualify.

Next might be money issues, such as banking applications. You wouldn’t want to go live with a beta version of software that, for example, manages bank accounts, account transfers, and the like. (These people know that now.)

Most Web applications, on the other hand, sit a little to the left of the far-right edge of the chart. No one is going to live or die if someone can or cannot post a MySpace comment. Ecommerce applications such as SoftSlate Commerce do have to deal with money, though, so the parts involved in payment transactions and the like rightfully deserve special attention.

But besides how mission-critical the code in question is, there are other factors:

Would defects in the code be easy to recover from? (Some defects come out of left field and take forever to fix, but most of the time the developer has a gut feeling about what the nature of the defects would be and can make a judgment call.)

Is the code isolated enough that it would not have cascading effects? (An argument for modular development.)

What is the client’s tolerance for risk on a human level? (Well, it matters!)

Would additional testing delay deploying the feature beyond an important date?

Would additional testing cause the feature to suffer from marketing amnesia? (In which case, maybe it shouldn’t have been developed in the first place!)

Are all hands going to be available if a defect is discovered? (There is a tendency to want to deploy big projects in the middle of the night, when there are as few users as possible. While that makes sense sometimes, we prefer to launch big features at the beginning of a regular workday, when everybody is around, alert, and ready to help. Definitely not on Friday at 5:30pm!)

Is the feature being launched with a lot of ballyhoo? (Prefer “soft launches” if you don’t have as much time to test.)

You might think it’s professional negligence to say all this about testing, that one should always strive for perfection and zero defects, in which case the above factors shouldn’t matter. But our time, and our clients’ money, are too valuable to waste on testing to an extreme. Yes, our clients deserve excellent software, but they also deserve to have us be smart about how we achieve it. For example, two of the most cost-effective testing techniques we use regularly are parallel ops and automated functional testing. Both catch more defects without pushing past the point of diminishing returns, and with the time left over, we can make the software better in more certain, tangible ways.
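To make the automated functional testing idea concrete, here is a minimal sketch using Python’s standard unittest module. The order_total function and its 8% tax rate are hypothetical stand-ins, not actual SoftSlate Commerce code; the point is that the money-handling behavior gets pinned down by checks a machine can repeat:

```python
import unittest

# Hypothetical order-total function standing in for real checkout code.
# The name, signature, and 8% tax rate are illustrative assumptions.
def order_total(item_prices, tax_rate=0.08):
    if any(price < 0 for price in item_prices):
        raise ValueError("negative price in cart")
    subtotal = sum(item_prices)
    return round(subtotal * (1 + tax_rate), 2)

class OrderTotalTest(unittest.TestCase):
    """Functional checks that can run automatically on every build."""

    def test_simple_cart(self):
        self.assertEqual(order_total([10.00, 5.00]), 16.20)

    def test_empty_cart(self):
        self.assertEqual(order_total([]), 0.00)

    def test_rejects_negative_price(self):
        with self.assertRaises(ValueError):
            order_total([-1.00])

# Run the suite programmatically so it can be wired into a build script.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once written, checks like these run in seconds on every build, so the expensive manual testing effort can be saved for the defects only a human would notice.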