I’ve had a couple of conversations over the past couple of weeks about extending test coverage into the user interface and the necessity of writing integrated tests. One person wished his team were doing more automated testing through the UI (web app), and the other person just assumed that testing the UI (WinForms) itself was impossible. For the sake of this conversation, let’s say that it is perfectly possible to test the actual user interface. That leaves us with the question “is it feasible or even necessary to write automated tests against the UI?”

Let’s broaden the question out to “how do I know my unit and integration testing is good enough?” Here are my simple, but subjective, criteria for knowing whether your developer testing approach is sufficient:

Are you getting defects that seem easily preventable? Oddball combinations of user interactions and edge cases don’t count. If you’re getting a perfectly manageable rate of defects from the testers, then I’d say you’re doing ok. Unfortunately, my experience is that most of the bugs are related to screen behavior. I actually write quite a few unit tests purely on the screen-to-presenter wiring. Going further, my particular application contains so much screen behavior that I also invest in an extensive battery of integration tests across the MVP triad, with the rest of the application services stubbed out, just to nail down the interaction of the MVP pieces. It’s turned up a lot of problems and helped stop regression bugs. That investment wouldn’t pay for itself on many projects, but it does on mine.
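To make that concrete, here’s a minimal sketch of the kind of screen-to-presenter wiring test I mean. The OrderView/OrderPresenter/PricingService names are hypothetical, and I’m sketching in Java with JUnit rather than WinForms/C#, but the shape is the same: stub the view and the application services, drive the presenter, and assert on what reached the view.

```java
import org.junit.Test;
import static org.junit.Assert.*;

import java.util.Locale;

// Hypothetical view contract: the presenter pushes display state here.
interface OrderView {
    void showTotal(String total);
}

// Hypothetical application service; the real one talks to the backend.
interface PricingService {
    double priceFor(String orderId);
}

class OrderPresenter {
    private final OrderView view;
    private final PricingService pricing;

    OrderPresenter(OrderView view, PricingService pricing) {
        this.view = view;
        this.pricing = pricing;
    }

    // The wiring under test: when an order is selected,
    // format the price and push it to the view.
    void orderSelected(String orderId) {
        view.showTotal(String.format(Locale.US, "$%.2f", pricing.priceFor(orderId)));
    }
}

public class OrderPresenterTest {

    // Hand-rolled stub view that just records what the presenter told it.
    static class StubView implements OrderView {
        String lastTotal;
        public void showTotal(String total) { lastTotal = total; }
    }

    @Test
    public void pushes_formatted_total_to_view_when_order_is_selected() {
        StubView view = new StubView();
        OrderPresenter presenter = new OrderPresenter(view, id -> 42.5);

        presenter.orderSelected("order-1");

        assertEquals("$42.50", view.lastTotal);
    }
}
```

Nothing here touches a real screen; the test exercises the conversation between the MVP pieces, which is where most of those “easily preventable” bugs hide.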

Are you having to use the debugger quite a bit on your own code? This might be the key indicator of the effectiveness of your unit testing. If you’re still having to use the debugger to any extent, my first thought would be to start making your unit testing more granular. Good unit tests/specifications keep that debugger collecting dust. I’d argue that the short-term value proposition of doing TDD is that you move more time out of the debugging column than you have to put into the unit testing column. If you’re still debugging, then TDD isn’t paying off. I learned this lesson the hard way when I was doing some of the earlier work on StructureMap. In later releases I made changes to the architecture just to create more seams for easier unit testing. Those seams made the feedback loop faster and finer-grained, so I really didn’t need my debugger nearly as much.
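Here’s roughly what I mean by a seam, sketched in Java with hypothetical names (the actual StructureMap work was C#): push the slow or awkward collaborator behind a small interface that the constructor accepts, and a test can substitute canned data instead of stepping through the real thing in a debugger.

```java
import java.util.List;

// The seam: a small interface in front of the slow/awkward subsystem.
interface TypeScanner {
    List<String> findPluginTypes(String assemblyPath);
}

class PluginGraphBuilder {
    private final TypeScanner scanner;

    // Dependency injected rather than new'ed up internally,
    // so tests can reach in through the seam.
    PluginGraphBuilder(TypeScanner scanner) {
        this.scanner = scanner;
    }

    int pluginCount(String assemblyPath) {
        // The logic under test stays simple; the scanning machinery
        // lives behind the TypeScanner seam.
        return scanner.findPluginTypes(assemblyPath).size();
    }
}

// In a unit test, the seam lets you hand in canned results:
//   PluginGraphBuilder builder =
//       new PluginGraphBuilder(path -> List.of("WidgetA", "WidgetB"));
//   assertEquals(2, builder.pluginCount("any.dll"));
```

Each seam like this shrinks the amount of code any one test has to cover, which is exactly what makes the feedback loop fast enough to retire the debugger.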

From a comment on programming.reddit.com:

“The TDD fanboys are swarming to defend their dogma. From a comment, “If TDD hurts, it’s because you’re doing it wrong” (!!!).”

Sarcasm aside, the quoted advice is dead-on accurate. Pain is a stimulus. Pain tells us to stop doing whatever it is we’re doing, for our own good. If your arm hurts when you bend it that way, stop bending it that way! If doing TDD on your project hurts, then you’re definitely doing something that isn’t right for your project.

I have yet to work in a problem domain where I couldn’t effectively use TDD for most of the code I wrote, but I’m sure that domain is out there. On the other hand, to make TDD go smoothly you often have to change the way you construct the code and structure responsibilities. In many cases that change coincides with what I would call good design anyway. In my experience, code that makes tests difficult or laborious to set up is almost always suffering from tight coupling and low cohesion, problems that you really don’t want regardless of whether you use TDD. If you have questioned your design but still can’t get to a point where writing and maintaining the unit tests first is feasible, then yeah, ditch TDD, because it’s doing more harm than good. TDD is merely a means. Working software is the goal. If you’re happy with your results, all’s well. Unless your competitors are continuing to get better…
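A contrived before/after of that coupling smell, with hypothetical names: when test setup is laborious, the design is usually the thing talking to you.

```java
class Customer {
    private final boolean preferred;
    Customer(boolean preferred) { this.preferred = preferred; }
    boolean isPreferred() { return preferred; }
}

// Before: the rule reaches out to infrastructure itself, so a test has to
// stand up a real (or elaborately faked) database just to check arithmetic:
//
//   double discountFor(String customerId) {
//       Database db = Database.connect("prod-connection-string");
//       return db.loadCustomer(customerId).isPreferred() ? 0.10 : 0.0;
//   }

// After: the rule operates on data it is handed, and test setup
// shrinks to a single line.
class InvoiceService {
    double discountFor(Customer customer) {
        return customer.isPreferred() ? 0.10 : 0.0;
    }
}

// In a test:
//   assertEquals(0.10, new InvoiceService().discountFor(new Customer(true)), 1e-9);
```

The “after” version is the one I’d call good design even if I never wrote a single test against it.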