Test-driven development, or TDD, is a slightly controversial topic among developers. Some of the objections raised are:

Doesn’t it make sense to implement the solution before you test it, rather than vice versa?

Tests before or after, isn’t the end result really the same?

The emphasis on testability can contort the code unnaturally

Progress is slower when writing “test-first”

You can’t enforce TDD

Having seen many projects without TDD deliver quickly in the first few months, then over later months and years find it harder and ever more time-consuming to change code reliably, this article unashamedly argues in favor of TDD by addressing these objections.

Doesn’t it Make Sense to Implement the Solution Before You Test it, Rather Than Vice Versa?

It’s certainly more traditional within the programming community to write the code first and then worry about testing, and it does, at first, seem a bit bizarre to write a test for something that does not yet exist. However, if we treat the test as just an encoding of a piece of the spec, then it makes good sense to encode that spec fragment first so that we can verify that our soon-to-be-written solution is in line with the spec. It also means that once all of the spec has been translated into tests, and code written to pass those tests, we’ll have the complete specification embodied in the tests. This expression of the specification, with its ability to verify our solution, means that we can change our implementation at any time with confidence that if the tests all pass, our changes have not regressed the code.
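As a minimal sketch of what "encoding a spec fragment" might look like in practice (the Account class and its methods are hypothetical, and a real project would use JUnit rather than bare assertions):

```java
// Hypothetical sketch: the spec fragment "a new account starts with a
// zero balance" encoded as a check before Account is implemented.
// Run with `java -ea` so assertions are enabled.

class Account {
    private long balanceInCents = 0;

    long balance() { return balanceInCents; }

    void deposit(long cents) { balanceInCents += cents; }
}

public class AccountSpec {
    public static void main(String[] args) {
        // The test is written first, encoding one spec fragment...
        Account account = new Account();
        assert account.balance() == 0 : "a new account starts with a zero balance";

        // ...and a second fragment: a deposit is reflected in the balance.
        account.deposit(250);
        assert account.balance() == 250 : "a deposit updates the balance";

        System.out.println("spec fragments verified");
    }
}
```

The Account class here is the minimal code written afterwards to make those checks pass; the checks themselves came straight from the spec.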

Tests Before or After, Isn’t the End Result Really the Same?

No, for several reasons, from the perspectives of both design and human nature. First, from the design angle:

TDD forces clarity of thought prior to coding. From my own experience without TDD, I’ve found that even when I thought I understood the requirement completely, it was only when I started coding a solution that gaps in that understanding emerged. The further into the development of a solution you are when these questions arise, the greater investment you’ve made in that solution and the more tempting it is (subconsciously or otherwise) to respond to those questions with answers that fit your already partially-coded solution, and thereby start deviating from the requirement. When you write the test first you are encoding the requirement so you are forced to think very clearly about it and answer any questions prior to investing any time in a solution.

It results in better design. When you write code to address a test, it needs to be able to run in an isolated fashion in the test environment (e.g., by JUnit in a developer IDE, or in TeamCity, Hudson or some other build service) away from all its dependencies. Being able to run your code easily in different environments requires you to keep coupling between components low. For example, your solution will likely have a number of dependencies which may not be available in the test environment. Instantiating references to those dependencies internally in your solution code means that you cannot provide a different implementation for those dependencies in a test environment, making your code both hard to test and hard to reuse in other environments. However, if you allow the dependencies to be instantiated outside your code and injected (usually as constructor parameters) into it, then you can supply test-doubles in a test environment and other implementations in other environments without changing your code. (This is the Dependency Inversion Principle, one of the SOLID design principles.) Writing your code to pass a test forces you to inject dependencies in this manner.
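A small Java sketch of constructor injection in this style (the RateService and PriceCalculator names are illustrative, not from any particular library):

```java
// Hypothetical sketch of constructor injection. In production the
// interface would be implemented by a real (perhaps remote) service;
// in a test a fixed-rate test-double is supplied instead.
// Run with `java -ea` so assertions are enabled.

interface RateService {
    double rateFor(String currency);
}

class PriceCalculator {
    // The dependency is injected, not instantiated internally,
    // so any implementation of RateService can be supplied.
    private final RateService rates;

    PriceCalculator(RateService rates) { this.rates = rates; }

    double priceIn(String currency, double basePrice) {
        return basePrice * rates.rateFor(currency);
    }
}

public class InjectionSketch {
    public static void main(String[] args) {
        // In a test environment, a lambda serves as a test-double
        // with a fixed exchange rate of 2.0.
        PriceCalculator calc = new PriceCalculator(currency -> 2.0);
        assert calc.priceIn("EUR", 10.0) == 20.0;
        System.out.println("test-double supplied without changing PriceCalculator");
    }
}
```

Had PriceCalculator written `new SomeRealRateService()` internally, the test could not have substituted the fixed-rate double.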

You don’t write code that you don’t need. Sure, as you add more tests you will need to refactor your existing code, which may introduce more complexity, but it may equally well make your code simpler and more general – either way, you won’t over-engineer a solution by presupposing what is required. For a beautifully succinct demonstration of this in action using TDD, see Uncle Bob’s Prime Factors Kata.
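For a flavour of where that kata ends up, here is a sketch of the small trial-division algorithm the tests drive out, with assertions of the kind that drive it (the class and method names are my own, not the kata's):

```java
// Sketch of the algorithm the Prime Factors Kata converges on:
// each failing test forces only a small generalization, and the
// result is this short trial-division loop.
// Run with `java -ea` so assertions are enabled.
import java.util.ArrayList;
import java.util.List;

public class PrimeFactors {
    public static List<Integer> of(int n) {
        List<Integer> factors = new ArrayList<>();
        // Try each candidate divisor in turn, dividing it out fully
        // before moving on; only primes ever divide the remainder.
        for (int candidate = 2; n > 1; candidate++) {
            while (n % candidate == 0) {
                factors.add(candidate);
                n /= candidate;
            }
        }
        return factors;
    }

    public static void main(String[] args) {
        // Assertions mirroring the incremental tests of the kata.
        assert PrimeFactors.of(1).isEmpty();
        assert PrimeFactors.of(2).equals(List.of(2));
        assert PrimeFactors.of(4).equals(List.of(2, 2));
        assert PrimeFactors.of(12).equals(List.of(2, 2, 3));
        System.out.println("all kata checks pass");
    }
}
```

The point is not the algorithm itself but that no test ever demanded more machinery than this, so none was written.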

Now from the human nature angle:

Writing tests after you’ve written the solution can easily lead you unconsciously to write tests that simply verify your implementation works as you coded it, rather than that it meets the requirements – you already have the answer in your head, so to speak, so you’ll tailor your tests to match that answer.

Working under time constraints (and I’ve yet to come across a development team not working against the clock) means that once the implementation is complete there is a strong temptation to defer the tests till later (often due to pressure from management to get a release out), and when that happens they’ll very likely never be written. Writing tests first as part of the standard design/development strategy means they can’t be left behind.

The Emphasis on Testability Can Contort the Code Unnaturally

This is an objection I’ve heard but never seen. Certainly, code written against a test is often different to that written without a test being in place beforehand, but as described above, it is usually better decoupled from its dependencies and often simpler. It can certainly mean there are more classes involved. For example, rather than instantiating dependency D internally in the solution class S, you’d instantiate the dependency first and pass it in as a constructor parameter. That dependency may well need a test-double, and so you might provide an interface, I, for the dependency, which your solution takes as a constructor parameter. The test-double would need to implement that interface, as would the real dependency – a little more complexity perhaps, but a proper separation of concerns.
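The arrangement just described can be sketched directly, using the same letters (the method bodies are of course hypothetical placeholders):

```java
// Sketch of the S/D/I arrangement described above: solution S depends
// on interface I; the real dependency D and a test-double both
// implement I. Run with `java -ea` so assertions are enabled.

interface I {
    String fetch();
}

class D implements I { // the real dependency, perhaps slow or remote
    public String fetch() { return "real data"; }
}

class S {
    private final I dependency; // S depends only on the interface
    S(I dependency) { this.dependency = dependency; }
    String describe() { return "got: " + dependency.fetch(); }
}

public class SeparationSketch {
    public static void main(String[] args) {
        // Production wiring: S receives the real dependency D.
        S production = new S(new D());
        System.out.println(production.describe());

        // Test wiring: S receives a test-double implementing I.
        S underTest = new S(() -> "canned data");
        assert underTest.describe().equals("got: canned data");
    }
}
```

One extra interface, but S never needs to change when the environment does.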

Progress is Slower When Writing “Test-first”

There is an initial investment in coming up with sensible tests that accurately describe the requirement that the solution is intended to implement. Also, when starting out on TDD the approach feels strange – not because it’s inherently weird, but because it differs markedly from what most developers are used to – and as such it can take a little time to get used to. However, the long-term benefits far outweigh these costs. Aside from getting more flexible, better-designed code (both of which I accept can be subject to debate), there’s one undeniable and significant long-term benefit – you’ll have a set of unit tests that both accurately describes the requirement (so it can be considered a form of documentation that stays in sync with the code) and validates the correctness of the code whenever you make changes. This last point is hugely valuable – it allows you (or, more importantly, someone else) to refactor – or even rewrite entirely – the solution code with confidence. Those tests will tell you whether you’ve regressed the code, which is critical for code that’s likely to be around for a few years as developers come and go and knowledge kept in people’s heads is gradually lost.

You Can’t Enforce TDD

This is true: TDD is a discipline, not a technology, so there’s no switch to set in a build tool to ensure that a test-first approach is followed by a developer. However, that doesn’t make TDD a bad idea. As a development lead, the only real way to ensure TDD is adopted for your project is by convincing the stakeholders that it makes good sense:

First and foremost, convince your developers that it makes sense. Without team buy-in, it just won’t happen. Trying to push developers to do something they don’t believe in will probably fail and result in poor team morale. The arguments described in this article are an attempt to address developer doubts.

Then convince your management that TDD will deliver the sort of benefits they’re looking for. Often the objection from management is that TDD is too elaborate and too slow. Done properly, however, TDD will result in more flexible, testable code; as a result, more bugs will be found before deployment to production, and significant refactoring will be a lot easier. All of these aspects help manage changes in a more controlled way and help achieve one thing that managers and customers alike want – to know predictably when a release will be available. (Note that often, when the complaint from either of these groups is that a predicted release date is too far away, what is really meant is that they have no confidence that they will get what they asked for on that date. They are usually after certainty rather than speed.)

In conclusion, TDD is a discipline that aids the design and implementation of flexible solutions and, importantly, ensures that you’re left with a complete set of unit tests around your solutions. The long-term benefit of this cannot be overstated, in terms of the ability to refactor with confidence and to have a form of always-up-to-date documentation enshrined in those unit tests. It may well be that some developers can produce solutions of just as high quality without the test-first approach, but TDD provides a work pattern that will help most developers achieve this, and provides as an artifact the battery of unit tests essential to retaining the quality and maintainability of the application code over its lifetime.