This should be a short and probably quite Marmite post.

As the unit testing debate rages on (should we or shouldn’t we? How much does it cost? How long does it take? How much of the code should we test?), I have continued developing my skills as a developer, lead and designer of solutions. Along the way I have learned a lot about approaches to unit testing and found better ways to test my solutions.

I do qualify what I am about to say with the following: I have been doing automated testing in some guise or other since the middle of the last decade. As such, I have been through multiple generations of the pains that developers now face in the modern era (believe me when I say the record / replay model of the early mocking frameworks was far from the ease of the NSubstitute / Moq we have now). Also, code coverage is a metric and, like all metrics, it has its place; I believe it is a very worthwhile one for isolating the untested (and often potentially flawed or unknown) areas of your codebase. As a metric, it needs some intelligence applied on the other side of it, to see what it is that is untested and to refactor / test accordingly.

The Common Arguments & Questions

I am going to set aside the question of whether we should unit test our code at all. My belief is very simple: we should, and we should also achieve 100% test coverage. Below are some of the most common counter-arguments, which I will cover in turn.

It has no logic – why should I test it?

It is too difficult to test.

It takes too long to write tests

It has no logic – why should I test it?

This is an important question you have to ask yourself as a developer, and just take a moment to think on it….

In my experience – code that has no logic usually falls into one of a few categories.

Chained constructor arguments (this(), base())

Static extensions

Code calling a pre-existing / non controlled library

It is a test in its own right or solely created to aid testing

In all four cases above, I would agree there is no reason at all to test the code in question.

Chained constructors should contain no logic of their own; they simply set properties / fields or pass on their parameters.

Static extensions – if written correctly, these should at worst hold a singleton field object which IS tested, and hand their logic off to it instead.

Code calling a pre-existing / non-controlled library – we have to assume that the library beneath is in fact tested, or at least has a defined behaviour. There are many approaches to dealing with this – adapters and repositories are the main two, but occasionally you have to get a little more creative.
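As a sketch of the adapter approach (all names here are hypothetical, not from any real project): wrap the uncontrolled API behind an interface you own, keep the wrapper itself logic-free (and excluded from coverage), and fully test everything that consumes the interface.

```csharp
using System.Diagnostics.CodeAnalysis;

// An interface we own, standing in for an API we don't control.
public interface IFileReader
{
    string ReadAllText(string path);
}

[ExcludeFromCodeCoverage] // No logic to test - a straight pass-through to the framework
public class FileReaderAdapter : IFileReader
{
    public string ReadAllText(string path)
    {
        return System.IO.File.ReadAllText(path);
    }
}

// This class depends on the interface, so it is trivially unit testable
// with a fake or mock - no file system required.
public class ConfigLoader
{
    private readonly IFileReader fileReader;

    public ConfigLoader(IFileReader fileReader)
    {
        this.fileReader = fileReader;
    }

    public bool HasSetting(string path, string key)
    {
        return fileReader.ReadAllText(path).Contains(key);
    }
}
```

The only untested (and untestable) code left is the one-line pass-through, which is exactly the kind of thing the exclusion attribute is for.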

It is a test in its own right, or solely created to aid testing – this is somewhat self-explanatory; we don’t need to test our tests, otherwise it would never end.

If your code doesn’t fall into one of the categories above – or another genuinely similar edge case – you should probably be asking a different question: ‘What is this method / property / whatever doing in my codebase?’

It is too difficult to test

Testing, and true test-driven development, I will grant you is a skill, but to me it is a skill like many others: it takes practice to master. I actively choose to use unit tests to plan my proof-of-concept code – so even when I am just messing about, I will have ‘SandboxTests’. By adopting TDD even for your sandbox code, you ensure you end up with TESTABLE working code that fulfils your objective.
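A sandbox fixture can be as small as this (NUnit syntax shown, and the method and names are hypothetical examples, not from any real codebase):

```csharp
using NUnit.Framework;

// A throwaway 'sandbox' fixture: the proof-of-concept code is driven out
// test-first, so whatever survives the experiment is already testable.
[TestFixture]
public class SandboxTests
{
    [Test]
    public void Slugify_LowercasesAndReplacesSpaces()
    {
        Assert.AreEqual("hello-world", Slugify("Hello World"));
    }

    // The proof-of-concept method lives alongside the test until it earns
    // a home in the real codebase.
    public static string Slugify(string input)
    {
        return input.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}
```

When the experiment works out, the method moves into the solution proper and the test moves with it; when it doesn’t, both are deleted together.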

It takes too long to write tests

Invariably I find this argument tends to come from developers who don’t do this day in, day out. Let me turn it on its head: I can write my CSS in Less / Sass, I can write JavaScript in CoffeeScript, and I could probably write native Android apps with my limited Java knowledge. The fact is I don’t do these regularly and, as such, I am just WAY slower than someone who does any of the above every day of their working life.

Excluding code from coverage

Remember that code coverage is a metric and, as such, we have built-in mechanisms to allow us to exclude elements that we know should not be counted in that metric (just as we have done with StyleCop and other metrics). This is the System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverageAttribute; it can be applied down to method / property level (and I believe field level) in your code to denote that the code, for some reason, is not going to be tested and that you (as an intelligent developer) have decided this. It is respected by most of the major code coverage tools, including dotCover.

Sitecore in particular is full of areas where developers implement ‘poor man’s DI’ (though I have a solution to resolve these correctly from the container – Commands & Pipelines through IoC).

public class MyHttpProcessor : HttpProcessor
{
    private readonly IDependency dependency;

    protected IDependency2 Dependency2 { get; private set; }

    [ExcludeFromCodeCoverage] // There is no logic to test
    public MyHttpProcessor()
        : this(ServiceLocator.Get<IDependency>(), ServiceLocator.Get<IDependency2>())
    {
    }

    public MyHttpProcessor(IDependency dependency, IDependency2 dependency2)
    {
        // This should be the normal route in through our test code
        this.dependency = dependency;
        Dependency2 = dependency2;
    }

    // Some Process Logic
}

I personally like the convention of including a comment giving the reason (and I am sure, if you have a StyleCop / code analysis type solution in your build process, you could probably add a custom rule to check for this in an automated fashion) to show that, as a developer, I have thought about the reasoning. Note also that I have marked my field (dependency) as readonly, and my property (Dependency2) with a private setter. This is also a contractual hint to me as a dev that these should (ideally) only ever be set by the constructor.

The following code represents an approach you could use to look at static extension methods.

[ExcludeFromCodeCoverage] // Extension wrapper for ILogicProvider
public static class MyExtensions
{
    // The ILogicProvider implementation should be fully unit tested
    private static ILogicProvider logicProvider;

    internal static ILogicProvider LogicProvider
    {
        get { return logicProvider ?? (logicProvider = ServiceLocator.Get<ILogicProvider>()); }
    }

    public static string GetSomeInformationFromLogic(this IWhatever whatever)
    {
        return LogicProvider.GetSomeInformation(whatever);
    }
}

Note that I don’t support the general use of service locators, but they have a couple of occasional uses that I accept as being sort of necessary. I don’t support the ‘poor man’s DI’ solution shown in the first example, and I accept service location as the lesser of two evils in the second.

I generally also don’t support the use of extension methods other than to augment an API you don’t control.

Just because you could, does it mean you should?

So, I have stated my belief above that 100% code coverage is an attainable goal, and I have written a lot of code that achieves it comfortably. However, in large development teams, having the build go red EVERY time someone fails to correctly cover a new piece of code or a bug fix is probably going to (at least in the early days) cause widespread panic.

Realistically, therefore, I usually recommend that, employing these techniques, you set a CI build warning at 95% coverage and a build failure at around 90%. This gives a little wiggle room to allow quick fixes to be pushed and retro-tested in emergencies. By forcing this scenario, developers in the team have to learn better practices for writing testable code (even if it slows them down in the early stages), and are more likely (in my opinion) to start to better themselves by understanding common development paradigms and patterns.

Before someone else points it out: I have a fairly hardline view on unit testing and coverage, but I agree with a little icon / meme that appeared on my LinkedIn feed not long back – ‘All code is guilty until proven innocent’. I have found so many bugs / potential failures in seasoned developers’ code by writing tests for it and analysing the possible scenarios. I also don’t buy the ‘that value will never be null / that method will never return a negative’ type arguments I have heard. Invariably, over time, the dreaded NullReferenceException will prove the best of you wrong.