There have been a couple of times where I cut my unit testing short. Not because something wasn’t testable, but because my unit tests were becoming unwieldy and repetitive. Unfortunately, this prevented me from testing all the edge cases I wanted to. Typically I would resort to combining multiple tests into a single test function. The downside of this approach is that when a test fails, you don’t get insight into what is actually failing without digging into the details, since each test function shows as a single item in the test explorer.

Testing ‘All the Things’ became quite the burdensome task!

This was all before I started parameterizing my unit tests. With parameters, it is simple to execute numerous tests with different inputs and expected results. The benefit over coding the values manually is that each iteration appears as a separate test in the Visual Studio Test Explorer.

Each of the three main unit test frameworks for .NET (NUnit, xUnit, and MSTest) supports parameterizing unit tests.
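For comparison, here is a rough sketch of how the same idea looks in the other two frameworks, using their inline-data attributes (test bodies elided):

```csharp
// xUnit: [Theory] marks a parameterized test; each [InlineData] is one case.
[Theory]
[InlineData("One Two Three Four", "Three")]
[InlineData("One Two", null)]
public void ParseTests(string input, string expected) { /* ... */ }

// MSTest: [DataTestMethod] with one [DataRow] per case.
[DataTestMethod]
[DataRow("One Two Three Four", "Three")]
[DataRow("One Two", null)]
public void ParseTests(string input, string expected) { /* ... */ }
```

Both frameworks also offer member-based data sources (xUnit’s MemberData, MSTest’s DynamicData) that are closer in spirit to the NUnit feature covered below.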

In this post, we will use the TestCaseSource feature of NUnit to guide our examples. Now, let’s take a look at the following sample code, where we are testing a fictitious class, InputParser.

public class ParameterizingUnitTests
{
    [Test]
    [TestCaseSource(nameof(ParseTestsCases))]
    public void ParseTests(string input, string expected)
    {
        // Arrange
        var sut = new InputParser();

        // Act
        var result = sut.Parse(input);

        // Assert
        Assert.AreEqual(expected, result);
    }

    private static IEnumerable<TestCaseData> ParseTestsCases
    {
        get
        {
            yield return new TestCaseData("One Two Three Four", "Three").SetName("Valid input, returns value");
            yield return new TestCaseData("One Two", null).SetName("Invalid input, returns null");
            yield return new TestCaseData(string.Empty, null).SetName("Empty string input, returns null");
            yield return new TestCaseData(null, null).SetName("Null input, returns null");
        }
    }
}
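The InputParser class itself is fictitious, so any implementation consistent with the test cases above would do. One possible sketch, assuming Parse simply returns the third space-separated word:

```csharp
public class InputParser
{
    // Illustrative only: returns the third word of the input, or null
    // when the input is null, empty, or has fewer than three words.
    public string Parse(string input)
    {
        if (string.IsNullOrEmpty(input))
            return null;

        var parts = input.Split(' ');
        return parts.Length >= 3 ? parts[2] : null;
    }
}
```

This satisfies all four cases: "One Two Three Four" yields "Three", while the short, empty, and null inputs all yield null.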

Now, there are three key things to notice.

First, the test’s function signature contains two parameters.

public void ParseTests(string input, string expected)

Second, there is an extra attribute (TestCaseSource) supplied to the function.

[TestCaseSource(nameof(ParseTestsCases))]

Lastly, there is a static property (ParseTestsCases) which returns an IEnumerable of TestCaseData.

private static IEnumerable<TestCaseData> ParseTestsCases
{
    get
    {
        yield return new TestCaseData("One Two Three Four", "Three").SetName("Valid input, returns value");
        yield return new TestCaseData("One Two", null).SetName("Invalid input, returns null");
        yield return new TestCaseData(string.Empty, null).SetName("Empty string input, returns null");
        yield return new TestCaseData(null, null).SetName("Null input, returns null");
    }
}

As you’ve probably guessed, each instance of TestCaseData represents a separate test iteration, and the TestCaseSource attribute ties the data to the test function. NUnit gives us the added benefit of providing a descriptive name for each test. Now when we execute the tests, each named case shows up as its own entry in the test explorer.

Note – NUnit does not run natively in Visual Studio. I’m using the ReSharper Test Runner to bridge this gap.
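As a side note, NUnit can also fold the expected value into the test case itself via Returns(), which turns the test into a function that returns its result (ParseReturnCases is just an illustrative name here):

```csharp
[TestCaseSource(nameof(ParseReturnCases))]
public string ParseTests(string input)
{
    // The returned value is compared against the case's expected result.
    var sut = new InputParser();
    return sut.Parse(input);
}

private static IEnumerable<TestCaseData> ParseReturnCases
{
    get
    {
        yield return new TestCaseData("One Two Three Four").Returns("Three");
        yield return new TestCaseData("One Two").Returns(null);
    }
}
```

This trims the explicit Assert call, at the cost of only supporting a single expected value per case.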

In the event that there are dependencies that need to be mocked, the parameters can also be used to alter the behavior of the mocked objects.

[Test]
[TestCaseSource(nameof(ParseTestsCases))]
public void ParseTests(string input, string expected)
{
    // Arrange
    var mock = new Mock<ISomeService>();
    mock.Setup(x => x.GetData()).Returns(input);
    var sut = new InputParser(mock.Object);

    // Act
    var result = sut.Parse(input);

    // Assert
    Assert.AreEqual(expected, result);
}
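Taking that a step further, a dedicated parameter can drive the mock’s behavior directly. This sketch is purely illustrative: it assumes ISomeService and InputParser from above, and assumes Parse swallows service failures and returns null.

```csharp
[Test]
[TestCaseSource(nameof(ServiceFailureCases))]
public void ParseTests_ServiceFailures(bool serviceThrows, string expected)
{
    // Arrange: the serviceThrows flag decides how the mock behaves.
    var mock = new Mock<ISomeService>();
    if (serviceThrows)
        mock.Setup(x => x.GetData()).Throws<InvalidOperationException>();
    else
        mock.Setup(x => x.GetData()).Returns("One Two Three Four");

    var sut = new InputParser(mock.Object);

    // Act
    var result = sut.Parse("One Two Three Four");

    // Assert
    Assert.AreEqual(expected, result);
}

private static IEnumerable<TestCaseData> ServiceFailureCases
{
    get
    {
        yield return new TestCaseData(false, "Three").SetName("Service succeeds, returns value");
        yield return new TestCaseData(true, null).SetName("Service throws, returns null");
    }
}
```

The happy-path and failure-path scenarios stay in one test function, yet still appear as separately named entries in the test explorer.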

Of course, this is a very simple example; however, as the number of test iterations increases, the more beneficial this approach becomes. This brings us back to my opening comments. I found myself more willing to test edge cases, multiple permutations of settings, and so on using this approach as opposed to the alternative. Adding an item to my TestCaseData is far more convenient than adding another test function, or embedding the test into an existing function and losing visibility in the test explorer.