Is This For Me?

I’m going to showcase how I prefer to structure my microservice repositories while I practice Outside-in TDD using C# and Docker Compose.

I’ll show you how I create the skeleton repository with some initial tests. We will implement some temporary behaviour to satisfy the tests before replacing it with a more persistent solution.

If you’re like me, you won’t have Christmas done, so we will walk through the commits in the repository to save a bit of your time. What you gain from not copying and pasting can be spent on wrapping and sticking instead!

This post is part of the 3rd Annual C# Advent series. Please take a look at the other posts, and perhaps get your thinking caps on for the 4th.

Background

A number of years ago I was working for a client that decided we should be favouring outside-in tests. I understood the term to mean that tests shouldn’t have any knowledge of the inner workings of a component, but instead should just validate any outputs and/or any side-effects.

What I’ve learnt whilst applying this approach with other clients is that this was only half an understanding; things finally click when you realise the tests you WERE writing were inside-out. Those tests may well have had no knowledge of the inner workings of the component they were testing, but we were in fact solving the problem from the inside out rather than the outside in.

Typically, inside-out TDD focuses on teasing out detail in small components; as a developer you work towards the outside, layering on more components until you reach the exposed surface of the API. These tests are coupled to the implementation of your components and may prove costly during future maintenance. Conversely, with outside-in TDD you construct tests that interact only with the surface of the API, without delving into implementation details.

The advent (excuse the pun) of containerisation has certainly helped my understanding. Being able to replace whole components, or indeed whole containers, without needing to refactor a heap of tests is a very compelling argument.

TL;DR

If you’re really short on time, or just using this post as a reference, take a look at the GitHub repository https://github.com/acraven/microservice-outsidein-tests.

Moving On

Don’t get me wrong, I still practice TDD from the inside-out, but I always try to start with an outside-in test first. I find myself leaning towards focusing on success scenarios from the outside-in, then switching to inside-out to tease out potential problems with edge cases and generally making the code more robust.

One problem I have found with an outside-in approach is that, should you extract some functionality into a library, you may well find yourself short on tests if the components you are moving were relying on tests that interact with an API surface several layers away.

The showcase builds an API that captures basic contact information. We will see a skeleton solution, the first tests and an in-memory implementation that satisfies the tests. Finally I will show how we can replace the in-memory implementation with one that uses MongoDb without affecting any existing tests.

Prerequisites

You’ll need the .NET Core 3.1 SDK and Docker (for Windows), and a moderate understanding of both. You can download the SDK from https://dotnet.microsoft.com/download and Docker from https://hub.docker.com/editions/community/docker-ce-desktop-windows. I’m using Windows and PowerShell, but this post should translate equally well to Linux or macOS with a few tweaks.

Use your Git client to clone the GitHub repository https://github.com/acraven/microservice-outsidein-tests.

Skeleton

First of all, check out the Skeleton commit with git checkout Step_1; for your reference, I created this solution using the following script before adding a couple of Dockerfiles and a Docker Compose file.
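A hedged reconstruction of that script might look like the following; the solution and project names here are my assumptions, and the actual ones can be found in the Step_1 commit.

```shell
# Sketch only: project and folder names are assumptions, not
# necessarily those used in the actual repository.
dotnet new sln --name Contacts

dotnet new webapi --output app/Contacts.Api
dotnet new xunit --output app/Contacts.Api.Tests
dotnet new xunit --output outside-in.tests/Contacts.OutsideIn.Tests

dotnet sln add app/Contacts.Api app/Contacts.Api.Tests outside-in.tests/Contacts.OutsideIn.Tests

# The inside-out test project references the API directly; the
# outside-in test project deliberately does not.
dotnet add app/Contacts.Api.Tests reference app/Contacts.Api
```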

The repository will look something like the screenshot below; the app folder contains the components to build the API Docker image, and the outside-in.tests folder contains the same for the outside-in tests Docker image.

Run build.ps1 to confirm all is well; you will see a lot of log output, but hidden in there should be two passing tests, one from each test project. We will focus on the outside-in tests in this post, but the other test project would be used for tests that are coupled to the API via a project reference.

Write Some Tests

Had this been a real-world problem, I would probably have opted to use GraphQL, but for simplicity I’m using a RESTful approach. I’ve got three scenarios for the creation and retrieval of contacts, so let’s take a look at one of them. It’s pretty straightforward: it adds a contact and then retrieves it using the location returned by the response to the add.
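As a sketch, the core of that scenario boils down to something like the code below; the names here are mine rather than the repository’s, and the actual tests live in the outside-in.tests project.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Hypothetical sketch of the add-then-retrieve scenario. Note it only
// ever talks to the HTTP surface; there is no reference to controllers
// or stores, which is what makes the implementation replaceable later.
public static class AddContactScenario
{
   public static async Task<string> AddAndRetrieveAsync(HttpClient client, string contactJson)
   {
      // Add the contact.
      var response = await client.PostAsync(
         "/contacts", new StringContent(contactJson, Encoding.UTF8, "application/json"));
      response.EnsureSuccessStatusCode();

      // The add response tells us where the new resource lives.
      var location = response.Headers.Location
         ?? throw new InvalidOperationException("expected a Location header");

      // Retrieve it again from that location and hand back the body
      // for the test to assert on.
      var retrieved = await client.GetAsync(location);
      retrieved.EnsureSuccessStatusCode();
      return await retrieved.Content.ReadAsStringAsync();
   }
}
```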

I’ve been writing multi-tenant APIs of late, so I’m continuing with that here, for no other reason than that it’s a pretty simple way to isolate all the tests from one another. The ScenarioBase class that’s used for the test scenarios creates a unique tenant for each scenario and passes it to the API in an X-Tenant header. For production use, one might choose to supply the tenant’s identifier in a JWT, for example, but that’s beyond the scope of this post.
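In outline, ScenarioBase can be as little as the sketch below; the base address is an assumption on my part (it would be the API’s address on the Docker Compose network), and the real class will differ in detail.

```csharp
using System;
using System.Net.Http;

// Hypothetical sketch of the scenario base class used by the
// outside-in tests; details will differ from the repository.
public class ScenarioBase
{
   protected readonly HttpClient Client;

   public string Tenant { get; }

   public ScenarioBase()
   {
      // A fresh tenant per scenario isolates its data from
      // every other scenario running against the same API.
      Tenant = Guid.NewGuid().ToString("N");

      Client = new HttpClient
      {
         // Assumed address of the API on the compose network.
         BaseAddress = new Uri("http://app:5000")
      };
      Client.DefaultRequestHeaders.Add("X-Tenant", Tenant);
   }
}
```

Because the tenant travels in a header on every request, no test needs any clean-up step; a scenario simply never sees another scenario’s contacts.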

Check out the Write Some Tests commit with git checkout Step_2; the tests interrogate the API and validate the JSON responses using FluentAssertions.Json.

You can run build.ps1 at this point, but it won’t come as a big surprise that all 6 tests fail.

Make The Tests Pass

Advocates of TDD would suggest that we should now write the minimum amount of code to make the tests pass. We know our goal is to store the contacts in MongoDb, but we haven’t got any MongoDb infrastructure hanging around (yet). Moving directly towards our goal can be a distraction, so “writing the minimum amount of code to make the tests pass” isn’t necessarily the best course of action; I favour writing the minimum amount of code to make the tests pass in the least amount of time.

Check out the Make The Tests Pass commit with git checkout Step_3; here we have created an in-memory store for our contacts which is invoked by a contacts controller.
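An in-memory store along these lines is all it takes; the class and member names below are my assumptions rather than the repository’s.

```csharp
using System.Collections.Concurrent;

// Hypothetical in-memory contact store; the real class in the
// repository may differ in shape and naming.
public class Contact
{
   public string Id { get; set; }
   public string Name { get; set; }
}

public class InMemoryContactStore
{
   // Keyed by (tenant, id) so each tenant's contacts are isolated,
   // mirroring the X-Tenant header used by the tests.
   private readonly ConcurrentDictionary<(string Tenant, string Id), Contact> _contacts =
      new ConcurrentDictionary<(string Tenant, string Id), Contact>();

   public void Add(string tenant, Contact contact)
      => _contacts[(tenant, contact.Id)] = contact;

   public Contact Find(string tenant, string id)
      => _contacts.TryGetValue((tenant, id), out var contact) ? contact : null;
}
```

Because the outside-in tests only ever see the HTTP surface, nothing about this class leaks into them, which is exactly what lets us swap it out in the next step.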

Now when you run build.ps1 the tests should all pass. You could deploy this container should you wish; you shouldn’t, but you could. It has some value.

You might be thinking by now that running a build script each time is quite time-consuming. And you’d be correct. You can run the API in your IDE, and the outside-in tests too, allowing you to debug both the tests and the component(s) you’re testing. I use JetBrains Rider as my IDE, but if you prefer Visual Studio, you will need VS 2019 for this example.

Make It Persistent

Obviously an in-memory store is flawed, as at some point the container will cease to exist — when you release your next feature for example. We are now going to replace the in-memory store with a MongoDb backed store allowing our contacts to be persistent.

This relies on some existing MongoDb code I had lying around. You’ll notice that an interface is injected into the MongoDbAggregateStore class; we’re not actually testing this class individually (as we might have done with inside-out tests), so we could just inject the concrete MongoDbClient instead (so long as it’s registered as such in Startup).

Check out the Make It Persistent commit with git checkout Step_4; we’ve included MongoDb in the Docker Compose file, so we now have a database available when running the outside-in tests.
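The relevant part of the compose file will look something like the fragment below; the service names, image tag and connection-string variable are my assumptions, not the repository’s.

```yaml
# Sketch of a compose file with MongoDb added; names are assumptions.
version: "3.4"
services:
  app:
    build: ./app
    depends_on:
      - mongodb
    environment:
      # The API reaches the database by its service name on the
      # compose network; the variable name here is hypothetical.
      - MONGODB_CONNECTION=mongodb://mongodb:27017
  mongodb:
    image: mongo:4.2
  outside-in.tests:
    build: ./outside-in.tests
    depends_on:
      - app
```

Nothing in the outside-in tests changes; they still only know about the app service’s HTTP surface.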

Having wired up the MongoDb components in Startup, you can run build.ps1 for the final time and the tests should remain green, at which point I’m often suspicious; you can confirm data is actually written to MongoDb using a tool such as Robo 3T.

Summary

We’ve seen how we can replace a whole component of a microservice without necessitating a refactor of any existing tests. The next step, albeit contrived, would be to replace the whole API container with something like a NodeJS implementation — an exercise for the reader maybe.

I love how easy it is to spin up dependencies in Docker. Your API may interact with RabbitMQ, SQL Server or even an email gateway; whatever it is you can spin it up in isolation using Docker Compose and write outside-in tests to validate these interactions work correctly.