One of the things that’s often advocated for a good software development process is having unit tests.

The more of those tests we have, the better. Having them increases our confidence that our code works as we think it works, helps isolate issues quickly in the area where they were introduced, and thus increases our ability to refactor.

Central to unit testing in object oriented systems is the concept of a “Mock”. Those software systems largely consist of objects that collaborate with each other in some way. An object holding internal references to other objects is said to have dependencies on those other objects.

In unit testing we would like to test the smallest possible unit, which represents a single code path through a single method. As soon as that code calls a dependent object, we can’t call it a unit test anymore. In order to restore balance, and make the world perfect again, all those dependencies should be replaced by fake objects, called Mocks. See e.g. When Should I mock.

For years we have learned that we all should do this. Unit tests are good, unit tests need mocks. Therefore we unit test, and therefore we mock. Life was simple.

Lately however a growing group of people seems intent on disturbing our perfect little world. Their creed: “Mocks are wrong. We should use real objects”. They don’t seem to have a name yet, but following the NoSQL movement, let’s call them the NoMock movement.

Among them are people like Bill Burke, Dan Allen, Andrew Lee Rubinger, Augie Fackler, Nathaniel Manista and Stan Silvert.

They spread their ideas in blogs and presentations, e.g.:

So what exactly did we think mocks bought us, and what parts of that are those NoMock guys rebutting?

One benefit of mocks that comes up often is performance. Surely mocks are much faster than using, say, a real EJB as a dependency that requires starting up a WebSphere server, which takes 30 minutes for a cold start alone, and then some unholy amount of time to deploy the actual application that, among thousands of other beans, contains the single bean we need.

While this may have been true in the past, the current crop of application servers starts up in a second or less on modern hardware. Via tools such as ShrinkWrap it’s easy to define micro-deployments and deploy those with another tool like Arquillian in mere milliseconds.
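To illustrate, a micro-deployment in this style might look like the sketch below. This assumes JUnit 4 with the Arquillian JUnit runner; `Greeter` and `GreeterTest` are hypothetical names made up for the example, not code from any of the articles above.

```java
import javax.ejb.EJB;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreeterTest {

    // A micro-deployment: package only the classes this test needs,
    // not the thousands of beans in the full application.
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                         .addClass(Greeter.class)
                         .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    // A real EJB, injected by the container; no mock involved.
    @EJB
    private Greeter greeter;

    @Test
    public void greeterGreets() {
        // exercise the real bean, running in a real container, here
    }
}
```

Because the archive contains only the classes under test, deploying it stays in the millisecond range, which is what takes the sting out of the performance argument.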

Another much touted benefit of mocks is the ability to run tests independently of some global resource. For a large number of cases, this global resource is then the main database on which an application depends.

In this case we could ask ourselves whether this global resource is really global. Many databases can be installed locally with great ease (think PostgreSQL and MySQL). A best practice in software development is to be able to install the entire application stack that you’re developing on your local workstation anyway. Sure, there are exceptions when big external mainframes are involved, but for many common types of development this should be possible. If for some reason it’s difficult or even impossible, then the team likely has much bigger problems to worry about. More often than not, that difficulty is caused by things like software depending on “some server” that “someone once installed” and of which now “nobody knows what it exactly does”. This will not only make (unit) testing extraordinarily difficult, but will also impede proper debugging and staging.

So, with some exceptions, could we perhaps say that mocks in this case are used to cover up another code smell? Surely this can’t be a good practice, can it?

A more profound statement uttered by the NoMock movement is that testing with mocks simply isn’t really useful. By extension they thus seem to be saying that unit tests are just not really useful.

Obviously, this is a controversial statement that goes against common wisdom, and it surely may shock people who always did unit testing “because it’s what we should do”, without giving it much thought. (Will people who did think it through be any less shocked, or will they simply revolt against such a statement?)

The canonical unit test example is that of the calculator. Here we test an add() method by providing it with some well chosen sample points. Two positive numbers, a negative and a positive number, two negative numbers, etc. Every time we check that the output is what we expected. Without a doubt unit tests are very helpful here.
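For concreteness, such a test might look like the following sketch. The Calculator class and the particular sample points are made up for this example; any assertion style would do.

```java
// A sketch of the canonical calculator unit test: a pure function
// checked at a few well-chosen sample points.
public class Calculator {

    // The unit under test: no dependencies, nothing to mock.
    public static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Two positive numbers, a negative and a positive, two negatives.
        System.out.println(add(2, 3));
        System.out.println(add(-2, 3));
        System.out.println(add(-2, -3));
    }
}
```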

But the simple add() method didn’t have any dependencies that we needed to mock. It’s a pure “functional” thing; the output depends only on the input, and nothing else.

Let’s now look at an example that does use dependencies. Better yet, let’s dive in deep and go straight to an example where a collaboration between objects is the main goal. For that, consider a typical Service facade where a method starts a transaction and calls two DAOs that persist something to a DB. The idea is that both DAOs join the transaction, and that either the data from both DAOs ends up in the DB, or none at all. The DB has some constraints set for the data that we are persisting.

This is what the class looks like:

```java
@Stateless
public class ServiceFacade {

    @EJB
    private DAO1 dao1;

    @EJB
    private DAO2 dao2;

    public void doStuff(SomeObject someObject) {
        dao1.foo(someObject);
        dao2.bar(someObject);
    }
}
```

Per the rules of the unit test, we now have to mock our two DAOs and then our test will swap in those mocks before calling the method that we want to test. In order to do that, we first need to add two setters for those mocks:

```java
@Stateless
public class ServiceFacade {

    @EJB
    private DAO1 dao1;

    @EJB
    private DAO2 dao2;

    public void doStuff(SomeObject someObject) {
        dao1.foo(someObject);
        dao2.bar(someObject);
    }

    public void setDAO1(DAO1 dao1) {
        this.dao1 = dao1;
    }

    public void setDAO2(DAO2 dao2) {
        this.dao2 = dao2;
    }
}
```

Now we also need to create our mocks. Let’s create a mock that just remembers if it’s being called. A mock for DAO1 could look like the following:

```java
public class MockDAO1 implements DAO1 {

    private boolean isCalled;

    public void foo(SomeObject someObject) {
        isCalled = true;
    }

    public boolean isCalled() {
        return isCalled;
    }
}
```

The mock for DAO2 will look the same. Now we’re ready to do our test:

```java
MockDAO1 dao1 = new MockDAO1();
MockDAO2 dao2 = new MockDAO2();

ServiceFacade serviceFacade = new ServiceFacade();
serviceFacade.setDAO1(dao1);
serviceFacade.setDAO2(dao2);

serviceFacade.doStuff(new SomeObject());

assertTrue(dao1.isCalled());
assertTrue(dao2.isCalled());
```

After we run this code, the test passes and we’re happy. The code works! YES! 😀

Taking a step back; what did we *really* test? Well, uhm, we tested that Java was able to call two methods on two objects. Great, it’s able to do that.

What we really wanted to test, however, was whether the data actually ended up in our DB and whether both DAOs joined the transaction, i.e. whether the effects of DAO1 are correctly undone (rolled back) when DAO2 throws. In our test with mock objects, we could build a mock DAO2 that throws an exception, but this will surely not make DAO1#isCalled false again.

Now we could try to build some mock transaction manager, perhaps store it in TLS (thread-local storage), then make our mock objects “mock transactional” objects, and then we could… but who are we kidding here? What would we *actually* be testing?

Aren’t we blindly following the rules of the unit test here, without giving a second thought to why we are actually doing it and what benefit it brings us? Isn’t this a bit like putting auto-generated comments above our methods, just to make the code analyzer happy?

Maybe those NoMock guys got me to rethink my strategy…