This post has been translated into Russian by HTR. Thank you!

A while back I wrote that Robert Martin was ruining software by being too good at programming. That was supposed to be a joke. Since then he’s done his damnedest to actually ruin software by telling people they’re doing it wrong. His most recent response, where he yells at software correctness, was the breaking point for me, so I’m going to go ahead and say what many of us have been thinking:

Uncle Bob gives terrible advice. Following it will make your code worse.

He begins Tools are not the Answer by listing some of the “new” tools that are not “the answer”: Light Table, Model Driven Engineering, and TLA+. Now, I’m pretty biased here, what with writing a TLA+ manual and all. But I agree with what (I thought) was the core critique: there is no silver bullet. TLA+ has shown some astounding successes at Amazon and Microsoft, but it only verifies specs, not code. While it’s incredible for designing systems, you should combine it with other correctness techniques, like type systems and tests. A pretty good argument.

But it turns out that argument was only in my head, because he follows it with this:

The solution to the software apocalypse is not more tools. The solution is better programming discipline.

Just what is “discipline”, anyway? Uncle Bob says that means not “doing a half-assed job.” Uncle Bob is saying the solution for people writing bad code… is to not write bad code. Our programs would be perfect if it weren’t for the programmers!

One of the core assumptions of modern systems engineering is that there’s a constant flow of defects: that people make mistakes. You can’t rely on people to not fuck up on their own: after all, the US still has 30,000 auto deaths a year. Rather, the best way to reduce the volume and severity of mistakes is to adjust the system itself. Either make them harder to do, make them easier to catch, or make them cause less damage when they do happen. Don’t just blame the drivers, give them safe roads! Give them seatbelts!

One way of doing this is to add a bureaucratic process, such as code review. If your code doesn’t conform to requirements (it lacks tests, you named your variables x and x2, etc.), the code will be rejected. That, on a systems level, reduces bugs. When we adopt mechanical tools, like tests and IDEs, all we are doing is automating those processes. We use the way we create code, and the kind of code we create, to check our work. This is the vast field of software correctness, and it spans everything from type systems to language design.

Uncle Bob is okay with software correctness: after all, he uses the phrase “unit testing” like a smurf uses “smurf”. But what about every other correctness technique?

His answer, in short: any correctness technique that isn’t unit tests can be dismissed. But unit tests alone don’t give you much confidence in your code. Humans make mistakes, and we can’t guarantee that the mistakes we make are the nicely unit-testable ones. For example, here’s a ceiling function I wrote. Quick, what numbers would you test it with?

```javascript
function ceiling(num) {
  if (num == (num | 0)) {
    return num;
  }
  return Math.round(num + 0.5);
}
```

Did you try -1e-100? You’d have seen that ceiling(-1e-100) == 1 when it should be 0. That’s because of how floating point works: 0.5 - 1e-100 == 0.5. I’d be shocked if many people remembered to check that, if they even knew that floating point has quirks at all. But a property-based test catches it easily. Okay, function two:

```javascript
function clamp(min, x, max) {
  return Math.max(Math.min(max, x), min);
}
```

The function is perfectly fine. The bug isn’t in the function at all! It’s that, somewhere in our 50 kLoC codebase, there is a single path that eventually calls clamp with a null value. And JavaScript won’t even complain: Math.min(10, null) coerces the null to 0, so the bad value sails through silently. Are you going to test every possible path? Is that really superior to using a type system? Okay, last one:

```javascript
function append_to_body(type, text) {
  var d = document.createElement(type);
  d.innerHTML = text;
  document.body.appendChild(d);
}
```

The function works fine, except that you’ve now opened up an XSS vector. That’s why we have static analysis. These aren’t just toy examples. These are topics with plenty of research, plenty of development, and plenty of history. We’ve learned what they’re good for and what their limitations are. We use these tools because they work. Exactly the same reason we have unit tests and TDD.
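To make the first of those concrete: the property-based test that catches the ceiling bug can be hand-rolled in a few lines. This is just a sketch — a real project would use a library like fast-check, and randomDouble is a made-up generator for illustration. The key idea is the same: state a property (here, that ceiling should agree with the built-in Math.ceil for any finite input) and throw generated inputs at it, deliberately including the tiny magnitudes where floating-point quirks hide.

```javascript
// Property-based test sketch: ceiling(x) should agree with Math.ceil(x).
// (Hand-rolled for illustration; a real project would use a library
// such as fast-check.)
function ceiling(num) {
  if (num == (num | 0)) {
    return num;
  }
  return Math.round(num + 0.5);
}

// Illustrative generator: mixes ordinary magnitudes with tiny ones near
// zero, where floating-point rounding quirks tend to hide.
function randomDouble() {
  const sign = Math.random() < 0.5 ? -1 : 1;
  const exponent = Math.floor(Math.random() * 200) - 100; // 1e-100 .. 1e99
  return sign * Math.random() * Math.pow(10, exponent);
}

let counterexample = null;
for (let i = 0; i < 10000; i++) {
  const x = randomDouble();
  if (ceiling(x) !== Math.ceil(x)) {
    counterexample = x;
    break;
  }
}
console.log(counterexample === null
  ? "property held for 10000 cases"
  : "counterexample found: " + counterexample);
```

With ten thousand cases, tiny negative inputs come up almost immediately, so the test reports a counterexample in the same family as ceiling(-1e-100) — without anyone having to remember in advance that floating point is weird.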

But unit tests are not enough. Type systems are not enough. Contracts are not enough, formal specs are not enough, code review isn’t enough, nothing is enough. We have to use everything we have to have any hope of writing correct code, because there’s only one way a program can be right and infinite ways it can be wrong, and we can’t assume that any tool we use will prevent more than a narrow slice of all those wrong ways.

That’s what makes Bob’s advice so toxic. By being so dismissive of everything but unit tests, he’s actively discouraging us from using our whole range of techniques. He demands we run blind and blames us for tripping.

Uncle Bob can say whatever he likes. We don’t have to listen to him. He’s welcome to keep shouting that tools won’t help, better languages won’t help, better process won’t help, the only thing that’ll help is lots and lots of unit tests. Meanwhile, we’ll continue to struggle with our bugs, curse our model checkers, and make software engineering just a little bit better. We don’t believe in silver bullets.