We all know that unit tests are important. They're the gold standard of automated testing, being the cheapest and fastest to create and run. And yet testers still find bugs. Even with 100% unit test coverage, there may be some arrangement of data that you didn't consider that could result in a bug.

Wouldn't it be great if you could find out which additional unit tests you need to bridge the gap between unit and manual testing? What if you could figure out what unit tests to write based on how the testers used the system? And is there a way to do that automatically, so that it happens in real time?

Luckily, the answer to all those questions is yes. Here's one approach.

The theory

Testers exercise the system at a higher level and see interactions that developers working in specific areas often miss. Testers, in turn, rarely have the developers' view into the inner workings of the code. The two circles of testing overlap, but aren't identical.

When manual testing finds a bug, it means there's a unit test missing. Usually, a unit test gets written after the bug's been found and fixed. But it can take a lot of time to communicate these findings to the developers, and for the developers to translate them into the appropriate unit tests.

For a system test, the classes, modules, interfaces, and methods in the code still get exercised, just as they would with unit tests. But hundreds, thousands, or even millions of these unit-testable points may be hit during a single system test.

Let's look at this in slow motion: When you perform a system test, there's going to be an initial touch point somewhere, where the code first gets used. Maybe it's a webpage that calls a web service upon clicking a button.

Wherever that touch point is, from there you will have a constant entry and exit of methods until the test is completed.

That means that any system test is really a collection of possible unit tests. But it would be tedious to trace down the code and then write all the unit tests that represent a system test.

The trick is to create them as part of doing the manual testing. If that's possible, then you'll have a lot more unit tests to add to the arsenal, and long-running regression tests can be reduced to an equivalent set of fast-running unit tests.

Code is just text, folks

As complex as code can be, source code is just text. It's formatted in a special way, but it's not that special. Humans can look at a piece of code and tell where a class or a method starts. And if we can do that and describe it to each other, then we can describe it to a computer. When you treat code as if it's text, that opens code up to being processed with regular expressions.
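To make this concrete, here's a minimal sketch in Ruby of pulling the interesting pieces out of source text with regular expressions. The patterns are deliberately simplistic and assume tidy, Ruby-style source; real code would need more robust parsing.

```ruby
# A small source sample, treated purely as text.
source = <<~CODE
  class Arithmetic
    def add(x, y)
      return x + y
    end
  end
CODE

# Naive patterns: grab the first class name, method name, and parameter list.
class_name  = source[/class\s+(\w+)/, 1]
method_name = source[/def\s+(\w+)/, 1]
params      = source[/def\s+\w+\(([^)]*)\)/, 1]

puts class_name    # Arithmetic
puts method_name   # add
puts params        # x, y
```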

Let's say we have a class that looks like this:

class Arithmetic
  def add(x, y)
    return x+y
  end
end

If this method were hit during a system test, you'd need to know four things to have enough information to build a unit test:

The name of the class

The name of the method

The parameters given to the method

The return value coming out of the method

Using a little regular expression magic, you can extract class names, methods, parameters, and return values, and then modify the code in place to look like this:

class Arithmetic
  def add(x, y)
    log.append("assert(Arithmetic::add(#{x}, #{y}) == #{x+y})")
    return x+y
  end
end

Although this looks like Ruby, it's actually pseudocode. The point is that you can have methods write out unit tests based on the name, location, parameters, and output. Parameters such as x and y should be replaced with the actual values before being written out, so that the unit test looks like this:

assert(Arithmetic::add(3, 5) == 8)
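Here's one way the instrumented class might look as runnable Ruby. The in-memory log (`$unit_test_log`) stands in for whatever log file or sink you'd actually write to; it's an assumption for the sketch.

```ruby
# Hypothetical log sink; in practice this would be a file or a queue.
$unit_test_log = []

class Arithmetic
  def self.add(x, y)
    result = x + y
    # Record a ready-to-run assertion with the actual values substituted in.
    $unit_test_log << "assert(Arithmetic::add(#{x}, #{y}) == #{result})"
    result
  end
end

Arithmetic.add(3, 5)
puts $unit_test_log.last
# prints: assert(Arithmetic::add(3, 5) == 8)
```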

How it all works

If a system test runs and passes, then a batch of unit tests is available. That batch is what you'll run going forward, unless there are significant changes. And that batch of unit tests will run much faster than the system test that produced them.

On the other hand, if the system test fails, then that batch of unit tests becomes a great way to help developers pinpoint where the problem is. Whichever units have the problem won't pass, and you'll know which section of code caused the system test to fail.
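A sketch of what replaying such a batch might look like: each generated assertion is evaluated, and the ones that fail point at the code region behind the system-test failure. The expression/expectation pairs here are illustrative.

```ruby
class Arithmetic
  def self.add(x, y)
    x + y
  end
end

# Generated batch: [expression, expected value] pairs.
batch = [
  ["Arithmetic.add(3, 5)", 8],
  ["Arithmetic.add(2, 2)", 5]  # deliberately wrong expectation
]

# Keep only the pairs whose evaluated result doesn't match.
failures = batch.reject { |expr, expected| eval(expr) == expected }
failures.each { |expr, expected| puts "FAILED: #{expr} != #{expected}" }
# prints: FAILED: Arithmetic.add(2, 2) != 5
```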

The result: You end up with a Gatling gun, but it shoots out unit tests instead of bullets.

Practical considerations

While experimenting with this, keep these points in mind:

Many tests will be repeated. You might want to write a program that combs through the generated tests and removes duplicates before running them all. Think: cat | sort | uniq in Linux.

Unit tests can be written out to separate files based on method name and class name, which is great for focused testing. Run all unit tests for a method that was just changed, then for the class that encloses it, and then for all remaining tests.
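The deduplication pass can be as simple as the in-code equivalent of `sort | uniq`. A sketch, with a hypothetical list of generated assertions:

```ruby
# Generated assertions, including repeats from a long system test.
generated = [
  "assert(Arithmetic::add(3, 5) == 8)",
  "assert(Arithmetic::add(3, 5) == 8)",
  "assert(Arithmetic::add(2, 2) == 4)"
]

# Sort, then drop adjacent duplicates -- exactly what `sort | uniq` does.
unique_tests = generated.sort.uniq
puts unique_tests.size  # 2
```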

Multithreading could be a challenge if the log files are receiving data from multiple sources.

Now get started

If you're ready to try this, don't be afraid to jump in and start. It's just text. You won't break anything that can't be reverted, and you can try with a small area before committing to changing the whole code base. You, your team, and your company will find the results very rewarding.
