Legacy projects

In my career I’ve worked on my fair share of legacy projects. In fact, I’d say that almost all of the 50+ production projects I’ve worked on have been legacy projects to some degree.

My experience is undoubtedly not unique. I would bet that the majority of most developers’ work has been on legacy projects. Given Sturgeon’s law, it’s probably safe to assume that most code in the world is legacy code.

The challenges of working on a legacy project

Maintaining a legacy project is often a bummer. Some or all of the following are usually true.

Changes take much longer than they should, perhaps by a factor of 10 or more.

Deployments are scary. They’re preceded by stress and followed by firefighting.

Due to the fragility of the system and frequent appearance of bugs and outages, trust in the development team is low.

The development team is always “behind” and under pressure to cut corners to meet deadlines.

Stakeholders are mostly not thrilled.

Due to these bad conditions, developer turnover is high, meaning the team has to spend time training new developers, leaving less time for development.

How do you reverse these challenges? The various problems associated with legacy projects have different causes which are not all solved by the same solutions. No single solution is a panacea, although some solutions go further than others.

One thing I’ve found to have a pretty good ROI is automated testing.

The ways automated tests can help legacy projects

Changes take less time

Why do changes take longer in legacy projects than in cleanly-coded ones?

The first reason is that in order to fix a bug or add a feature, you often have to have an understanding of the area of code you’re changing before you can make the change. If you don’t understand the existing code, you might not even know where to try to slot in your new code.

The second reason is that while any change in any codebase carries a certain amount of risk, the risk in legacy projects is amplified. A legacy project’s features are often a delicate Jenga tower, ready to come crashing down at the slightest touch.

Tests can address both of these problems.

Tests can aid a developer in understanding a piece of code in multiple ways. Early on, before any changes take place, characterization tests can reveal the previously mysterious behavior of a piece of code.

The idea with a characterization test is that you write a test for a certain method that you know will fail (e.g. `expect(customer.name).to eq('asdf')`), run the test, then change the test to expect the real value you saw (e.g. `expect(customer.name).to eq('Gern Blanston')`). Through this “reverse TDD” method, a picture of the code’s behavior eventually becomes clear.
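As a sketch of this workflow (using a hypothetical `LegacyPricing` class, since the `customer` example above isn’t shown in full):

```ruby
# Hypothetical legacy class whose behavior we don't yet understand.
class LegacyPricing
  def discount(total)
    total >= 100 ? total * 0.9 : total
  end
end

# Step 1: assert a value we know is wrong (the "asdf" step) and run it.
# The failure message reveals the actual behavior, e.g. "got 108.0".
# Step 2: change the assertion to the observed value, pinning the
# behavior down as a characterization test:
actual = LegacyPricing.new.discount(120)
raise "expected 108.0, got #{actual}" unless actual == 108.0
```

The test doesn’t assert that the behavior is *correct*, only that it stays the same while we work.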

Later on, once a robust suite of tests has been built up around an area of functionality, tests can aid in the understandability of the code by enabling refactoring.

If my code is well-covered by tests, I can feel free to give descriptive names to obscurely-named variables, break large methods into smaller ones, break large classes into smaller ones, and whatever other changes I want. The knowledge that my tests protect against regressions can allow me to make changes with a sufficient level of confidence that my changes probably aren’t breaking anything.
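As a small sketch of what that looks like in practice (the `total_cents` method and its data are hypothetical, not from the original code):

```ruby
# After refactoring: an obscurely-named one-liner like
#   def t(i); i.map { |x| x[:p] * x[:q] }.sum; end
# has been given descriptive names. The test below passed before the
# rename and still passes after it, which is what gives us confidence
# the refactoring didn't change behavior.
def total_cents(line_items)
  line_items.map { |item| item[:price_cents] * item[:quantity] }.sum
end

items = [
  { price_cents: 500, quantity: 2 },
  { price_cents: 250, quantity: 4 },
]

raise unless total_cents(items) == 2000
```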

Of course, the scope of my refactorings should be proportionate to my level of confidence that my changes are safe. Grand, sweeping refactorings will pretty much always come back to bite me.

Deployments become less scary

Legacy projects are often figuratively and literally more expensive to deploy than well-done projects.

The story often goes like this: management observes that each time a deployment happens, bugs are introduced. Bugs frustrate users, erode users’ trust, incur support cost, and of course incur additional development cost.

Because the organization wants to minimize these problematic and expensive bugs, an extensive manual pre-deployment QA process is put in place, which of course costs money. Often there will be a “feature freeze” leading up to the deployment during the QA period, which of course incurs the opportunity cost of discontinuous developer productivity.

And because releases are so risky and expensive, management wants to do the risky and expensive thing less frequently. But this unfortunately has the opposite of the intended effect.

Because each deployment is bigger, there’s more stuff to go wrong, a greater “risk surface area”. It also takes more time and effort to find the root cause of any issue the deployment introduced because a) there’s more stuff to sort through when searching for the root cause and b) the root cause was potentially introduced a long time ago and so may well not be fresh in the developers’ minds.

Tests can help make deployments less risky in two ways. For one, tests tend to:

Help protect against regressions

Help protect against new bugs

Help improve code quality by enabling refactoring

All these things make the application less fragile, decreasing the likelihood that any particular deployment will introduce bugs.

Second, the presence of automated tests enables the practice of continuous deployment.

If an application can be tested to a reasonable degree of certainty by running an automated test suite that takes just a few minutes to run, then each developer can potentially deploy many times per day. Spread across the team, this might mean dozens or hundreds of deployments per day.

Contrast this with the alternative. If there is no automated test suite and each deployment requires the QA team to go through a manual QA process first, then that QA process is a bottleneck that prevents deployments from happening very frequently. Even if the QA team were technically capable of running through the whole manual test process daily to allow daily deployments, they probably wouldn’t want to spend all of their time on pre-deployment testing. And even then you’d still only get one deployment per day instead of the dozens or hundreds that continuous deployment would allow.

The obstacles to putting tests on a legacy project

Let’s say you’ve read the above and you say, “Great. I’m sold on adding tests to my testless legacy project. How do I start?”

Unfortunately it’s not usually simple or easy to add tests to a currently-untested legacy project. There are two main obstacles.

Missing test infrastructure

Writing tests requires a test infrastructure. In order to write a test, I need to be able to get the part of the application I’m testing (often called the “system under test” or SUT) into a state that allows me to exercise the code and make the assertions that tell me whether the code is working.

Getting a test framework and related testing tools installed and configured can be non-trivial. I don’t know of a way to make this easier; I just think it’s a good thing to be aware of.

I’ve worked in teams where the boss says, “Let’s make sure we write tests for all new features.” This seems like a sensible rule on the surface but unfortunately there’s no way this rule can be followed if there’s not already a testing infrastructure and testing strategy in place.

The development team and leadership have to agree on what testing tools they’re going to use, which itself can be a very non-trivial step. All the right people have to have buy-in on the testing tooling and testing strategy or else the team will experience a lot of drag as they try to add tests to the application. You have to pull the car onto the road before you hit the gas. If you hit the gas while you’re still in the garage, you’re just going to crash into the wall. So make sure to think about the big picture and get the right team members on board before you try to add tests to your legacy application.

Dependencies and tight coupling

Sometimes it’s easy to get the SUT into the necessary state. More often in legacy projects, it’s very difficult. Sometimes it’s practically impossible. The reason for the difficulty is often tight coupling between dependencies.

Here’s what I mean by tight coupling. Let’s say we want to test class A which depends on classes B and C, which in turn depend on classes D, E, F and G. This means we have to instantiate seven objects (instances of A, B, C, D, E, F and G) just to test class A. If the dependency chain is long enough, the test may be so painful to write that the developer will decide in frustration to toss his laptop off a bridge and begin a new life in subsistence farming rather than write the test.
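A hypothetical sketch of the chain just described, where each class builds its own collaborators:

```ruby
class D; end
class E; end
class F; end
class G; end

class B
  def initialize
    @d = D.new # B hard-codes its dependencies...
    @e = E.new
  end
end

class C
  def initialize
    @f = F.new
    @g = G.new
  end
end

class A
  def initialize
    @b = B.new # ...and so does A, so any test of A instantiates all
    @c = C.new # seven objects, with no way to substitute simpler fakes.
  end
end

a = A.new # one line of test setup, seven objects behind it
```

Here the chain is shallow and the constructors are trivial, so `A.new` still works. In a real legacy project, D through G might each need a database row, a network connection, or a config file before they can be instantiated, and that’s where the pain comes from.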

How to overcome the obstacles to adding tests to legacy code

Missing test infrastructure

If a project has no tests at all, you have two jobs before you:

Set up the testing infrastructure to make it possible to write tests

Write some tests

Both of these things are hard. To the extent possible, I would want to “play on easy mode” when I’m first starting to put test coverage on the project.

So rather than trying to identify the most mission-critical parts of the application and putting tests on those, I would ask myself, “What would be easiest to test?” and write some tests for that area of the code.

For example, maybe the application I’m working on has a checkout page and a contact page. The checkout page is obviously more mission-critical to the business than the contact page. If the checkout page breaks there are of course immediate financial and other business repercussions. If the contact page breaks the consequences might be negligible. So a set of tests covering the checkout page would clearly be more valuable than a set of tests covering the contact page.

However, the business value of the set of tests I’m considering writing isn’t the only factor to take into consideration. If a set of tests for the checkout page would be highly valuable but the setup and infrastructure work necessary to get those tests into place is sufficiently time-consuming that the team is too daunted by the work to ever even get started, then that area of the code is probably not the best candidate for the application’s first tests.

By adding some trivial tests to a trivial piece of functionality, a beachhead can be established that makes the addition of later tests that much easier. It may be that in order to add a test for the contact page, it takes 15 minutes of work to write the test and four hours of work to put the testing infrastructure in place. Now, when I finally do turn my attention to writing tests for the checkout page, much of the plumbing work has already been done, and now my job is that much easier.

Dependencies and tight coupling

If a project was developed without testing in mind, the code will often involve a lot of tight coupling (objects that depend closely on other objects) which makes test setup difficult.

The solution to this problem is simple in concept – just change the tightly coupled objects to loosely coupled ones – but the execution of this solution is often not simple or easy.

The challenge with legacy projects is that you often don’t want to touch any of this mysterious code before it has some test coverage, but it’s impossible to add tests without touching the code first, so it’s a chicken-egg problem. (Credit goes to Michael Feathers’ Working Effectively with Legacy Code for pointing out this chicken-egg problem.)

So how can this chicken-egg problem be overcome? One technique that can be applied is the Sprout Method technique, described in WEWLC (Martin Fowler’s Refactoring covers similar extraction refactorings).

Here’s an example of some obscure code that uses tight coupling:

```ruby
require 'open-uri'

file = open('http://www.gutenberg.org/files/11/11-0.txt')
text = file.read

text.gsub!(/-/, ' ')
words = text.split

cwords = []
words.each do |w|
  w.gsub!(/[,\?\.‘’“”\:;!\(\)]/, '')
  cwords << w.downcase
end
words = cwords
words.sort!

whash = {}
words.each do |w|
  if whash[w].nil?
    whash[w] = 0
  end
  whash[w] += 1
end

whash = whash.sort_by { |k, v| v }.to_h

swords = words.sort_by do |el|
  el.length
end
lword = swords.last

whash.each do |k, v|
  puts "#{k.ljust(lword.length + 3 - v.to_s.length, '.')}#{v}"
end
```

If you looked at this code, I would forgive you for not understanding it at a glance.

One thing that makes this code problematic to test is that it’s tightly coupled to two dependencies: 1) the content of the file at http://www.gutenberg.org/files/11/11-0.txt and 2) stdout (the puts on the second-to-last line).

If I want to separate part of this code from its dependencies, maybe I could grab this chunk of the code:

```ruby
text.gsub!(/-/, ' ')
words = text.split

cwords = []
words.each do |w|
  w.gsub!(/[,\?\.‘’“”\:;!\(\)]/, '')
  cwords << w.downcase
end
words = cwords
words.sort!

whash = {}
words.each do |w|
  if whash[w].nil?
    whash[w] = 0
  end
  whash[w] += 1
end

whash = whash.sort_by { |k, v| v }.to_h

swords = words.sort_by do |el|
  el.length
end
lword = swords.last
```

I can now take these lines and put them in their own new method:

```ruby
require 'open-uri'

def count_words(text)
  text.gsub!(/-/, ' ')
  words = text.split

  cwords = []
  words.each do |w|
    w.gsub!(/[,\?\.‘’“”\:;!\(\)]/, '')
    cwords << w.downcase
  end
  words = cwords
  words.sort!

  whash = {}
  words.each do |w|
    if whash[w].nil?
      whash[w] = 0
    end
    whash[w] += 1
  end

  whash = whash.sort_by { |k, v| v }.to_h

  swords = words.sort_by do |el|
    el.length
  end
  lword = swords.last

  [whash, lword]
end

file = open('http://www.gutenberg.org/files/11/11-0.txt')
whash, lword = count_words(file.read)

whash.each do |k, v|
  puts "#{k.ljust(lword.length + 3 - v.to_s.length, '.')}#{v}"
end
```

I might still not understand the code, but at least now I can start to put tests on it. Whereas before the program would only operate on the contents of http://www.gutenberg.org/files/11/11-0.txt and output the results to the screen, now I can feed in whatever content I like and have the results available in the return value of the method. (If you’d like to see another example of the Sprout Method technique, I wrote a post about it here.)

In addition to the Sprout Method technique I’ve also brought another concept into the picture which I’ll describe now.

Dependency injection

Dependency injection (DI) is a fancy term for a simple concept: passing an object’s dependencies in from the outside (for example, as constructor arguments stored in instance variables, or as method arguments) rather than having the object create them itself.

I applied DI in my above example when I defined a `count_words` method that takes a `text` argument. Instead of the method being responsible for knowing about the content it parses, the method will now happily parse whatever string we give it, not caring whether it comes from `open('http://www.gutenberg.org/files/11/11-0.txt')` or a hard-coded string like `'please please please let me get what i want'`.

This gives us a new capability: now, instead of being confined to testing the content of Alice’s Adventures in Wonderland, we can write a test that feeds in the extremely plain piece of content 'please please please let me get what i want' and assert that the resulting count of 'please' is 3. We can of course feed in other pieces of content, like 'hello-hello hello, hello' to ensure the proper behavior under other conditions (in this case, the condition when “weird” characters are present).
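That test might look like the following sketch (the `count_words` definition is repeated here so the example runs on its own; in the real project it would be loaded from the file above):

```ruby
def count_words(text)
  text.gsub!(/-/, ' ')
  words = text.split

  cwords = []
  words.each do |w|
    w.gsub!(/[,\?\.‘’“”\:;!\(\)]/, '')
    cwords << w.downcase
  end
  words = cwords
  words.sort!

  whash = {}
  words.each do |w|
    whash[w] = 0 if whash[w].nil?
    whash[w] += 1
  end

  whash = whash.sort_by { |k, v| v }.to_h

  lword = words.sort_by { |el| el.length }.last
  [whash, lword]
end

# Plain content: 'please' should be counted 3 times.
whash, _lword = count_words('please please please let me get what i want')
raise "expected 3, got #{whash['please']}" unless whash['please'] == 3

# Content with "weird" characters: hyphens and commas are stripped,
# so all four tokens should normalize to 'hello'.
whash, _lword = count_words('hello-hello hello, hello')
raise "expected 4, got #{whash['hello']}" unless whash['hello'] == 4
```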

If you’d like to see an object-oriented example of applying dependency injection to our original piece of legacy code, here’s how I might do that:

```ruby
require 'open-uri'

class Document
  def initialize(text)
    @text = text
  end

  def count_words
    @text.gsub!(/-/, ' ')
    words = @text.split

    cwords = []
    words.each do |w|
      w.gsub!(/[,\?\.‘’“”\:;!\(\)]/, '')
      cwords << w.downcase
    end
    words = cwords
    words.sort!

    whash = {}
    words.each do |w|
      if whash[w].nil?
        whash[w] = 0
      end
      whash[w] += 1
    end

    whash = whash.sort_by { |k, v| v }.to_h

    swords = words.sort_by do |el|
      el.length
    end
    lword = swords.last

    [whash, lword]
  end
end

file = open('http://www.gutenberg.org/files/11/11-0.txt')
document = Document.new(file.read)
whash, lword = document.count_words

whash.each do |k, v|
  puts "#{k.ljust(lword.length + 3 - v.to_s.length, '.')}#{v}"
end
```

Conclusion

Hopefully this post has armed you with a few useful techniques to help you get your legacy project under control and start turning things around.