A few years ago, I was asked to assist on a project where the client was replacing a highly customized legacy system. As part of the discovery process, we were looking at the integrations with the other systems—internal, third-party administrators, and so on—currently in place. One integration in particular seemed unnecessarily convoluted: it received a file from one system, validated all of the data, and generated workflows to various roles in the event a particular transaction was deemed invalid. The transaction would stay in the “staging area” until corrected, at which point it would be loaded to the receiving system. I asked how many times these workflows had been generated—six or eight times was the reply. Per week, or per day, I asked? Six or eight times to date, throughout the life of the system. Not exactly cost-effective, I thought.
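The integration described above amounts to a validate-and-stage pipeline. As a minimal sketch (the record fields, the validation rule, and all names here are assumptions for illustration, not details from the actual system):

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    txn_id: str
    amount: float

@dataclass
class StagingArea:
    pending: dict = field(default_factory=dict)  # invalid transactions awaiting correction
    loaded: list = field(default_factory=list)   # transactions loaded to the receiving system
    workflows_generated: int = 0                 # correction workflows routed to various roles

    @staticmethod
    def is_valid(txn: Transaction) -> bool:
        # Assumed rule for illustration; the real system validated far more.
        return txn.amount > 0

    def receive(self, txn: Transaction) -> None:
        if self.is_valid(txn):
            self.loaded.append(txn)            # valid: load straight through
        else:
            self.pending[txn.txn_id] = txn     # invalid: hold in staging
            self.workflows_generated += 1      # and generate a correction workflow

    def correct(self, txn_id: str, new_amount: float) -> None:
        # Once corrected, the transaction leaves staging and is loaded.
        txn = self.pending.pop(txn_id)
        txn.amount = new_amount
        self.loaded.append(txn)
```

The machinery is straightforward to build; the point of the anecdote is that its cost was never weighed against how rarely an invalid transaction actually arrived.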

From Design to Implementation to Deterioration

One of the things I’ve found over the years is that all designs begin with requirements, but they end with constraints. During a development or implementation project, these constraints might arise from cost and schedule targets, as well as operational and compliance needs. The project team takes various shortcuts, or they accept a certain inherent error rate, or they take some other action (or inaction) that creates what Ward Cunningham refers to as “technical debt.”

Of course, once a system or program goes into production, the maintenance cycle begins, and Manny Lehman’s law, coined in 1980, becomes operative (paraphrasing): “As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.” Maintenance programmers take the already-flawed work of the implementation team and insert additional flaws in a series of attempts to improve it.

Why We Can’t Backtrack to the Origin

Given the system deterioration described above, we should acknowledge that trying to reverse-engineer requirements from a body of code is essentially a fool’s errand. The combined load of technical debt, “nice to have” features added by the maintenance team, and vestigial code from obsolete requirements acts to obscure the original requirements, which will have almost certainly changed since the original program was designed. And yet, we ask to see the code, or at least a file containing the output from a program, as though it will give us what we need more accurately and unambiguously than an interview with the subject matter experts. Apparently, we’re going to replace the old with an essentially identical process, using more advanced technology.

Before replacing a buggy whip with an electric horse prod, let’s take the time to understand what’s different, in both the endpoint systems and the compliance requirements, before we design any interfaces. Let’s review the “as-is” with the SMEs, and then ask what should be changed. Let’s try to understand operational rhythms, fault tolerance, and how errors will be handled in production. Let’s see what will be different in the data sources and sinks once one of the endpoints has been replaced. And let’s try not to create a new set of maintenance problems.

Getting to Good Design

There is more to a system design than requirements—there are constraints, and compromises, and an operating environment, and a support model. Good designers take all of these into account, and their designs reflect that. And good implementation teams recognize that the last implementation team also faced requirements, constraints, compromises, and operating and support models, but not necessarily the same ones.

For more brilliant insights, check out Dave’s blog: The Practicing IT Project Manager

Are you involved in a data conversion project? Then check out Dave’s indispensable book: The Data Conversion Cycle