“It is a key requirement that the system be able to cope with future changes”

Heard this before?

“We don’t want a system that will be out of date within a year or so, forcing us to spend heaps of money just maintaining it and keeping it up with current demands”

I’ve certainly heard this enough. What the customers want is software that will adjust itself automatically, or at least easily, to new requirements. Whatever those requirements might be.

Sure. That’s reasonable. It’s reasonable to want that. You know what I want? I want a cordless garden hose. How awesome would that be?

Awesome or not, some things are impossible. Cordless garden hoses are impossible. So is software that changes without being changed. If you need it to adapt to future requirements, then guess what: that involves adaptation – AKA change.

“Not necessarily. Not everywhere,” some might say: “Look what I’ve done here. If THIS part of the software ever needs to change, all you need to do is alter x, y and z in this XML file, and – voilà!” Oh, really? Explain to me how altering an XML file is not a change. A change in an XML file is a change. A change in a database table is a change. A change in a properties file is a change. Moving changes outside the code itself in NO WAY stops you from having to make changes. It does, however, create extra complexity in the code, while limiting the types of changes you can make. It also makes it much harder to track the changes: what was changed, who changed it, and when. So if you like the idea of allowing random changes in your production environment, with limited accountability or ability to keep track of what’s going on – by all means, move all your settings to your database or properties files. Have fun.

This idea that it is possible to create “future proof” code is harmful on so many levels. We’re solving the wrong problem. When a kid complains that he doesn’t want to brush his teeth anymore, the solution isn’t to just brush them for 5 hours straight one night, and expect to never have to brush them again. You can stop spending money on the software after it has been built, or you can have software that stays relevant. Pick one. Making a new, flexible, ultra-modern system for developing film would not have saved Kodak. Configurable horse carriages in carbon fiber would not have saved the horse carriage industry. We need to stop kidding ourselves that we can predict the future. We can’t. We can’t think of everything up front and cover all potential future use cases. Nor can we make code so flexible that it will work well in any given situation. It’s not for lack of trying, but so far SAP – AKA the Germans’ revenge for WWII – is our most successful attempt. Sure, it’s been a great commercial success for its makers. But it has been a complete and utter disaster for pretty much all who have to use, maintain and pay for it.

If you want a future proof system, you don’t want immortal and flexible code. You don’t want the T1000 terminator. You want South Park’s Kenny. You need code that’s easy and fun to kill. You need to get used to killing it, often, so you can replace it with whatever you end up needing.

You don’t want configurability, you want continuous delivery. If you’re always deploying new versions of your application, you don’t need the software to be flexible. Your process is. Software that just does one thing unapologetically is much easier to reason about and get right. It is easier to test. It is easier to change. It is also easier for new people coming into the team to understand. “Hard coding” implementations or even values is not a problem if you have a development process where your code is maintained and released regularly. In fact, it can lead to clearer code that’s easier to work with.
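To make the “hard coding” point concrete, here is a minimal sketch in Java (the class name and the constant are made up for illustration): the value lives in the code, so changing it is an ordinary one-line commit that goes through review and version control, instead of an untracked edit to a properties file in production.

```java
// A hypothetical rules class. The hard-coded value is one grep away,
// and every change to it has an author, a date and a code review.
class ParentalLeaveRules {
    // Changing this when the rules change IS a change -- but a traceable one.
    static final int PAID_WEEKS = 52;

    static int paidWeeks() {
        return PAID_WEEKS;
    }
}
```

Compare this with reading the same number from a properties file at runtime: both require a change when the rules change, but only one of them is versioned, reviewed and trivial to find.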

Example: You’re making a web application that handles applications for some kind of public benefit. Say parental leave (where I’m from, if you’re employed, you’re entitled to 12 months of paid parental leave, but you have to apply to get it). Your app will have a page with a form where you fill in the expected due date, your current employer and so on. Then a tester, or disgruntled user, reports that “when I click ‘Submit’, I’m redirected back to the same form, instead of getting a confirmation that the application was submitted”.

If you’re working on that project as a developer, what you want is to figure out why the system is behaving in this way. First, you want some way to easily locate the application page, and on that page, you should easily find the “Submit” button. If the page and/or button is automatically generated from parameters found in the database, you’re going to spend forever just trying to locate where that button even is. No maintainer will ever thank you for this. Once you’ve located the button, you need to find out what it does. This should not be hard to do. It should not involve searching through configuration files. It should have an easy-to-find onClick event, which either contains, or sends you to, the implementation. If you want your code to be easy to work with, you want to keep the number of steps required to find the implementation as low as possible. (Within reason, of course. You don’t want business logic in the UI code, but you should aim for the implementation to be as close to the UI as possible. Markup -> EventHandler -> Business logic implementation.)
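The Markup -> EventHandler -> Business logic chain can be sketched like this. All class and method names here are hypothetical, invented for this example:

```java
// A made-up application form payload.
class Application {
    final String dueDate;
    final String employer;
    Application(String dueDate, String employer) {
        this.dueDate = dueDate;
        this.employer = employer;
    }
}

// The concrete business logic, called directly -- no registry,
// no config-file lookup, no reflection.
class ParentalLeaveService {
    String submitApplication(Application application) {
        // Validation and persistence would live here.
        return "CONFIRMED";
    }
}

// The event handler the Submit button's onClick points straight at.
class SubmitHandler {
    private final ParentalLeaveService service = new ParentalLeaveService();

    String onSubmitClicked(Application application) {
        String result = service.submitApplication(application);
        // A redirect back to the form instead of a confirmation page
        // would be immediately visible right here, two steps from the markup.
        return result.equals("CONFIRMED") ? "confirmation-page" : "application-form";
    }
}
```

With this wiring, the bug hunt from the example is a straight line: find the button, find the handler, find the service.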

Many people will implore you NOT to tie the EventHandler code to a particular implementation of the business logic. Because – “WHAT IF YOU NEED TO CHANGE THE IMPLEMENTATION SOME DAY?!?!!”. You need to call an abstract interface they say. You know what I’d do if the implementation needed to change? OK, are you sitting down? Here’s what I’d do: I’d open the file with the implementation. I’d open the file with its tests. Then (drum roll) I would change them! Studies show that you can actually change code after it has been written. Even classes that don’t implement interfaces. I know! Crazy! It works. You should try it.

I mean really, how would you do it differently if there was an interface there? If you needed to change something, you’d do EXACTLY THE SAME THING. You’d still have to locate the implementation and change it. The interface saves you no work whatsoever.

Someone smart once said that you should always “program to an interface”. But you don’t need to implement a Java or C# interface to have an interface. All classes (or other groupings of code) have an interface. Their method signatures and member variables are their interface. This interface should not be implementation specific. That’s the point. We’re not talking about Java interfaces. If we were, that would mean this advice could only apply in Java, C# or other programming languages with specific things called interfaces in them. When someone tells you to put your money where your mouth is, you don’t rush to the nearest cashpoint, put your lips over the cash dispenser and fill your mouth with money. You need to understand what the phrase actually means.

Your method names and variables should show the intent of what the class does, not how it is implemented. Your ParentalLeaveManager (or whatever) should have methods called

submitApplication(application)

not

submitApplicationJSONOverJMS(application, queueTopic, false, 42).
You should be able to understand what the function does without having to dive into or learn about the implementation details. You should be able to change the implementation without altering the method signature. That’s the point. But none of this means you HAVE TO make an empty, meaningless and annoying IParentalLeaveManager interface that your click-event can refer to. Typical enterprise projects are littered with these pointless interfaces with only one implementation. Have you seen FizzBuzz done Enterprise style? Absolutely hilarious! It is, sadly, not far from the reality out there. I swear, I once showed an enterprise developer this project and his response was “So? There’s nothing wrong with that”. True story!

Interfaces galore are not only pointless and really annoying when trying to find out how things are implemented. They also send counterproductive signals to the maintainers. Java interfaces are meant to be used when there are a bunch of implementations available, and your code wants to access them all in the same manner. Therefore, we all know that changing the method signatures of an interface is not something we choose to do lightly. Interfaces say “HANDS OFF – don’t change unless you REALLY know what you’re doing”. So while we often add interfaces to facilitate future changes, the effect is the opposite. The presence of interfaces stops the team from making necessary changes in the APIs of their business logic. Instead of altering the method signature of an interface, a maintainer with a bug to fix or a feature to add is more likely to work AROUND the interface, adding the new code before and after the call to it. As the system grows older, more and more business logic starts seeping out of the core business logic classes, making a complete mess of everything.
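To contrast the two styles side by side, here is a sketch in the spirit of the IParentalLeaveManager example (all names are made up): the “enterprise” version with a single-implementation interface, and the direct version where the class’s public method IS its interface.

```java
// The enterprise style this section argues against: an interface with
// exactly one implementation. Every signature change now means two edits,
// and the interface signals "HANDS OFF" for no benefit.
interface IParentalLeaveManager {
    String submitApplication(String application);
}

class ParentalLeaveManagerImpl implements IParentalLeaveManager {
    public String submitApplication(String application) {
        return "SUBMITTED:" + application;
    }
}

// The direct style: one concrete class. Its public method signature is
// its interface, and changing it is a single, honest edit.
class ParentalLeaveManager {
    public String submitApplication(String application) {
        return "SUBMITTED:" + application;
    }
}
```

Both versions do exactly the same thing; only the second one lets you jump straight from the call site to the implementation.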

Interfaces with only one implementation are the committees of code. If you don’t want to make a decision yourself, if you’re worried about being blamed if it was wrong, you delegate it to a committee. The People’s Front of Judea would approve of EnterpriseFizzBuzz.

Some will tell you that interfaces are great for making your code testable. No they aren’t. They do no harm, but they don’t help either. You don’t need interfaces to create mocks or stubs or spies. Use Mockito or any other sensible mocking framework, and you can easily create mocks for concrete classes. You should also ask yourself why you need those mocks or stubs or spies – with a little rework of your code, you might be able to write tests with very little mocking:

Unit tests should, ideally, not involve other bits of code than the unit being tested. Extensive use of mocking should be seen as a code smell if you ask me. It indicates that the system is too tightly coupled. So no – interfaces DO NOT make your code more testable. Nor do they make your code future proof.
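As a sketch of testing without a Java interface (the class names here are invented for this example): with Mockito you would simply write mock(PaymentGateway.class) against the concrete class, but the same idea can even be hand-rolled by subclassing, with no framework and no IPaymentGateway anywhere.

```java
// A concrete dependency. Imagine the real version calls a remote service.
class PaymentGateway {
    String charge(int amountInCents) {
        throw new IllegalStateException("no network in tests");
    }
}

// The unit under test depends on the concrete class, not an interface.
class BenefitPayer {
    private final PaymentGateway gateway;
    BenefitPayer(PaymentGateway gateway) { this.gateway = gateway; }

    String payOut(int amountInCents) {
        return "paid:" + gateway.charge(amountInCents);
    }
}

// The test double: overrides the concrete class directly.
class FakeGateway extends PaymentGateway {
    @Override String charge(int amountInCents) { return "ok-" + amountInCents; }
}
```

The interface buys you nothing here: the test double works either way, and Mockito does the subclassing for you behind the scenes.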

I’ve been a bit too harsh so far on those who want systems to just stay relevant without work. There are of course several things you can do, and several things that you can avoid doing, that will make your code base more future proof without requiring code changes.

Firstly, I’d recommend adding a “manual override” option wherever possible. I once helped make a system that would handle benefit fraud. The main aim was to remove all the manual work involved – which was considerable. The system was to enable the case officers to add a new bit of information about the person who’d received benefits; the calculations would then be rerun, and the amount the person needed to pay back would be calculated automatically. For most cases, this was pretty straightforward. There were a couple of typical scenarios that we were able to automate quite easily. But there were a great many corner cases we couldn’t handle so easily. So instead of aiming to write code that handled every possible eventuality, we added a “manual override” option to the calculations. In cases that didn’t match the standard ones, we allowed the users to revert to their old mode of operation and enter the resulting calculation themselves. This saved us a lot of coding, and it was also a great way to handle an unpredictable future. If new rules came along, or some unexpected event happened that meant the standard cases no longer worked, users of the system could always use the manual override. Many of these unforeseen future events and corner cases are so rare that it is not worth adding specific code to handle them. It is far more cost effective to do them manually when they occur.
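The manual override idea can be sketched like this (the class name, fields and the trivial repayment rule are all made up for illustration, not taken from the actual system):

```java
import java.util.Optional;

// A hypothetical repayment calculator with a manual override.
class RepaymentCalculator {
    // The automatic path: only the simple, well-understood scenario is coded.
    int calculateAutomatically(int benefitsReceived, int benefitsEntitled) {
        return Math.max(0, benefitsReceived - benefitsEntitled);
    }

    // If a case officer has entered an amount by hand, it wins; otherwise
    // the standard calculation runs. Corner cases need no special code.
    int calculate(int received, int entitled, Optional<Integer> manualOverride) {
        return manualOverride.orElseGet(() -> calculateAutomatically(received, entitled));
    }
}
```

The point is structural: the override is a single, dumb escape hatch that covers every corner case you didn’t predict, instead of one speculative code path per imagined future.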

Which brings me to tip number two: Mitigation over prevention. There are two sides to risk management: reducing probability and reducing impact. In software we tend to forget the latter. We add lots of fancy logic to prevent illegal input. What happens, then, when the rules change? Suddenly the system becomes unusable. In Norway, our parliament voted to change our criminal laws in 2005. But they have only now (2015) been put into full effect, because the police’s computer systems prevented them from applying the new rules.

If we visualize the effect of users’ actions (highlighting potential issues), and if we make it easy for the users to correct their mistakes, we’ve reduced the impact of error so much that we might not have to worry about prevention. This approach is much more future safe. If we allow users to make mistakes, they will still be able to do their job even if the rules change.
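A sketch of mitigation over prevention (the class names and the 52-week limit are assumptions for this example): the handler accepts the input and attaches a warning the user can act on, instead of rejecting it outright.

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical request that carries its own warnings.
class LeaveRequest {
    final int weeks;
    final List<String> warnings = new ArrayList<>();
    LeaveRequest(int weeks) { this.weeks = weeks; }
}

class LeaveRequestHandler {
    // Today's rule. If it changes tomorrow, only the warning changes --
    // the system never becomes unusable.
    static final int MAX_WEEKS = 52;

    LeaveRequest submit(int weeks) {
        LeaveRequest request = new LeaveRequest(weeks);
        if (weeks > MAX_WEEKS) {
            // Highlight the issue instead of blocking the user.
            request.warnings.add("Requested " + weeks
                + " weeks; current limit is " + MAX_WEEKS);
        }
        return request;
    }
}
```

Had the Norwegian police systems worked this way, a rule change would have produced stale warnings, not a decade-long blocker.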

The more automated your system is, the more work is required by software maintainers to keep it up to date. In these cases, configurability and interfaces galore will very often only get in the way – adding complexity and confusion, and preventing your maintainers from doing whatever they need to get done.

The more you let your users take control and decide how to use the system (less automation), the less maintenance work will be needed. BUT the less time you’ll save your users, as they still need to do lots of the work.

Your choices often boil down to: Do you want to save the maximum amount of work for your users (a fully automated system)? XOR do you want to save time on system maintenance (a more manual system)? The choice is yours. You can’t have both.