Have you seen the next great framework? Seasoned managers clench their teeth when they hear this question. Every 4-8 years there is a new promise of faster development or cleaner code. Sometimes these promises are true, and slowly the entire development team starts to focus on the next big replatforming of the business. The real question is: is it worth doing this every few years?

It would seem much of our industry’s wealth is spent on two major areas:

Innovating new solutions: finding entirely new things to do.

Streamlining existing processes: making something old a bit cheaper.

The challenges faced when attempting either of these are relatively common:

Misunderstanding the domain you’re streamlining/innovating

Working a solution into a corner that is too hard to change or walk away from

Adding complexity to existing models every time the domain becomes better understood

Over-generalizing or under-generalizing.

Scalability problems with either the business or the software.

Scaling in the wrong direction: improving things that don’t improve the business

Building a fundamental flaw into the system that is surprisingly discovered later on

Copying a competitor as quickly and cheaply as possible (I don’t like it either, but let’s face the facts: lots of companies are spending money to become just copy-cats of other companies)

As I said, these experiences are quite common. However, dealing with these problems over the course of several years tends to produce major “symptoms” of bad software, which start to slow down the business, reduce strategic options, and eventually demand yet another major replatforming.

What can you look for to see if these problems are present? Well, the common signs that a project has had one of these problems for far too long are no secret:

One big object: The code is full of massive object definitions. Often, the original “core” model dreamed up ages ago has now been crammed with three times more attributes than anyone ever imagined. It is actually several objects and business concepts in one. Developers resort to making properties optional and adding if-statements everywhere to inspect which aspect their all-powerful “object-deity” is currently taking.

The danger zone: Teams shy away from changes to specific “core” parts of the software, because the effects are unpredictable and changes can’t be tested with confidence. There are fears that changes could be not just buggy, but actually destructive.

Great remorse: Developers discuss how everything should have been built differently, but say it’s no longer possible and they’re stuck with the existing flaws. They must keep building new features onto a flawed concept they are not satisfied with.

Discoordination: There are now multiple incompatible models of the same concept. Perhaps two or three objects for the same concept must be reconciled in special ways, depending on how the competing objects interact.

House of cards: Implementing the right thing in one place breaks other parts of the system that nobody expected to be impacted.
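The “one big object” symptom is easiest to recognize in code. A minimal sketch (all names here are hypothetical, not from any real project) of a model that has absorbed several business concepts, with the telltale optional fields and if-statements:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    """Started as a simple order; over time it also became an
    invoice, a shipment, and a subscription."""
    order_id: str
    total: float = 0.0
    invoice_number: Optional[str] = None   # invoice aspect only
    tracking_code: Optional[str] = None    # shipment aspect only
    renewal_months: Optional[int] = None   # subscription aspect only

def describe(order: Order) -> str:
    # Every caller must inspect which aspect the "object-deity"
    # is currently taking.
    if order.renewal_months is not None:
        return "subscription"
    if order.tracking_code is not None:
        return "shipment"
    if order.invoice_number is not None:
        return "invoice"
    return "plain order"
```

Each new aspect adds another optional field and another branch in every caller; that is the “thousand tiny paper-cuts” path in miniature.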

You’ll see these symptoms when your development teams are producing less than they used to. You’ll hear about them when your product stops growing and teams stop imagining new features; instead they struggle to deliver modest features and spend ever more time fire-fighting issues.

State Of Our Profession (A generalization)

Good software design has offered a lot of help over the last few decades. Proper object-oriented design techniques decouple concerns. However, many approaches that improve quality are rarely used and often misunderstood, and engineers with “senior” in their title frequently disagree on when to use different techniques (even when they agree on all the same basic facts). Great techniques like the “Single Responsibility Principle” are popular on blogs and Stack Overflow, but far from ubiquitously adopted.

Two abstract themes tend to be embraced when asking development teams about what makes good code:

Code and data should be broken into smaller chunks and then associated through relationships, instead of being treated as one big blob. However, exactly how to do this has no universal consensus. Plenty of blog authors try to distance themselves from “SOLID”; some developers refuse to abstract early while others demand it. There is no clear industry consensus on what to do, just individuals and sometimes camps of individuals.

Complexity should be reduced through grouping, and those groups should be based on “what the code does”. But what is that group? Is it a layer? A class? A method? At what size does complexity merit a new group? How do you know a group belongs together? Again, there are common camps of thought; for example, we all know about MVC. Many of us can decompose everything we need software to do into many smaller composable tasks. But if you open a thousand projects on GitHub written by experienced developers, you will be hard pressed to find clear evidence of any consensus on where, what, and how much.
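Both themes point toward the same mechanical move: split the blob, then make the relationship explicit. A minimal sketch of that move applied to, say, an order model (the names are hypothetical; where exactly the boundaries belong is the open question the text describes):

```python
from dataclasses import dataclass
from typing import Optional

# Each business concept gets its own small type...
@dataclass
class Invoice:
    number: str
    amount: float

@dataclass
class Shipment:
    tracking_code: str

# ...and the whole is an explicit relationship between the parts,
# instead of one class carrying every concept's fields.
@dataclass
class Order:
    order_id: str
    invoice: Optional[Invoice] = None
    shipment: Optional[Shipment] = None
```

This shows the mechanics only; it does not answer whether the split should happen at the class, module, or layer level, which is exactly where the camps diverge.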

It would appear these two abstract ideas of “good code” are an answer to “the pains of spaghetti code”, and not “a focused attempt at finding the optimal way for a business to run”. In fact, some of the strongest arguments against writing tests, clean architecture, and other techniques (especially ones that require high levels of training and discipline) come from money-focused perspectives. Or, as startups like to say: “Move fast and break things”.

The Problem With Our Professional Approach

Decomposing complexity is a practice that doesn’t happen every day. If a new task requires adding one field, it’s all too easy to justify cramming it into an existing object. Sometimes this is perfectly valid, but usually it isn’t, and slowly the entire system loses flexibility by way of “a thousand tiny paper-cuts”. Suddenly you notice your project has One Big Object. Slowly a few more symptoms show up, and by the third year of development the software just isn’t easy to work on.

Decomposing data, even when given enough time to plan correctly, still requires a high level of skill and perhaps clairvoyance. Authors must know deep in their bones that any ideas they have about objects are not “universal truths”; they are just one version of reality, focused through a limited personal point of view at one moment in time.

Managing complexity by layers is a great technique, but over time reasons to change the infrastructure are found, upgrades are not uniformly applied, collaborator objects are injected in strange ways, and work is postponed as long as possible.

Boundaries are established to contain complexity. Sometimes this is done with strict rules, other times with superficial barriers like splitting into multiple repositories. Eventually, crossing boundaries becomes cumbersome whenever it happens outside the previously anticipated design. Business processes end up encoded into the software, but now those processes run on the software and are trapped inside its boundaries. For the business, instead of everything having its nice place and flowing smoothly, it’s a total mess. Changing the business is hard because the code can’t be changed easily. On the other hand, with a trivial change to the ORM configuration the software will run on an entirely different database technology (decades ago, this would have been a monumental task). In other words, developers got rid of spaghetti code and replaced it with spaghetti business.

Spaghetti business is a group activity; everyone in the company is responsible for its creation. Businesses dream up monumentally sized experiments with lots and lots of extra requirements that were never challenged or proven to have a true business case. Justification can sound like “I talked to one customer on the phone and they said they liked it”, or “this is obviously what we need (because it’s the first idea I had and I don’t have time to think of something else before my next meeting)”. Software development everywhere has become more like an agency cashing big checks for any requirement provided, and less like a business partner focused on building something that will make “the next major leap forward”. Nevertheless, requirements are added to the project, built, proven by the market to be of low value, and then never removed, simply maintained forever.

This scope creep from an irresponsible business becomes woven into the symptoms of bad software (such as The Big Object). Requirements planning is perhaps just as primitive as software development: requirements are all too often full of unnecessary cruft, tightly coupled with unrelated ideas, based on limited perspectives, logically flawed, or bloated into “one big idea”.

What We Need

First, I must acknowledge that these problems already have really useful solutions, especially Domain-Driven Design, “real object-oriented programming”, and the SOLID principles. However, these are mitigations, not answers. They are mitigations because, even if you use them, some business changes are “just hard” and “there are no other options”. And because these concepts are state-of-the-art, they require lots of training, experience, and discipline; they are not state-of-the-profession (“the common everyday reality”). Even if you’re in the other camp and avoid OOP (the common argument against it being “data doesn’t have jobs”), that still doesn’t solve our problems.

We need to access data according to the relationship needed at each step of the business process, not according to the way it was stored in the database. Ad-hoc queries also need to be possible. Without this, either your business needs someone to read reports and re-enter them into another part of the business, or the data warehouse becomes the de-facto portal to your spaghetti business.
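One common shape for this (a sketch under assumed names, not the only answer) is to keep small per-step read models separate from the storage shape, so each step of the process queries the relationship it needs, and ad-hoc questions are answered from the same data:

```python
from dataclasses import dataclass

# Storage shape: how the data happens to be persisted.
ORDERS = [
    {"id": "A1", "customer": "ada", "total": 120.0, "shipped": True},
    {"id": "A2", "customer": "ada", "total": 30.0, "shipped": False},
]

# Read model for one step of the business process: the shipping
# step only needs unshipped orders, not the full storage record.
@dataclass
class ShippingTask:
    order_id: str
    customer: str

def shipping_queue():
    return [ShippingTask(o["id"], o["customer"])
            for o in ORDERS if not o["shipped"]]

# An ad-hoc question answered from the same data, with no human
# reading a report and re-entering it elsewhere.
def total_spent(customer):
    return sum(o["total"] for o in ORDERS if o["customer"] == customer)
```

The point is that neither function forced a change to the storage shape; each relationship was derived where the process step needed it.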

We know what problems we face. We need to be able to cope with scope-creep, poorly explained ideas, and the unknown future. Making it easier to change a database library is not a frequent and repeated need for most businesses.

We found great techniques for changing the software, but now we need to design software that can change the business. If a medical-focused startup discovers that a specific paperwork process is complex and has many corner-cases (e.g., an insurance claim process), developers should be able to build on it easily, without changing a central fundamental data schema or re-thinking what to do with a specific controller, service, or anything else unrelated to adding that one business concept. Too often such a change breaks both of the abstract rules of good code discussed previously: the change isn’t isolated to one grouping of complexity, and there is no indication of how the new edge-case relates to the business. Oddly enough, the layers of the application architecture aren’t violated, yet every grouping of complexity receives a change.
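One shape (of many) that keeps such an addition isolated is a rule registry, sketched here with entirely hypothetical names: each corner-case of the claim process is its own handler, and registering a new one touches no central schema, controller, or service.

```python
from typing import Callable, Optional

# Each corner-case of the claim process lives in its own rule.
CLAIM_RULES: list = []

def claim_rule(fn: Callable) -> Callable:
    CLAIM_RULES.append(fn)
    return fn

@claim_rule
def missing_signature(claim: dict) -> Optional[str]:
    if not claim.get("signed"):
        return "reject: unsigned"
    return None

# A newly discovered corner-case is just one more registration;
# nothing else in the system changes.
@claim_rule
def out_of_network(claim: dict) -> Optional[str]:
    if claim.get("provider") not in claim.get("network", []):
        return "flag: out-of-network review"
    return None

def process(claim: dict) -> str:
    for rule in CLAIM_RULES:
        outcome = rule(claim)
        if outcome is not None:
            return outcome
    return "accepted"
```

The new edge-case is isolated to one grouping of complexity, and its name states how it relates to the business, which is exactly what the scattered-change scenario above lacks.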

If we can’t figure out how to stop building spaghetti businesses, we will need to continue replatforming all software every 4-8 years, everywhere.