The concept of the Big Ball of Mud has been around for many years, and we reported on it back in 2010. The concept is nicely summarised in this article too:

A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.

With the rapid rise in discussion and use of the microservice concept, it wasn't going to be long before architectural discussions around microservices, SOA, etc. turned to the Ball of Mud. Recently, Gene Hughson and, separately, Simon Brown both wrote articles on the subject. Simon starts by discussing how most of the industry is building traditional monolithic architectures, which can quickly turn into Balls of Mud:

On the one side we have traditional monolithic systems, where everything is bundled up inside a single deployable unit. This is probably where most of the industry is. Caveats apply, but monoliths can be built quickly and are easy to deploy, but they provide limited agility because even tiny changes require a full redeployment.

Then at the other end of the spectrum you have service-oriented architectures, which we've reported on for many years and which hopefully most of our readers are familiar with by now. As Simon states, this approach:

[...] buy[s] you a lot of flexibility and agility because each service can be developed, tested, deployed, scaled, upgraded and rewritten separately, especially if the services are decoupled via asynchronous messaging. The downside is increased complexity because your software system now has many more moving parts than a monolith

In Simon's view, microservices represent a middle ground:

We can build monolithic systems that are made up of in-process components, each of which has an explicit well-defined interface and set of responsibilities. This is old-school component-based design that talks about high cohesion and low coupling, but I usually sense some hesitation when I talk about it.
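The "old-school component-based design" Simon describes can be illustrated with a minimal sketch. The names here (`OrderService`, `order_service`) are hypothetical, chosen only to show the shape of the idea: an in-process component whose callers depend only on an explicit interface, never on its internals.

```python
from abc import ABC, abstractmethod

class OrderService(ABC):
    """The component's published interface -- the only thing callers see."""
    @abstractmethod
    def place_order(self, customer_id: str, item: str) -> str: ...

class _InMemoryOrderService(OrderService):
    """Internal implementation; the leading underscore signals it is
    private to the component and should not be imported directly."""
    def __init__(self):
        self._orders = {}

    def place_order(self, customer_id: str, item: str) -> str:
        order_id = f"order-{len(self._orders) + 1}"
        self._orders[order_id] = (customer_id, item)
        return order_id

def order_service() -> OrderService:
    """Factory: the component's single public entry point."""
    return _InMemoryOrderService()

# A caller elsewhere in the monolith depends only on the interface:
svc = order_service()
print(svc.place_order("alice", "book"))  # order-1
```

The point is that the boundary is logical rather than physical: swapping `_InMemoryOrderService` for another implementation changes nothing for callers, which is the same decoupling microservices promise, minus the network.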

Simon refers to Karma's recent post about re-architecting around microservices, which we also reported on earlier. In the original Karma article they mention that with their old monolithic approach everything got "entangled", and this was one of the reasons they decided to re-architect. However, Simon does not believe that is a sufficient reason to make the switch, and suggests that microservices may not even help with the problems that caused the entanglement in the first place. As he says:

If you're building a monolithic system and it's turning into a big ball of mud, perhaps you should consider whether you're taking enough care of your software architecture. Do you really understand what the core structural abstractions are in your software? Are their interfaces and responsibilities clear too? If not, why do you think moving to a microservices architecture will help? Sure, the physical separation of services will force you to not take some shortcuts, but you can achieve the same separation between components in a monolith.

And Gene agrees, using a slightly different analogy to try to make the point:

Someone building a house using this theory might purchase the finest of building materials and fixtures. They might construct and finish each room with the greatest of care. If, however, the bathroom is built opening into the dining room and kitchen, some might question the design. Software, solution, and even enterprise IT architectures exist as systems of systems. The execution of a system’s components is extremely important, but you cannot ignore the context of the larger ecosystem in which those components will exist.

Simon has discussed the spectrum of architectural approaches with various groups and finds that those building monolithic systems don't want to consider component-based design. He describes an earlier workshop he ran with one of the teams, where they attempted to produce a diagram of one of their software systems:

The diagram started as a strictly layered architecture (presentation, business services, data access) with all arrows pointing downwards and each layer only ever calling the layer directly beneath it. The code told a different story though and the eventual diagram didn't look so neat anymore. We discussed how adopting a package by component approach could fix some of these problems, but the response was, "meh, we like building software using layers".
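The gap between the diagram and the code that Simon describes can be made concrete. Below is a minimal sketch of checking layering rules mechanically; the layer names mirror those in the anecdote, but the dependency data and the `violations` helper are illustrative assumptions (real projects might use a tool such as import-linter or, in Java, ArchUnit, rather than a hand-rolled check).

```python
# Which layers each layer is allowed to call, per the "neat" diagram:
# all arrows point downwards, one layer at a time.
ALLOWED = {
    "presentation": {"business"},
    "business": {"dataaccess"},
    "dataaccess": set(),
}

def violations(dependencies):
    """Return the (source, target) pairs that break the layering rules."""
    return [(src, dst) for src, dst in dependencies
            if dst not in ALLOWED.get(src, set())]

# What the diagram claimed vs what the code actually did:
observed = [
    ("presentation", "business"),
    ("presentation", "dataaccess"),  # a layer-skipping shortcut
    ("business", "dataaccess"),
]
print(violations(observed))  # [('presentation', 'dataaccess')]
```

A check like this turns the architecture diagram into something the build can enforce, which is one way the "code told a different story" problem gets caught before it erodes the structure.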

He concludes that if teams have trouble building well-structured (well-architected) monolithic systems, how will they be able to adapt to, and properly architect, microservices? He agrees with Michael Feathers, who hosted a panel session at QCon New York earlier this year and wrote about it afterwards, noting: "There's a bit of overhead involved in implementing each microservice. If they ever become as easy to create as classes, people will have a freer hand to create trouble - hulking monoliths at a different scale." As Simon says, distributed Balls of Mud are things we should all be worried about.

It seems that Simon struck a chord, because the comments on his article largely agree with him. For instance, Ralf Westphal writes:

[Microservices] are just another type of container. So as long as you have difficulties structuring your software using the "lower level" containers like components, libraries, classes, functions... you won't be able to reap many benefits from µServices. Where there are monoliths today, where there is no experience with components, µServices will actually make things worse, I guess.

Pieter H also writes:

Yep, the second you introduce distributed, you need to leverage infrastructure that addresses network latency, fault tolerance, message serialization, unreliable networks, asynchronicity, versioning, varying loads within the application tiers etc. etc. Otherwise you're coding it yourself, a la NetFlix OSS and I suspect that is one of the main reasons for monolithic shops not being interested. Takes top level talent at the moment, not something all enterprises have access to.
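One concrete illustration of the infrastructure Pieter H lists: the moment a call crosses the network, it needs retry and backoff machinery that an in-process call never does. The sketch below is a toy, with `make_flaky` standing in for an unreliable remote service; none of these names come from a real library.

```python
import time

class RemoteError(Exception):
    """Stand-in for a transient network failure."""

def make_flaky(failures):
    """Build a fake remote call that fails `failures` times, then succeeds.
    Purely illustrative -- not a real client."""
    state = {"left": failures}
    def call(payload):
        if state["left"] > 0:
            state["left"] -= 1
            raise RemoteError("transient network failure")
        return f"ok:{payload}"
    return call

def call_with_retries(fn, payload, attempts=3, base_delay=0.01):
    """Retry with exponential backoff -- machinery an in-process
    method call gets for free."""
    for attempt in range(1, attempts + 1):
        try:
            return fn(payload)
        except RemoteError:
            if attempt == attempts:
                raise  # retry budget exhausted; surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))

print(call_with_retries(make_flaky(2), "ping"))  # ok:ping
```

And this covers only transient failures; timeouts, serialization, versioning, and load management each add comparable machinery, which is why frameworks like the Netflix OSS stack exist and why Pieter suggests the effort demands top-level talent.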

Michael Groves raises an interesting question when comparing microservices and CORBA:

When we were building CORBA systems in the 90's, we talked about a lot of the same benefits as the Micro-services folks are now. Then the industry seemed to cool on distributed objects, even Martin Fowler declaring "Don't distribute your objects", mostly because of latency. Wondering why distributed Micro-services are good, but distributed objects are bad?

Finally, Kevin Seal finds microservices a more natural way to develop and believes it's more than a rebrand of existing approaches:

When a team chooses microservices I think it's important to acknowledge they have at least made a choice. This alone might be the reason their project turns out better than a more de facto monolithic design. For this reason I think it's important to have "trendy" topics so that they can get teams talking about what they're doing.

Perhaps only time will tell whether microservices offer a better approach, one that leads away from the Big Ball of Mud, or whether they merely represent a way of tying together (micro?) Balls of Mud.