Microservices can be awesome. Splitting up your monolith to scale services independently, accelerate change, and increase resiliency can reap huge returns. The agility that well-implemented microservices provide is one of those things you look back on and think “How did we manage this before?”

Microservices can also be terrible. If implemented poorly, they can cause more problems than they fix and create unmanageable chaos.

I’ve seen both scenarios, but failed microservice projects outnumber the successful ones I’ve come across. It’s getting better as the ideas around microservices mature and people gain more experience implementing them, but there’s still a long way to go.

Failure is where we learn. In thinking through the failed microservice implementations I’ve seen (granted, from an SRE/ops perspective), they all share similar traits.

1. They were split off too early

A well designed microservice requires a solid understanding of the service boundaries and context you are trying to carve out. If you’ve just built your MVP, you likely don’t have that understanding. If you’re trying to start your app with microservices, you definitely don’t have that understanding.

When folks don’t understand their app’s boundaries well, they tend to fall back on splitting things out by data model rather than by behavior. If you split your app into something like “users”, “orders”, & “items”, you’ll end up with three CRUD microservices and then a whole other set of microservices (or a separate monolith) to handle the business logic between them. In that scenario, you’re basically just abstracting your database.

Instead, if you did something like “auth”, “order”, & “stockcheck”, where each service is defined by its behavior instead of its data, you end up with fewer, smarter services, intuitive logic flows, and a clear idea of how and when to scale each piece.
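As a toy sketch of that difference (all names and data shapes here are hypothetical, not from any real system): a data-model service is just a thin wrapper over a table, while a behavior-defined service owns a business capability and the rule that goes with it.

```python
# Data-model split: a thin CRUD wrapper over one table. The business
# rule ("can this order be fulfilled?") has nowhere obvious to live.
class ItemsService:
    def __init__(self):
        self.rows = {}  # stands in for the "items" table

    def get(self, item_id):
        return self.rows.get(item_id)

    def put(self, item_id, fields):
        self.rows[item_id] = fields

# Behavior split: the service owns the "stock check" capability end to
# end, so the rule lives here instead of in a fourth "logic" service.
class StockCheckService:
    def __init__(self, inventory):
        self.inventory = inventory  # data the behavior needs, owned locally

    def can_fulfil(self, item_id, quantity):
        return self.inventory.get(item_id, 0) >= quantity
```

Notice the second service answers a business question directly, which is also the unit you’d actually want to scale.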

Stick with a monolithic app as long as you can. Let the app and your understanding of the business processes you are building mature. Then, when and where it makes sense, start splitting off microservices.

2. They were tightly coupled

A business process may rely on multiple microservices working together, but each of those services should not rely on another to function. Otherwise, you don’t have microservices. You have a distributed monolith that contains the worst of both worlds.

The worst example I’ve seen of this was an app that had three versions of a microservice-delivered API. Versioning your APIs is legit; there’s nothing wrong with that. But these folks had made each version synchronously reliant on the version before it: v3 referenced v2 and v1, and v2 referenced v1.

I have seen the inverse used as a migration tactic, where devs implement a new version of an API, keep v1 up, and change its logic to forward requests to v2. That feels reasonable. Coupling dependencies backward made no sense: it meant they could never retire their older APIs, and troubleshooting was a nightmare.
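That forward-pointing tactic fits in a few lines. This is a minimal sketch with hypothetical request and response shapes, not any particular framework:

```python
# v2 is the real implementation; v1 keeps no business logic of its own.
def handle_v2(request: dict) -> dict:
    return {"status": "ok", "order_id": request["order_id"]}

# v1 stays up for old clients, but only translates and delegates forward.
# Retiring v1 later means deleting this shim; v2 never depends on it.
def handle_v1(request: dict) -> dict:
    v2_request = {"order_id": request["id"]}       # old shape -> new shape
    v2_response = handle_v2(v2_request)
    return {"result": v2_response["status"],       # new shape -> old shape
            "id": v2_response["order_id"]}
```

The dependency arrow points only from old to new, which is what makes eventual cleanup possible.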

In this same stack, almost all of the services were reliant on complex logic that took place within the messaging system (RabbitMQ in this case). This resulted in frequent cascading failures that affected all services.

3. They were orchestrated

There’s a microservice (and SOA) concept of “choreography over orchestration”, meaning choreographed services can function independently, whereas orchestrated services require a central conductor.
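A toy in-process sketch of the choreography side (the bus below stands in for a real broker like RabbitMQ, and the event names are illustrative):

```python
class EventBus:
    """In-process stand-in for a message broker."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

bus = EventBus()
billed, shipped = [], []

# Each service reacts to events on its own; no conductor tells it what
# to do, and neither service knows the other exists.
bus.subscribe("order_placed", lambda order: billed.append(order["id"]))
bus.subscribe("order_placed", lambda order: shipped.append(order["id"]))

bus.publish("order_placed", {"id": 42})
```

If the billing subscriber disappears, shipping keeps working: that independence is the whole point of choreography.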

I’ve seen a pretty common scenario where microservices are implemented and then driven by an enterprise service bus (ESB), like JBoss Fuse. All requests and core business logic have to go through the ESB, which is likely the hardest component to scale in the entire app, usually due to licensing and technology limitations around state management. Because all of that logic is centralized in the ESB, the microservices around the spoke don’t know what to do unless the ESB tells them what to do. This is bad if you’re trying to build a resilient system.

Again, this would put you back in distributed monolith territory. ESBs are inherently complex and often fragile. So you have a complex, fragile, hard-to-scale single-point-of-failure bottlenecking all of your services. There is little chance this won’t cause you pain. That’s not to say that ESBs don’t have their place, just that they don’t line up well with microservice architectures.

API gateways seem to be becoming a new form of ESB, so it’s important to watch for similar problems there and keep your gateways as dumb as possible (basically just using them to consolidate requests or add auth).
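A sketch of what “dumb” means in practice, with hypothetical routes and a stand-in token check: the gateway authenticates and forwards, and that’s all.

```python
# Hypothetical routes: each maps a path to a backing service call.
ROUTES = {
    "/orders": lambda req: {"service": "order", "echo": req},
    "/stock":  lambda req: {"service": "stockcheck", "echo": req},
}

def gateway(path: str, token: str, request: dict) -> dict:
    # Auth belongs at the gateway...
    if token != "valid-token":  # stand-in for a real token check
        return {"error": "unauthorized"}
    handler = ROUTES.get(path)
    if handler is None:
        return {"error": "not found"}
    # ...but business logic does not: the gateway just forwards.
    return handler(request)
```

The moment a gateway starts sequencing calls between services or transforming business data, it has quietly become an ESB.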

Be patient & thoughtful

Well-architected microservices require a lot of dedicated thought and research. This is an area you don’t want to launch into after reading your first blog post on the topic. Having to re-scope service boundaries and re-implement services is one of the more painful engineering exercises I’ve been through. I hope to save you from that pain.

Luckily, there are some great resources out there to help.

Don’t pursue microservices because Netflix or Google or Facebook uses them. Use them when they make sense for your app and be OK with the idea that they might not make sense for you at all.

Microservices are not “better” than monolithic architectures. They just solve a different set of problems that mostly have to do with scale and the way a particular business operates. Part of being a good engineer is using the right tool to solve the right problem. Before you jump into microservices, pause, and make sure that’s what you’re doing.