Nowadays everyone talks about microservices (fine-grained service-oriented architecture). Some people, like Martin Fowler, suggest going for a monolith-first approach. One of the benefits is avoiding the chattiness of service boundaries. Hence you can ship early and carve out your domains later, one by one.

There is no silver bullet. Different macro architectures come with different trade-offs. There are also other aspects that matter for side projects, for example gaining practical experience with a particular kind of architecture. I usually check whether an architecture I am interested in fits my project well. Hence I kill two birds with one stone. There are a lot of great lectures and talks about microservices, but as soon as you start using them you realize what really matters. Only a few talks and experience reports go into detail about the bad and the ugly parts of their architecture. Companies spend millions to get it done; marketing, reputation and hiring are the reasons to keep those secrets. But as soon as you drink the 4th beer with an experienced developer you get real insights. For instance an honest answer when you ask:

Would you go for the same architecture again if you could travel back in time, keeping the experience you have right now?

Before I go into the details of my project, I will define microservice-based architecture. Afterwards I will talk about its disadvantages. If you want to hear the pros of microservices, go to YouTube, get a book or google them.

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

(M. Fowler)

I will now introduce my common server setup, followed by a description of my project. Afterwards I will sketch how the complexity of my services grew and what I experienced along the way. In the end I will summarize the bad parts and draw conclusions from my learnings.

Infrastructure

Usually I rent virtual servers from “oldschool” hosting providers. Right now I am happy with Netcup, a German hosting provider. Their virtual servers are based on KVM, so running Docker is possible just like on bare metal.

For about a year I have been a fan of cloud solutions like Amazon Web Services (AWS). I can add things like caches or load balancers to applications. This saves time and money, and most notably it reduces the complexity of applications. Cloud solutions come with a drawback: their pricing. You will pay 200% to 400% more compared to a regular hoster. If your load has high peaks at certain times of day, scaling on demand might save you money. Maybe…

In an early stage I put everything a side project needs on one virtual server per project. Hence other people who are part of one project are not able to harm other projects. Additionally, each project can scale individually.

As soon as a project gets bigger and requires more resources, I add more servers. Separating the production environment from my CI system and testing environment allows better scaling and increases security. Furthermore, I move the tools for distributed teamwork to a separate machine.

For this project I start with the following setup:

Intel® Xeon® E5–2680V4

6 GB DDR4 RAM (ECC)

2 dedicated cores

40 GB SSD

This setup costs about 8 € and can handle all infrastructure parts. It serves up to 1000 users simultaneously for simple applications.

Online competition platform

One side project of mine is an online platform for people who love playing multiplayer games. There you can create and attend in-game races, and the best players can win prizes.

There is a frontend monolith implemented with React. Additionally there are multiple backend microservices separated by their domain:

Race service: A user can create and manage races, display available races and sign up for them.

Stats service: Fetches, processes and stores match detail data. This is the basic data used to calculate rankings.

Profile service: Stores and updates player profile data to show which races a player attends. Furthermore it stores won prizes and the results of past races.

Iterations

In the beginning I started with a race service to manage competitions in general. This service has a couple of endpoints to create, read, update and delete races. Participants are able to join as long as the race has not started. Depending on the state of the race (upcoming, running, finished), various other constraints exist.
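The join constraint described above can be sketched in a few lines of plain Java. This is an illustration, not the actual service code; the names and the simple state field are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: participants may only sign up while a race is upcoming.
enum RaceState { UPCOMING, RUNNING, FINISHED }

class Race {
    RaceState state = RaceState.UPCOMING;
    List<String> participants = new ArrayList<>();

    boolean join(String player) {
        // Joining is only allowed before the race has started.
        if (state != RaceState.UPCOMING) {
            return false;
        }
        participants.add(player);
        return true;
    }
}

public class RaceDemo {
    public static void main(String[] args) {
        Race race = new Race();
        System.out.println(race.join("alice")); // race not started yet
        race.state = RaceState.RUNNING;
        System.out.println(race.join("bob"));   // race already running
    }
}
```

In the real service the same rule sits behind a REST endpoint and is covered by the functional tests described below.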

My initial setup was:

Spring Boot for the race service,

Gitlab and Gitlab-CI to host the code, run tests and deploy the code,

Docker and Docker Compose to create deployable units for dev, testing and prod,

React for pure frontend behavior,

A basic Node.js Express app for a landing page and basic authentication using OAuth2.

First CD pipeline

In the beginning this was quite simple. One frontend, one backend, a quick build and automated tests. Adding features was a charm compared to a big enterprise application. The whole CI pipeline ran for about 15 minutes and then a new version was live.

I always try to test everything so I have a good feeling as soon as I push code. Therefore I added functional tests that fire REST requests at my service. There is already a task that dockerizes my services. For testing I start all services using a different container name prefix. Additionally I change the ports to non-production ports. As soon as all instances are up and running I can run my tests. These fire requests at my application and expect certain responses.
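The idea behind these functional tests can be sketched as follows: start the service on a non-production port, fire a REST request, and assert on the response. In this self-contained sketch a throwaway in-process `HttpServer` stands in for the dockerized service; endpoint and port are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FunctionalTestSketch {
    public static void main(String[] args) throws Exception {
        int testPort = 18080; // a non-production port, as described above
        HttpServer server = HttpServer.create(new InetSocketAddress(testPort), 0);
        server.createContext("/races", exchange -> {
            byte[] body = "[]".getBytes(); // empty race list as JSON
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();

        // The "test": fire a request and check the expected response.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + testPort + "/races"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
        server.stop(0);
    }
}
```

In the real pipeline the requests hit the containerized services instead of an embedded server, but the assertion style is the same.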

To make sure that the whole application works, I added CasperJS-based end-to-end tests. They help to make sure that the frontend works. Furthermore they reveal whether the backend communication operates correctly. I only test the happy path of creating a race.

At this point a full pipeline execution required about 20 minutes.

As I added more and more features, complexity grew. The second service was the stats service. First I copied the existing Spring-based service, changed the port and added new CI steps. Coming from 20 minutes of CI runtime, I was already up to 30 minutes. I copied the build, test, REST integration and end-to-end test tasks. Furthermore, besides the admin frontend I added a user frontend to access races.

At this point each service had the names and ports of the other services hardcoded. At least it was partly dynamic using environment variables:

requestedServicePort = basePort + requestedServicePortOffset
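A minimal sketch of this port scheme, with an extra environment shift so the separately started test system can run next to production. The service names, base port and offsets here are invented for illustration:

```java
import java.util.Map;

public class PortScheme {
    // requestedServicePort = basePort + requestedServicePortOffset,
    // plus an assumed per-environment shift for separate test instances.
    static int servicePort(int basePort, int serviceOffset, int envOffset) {
        return basePort + serviceOffset + envOffset;
    }

    public static void main(String[] args) {
        int basePort = 8000;
        Map<String, Integer> serviceOffsets =
                Map.of("races", 1, "stats", 2, "profile", 3);
        int prodEnvOffset = 0;
        int testEnvOffset = 1000; // test instances live 1000 ports higher

        System.out.println(servicePort(basePort, serviceOffsets.get("races"), prodEnvOffset));
        System.out.println(servicePort(basePort, serviceOffsets.get("races"), testEnvOffset));
    }
}
```

In practice the base port and offsets would come from environment variables injected by Docker Compose.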

Hence a test system starts separately. This also allows me to deploy my application with no downtime: I change the port redirection as soon as the new application is up and running. At this point the application survived the first user tests successfully, so my team decided to go further.

Time went by and I added the profile service to persist global user statistics. Now the rule of three kicks in. Instead of wiring all those services together, I went for a more scalable and flexible solution: a service discovery called Eureka. I also needed insights into the availability and health of my services. A Spring-based admin dashboard communicates with Eureka and gives me all the data I need.

On my machine it worked very well. It felt like a great system. No static links, no additional configuration. Each service only registers itself at Eureka and asks Eureka for the other services’ IPs and ports. For client-side load balancing I use Ribbon, so the load can be spread across multiple service instances. To achieve fault tolerance I also use Hystrix.
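What Eureka plus Ribbon provide can be illustrated in plain Java: services register themselves under a name, and clients pick one of the registered instances per request (round-robin here). This is a toy model of the concept, not the real Eureka or Ribbon API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class DiscoverySketch {
    static class Registry {
        private final Map<String, List<String>> instances = new HashMap<>();
        private final Map<String, AtomicInteger> counters = new HashMap<>();

        // Each service registers itself under its logical name.
        void register(String service, String hostAndPort) {
            instances.computeIfAbsent(service, k -> new ArrayList<>()).add(hostAndPort);
            counters.computeIfAbsent(service, k -> new AtomicInteger());
        }

        // Client-side load balancing: rotate through the known instances.
        String resolve(String service) {
            List<String> list = instances.get(service);
            int i = counters.get(service).getAndIncrement() % list.size();
            return list.get(i);
        }
    }

    public static void main(String[] args) {
        Registry eureka = new Registry();
        eureka.register("stats-service", "10.0.0.1:8002");
        eureka.register("stats-service", "10.0.0.2:8002");

        System.out.println(eureka.resolve("stats-service"));
        System.out.println(eureka.resolve("stats-service"));
        System.out.println(eureka.resolve("stats-service")); // wraps around
    }
}
```

The real components add health checks, caching of the registry on the client, and eviction of dead instances, which is exactly where the memory and complexity costs discussed below come from.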

After adding all these services my server started to sweat. CI tasks started to fail sometimes, deployments took a long time, and the machine ran out of memory and swap.

At this point I would have had to invest more time and money to scale vertically and horizontally.

The easiest solution would have been to get a more expensive virtual server. Anyway, I started to think about what was slowing me down:

Complex service communication: Services need to call other services’ endpoints to fetch data synchronously using a circuit breaker. Compared to a simple repository method call, this is complex.

Duplicated code: Keeping models in sync is annoying. They usually exist in multiple services, so changing them requires adapting several services. Especially in the beginning of a new project this happens a couple of times. The same issue exists for copied configuration files: they change, and I need to keep them in sync manually.

Updating external dependencies: Each microservice uses external dependencies. To keep my project healthy I try to update packages every two weeks. For new package releases I usually need to update service by service.

Integration tests are more complex: Testing interactions between components is usually harder than unit testing. In a microservice world you need to test interactions on yet another layer. Execution is slower and the tests tend to break a lot more often. Additionally, refactorings usually break them. In a pure backend project my IDE is smart enough to automatically fix the issue or at least throw a compile-time error.

CI execution is slow for small projects: Usually each service should be independently deployable. This keeps build times fast and complexity low. But if you have only a few developers who work on all services, this does not make sense in an early stage. Additionally, you usually want to run some integration tests that affect multiple services. Having a single version control system keeps things simple. Therefore you need a lot of CI jobs: for each service you need tasks for building, testing and packaging.
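The first point above is easy to underestimate. A minimal circuit breaker in the spirit of Hystrix shows the extra machinery a simple repository call suddenly needs between services; the threshold and the fallback value are illustrative:

```java
import java.util.function.Supplier;

public class CircuitBreakerSketch {
    private int failures = 0;
    private final int threshold = 3; // assumed failure threshold

    String call(Supplier<String> remoteCall, String fallback) {
        if (failures >= threshold) {
            return fallback; // circuit open: skip the remote call entirely
        }
        try {
            String result = remoteCall.get();
            failures = 0; // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            failures++;
            return fallback;
        }
    }

    public static void main(String[] args) {
        CircuitBreakerSketch breaker = new CircuitBreakerSketch();
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };

        for (int i = 0; i < 3; i++) {
            breaker.call(failing, "cached ranking");
        }
        // After three failures the circuit is open; the supplier is not even invoked.
        System.out.println(breaker.call(() -> "live ranking", "cached ranking"));
    }
}
```

Hystrix additionally handles timeouts, thread isolation and half-open probing, which is precisely the complexity a plain method call never has.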

All these downsides sound harsh. I am a microservice fan, but as I said, this post only highlights the bad parts.

Summing up the pros and cons, I decided to migrate to a more monolith-like backend. The target was to merge the race, stats and profile services into a single one. Additionally I could get rid of the admin and Eureka services and reduce service communication.

It took me about 6 hours. Having different test layers saved me so much time. My first target, after copying the tests to the “monolith”, was to make the tests pass again. After I fixed all Java-based tests, I did the same with the REST and CasperJS tests.

I also tidied up the CI pipeline a bit. I could get rid of the building task, since an application build was already part of the testing stage. In the end I was back to 15 minutes of build time with a lot of tests and a very safe deployment process.

Afterwards the code looked a lot better. I used domain-driven design for my core concepts. The services I had before the refactoring now exist in one service, in different package structures. This allows me to check the dependencies between packages later on, and therefore enables me to carve out microservices again very quickly as soon as I need better horizontal scalability.

So my learnings for future projects are:

Never go for a microservices architecture on a greenfield project. When I think microservices are the right architecture, I will carve them out later on. Everyone should try to challenge this. This project was totally worth the effort when I think about the learnings I got from it.

A good micro-architecture and solid code coverage are mandatory for a system that evolves. Refactoring into microservices and vice versa becomes easier and less error-prone.

Services should be independently deployable. Otherwise build times explode. This sounds easy but is hard for services that depend on other services.

Consumer-driven contracts are a good way to reduce the responsibility of end-to-end tests. They give fast feedback and point directly to the issue, which makes refactorings easier. Additionally, end-to-end tests can be executed after the deployment to reduce time to market.

Docker is great and I should use it more often. My development, testing and production systems are easy to set up and start. Especially for microservices it’s important to have such isolation, but also for a monolith it’s definitely worth the effort.

Domain-related microservices need other basic services like a service discovery. All services need a decent amount of memory to work reliably. Especially the JVM does not know about Docker (cgroup) based restrictions, which causes issues when defining memory limits.
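The consumer-driven contract idea from the list above can be reduced to a toy check: the consumer states which fields it relies on, and the provider's response is verified against that expectation instead of running a full end-to-end test. The field names and values here are invented:

```java
import java.util.Map;
import java.util.Set;

public class ContractSketch {
    // The contract is satisfied if the response carries every field
    // the consumer declared it depends on; extra fields are fine.
    static boolean satisfiesContract(Map<String, Object> response,
                                     Set<String> requiredFields) {
        return response.keySet().containsAll(requiredFields);
    }

    public static void main(String[] args) {
        // Hypothetical contract of a consumer of the profile service.
        Set<String> consumerContract = Set.of("playerId", "wonPrizes");

        Map<String, Object> providerResponse = Map.of(
                "playerId", "p-42",
                "wonPrizes", 3,
                "lastRace", "spring-cup"); // extra field, still compatible
        System.out.println(satisfiesContract(providerResponse, consumerContract));

        Map<String, Object> brokenResponse = Map.of("playerId", "p-42");
        System.out.println(satisfiesContract(brokenResponse, consumerContract));
    }
}
```

Real contract-testing tools verify such expectations on both sides of the service boundary in each CI run, so a breaking provider change fails fast and points directly to the missing field.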

Software development is all about trade-offs. Before developers commit to a new technology, they should get a feeling for its pros and cons, for example by playing around with it. The downsides of microservices are dramatic and can be the reason why projects fail. Handled well, a microservice architecture can be a reasonable decision: it accelerates the development process and gives developers more freedom.

The biggest benefit comes for companies that have many development teams working on one product. Being able to develop and deploy a service independently increases ownership: developers feel more responsible for what they implement and how. Additionally, they are able to escort a feature from dev to test to production. In combination with, for instance, canary releases, production bugs can be heavily reduced. Scalability is also a characteristic of microservices, but comparatively irrelevant.

I hope you enjoyed the read. Feel free to send me feedback and questions using the platform of your choice. I appreciate every follower on Twitter because this is my main communication channel to you.