For the past six months, our team has been rewriting our application using a micro-frontend approach. Given how popular micro-frontends appear to be at the moment, I wanted to share our experiences and the lessons we learned.

Micro-frontends can be a fantastic tool, but they are not a silver bullet. Depending on the structure of your project, team and business, you may not be able to take advantage of the benefits. Worse, micro-frontends may undermine the architecture of your application and hinder your team’s ability to deliver. For our team, they proved to be a particularly poor fit, and our experience is an example of where the micro-frontend approach isn’t the best option.

The (potential) benefits of micro-frontends

This experience happened to be my second encounter with taking a micro-frontend approach. At a previous role, we had a team maintaining a shell Android application and four teams, each responsible for a separate webview within the shell application. Each team addressed an entirely separate business concern, and the autonomy micro-frontends provided ensured that each team could own the entirety of their stack. Teams were able to develop and release independently and didn’t have to worry about affecting each other unless they touched the shared library or shared workflows. Maintaining the shell and the separation between micro-frontends required effort, but the trade-off was worthwhile at first due to the freedom and flexibility it afforded each team.

However, by the time I left that company, cracks were beginning to show. The shared libraries had become such a headache that each team ended up forking a copy. The workflows and communication between apps had grown into a constant struggle. One micro-frontend was struggling under the performance constraints of being a webview. Releasing updates to the shell became time-consuming as the number of connections between it and its children had increased.

Independent teams

Micro-frontends and microservices are for isolating teams first and code second, and can be fantastic tools to help teams work independently. Each team has full ownership of a vertical slice of the application and can specialize in their domain. They can worry less about accidental interference from other teams and reduce the need to coordinate between teams. Micro-frontends let you apply Conway’s law and let the structure of your application follow the structure of your teams.

Micro-frontends increase the autonomy of independent teams but don’t offer much for interconnected teams. Your development team may be too small, your domains may be too interlinked, or the incoming work may not split equally between the lines of business. With micro-frontends, but without services and domains bound to teams, you can end up building a distributed monolith: your architecture is spread thin, but your teams aren’t.

In a distributed monolith architecture, you have micro-frontends, but they aren’t allocated to teams. Your teams will conflict with each other as they try to update the micro-frontends simultaneously.

We are two small teams totaling twelve developers. We recently rejected a move to shift our backend services to microservices, which I detailed in a previous article. Microservices aren’t a silver bullet that adds value for every team and project. In an environment where microservices add no value, the same reasoning applies to micro-frontends. If you have already found one not to be useful, you are unlikely to benefit from the other.

Code organization

In theory, micro-frontends can help you improve the structure of your codebase: each micro-frontend contains a smaller, focused part of the application. However, it’s a mistake to think that monolithic applications can’t be well structured, and a few small adjustments can give you just as many benefits as completely splitting your application. Our team saw this in practice recently: after cancelling a migration to microservices, we spent a little bit of time better structuring our backend service. The improved structure gave us a much cleaner application, without the added trade-offs and complexity of microservices.

Release independence

With micro-frontends, you give yourself the ability to release parts of your frontend independently. Rather than redeploying everything, you can just deploy what has changed. You can empower teams to release when they need to, without needing to coordinate with other teams or fit into a company-wide release cycle. You can roll back a bad release without affecting the work of other teams.

Since all of these frontends contain partial implementations of larger features, none of them can be released independently.

However, as we will soon cover, links start to form between your micro-frontends. These links reduce the independence of each frontend, giving each release many dependencies.

Not releasing something that hasn’t changed can increase the risk when you do finally need to make changes to it. The shared libraries it relies on, or its links with other frontends, might have changed dramatically.

If the body of shared code between frontends becomes too large, your releases can no longer be independent. Another team is as likely to break your app before release as if you had a single application.

Reduced surface for testing

Smaller frontends and releases mean the surface for regression testing is much smaller. There are fewer changes per release and, in theory, you can reduce the time spent testing.

However, while each frontend might be small, its connections are not:

The frontend passes and receives messages and events that link it to other frontends.

Multi-page workflows connect this frontend with others.

The shared codebase may have changed significantly since the last release.

To support backwards compatibility, you need to be aware of past and future connections.

Just because you’re not updating a component this release doesn’t mean it’s not affected.

Even if you do manage to save some hours in testing time, the overhead of your architectural decisions may have increased the hours of development time. Expecting to save time on testing can instead result in more total hours spent per feature across your teams and roles.

Faster build times

The smaller each client is, the faster it builds, and the faster its tests run. However, for a small to medium application, it is also a premature optimization. It can take a long time before your builds grow so unwieldy that they overwhelm modern tools. File watchers can keep a moderately sized build compiling fast.

Micro-frontends can make your net build times worse. Instead of building a single application, you now have to build multiple applications to check all of the connections and links between your frontends. You now have multiple distinct test suites that need running.

Use different technologies

The last commonly touted benefit of micro-frontends is that they give everyone the freedom to use different frontend frameworks and tools. While a small team is unlikely to want to learn and use multiple frameworks simultaneously, it does allow you an upgrade path once you want to move to the tools and frameworks of the future. Rather than requiring upgrades to be coordinated between all teams, micro-frontends let each team upgrade their technology stack independently. Further, supporting multiple tech stacks makes it easier to pull in off-the-shelf solutions regardless of their framework.

Micro-frontends’ impacts on our architecture

The first and most crucial part of electing to split up a frontend is deciding how to make the divisions. The decisions we made here followed us across everything else we did.

There were at least four ways we could have split our application:

By business concern or feature.

With more granular divisions, splitting by domain objects.

By location in the application and UI/UX concerns.

By teams, if we had teams working on focused concerns.

Ideally, some of these decisions would have lined up to reduce the number of compromises. For example, we could have had teams dedicated to separate business concerns. For us, this situation didn’t apply, and we elected to split granularly by domain objects. We didn’t name all of the divisions upfront but proceeded in a direction that would lead to 15–20 separate micro-frontends.

Splitting features across micro-frontends

By splitting our frontends so finely, many features now required changes across multiple micro-frontends. These divisions removed the independence and autonomy of each frontend, meaning we could no longer develop, test or release them separately.

When features span frontends, you either need to increase the coordination between teams to deliver a feature or make the frontends a shared pool, leading to the distributed monolith architecture.

Splitting the UI/UX between micro-frontends

When creating microservices, your services are developer-facing. You are creating for other developers to use. However, for user-facing applications, your concern should be the user and your UI/UX. To offer a better user experience, you may combine separate domains onto a single page or have workflows that gather data across multiple domains. The UI might show domain objects wrapped by other domain objects, or present them as something else entirely. You may need to display pages that aggregate data from multiple domains, ending up with a micro-frontend that is a free-for-all for everyone to update.

We were trying to mimic our existing UI, including all of the user-friendly things it was trying to achieve. But the fine-grained divisions made this approach problematic. Workflows needed to cross boundaries between micro-frontends. In some regards, the micro-frontends were trying to be isolated, individual components. In others, we were trying to join them all back together.

Workflows that cross boundaries

Here, a single user workflow ended up involving the shell, along with multiple frontends.

Creating workflows that cross the boundaries between micro-frontends and the shell is challenging, both to build and to maintain. Rather than using the built-in routing and navigation tools for your chosen framework, you need to build custom navigation. You need to pass state across separate applications, languages and frameworks. You need to consider the back and forward workflows, branching paths, and the different places the user can cancel or finish. If you only save at the completion of the workflow, you need to consider who is responsible for saving the completed state. Each application is updatable by different teams and can be released and changed independently. Having built and maintained these before, in terms of added bugs and complexity, I recommend you avoid this approach if you can help it.
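To make this concrete, here is a minimal sketch of the kind of custom state handoff a cross-boundary workflow forces on you, in place of a framework’s built-in routing. All names here are hypothetical: the idea is that whichever micro-frontend hands off the workflow must serialize its state (current step, return destination, collected data) into something the next frontend can decode, for example via a query parameter.

```typescript
// Hypothetical shape of the workflow state every participating frontend must agree on.
interface WorkflowState {
  step: number;                     // current position in the multi-page workflow
  returnTo: string;                 // where to navigate on cancel or completion
  data: Record<string, unknown>;    // values collected so far across frontends
}

// Serialize the state so it can travel in a URL to the next micro-frontend.
function encodeHandoff(state: WorkflowState): string {
  return encodeURIComponent(JSON.stringify(state));
}

// The receiving micro-frontend resumes the workflow from the same parameter.
function decodeHandoff(param: string): WorkflowState {
  return JSON.parse(decodeURIComponent(param)) as WorkflowState;
}

// One frontend hands off, another resumes.
const handoff = encodeHandoff({
  step: 2,
  returnTo: "/orders",
  data: { orderId: "o-123" },
});
const resumed = decodeHandoff(handoff);
```

Every frontend in the workflow has to agree on this format, and because each can be released independently, any change to it has to be coordinated and versioned across separate applications.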

Now that you have coupled together a bunch of your frontends, you have reduced their independence. Make a change to one, and you need to test that all of the connected multi-frontend workflows are still intact. To support independent releases or rollbacks, you might end up having to test combinations of different versions of each member. Your code becomes littered with version checks.
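The version checks mentioned above tend to look something like this hypothetical guard, where one frontend refuses to enable a shared workflow until a sibling frontend is new enough. The function name and version cutoff are illustrative, not from our codebase.

```typescript
// Hypothetical compatibility guard: only enable a cross-frontend workflow
// when the sibling frontend is at a version known to support it.
function supportsMultiStepCheckout(siblingVersion: string): boolean {
  const [major, minor] = siblingVersion.split(".").map(Number);
  // Assume the shared workflow shipped in the sibling's 2.4 release.
  return major > 2 || (major === 2 && minor >= 4);
}
```

Each guard like this is a small, permanent reminder that the frontends are not actually independent: removing one requires knowing that no older sibling can still be deployed.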

`Shared` became a dumping ground

With so much in common between each micro-frontend, we had begun placing a lot into our ‘shared’ project. Having shared libraries isn’t a problem by itself, but they become problematic when used as a catch-all location to work around the arbitrary boundaries created by incorrect divisions.

Our micro-frontends became small nubs on top of a much larger body of shared code.

With a large body of shared code, you increase what can change between each release. As you create additional micro-frontends, each gets touched less frequently, but more changes occur to `shared` between updates. It is much easier to get caught out by changes to `shared` when releasing a micro-frontend you haven’t touched in a while. With a single application, you have compile-time checks, you see the effects of your change immediately, and everything is released together.

Writing reusable library code requires more time and effort; it requires carefully considering the boundaries and writing something where you expect not to know all of the usages. It’s a different skillset from writing application code. You can’t rely on your IDE or static analysis to help you understand the impacts of a change. It is hard to guarantee that a change won’t break other teams’ particular use-cases. It is not practical to check all uses of the shared code, and you are left to make it the problem of whoever pulls the latest version of `shared` into their micro-frontend. The shared library becomes another avenue for bugs to sneak in. For a small team, the trade-off in extra time spent working with shared code cripples your momentum.

Reducing discoverability leads to duplicate implementations

By splitting an application, you reduce the ability to discover existing code. You lose tools that aid discovery. You have to go scavenging through another project looking for pieces to reuse. Even if you find something, you don’t want to be responsible for refactoring an unfamiliar project so that you can gain access to a shared component.

The consequence of reduced discoverability was that we failed to share some standard components, ending up with duplicate implementations across separate frontends.

Duplicate components have a high cost over time. Not only are you spending effort reimplementing an existing solution, but future changes now require updates in more, and more varied, places. Duplication increases your risk of missing an instance or misunderstanding the differences between two similar components, opening another avenue for bugs.

Communication between micro-frontends

Your micro-frontends may end up needing to communicate with either the host or with each other, letting another part of your application know that it needs to change, refresh or trigger an action. As your clients shrink and your application grows in complexity, the links across micro-frontends grow. You may even end up needing to create a custom messaging service.

As with the workflows mentioned earlier, communication between components is not only harder to implement and maintain, but also strips the components of their independence. You can no longer release, develop or test each separately. It becomes harder to discover what communications exist. At least with microservices, you have control over the boundaries of each service, but frontends are typically less rigid.
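A custom messaging service doesn’t need to be large to create coupling. The sketch below is a minimal, hypothetical publish/subscribe bus of the kind a shell might expose to its micro-frontends; the topic name is illustrative. Every topic and payload shape becomes an implicit contract between frontends that no compiler checks.

```typescript
// A hypothetical shell-level message bus linking micro-frontends.
type Handler = (payload: unknown) => void;

class MessageBus {
  private handlers = new Map<string, Set<Handler>>();

  // Register a handler for a topic; returns an unsubscribe function.
  subscribe(topic: string, handler: Handler): () => void {
    if (!this.handlers.has(topic)) this.handlers.set(topic, new Set());
    this.handlers.get(topic)!.add(handler);
    return () => {
      this.handlers.get(topic)?.delete(handler);
    };
  }

  // Deliver a payload to every handler subscribed to the topic.
  publish(topic: string, payload: unknown): void {
    this.handlers.get(topic)?.forEach((h) => h(payload));
  }
}

// One frontend announces a change; another reacts.
const bus = new MessageBus();
const seen: unknown[] = [];
bus.subscribe("order:updated", (p) => seen.push(p));
bus.publish("order:updated", { id: "o-1" });
```

Nothing stops a frontend publishing to a topic nobody listens to, or listening for a payload shape another team changed in their last release; that is exactly the discoverability problem described above.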

Our codebase started to diverge

Another challenge you can face with breaking down an application into multiple components while sharing them among teams is their tendency to diverge. Over time, lots of decisions accumulate, and you end up with very different looking projects. If you’re also splitting the different components between teams, this separation is a benefit, as it increases each team’s independence and leaves each to tackle problems as they see fit. On the other hand, if you have a pool of micro-frontends that each team picks up and puts down as they work on a variety of features, divergence becomes a problem. A micro-frontend that you haven’t touched for a while is a strange, alien place, with new approaches to get used to and traps to fall into. If a feature requires updating multiple webviews simultaneously, you need to continually relearn and shift your understanding of how your codebase works, resulting in additional friction and time.

Even being careful to constrain how much our first micro-frontends diverged, after just a couple of months, we ended up with the following differences:

Writing unit tests in Jest vs. writing tests in Enzyme.

Using rem for sizing vs. using pixels.

Treating UI components as dumb components that received data from their parent vs. UI components responsible for retrieving their data.

Just keeping all the micro-frontends up to date with changes to cross-cutting concerns ends up being time-consuming as you flesh out telemetry, error logging, build pipelines and so on. One change has to ripple across every micro-frontend. Worse, if your update misses a spot, you only discover the missing logs when you actually need them for that particular micro-frontend.

Performance issues

Trying to keep each webview isolated and minimizing our reliance on the host killed the performance of our application. Without a shared cache, each webview had to pull down its own set of data, resulting in a high volume of duplicate calls. The application we were replacing fetched the majority of its data during login and cached infrequently changing images on the file system; our backend services were built around that approach, which made the dramatic switch to having no cache problematic.

Since the host was directly opening webviews, and since each webview was an isolated view, we were making our users wait twice: first to load the webview and pull down our code, and then a second time to retrieve the data to display in the UI. Our webviews were using a version of Edge predating service workers, so each instance of a webview had to be created fresh, meaning we couldn’t pre-load any data. The result was a janky experience where the users needed to sit through many loading spinners. Micro-frontends inhibited our ability to improve our user experience.

Framework support

One of the supposed benefits of micro-frontends is that it allows for a reasonable upgrade path once the current batch of JavaScript frameworks goes the way of their predecessors. But to achieve this, we ended up locking ourselves into another opinionated JavaScript framework. Our experience with that framework hasn’t been great so far; it didn’t do a sufficient job supporting micro-frontends, so we didn’t gain much aside from an extra layer of complexity and lock-in.

We would have been better off starting with a monolith and if the size became a problem, splitting it then. And perhaps, by then, there would be an established set of frameworks, practices and tools from which we could draw. Instead, we chose to be an early adopter, took a naive approach and used the wrong tools.

We were making it harder to develop and debug

Initially, each frontend was easy to develop. They were small, focused, and could run outside of our host. However, as time went on, we started building connections between each micro-frontend, be they multi-page workflows or messaging. It was hard to set up a micro-frontend to debug with the prerequisite state; it was hard to set up and test an organic flow across frontends. Each connection forced us back into the host to test, where our debugging tools and the overall development experience were inferior. Testing in the host also meant the overhead of running multiple frontends locally at the same time.

We lost access to compile-time checks and static analysis for apps apart from the one we were currently developing. When changing the shared code, it was harder to understand the impacts. Without stopping to build and test every single frontend, we were potentially breaking others’ work without even knowing about it. We traded complexity: to make it easier to understand one small piece, we made it much harder to understand the big picture of the application.

We were too focused on size and not focused on our problems

As software developers, we fixate too much on size. “This should be smaller.” “No wait, it should be even smaller.” We argue about what is the appropriate level to start cutting and slicing. In our case, some argued for dividing our product between the two large and distinct domains, with little overlap, matching our lines of business. Others argued for cutting a level deeper, splitting each 15–20 times to make everything really ‘micro’.

When we argue about size, we are missing the point. Instead of thinking about an imaginary, idealized size that each micro should be, we should consider changing size only as a potential solution to a problem.

The misconception that got us to this point is the idea that being small is a replacement for having an architecture: that if you make everything as small as you can, you don’t need to worry about how to lay out your application, how it retrieves and stores data, where and how routing and communication work, how it manages cross-cutting concerns, or how the application grows. Each component is so small that, in theory, it stays naive and simple. In practice, what you get is a mess.

Making mistakes

While establishing the architecture of our micro-frontends, we made many mistakes:

Divided our micro-frontends too finely, without regard for teams, features or the application’s user experience.

Didn’t identify the micro-frontends we expected to create upfront.

Chose a framework that was not intended to support a micro-frontend architecture.

Decided on a target size for each frontend before we started, rather than reacting to problems.

Switched features from the old application to the new as we needed to alter them, rather than upgrading our application in a planned, organized fashion. This approach prevented us from developing a sane architecture and delayed vital decisions.

Proceeded without having a plan for how to structure our shared libraries or determine what we should or shouldn’t share between micro-frontends.

Once we realized that things had gone off-track, there were several changes we could have made to better support micro-frontends:

Update our UI/UX and user workflows to support our architecture.

Hire more developers and restructure our teams and the incoming work to isolate the teams better.

Add a web-based shell between the desktop client and the micro-frontends. This shell could have used approaches and frameworks to facilitate proper micro-frontend architecture.

Monoliths simply weren’t one of our pain points, and switching to micro-frontends had few benefits to offer us. There were many other, more pressing concerns that we could have been tackling. With a lot more effort and cost, we could have made the architecture work. But even with the dream setup, micro-frontends wouldn’t have been worthwhile for us.