Communication Between Micro Frontends


Microfrontends have become a viable option for developing mid to large scale web apps. Especially for distributed teams, the ability to develop and deploy independently seems charming. One problem that quickly arises: How can one microfrontend communicate with another?

Having gained much experience with the implementation of various microfrontend-based solutions in the past, I’ll try to share what I’ve learned. Most of these approaches focus on client-side communication (i.e., using JS); however, I’ll also try to touch on server-side stitching.

However you choose to implement your MFs, always make sure to share your UI components via a component hub using tools like Bit (GitHub). It’s a great way to maximize code reuse, build a more scalable and maintainable codebase, and keep a consistent UI throughout your different Micro Frontends (some even use Bit as an implementation of Micro Frontends).

Example: browsing through shared components in bit.dev

Loose Coupling

The most important aspect of implementing any communication pattern in microfrontends is loose coupling. This concept is neither new nor exclusive to microfrontends: in microservice backends, we should already take great care not to communicate directly. Quite often we still do it anyway, to simplify flows, infrastructure, or both.

How is loose coupling possible in microfrontend solutions? Well, it all starts with good naming. But before we come to that we need to take a step back.

Let’s first look at what’s possible with direct communication. We could, for instance, come up with the following implementation:
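A hypothetical sketch of such direct communication (in the browser the entry point would live on `window`; `globalThis` is used here so the snippet runs anywhere, and all names are illustrative):

```javascript
// Microfrontend A exposes a direct, global entry point:
globalThis.callMifeA = (msg) => {
  switch (msg.type) {
    case 'set-user':
      // React to the new user, e.g. re-render the header.
      globalThis.mifeAState = { user: msg.user };
      break;
    default:
      // Unknown message types are ignored.
      break;
  }
};

// Microfrontend B now talks to A directly:
globalThis.callMifeA({ type: 'set-user', user: { name: 'Jane' } });
```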

At first, this may look nice: we want to talk from microfrontend B to A, and we can do so. The message format allows us to handle different scenarios quite nicely. However, if we change the name in microfrontend A (e.g., to mifeA), then this code will break.

Alternatively, if microfrontend A is not there at all for whatever reason, then this code will break. Finally, this way always assumes that callMifeA is a function.

The diagram below illustrates this problem of coupling between supposedly decoupled microfrontends.

Coupling between decoupled microfrontends.

The only advantage of this way is that we know for “sure” (at least in the case of a working function call) that we communicate with microfrontend A. Or do we? How can we make sure that callMifeA has not been changed by another microfrontend?

So let’s decouple it using a central application shell:
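A sketch of such a handler pool in the application shell (the names registerHandler and callMife are assumptions):

```javascript
// The application shell keeps a pool of named handlers that
// microfrontends register themselves into.
const handlers = {};

globalThis.registerHandler = (name, handler) => {
  handlers[name] = handler;
};

globalThis.callMife = (name, msg) => {
  const handler = handlers[name];

  // A missing or not-yet-loaded microfrontend is simply a no-op
  // instead of a crash.
  if (typeof handler === 'function') {
    handler(msg);
  }
};
```

Microfrontend A registers itself once via registerHandler('mifeA', msg => …); microfrontend B calls callMife('mifeA', msg) without holding any direct reference to A.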

Now calling callMife should work in any case; we just should not expect that the anticipated behavior is guaranteed.

The introduced pool can also be drawn into the diagram.

Introducing decoupling via a handler pool.

Up to this point, no real naming convention is in place. Calling our microfrontends A, B, etc. is not really ideal.

Naming Conventions

There are multiple ways to structure names within such an application. I usually place them in three categories:

Tailored to their domain (e.g., machines)

According to their offering (e.g., recommendations)

A combination of domain and offering (e.g., machine-recommendations)

Sometimes, in really large systems, the old namespace hierarchy (e.g., world.europe.germany.munich) makes sense. Very often, however, it starts to become inconsistent quite early.

As usual, the most important part about a naming convention is to just stick with it. Nothing is more disturbing than an inconsistent naming scheme. It’s worse than a bad naming scheme.

While tools such as custom linting rules may help to ensure that a consistent naming scheme is applied, in practice only code reviews and central governance are truly effective. Linting rules can verify that certain patterns are followed (e.g., via a regular expression like /^[a-z]+(\.[a-z]+)*$/), but mapping the individual parts back to actual names is a much harder task. Who defined the domain-specific language and terminology in the first place?
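Such a pattern check is straightforward to express in code (a sketch; the function name is illustrative):

```javascript
// The naming convention from above: lowercase parts separated by dots.
const namePattern = /^[a-z]+(\.[a-z]+)*$/;

const isValidName = (name) => namePattern.test(name);
```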

To shorten our quest here:

Naming things will always be one of the unsolved problems.

My recommendation is just to select a naming convention that seems to make sense and stick with it.

Exchanging Events

Naming conventions are also important for the communication in terms of events.

Using custom events for decoupling.

The already introduced communication pattern could be simplified by using the custom events API, too:

While this may look appealing at first, it also comes with some clear drawbacks:

What is the event for calling microfrontend A again?

How should we properly type this?

Can we support different mechanisms here, too — like fan-out, direct, …?

Dead lettering and other things?

A message queue seems inevitable. Without supporting all the features above a simple implementation may start with the following:
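A minimal sketch of such a bus (plain fan-out only; dead lettering, direct delivery, and typing are deliberately left out, and the names subscribe and publish are assumptions):

```javascript
// A minimal event bus, to be provided by the application shell.
const topics = {};

function subscribe(topic, subscriber) {
  const subscribers = topics[topic] || (topics[topic] = []);
  subscribers.push(subscriber);

  // Allow consumers to unsubscribe again.
  return () => {
    subscribers.splice(subscribers.indexOf(subscriber), 1);
  };
}

function publish(topic, message) {
  // Fan-out: every subscriber of the topic receives the message.
  for (const subscriber of topics[topic] || []) {
    subscriber(message);
  }
}

globalThis.eventBus = { subscribe, publish };
```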

The code above would be placed in the application shell. Now the different microfrontends could use it:
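For example (the shell is expected to expose the bus, here as globalThis.eventBus; a tiny inline fallback keeps this snippet runnable on its own):

```javascript
const topics = {};
const bus = globalThis.eventBus || {
  // Fallback stub used only when run standalone.
  subscribe: (topic, cb) => (topics[topic] = topics[topic] || []).push(cb),
  publish: (topic, msg) => (topics[topic] || []).forEach((cb) => cb(msg)),
};

// Microfrontend A: subscribe to messages addressed to it.
let currentUser = null;
bus.subscribe('mife-a', (msg) => {
  if (msg.type === 'set-user') {
    currentUser = msg.user; // e.g. re-render with the new user
  }
});

// Microfrontend B: publish without any direct reference to A.
bus.publish('mife-a', { type: 'set-user', user: { name: 'Jane' } });
```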

This is actually the closest we can get to the original code, but with loose coupling instead of an unreliable direct approach.

Decouple via a common event bus provided by the app shell.

The application shell may also live differently than illustrated in the diagram above. The important part is that each microfrontend can access the event bus independently.

Sharing Data

While dispatching events or enqueuing a message seems straightforward in a loosely coupled world, data sharing does not.

There are multiple ways to approach this problem:

single location, multiple owners — everyone can read and write

single location, single owner — everyone can read, but only the owner can write

single owner, everyone needs to get a copy directly from the owner

single reference, everyone with a reference can actually modify the original

Due to loose coupling we should exclude the last two options. We need a single location — determined by the application shell.

Let’s start with the first option:
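A sketch of the "single location, multiple owners" variant (in the browser, getData and setData would hang off window; the names are assumptions):

```javascript
// A central store on the application shell that everyone can
// read and write.
const data = {};

globalThis.getData = (name) => data[name];

globalThis.setData = (name, value) => {
  data[name] = value;
};
```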

Very simple, yet not very effective. We would at least need to add some event handlers to be informed when the data changes.

The diagram below shows the read and write APIs attached to the DOM.

Reading and writing shared data via the DOM.

The addition of change events only affects the setData function:

While having multiple “owners” may have some benefits it also comes with lots of problems and confusion. Alternatively, we can come up with a way of only supporting a single owner:
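A sketch of the single-owner variant (the boolean return value signaling whether the write was accepted is an addition for illustration):

```javascript
// Single owner per entry: the first writer claims ownership,
// and later writes must present the same owner name.
const data = {};
const owners = {};

globalThis.getData = (name) => data[name];

globalThis.setData = (owner, name, value) => {
  const currentOwner = owners[name];

  if (currentOwner === undefined || currentOwner === owner) {
    owners[name] = owner;
    data[name] = value;
    return true;
  }

  // Writes from anyone else are rejected.
  return false;
};
```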

Here, the first parameter has to refer to the name of the owner. In case no one has yet claimed ownership we accept any value here. Otherwise, the provided owner name needs to match the current owner.

This model certainly seems charming at first; however, we’ll run into some issues regarding the owner parameter quite soon.

One way around this is to proxy all requests.

Centralized API

Global objects. Well, they certainly are practical and very helpful in many situations. In the same way, they are also the root of many problems. They can be manipulated. They are not very friendly for unit testing. They are quite implicit.

An easy way out is to treat every microfrontend as a kind of plugin that communicates with the app shell through its own proxy.

An initial setup may look as follows:
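On the microfrontend side, a sketch could look like this (the shape of the api object, e.g. api.subscribe, is an assumption):

```javascript
// Each microfrontend's entry script: instead of touching globals,
// it attaches a setup function to its own script element, which
// the app shell later calls with a tailored API proxy.
document.currentScript.setup = (api) => {
  api.subscribe('mife-a', (msg) => {
    // ... handle messages addressed to this microfrontend
  });
};
```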

Every microfrontend may be represented by a set of (mostly JS) files — brought together by referencing a single entry script.

Using a list of available microfrontends (e.g., stored in a variable microfrontends), we can load all microfrontends and pass in an individually created API proxy.
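A sketch of the shell-side loader (createApi is an assumed factory implemented by the shell; the microfrontends list could come from configuration or a feed):

```javascript
// In the app shell: load every known microfrontend and hand each
// one its own API proxy once its entry script has run.
microfrontends.forEach((mife) => {
  const script = document.createElement('script');
  script.src = mife.url;
  // The entry script assigned setup via document.currentScript,
  // so it is available on the script element after loading.
  script.onload = () => script.setup(createApi(mife));
  document.body.appendChild(script);
});
```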

Wonderful! Now please note that currentScript is required for this technique, so IE 11 or earlier will require special attention.

The diagram below shows how the central API affects the overall communication in case of shared data.

The APIs to mediate the shared data are distributed after a global registration.

The nice thing about this approach is that the api object can be fully typed. Also, the whole approach allows progressive enhancement, since it just passively declares a glue layer (the setup function).

This centralized API broker is definitely also helpful in all the other areas we’ve touched so far.

Activation Functions

Microfrontends are all about “when is it my turn?” and “where should I render?”. The most natural way to implement this is by introducing a simple component model.

The simplest one is to introduce paths and a path mapping:
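A path mapping could be sketched like this (the route table and the longest-prefix rule are purely illustrative):

```javascript
// Which microfrontend component is responsible for which route.
const routes = {
  '/products': { name: 'products' },
  '/checkout': { name: 'checkout' },
};

// Determine the active component for a path; the longest matching
// prefix wins so that nested routes stay with their owner.
function getActive(path) {
  const match = Object.keys(routes)
    .filter((route) => path === route || path.startsWith(route + '/'))
    .sort((a, b) => b.length - a.length)[0];
  return match ? routes[match] : undefined;
}
```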

The lifecycle methods now depend fully on the component model. In the simplest approach, we introduce load, mount, and unmount.

The checking needs to be performed from a common runtime, which can be simply called “Activator” as it will determine when something is active.

Introducing a runtime activator for performing the activity checks.
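A minimal activator could be sketched as follows (createActivator, the resolver callback, and the container argument are assumptions; load, mount, and unmount are the lifecycle methods of the simple component model):

```javascript
// On every path change, unmount the previously active component
// and mount the newly resolved one.
function createActivator(resolve) {
  let current;

  return async function activate(path, container) {
    const next = resolve(path);

    if (next !== current) {
      // Deactivate the component that was active so far, if any.
      if (current) {
        current.unmount(container);
      }

      current = next;

      // Activate the new one: load its code, then mount it.
      if (next) {
        await next.load();
        next.mount(container);
      }
    }
  };
}
```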

What these look like is still pretty much up to us. For instance, we can already provide the element of an underlying component, essentially resulting in an activator hierarchy. Giving each component a URL and still being able to compose them together can be very powerful.

Component Aggregation

Another possibility is some form of component aggregation. This approach has several benefits; however, it still requires a common layer for mediation purposes.

While we can use any (or at least most) frameworks to provide an aggregator component, in this example we will do it with a web component, just to illustrate the concept in pure JavaScript. Actually, we will use LitElement, a small abstraction on top of plain web components, to keep the code brief.

The basic idea is to have a common component that can be used whenever we want to include “unknown” components from other microfrontends.

Consider the following code:
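(A sketch; the tag names, headline, and reference names such as "recommendations" and "basket-info" are illustrative.)

```javascript
import { LitElement, html } from 'lit-element';

// A product page that embeds components from other microfrontends
// without knowing where they come from -- only by name, via the
// component-reference aggregator.
customElements.define('product-page', class extends LitElement {
  render() {
    return html`
      <div class="product">
        <h1>Product details</h1>
        <!-- rendered by whoever registered "recommendations" -->
        <component-reference name="recommendations"></component-reference>
        <!-- rendered by whoever registered "basket-info" -->
        <component-reference name="basket-info"></component-reference>
      </div>
    `;
  }
});
```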

Here, we create a new web component that represents our product page. The page already comes with its own code; however, somewhere in this code we want to use other components coming from different microfrontends.

We should not know where these components come from. Nevertheless, using an aggregator component (component-reference) we can still create a reference.

Let’s look how such an aggregator may be implemented.
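One possible sketch (the registry, globalThis.componentRefs, maps a reference name to the custom-element tags registered for it; its shape is an assumption):

```javascript
import { LitElement, html } from 'lit-element';

customElements.define('component-reference', class extends LitElement {
  static get properties() {
    return { name: { type: String } };
  }

  render() {
    const registry = globalThis.componentRefs || {};
    const tags = registry[this.name] || [];

    // lit-html can render real DOM nodes, so we create one element
    // per registered tag and let each registered component take over.
    return html`<div>${tags.map((tag) => document.createElement(tag))}</div>`;
  }
});
```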

We still need to add registration capabilities.
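A sketch of that registration (names are assumptions; it fills the registry the aggregator reads from):

```javascript
// Microfrontends announce their components under a well-known
// name at startup.
const registry = (globalThis.componentRefs = globalThis.componentRefs || {});

globalThis.registerComponent = (name, tag) => {
  registry[name] = [...(registry[name] || []), tag];
};
```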

Obviously, a lot has been left aside here: how to avoid collisions, how to forward attributes/props accordingly, robustness and reliability enhancements (e.g., reactivity when the references change), further convenience methods…

The list of missing features is long here, but keep in mind that the code above should only show you the idea.

The diagram below shows how the microfrontends can share components.

Using an aggregator component for sharing components.

Usage of this is as simple as:
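For instance, in any microfrontend's template ("basket-info" is an assumed registration name):

```javascript
import { html } from 'lit-element';

// Whoever registered components under "basket-info" supplies
// the actual content.
html`<component-reference name="basket-info"></component-reference>`;
```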

Conclusion

There are many, many possible patterns to apply when loose coupling should be followed. In the end, though, you’ll need a common API. Whether that API is the DOM or comes from a different abstraction is up to you. Personally, I favor the centralized API for its sandboxing and mocking capabilities.

Learn More