How to propagate data through reactive pipelines: the Good, the Bad and the Ugly way.

Propagating data through reactive pipelines is a very common development concern that arises when building reactive applications on top of any Reactive Streams implementation (e.g. Project Reactor, RxJava, Akka Streams).

We’ll be going through the Good, the Bad and the Ugly of propagating information downstream, using Project Reactor as our Reactive Streams implementation of choice.

NOTE: if you’re quite familiar with Project Reactor and reactive programming already, you can jump to my demo Spring Boot application on GitHub and dig through the source code; it’s quite straightforward!

The Bad

One of the most common solutions to the data propagation issue is the use of local (effectively final) variables, which can either be used immediately within the scope of the current method or passed on as extra parameters to other methods.

The pros:

quick and dirty;

that’s it…

The cons:

encourages you to build longer methods in order to re-use the same local variable in multiple pipeline steps;

alternatively pollutes your API by adding extra method parameters whenever you need to refactor the code into smaller pieces;

code becomes hard to maintain very quickly.

Example:

Controller snippet:

A controller which uses local variables to pass on data through reactive pipelines
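A minimal sketch of this pattern (all class and method names are hypothetical, Spring annotations left out so that only reactor-core is needed) could look like:

```java
import reactor.core.publisher.Mono;

// Hypothetical handler: the username is captured in an effectively final
// local variable and then has to be threaded through every pipeline step
// (and every method) that needs it.
class LocalVariableGreetingHandler {

    Mono<String> greet(String principalName) {
        String username = principalName; // effectively final local variable
        return loadGreeting()
                // the local variable is captured by the lambda
                .flatMap(greeting -> Mono.just(greeting + ", " + username + "!"));
    }

    private Mono<String> loadGreeting() {
        return Mono.just("Hello");
    }
}
```

As soon as this method is split into smaller pieces, `username` must become a parameter of each new method, which is exactly the API pollution described below.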

Polluted service snippet:

The corresponding service with extra parameters in the method signature
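The corresponding service might look like the following sketch (hypothetical names), where the extra parameters exist purely to ferry contextual data downstream:

```java
import reactor.core.publisher.Mono;

// Hypothetical service: 'username' and 'requestId' are not business input,
// they are contextual data forced into the signature by the caller.
class PollutedGreetingService {

    Mono<String> greet(String greeting, String username, String requestId) {
        return Mono.just("[" + requestId + "] " + greeting + ", " + username + "!");
    }
}
```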

The Ugly

Another prevalent solution is the use of Tuples, which aggregate multiple pieces of data into a single object that gets propagated downstream while still allowing access to each individual component.

The pros:

no extra parameters are needed to propagate data downstream;

no need to create our own aggregator POJOs;

good for propagating mutable data downstream;

Tuples are a Project Reactor component, therefore we must be doing things the Reactor way, am I right? 😆

The cons:

method signatures become quite long, being filled with generics declarations;

the code that’s required to handle tuples is definitely ugly;

code becomes hard to read at first glance.

Example:

Controller snippet:

A controller which uses tuples to pass on data through reactive pipelines
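A sketch of the pattern (hypothetical names; only reactor-core assumed) might look like this, with `Mono.zip` packing two pieces of data into a `Tuple2` that travels downstream as a single value:

```java
import reactor.core.publisher.Mono;
import reactor.util.function.Tuple2;

// Hypothetical handler: the tuple is handed to the service as one object,
// so no extra method parameters are needed.
class TupleGreetingHandler {

    private final TupleGreetingService service = new TupleGreetingService();

    Mono<String> greet(String principalName, String requestId) {
        return Mono.zip(Mono.just(principalName), Mono.just(requestId))
                   .flatMap(service::greet); // Mono<Tuple2<String, String>>
    }
}

class TupleGreetingService {
    Mono<String> greet(Tuple2<String, String> tuple) {
        return Mono.just("[" + tuple.getT2() + "] Hello, " + tuple.getT1() + "!");
    }
}
```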

Ugly service snippet:

The corresponding service with an ugly method signature and even uglier code to handle the tuple
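The service side of this sketch (again, hypothetical names) shows why the approach earns the “Ugly” label:

```java
import reactor.core.publisher.Mono;
import reactor.util.function.Tuple2;

// Hypothetical service: the signature is cluttered with generics and the body
// must unpack the tuple by position, which reads poorly and is easy to mix up.
class UglyGreetingService {

    Mono<String> greet(Tuple2<String, String> usernameAndRequestId) {
        String username = usernameAndRequestId.getT1();   // positional access...
        String requestId = usernameAndRequestId.getT2();  // ...carries no names
        return Mono.just("[" + requestId + "] Hello, " + username + "!");
    }
}
```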

The Good

A far less common and often unknown solution is the use of Project Reactor’s Context, a map-like structure that is automagically and transparently propagated through the whole reactive pipeline and can easily be accessed at any point by calling the Mono.subscriberContext() static method.

The context can be populated at subscription time by appending either the subscriberContext(Function) or the subscriberContext(Context) method invocation to the end of your reactive pipeline.
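Both variants can be sketched as follows (key and value are hypothetical; note that in Reactor 3.4+ the subscriberContext operators were deprecated in favour of contextWrite):

```java
import reactor.core.publisher.Mono;
import reactor.util.context.Context;

// Two equivalent ways to populate the Context at subscription time,
// appended at the end of the pipeline.
class ContextPopulation {

    static Mono<String> withReadyMadeContext(Mono<String> pipeline) {
        return pipeline.subscriberContext(Context.of("requestId", "abc-123"));
    }

    static Mono<String> withContextFunction(Mono<String> pipeline) {
        return pipeline.subscriberContext(ctx -> ctx.put("requestId", "abc-123"));
    }
}
```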

It is an excellent solution for propagating static, technical data about the current process and dealing with cross-cutting concerns, and should, therefore, be used for things such as the propagation of authentication contexts, static logging information, correlation ids, and transaction contexts.

The pros:

no extra parameters are needed to propagate data downstream;

method signatures are completely unscathed;

very elegant solution for dealing with cross-cutting concerns;

still doing things the Reactor way.

The cons:

not the best tool for propagation of functional, highly mutable data;

verbose compared to some of the other alternatives.

Example:

Controller snippet:

A controller which uses the Project Reactor’s context to pass on data through reactive pipelines
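A sketch of the pattern (hypothetical names; in the real demo, Spring WebFlux Security would populate the Context with the authentication instead of the manual put shown here):

```java
import reactor.core.publisher.Mono;

// Hypothetical handler: the username is written into the Reactor Context at
// subscription time and travels transparently through the whole pipeline.
class ContextGreetingHandler {

    private final ContextGreetingService service = new ContextGreetingService();

    Mono<String> greet(String principalName) {
        return service.greet()
                      .subscriberContext(ctx -> ctx.put("username", principalName));
    }
}

class ContextGreetingService {
    Mono<String> greet() {
        return Mono.subscriberContext()
                   .map(ctx -> "Hello, " + ctx.get("username") + "!");
    }
}
```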

NOTE: in the snippet above, the Spring Security authentication context is retrieved from the Project Reactor’s context since the latter is already filled with it by the Spring WebFlux Security module. Neat!

Clean service snippet:

The corresponding service signature is unscathed as data is propagated transparently!
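In isolation, such a service might be sketched like this (hypothetical names): no extra parameters anywhere, the contextual data is pulled from the Context only at the point where it is actually needed.

```java
import reactor.core.publisher.Mono;

// Hypothetical service: the signature stays clean because the data
// is fetched from the Reactor Context on demand.
class CleanGreetingService {

    Mono<String> greet() {
        return Mono.subscriberContext()
                   .map(ctx -> ctx.<String>get("username"))
                   .map(username -> "Hello, " + username + "!");
    }
}
```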

Wrapping Up

We went through three instances of the same — simple but definitely overcomplicated — example showing different approaches to data propagation in reactive pipelines.

There is no clear and absolute winner for every use case, but Project Reactor’s context certainly deserves an honorable mention for dealing with cross-cutting concerns in a way that’s both elegant and transparent.

References

For more detailed information about Project Reactor’s context, you can refer to the official Project Reactor documentation.

If you want to give a deeper look at some of the code shown above or try it out yourself, feel free to check out my demo project on GitHub.