As it turns out, only a single test case passes; all the others fail, which clearly demonstrates that the trivial implementation is not a correct one.

The numbers in the names of the test cases refer to the respective items in the Reactive Streams specification, where you can further explore the concepts behind those requirements.

It turns out that most of the problems can be eliminated with a couple of small changes, namely:

introducing an implementation of Subscription to link the publisher with its subscribers, which would emit elements according to demand,

adding some basic error handling,

adding some simple state within the subscription to correctly handle termination.

For details, please have a look at the history of the commits in the repository with the example code.

However, eventually you’re going to come to a point where the problems become less trivial and harder to solve.

Since the implementation is synchronous, there’s an issue with unbounded recursion: the subscription’s request() calls the subscriber’s onNext(), where the subscriber, in turn, calls request() again, and so on.
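A common way out of this trap, used by production-grade implementations, is a drain loop guarded by a work-in-progress counter: a re-entrant request() only records the extra demand and returns, while the outermost call emits elements in a plain loop. Below is a minimal sketch (the class names are mine, and the mandatory argument validation and error signalling are omitted for brevity):

```java
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a subscription that emits the numbers [0, end) without
// unbounded recursion, even if the subscriber calls request() from onNext().
class RangeSubscription implements Flow.Subscription {
    private final Flow.Subscriber<? super Long> subscriber;
    private final long end;
    private long cursor;                                // next element to emit
    private final AtomicLong demand = new AtomicLong(); // outstanding requests
    private final AtomicLong wip = new AtomicLong();    // work-in-progress counter
    private volatile boolean cancelled;

    RangeSubscription(Flow.Subscriber<? super Long> subscriber, long end) {
        this.subscriber = subscriber;
        this.end = end;
    }

    @Override
    public void request(long n) {
        demand.addAndGet(n); // n <= 0 handling omitted for brevity
        // If wip was non-zero, an outer request() call is already draining,
        // so this (possibly re-entrant) call just records demand and returns.
        if (wip.getAndIncrement() != 0) return;
        do { // only the outermost call reaches this loop
            while (!cancelled && cursor < end && demand.get() > 0) {
                subscriber.onNext(cursor++);
                demand.decrementAndGet();
            }
            if (!cancelled && cursor == end) {
                cancelled = true;
                subscriber.onComplete();
            }
        } while (wip.decrementAndGet() != 0);
    }

    @Override
    public void cancel() { cancelled = true; }
}

public class RecursionDemo {
    // Counts elements delivered to a subscriber that requests one-by-one
    // from within onNext() -- the classic recursion trigger.
    static long countOneByOne(long end) {
        long[] count = {0};
        Flow.Subscriber<Long> sub = new Flow.Subscriber<>() {
            Flow.Subscription s;
            public void onSubscribe(Flow.Subscription s) { this.s = s; s.request(1); }
            public void onNext(Long item) { count[0]++; s.request(1); } // re-entrant!
            public void onError(Throwable t) { }
            public void onComplete() { }
        };
        RangeSubscription subscription = new RangeSubscription(sub, end);
        sub.onSubscribe(subscription);
        return count[0];
    }

    public static void main(String[] args) {
        System.out.println(countOneByOne(100_000)); // completes without StackOverflowError
    }
}
```

The demo subscriber deliberately calls request(1) from within onNext(); with a naively recursive implementation this would blow the stack long before 100,000 elements.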

The other serious issue has to do with handling infinite demand, i.e. the subscriber requesting Long.MAX_VALUE elements, possibly several times. If you’re not careful here, you may end up either spawning too many threads or overflowing the long value in which you accumulate the demand.
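The standard remedy for the overflow part is to treat Long.MAX_VALUE as “effectively unbounded” demand and saturate instead of wrapping around. A helper along these lines (the name addCap follows the convention used internally by RxJava and Project Reactor) could look like:

```java
// Saturating addition for demand accumulation: since both operands are
// positive, a negative sum can only mean the long overflowed, in which
// case the demand is pinned at Long.MAX_VALUE ("effectively unbounded").
public class DemandMath {
    public static long addCap(long current, long requested) {
        long sum = current + requested;
        return sum < 0 ? Long.MAX_VALUE : sum;
    }

    public static void main(String[] args) {
        System.out.println(addCap(10, 5));                          // 15
        System.out.println(addCap(Long.MAX_VALUE, Long.MAX_VALUE)); // pinned at Long.MAX_VALUE
    }
}
```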

Don’t try this at home

The bottom line of the example above is that reactive components are really not trivial to implement correctly. So, unless you’re authoring yet another Reactive Streams implementation, you shouldn’t really implement them yourself, but rather use the existing implementations, which are verified with the TCK.

And if you decide to write your own implementation anyway, be sure to understand all the details of the specification and remember to run the TCK against your code.

The purpose of the new interfaces

So what are the interfaces there for, you may ask? The actual goal of including them in the JDK is to provide something called a Service Provider Interface (or SPI) layer. This should eventually serve as a unification layer for different components that have a reactive, streaming nature but expose their own custom APIs, and thus cannot interoperate with other similar implementations.

The other, equally important goal is to point the future development of the JDK in the right direction, leading to a point where the existing streaming abstractions, already present in the JDK and widely used, share some common interfaces, once again to improve interoperability.

Existing streaming abstractions

So what streaming abstractions are already there in the JDK (with streaming meaning processing large, possibly infinite, amounts of data chunk by chunk, without reading everything into memory upfront)? Those include:

java.io.InputStream / OutputStream

java.util.Iterator

java.nio.channels.*

javax.servlet.ReadListener / WriteListener

java.sql.ResultSet

java.util.stream.Stream

java.util.concurrent.Flow.*

Although all of the above abstractions expose some kind of streaming-like behavior, they lack a common API that would let you connect them easily, e.g. to use a Publisher to read data from one file and a Subscriber to write it to another.

The advantage of having such a unification layer is the ability to use a single call:

publisher.subscribe(subscriber)

to handle all the hidden complexities of reactive stream processing (like backpressure and signalling).
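You can get a taste of that single-call experience today with java.util.concurrent.SubmissionPublisher, the one Flow.Publisher implementation shipped with the JDK. A minimal sketch, with a hand-written subscriber that requests elements one at a time:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// publisher.subscribe(subscriber) in action: SubmissionPublisher takes care
// of buffering and demand tracking; the subscriber only declares how much
// it wants with request().
public class SubscribeDemo {
    static List<Integer> runDemo() throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // initial demand
                }
                public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // one-at-a-time backpressure
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            for (int i = 1; i <= 5; i++) publisher.submit(i);
        } // close() signals onComplete once all submitted items are delivered
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // [1, 2, 3, 4, 5]
    }
}
```

Note that the subscriber never worries about buffering or thread hand-off; declaring demand via request() is all it takes.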

Towards an ideal world

What could be the possible results of making the various abstractions use the common interfaces? Let’s see a few examples.

Minimum operation set

The current Reactive Streams support in the JDK is limited to the four interfaces described earlier. If you have ever used some reactive library before — Akka Streams, RxJava, or Project Reactor — you’re aware that their power lies in various stream combinators (like map or filter to name the simplest ones) available out of the box. Those combinators are, however, missing from the JDK, although you’d probably expect at least a couple of them to be available.
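To get a feel for what such a combinator involves, below is a bare-bones map-like processor built on top of the JDK’s SubmissionPublisher (a simplified take on the transform-processor example from the SubmissionPublisher Javadoc; it sidesteps proper demand management by requesting one element at a time):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.function.Function;

// A bare-bones "map" combinator: consumes items of type T, applies a function,
// and re-publishes the results of type R to its own subscribers.
class MapProcessor<T, R> extends SubmissionPublisher<R> implements Flow.Processor<T, R> {
    private final Function<? super T, ? extends R> mapper;
    private Flow.Subscription subscription;

    MapProcessor(Function<? super T, ? extends R> mapper) { this.mapper = mapper; }

    public void onSubscribe(Flow.Subscription s) { subscription = s; s.request(1); }
    public void onNext(T item) { submit(mapper.apply(item)); subscription.request(1); }
    public void onError(Throwable t) { closeExceptionally(t); }
    public void onComplete() { close(); }
}

public class MapDemo {
    static List<Integer> doubled() throws InterruptedException {
        List<Integer> results = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        MapProcessor<Integer, Integer> map = new MapProcessor<>(i -> i * 2);
        map.subscribe(new Flow.Subscriber<Integer>() {
            Flow.Subscription s;
            public void onSubscribe(Flow.Subscription s) { this.s = s; s.request(Long.MAX_VALUE); }
            public void onNext(Integer item) { results.add(item); }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete() { done.countDown(); }
        });
        try (SubmissionPublisher<Integer> source = new SubmissionPublisher<>()) {
            source.subscribe(map); // source -> map -> collecting subscriber
            for (int i = 1; i <= 3; i++) source.submit(i);
        }
        done.await();
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(doubled()); // [2, 4, 6]
    }
}
```

Even this toy version has to care about subscription bookkeeping and completion propagation, which is exactly why having such operators built into a library is so valuable.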

To solve this problem, Lightbend has proposed a POC of Reactive Streams Utilities: a library with the basic operations built in, and with the possibility to provide more complex ones as plug-ins delegating to an existing implementation, selected via a JVM system property like

-Djava.flow.provider=akka

HTTP

How about receiving a file uploaded via HTTP and forwarding it somewhere else, in a reactive fashion of course?

Since Servlet 3.1 there has been asynchronous Servlet IO. Also, starting with JDK 9 there’s a new HTTP client (which lived in the jdk.incubator.httpclient module in Java 9/10, but is considered stable from Java 11 on). Apart from a nicer API, the new client also supports Reactive Streams for input and output. Among other things, it provides a POST method that accepts a BodyPublisher (which is a Flow.Publisher<ByteBuffer>).

Now if the HttpServletRequest provided a publisher to expose the request body, uploading the received file would become:

POST(BodyPublishers.fromPublisher(req.getPublisher()))

with all the reactive features handled under the hood by that single line of code.
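The client half of that story is already real: since Java 11, HttpRequest.BodyPublishers.fromPublisher adapts any Flow.Publisher<ByteBuffer> into a request body. In the sketch below a SubmissionPublisher stands in for the hypothetical servlet-side publisher, the URL is a placeholder, and no request is actually sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.ByteBuffer;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Builds a reactive upload request from any Flow.Publisher<ByteBuffer>.
// The URI is a placeholder; sending the request would additionally need
// an HttpClient and a running server.
public class ReactiveUpload {
    public static HttpRequest buildUpload(Flow.Publisher<ByteBuffer> body) {
        return HttpRequest.newBuilder(URI.create("https://example.com/upload"))
                .POST(HttpRequest.BodyPublishers.fromPublisher(body))
                .build();
    }

    public static void main(String[] args) {
        // SubmissionPublisher stands in for the (hypothetical) servlet-side publisher.
        SubmissionPublisher<ByteBuffer> body = new SubmissionPublisher<>();
        HttpRequest request = buildUpload(body);
        System.out.println(request.method()); // POST
    }
}
```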

Database access

When it comes to a universal way to access a relational database in a reactive way, there was some hope brought by the Asynchronous Database Access API (ADBA), which, unfortunately, hasn’t made it to the JDK so far.

There’s also R2DBC, an endeavor to bring a reactive programming API to relational data stores. It currently supports H2 and Postgres, and integrates with Spring Data, which may be an advantage that helps with wider adoption.

Then there are some vendor-specific asynchronous drivers. But we’re still missing a perfect solution that would let you do something like:

Publisher<User> users = entityManager

.createQuery("select u from User u")

.getResultPublisher()

which is basically a plain old JPA call, just with a Publisher of users instead of a List.

This is still not reality

Just to remind you once again: the above examples are a look into the future; they are not reality yet. Which direction the JDK and the ecosystem will take is a matter of time and community effort.

An actual use of the unification layer

Although the unification of HTTP and databases is not yet there, it’s already possible to actually connect the various Reactive Streams implementations using the unified interfaces found in the JDK.

In this example I’m going to use Project Reactor’s Flux as the publisher, Akka Streams’ Flow as the processor, and RxJava as the subscriber. Note: the example code below uses Java 10’s var, so if you plan to try it yourself, be sure to have a recent enough JDK.

Looking at main you can see that there are three components that form the pipeline: the reactorPublisher , the akkaStreamsProcessor and the Flowable (which prints to standard output).

When you look at the return types of the factory methods, you will notice that they are nothing more than the common Reactive Streams interfaces (a Publisher<Long> and a Processor<Long, Long> ), which are used to seamlessly connect the different implementations.

Also, as you can see, the various libraries don’t expose the unified types out of the box (internally they use their own type hierarchies), so some glue code is needed to convert their internal types to the ones from java.util.concurrent.Flow.*, like Reactor’s JdkFlowAdapter or Akka’s JavaFlowSupport.

Last but not least, you can spot some differences in how the libraries expose the internals of their streaming engines: while Project Reactor tends to hide the internals completely, Akka Streams requires you to explicitly define a materializer, the runtime for the streaming pipeline.

Summary

Here are a couple of key takeaways from this article:

the Reactive Streams support in the JDK is not a full implementation of the specification, but only the common interfaces,

the interfaces are there to serve as an SPI (Service Provider Interface) — a unification layer for different Reactive Streams implementations,

implementing the interfaces yourself is not trivial and not recommended, unless you’re creating some new library; if you decide to implement them, make sure that all the tests from the TCK are green — this gives you a good chance that your library will work smoothly with other reactive components.

If you wish to experiment with the TCK and the SimplePublisher example, the code is available on my GitHub:

And if you’re interested in digging deeper into Reactive Streams implementation, I truly recommend the Advanced Reactive Java blog.