by Jochen Mader

The build pipeline mentioned in this post will be presented at JUG Frankfurt (24.6.2015)



Spring is doing it.

OSGi is doing it.

Vert.x is doing it.

And guess what: Even Java EE is finally doing it.

I am talking about Fat Jar deployments: The technique of deploying a runnable Java application as a single jar, batteries included.

A note before we start: The purpose of this article is to introduce the general concepts and the benefits you get from using Fat Jars in your development pipeline. I won’t go into the nitty-gritty details of the different approaches.

Why?

The past years have been dominated by the notion that runtime and application are to be kept separate. We split our teams along the lines of development and operations (don’t worry, I won’t be writing about DevOps, that’s what other people already did).

In theory the devs would build their application against a certain version of some arcane specification and deliver this to operations who would in turn deploy it to their sacred servers.

So much for the theory.

What’s wrong?

But nothing kills a nice theory faster than looking at how things turned out after applying it. In fact, we ran into a multitude of problems once we started separating runtime and application:

Minor differences (even on the patch level) between the version used in production and the one used by the devs can cause havoc and are extremely hard to figure out.

Operations has to provide support for each and every different version of available runtimes causing a growing work backlog in a notoriously understaffed department.

Debugging can be pure hell, as it is close to impossible to reproduce the production system locally.

Setting up a local work environment often gets to the point where people start handing around zipped versions of their IDEs to be able to work.

I am not going to tell you that Fat Jars are going to solve all of these problems. Especially because it’s not the Fat Jars solving the problem but the processes behind their creation.

But let’s start from the beginning.

What are they?

First, a definition. As mentioned before, a Fat Jar is a runnable jar that includes all of its dependencies. Runnable jars are created by adding a Main-Class attribute to the MANIFEST.MF:

Manifest-Version: 1.0

Main-Class: com.example.MainClass

If you did this for a jar file named myrunnable.jar, you can now run java -jar myrunnable.jar to start it. This is easy enough for very simple applications but won’t work for anything beyond that. The reason is that a typical Java application is perhaps 1% your own code and 99% external dependencies. These need to be bundled with your jar in some way.
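To make the manifest example concrete, here is a minimal class such a Main-Class entry could point to (the name com.example.MainClass in the manifest is just an example; the package declaration is omitted here for brevity):

```java
// A minimal main class for a runnable jar. The manifest's Main-Class
// attribute must name exactly this class (including its package).
public class MainClass {

    public static void main(String[] args) {
        System.out.println(greeting());
    }

    static String greeting() {
        return "Hello from a runnable jar";
    }
}
```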

Actually, there are three ways to do that.

The pure java way

Trying to stick with pure Java shows that people didn’t really think about Fat Jars when they added the Main-Class parameter to the manifest: there is no way to tell the JVM to add jars nested inside another jar to the classpath. What we have to do instead is unzip the dependencies and package their contents into the actual Fat Jar.
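The unzip-and-repackage step can be sketched in plain Java. This is a deliberately minimal sketch of what shade-style plugins do: copy every entry of each dependency jar into one combined archive. (Real plugins also merge duplicate entries such as service files; this sketch simply skips duplicates. The class name FatJarMerger is made up for illustration.)

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class FatJarMerger {

    // Copies all entries from each input jar into a single output jar.
    // Duplicate entry names are skipped; real shade plugins instead merge
    // some of them (e.g. META-INF/services files) via transformers.
    public static void merge(List<Path> inputJars, Path outputJar) throws IOException {
        Set<String> seen = new HashSet<>();
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(outputJar))) {
            for (Path jar : inputJars) {
                try (ZipInputStream in = new ZipInputStream(Files.newInputStream(jar))) {
                    ZipEntry entry;
                    while ((entry = in.getNextEntry()) != null) {
                        if (!seen.add(entry.getName())) continue; // skip duplicates
                        out.putNextEntry(new ZipEntry(entry.getName()));
                        in.transferTo(out); // copies the current entry's bytes
                        out.closeEntry();
                    }
                }
            }
        }
    }
}
```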

As this process is quite error prone if done manually, we had better leave it to the build system. Most build systems provide this capability in the form of a plugin. Here are a few examples and the frameworks that use them:

Maven Shade Plugin used by Spring Boot and Vert.x 3

Gradle Shadow Plugin used by Vert.x 3

SBT Assembly Plugin that can be used to package Akka applications

Capsule from Parallel Universe for the really tough cases (e.g. native libraries)

They are quite easy to handle and, looking at the frameworks using them, it’s only fair to call them battle-proven.

The following snippet shows how Vert.x 3 uses the Maven Shade Plugin to create a runnable Fat Jar:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <manifestEntries>
              <Main-Class>io.vertx.core.Starter</Main-Class>
              <Main-Verticle>io.vertx.example.HelloWorldVerticle</Main-Verticle>
            </manifestEntries>
          </transformer>
          <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
            <resource>META-INF/services/io.vertx.core.spi.VerticleFactory</resource>
          </transformer>
        </transformers>
        <artifactSet>
        </artifactSet>
        <outputFile>${project.build.directory}/${artifactId}-${project.version}-fat.jar</outputFile>
      </configuration>
    </execution>
  </executions>
</plugin>

And the same using Gradle:

shadowJar {
  classifier = 'fat'
  manifest {
    attributes 'Main-Class': 'io.vertx.example.HelloWorldEmbedded'
  }
  mergeServiceFiles {
    include 'META-INF/services/io.vertx.core.spi.VerticleFactory'
  }
}

Pretty convenient and easy to grasp.

The tainted-but-fun way

The lack of real modularisation has been plaguing the JVM since its very first version (something that will hopefully get better with JDK 9 and the inclusion of Project Jigsaw). The Sea of Jars and its associated problems prompted several teams to come up with frameworks to work around this limitation. Some notable projects in this area are OSGi, JBoss Modules and Vert.x 2 (they abandoned their module system in Vert.x 3).

All of them introduced custom class loaders and different strategies to resolve dependencies. By abandoning the default class loaders they were also able to add more features. One of them is the ability to load jars packaged inside a Fat Jar.

Vert.x 2, for example, provided a custom module system that allowed putting jars into a mods directory inside the Fat Jar. Its custom starter built a class loader hierarchy that put the embedded jars on the classpath.
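The core trick behind such loaders can be sketched in a few lines: pull the nested jar out of the outer jar and hand it to a class loader. This is only an illustration of the idea, not the actual Vert.x 2 implementation (real module systems add caching, isolation rules, and their own delegation hierarchy; the class name NestedJarLoader is made up):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.jar.JarFile;

public class NestedJarLoader {

    // Extracts a jar stored inside another jar (e.g. under mods/) to a
    // temporary file and returns a class loader that can load classes and
    // resources from it.
    public static URLClassLoader loaderFor(Path outerJar, String nestedEntry) throws IOException {
        Path extracted = Files.createTempFile("nested", ".jar");
        try (JarFile jar = new JarFile(outerJar.toFile());
             InputStream in = jar.getInputStream(jar.getEntry(nestedEntry))) {
            Files.copy(in, extracted, StandardCopyOption.REPLACE_EXISTING);
        }
        return new URLClassLoader(new URL[]{extracted.toUri().toURL()},
                NestedJarLoader.class.getClassLoader());
    }
}
```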

First of all: I really like module systems as they make it a lot easier to reason about the contents of your class path at a given point in time.

It also makes it a lot easier to figure out what dependencies are part of your application. Remember: The other solution is to unzip everything into one classes-folder, abandoning a clear separation between dependencies.

Angry side remark: I call this approach tainted because many developers in the Java world regard these frameworks as witchcraft. To me it’s quite baffling to see what lengths people will go to to prevent their introduction in a project. I even remember arguing with architects who were trying to sell Maven as the “better” approach for the problems OSGi (or any other module system) solves. Yes, they all add boilerplate and ceremony to your application, but in the end I prefer being able to reason over a runtime dependency tree to wild guesswork in a Sea of Jars.

Somewhere in between

Recently a colleague of mine pointed me to a very interesting project from the people behind Quasar. Their approach is a mix of the two worlds I just introduced, and a little more. The Capsule project provides the infrastructure to package dependencies inside a jar and to load them at runtime. And all that without a custom module format.

So far I can say that it’s as simple as they claim and a very appealing approach. I will refrain from going into more detail until I’ve had time to play around with it some more. Watch out for a follow-up on that topic.

What we get

Whichever one you pick, you will end up with a nice package containing almost everything needed to run the application (with the exception of the OS and the JVM, but that’s what Docker is for). If you got to this point you can already give yourself a nice pat on the back. Your runtime is now part of your build. You develop, build and test on the exact same version of your runtime as you will have in production.

Updating has become a lot simpler and more transparent.

There is only one source of truth: your pom.xml/build.gradle/build.sbt file. If an update is required, you adjust a version in there, the build starts and hopefully succeeds with all tests showing a nice green. If one goes red, you just saved yourself a night of debugging production issues.

What about Docker?

When it comes to delivering completely packaged software there is no way around Docker, and I definitely use Docker to ship my Fat Jars. I simply don’t like the idea of deploying a WAR/EAR (or whatever else you want to ship) inside an application server running inside a Docker container, because it doesn’t help me while developing the actual application or while running unit tests.
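A Docker image wrapping a Fat Jar can stay very small, since the jar already carries everything except the OS and the JVM. A minimal sketch (the base image and jar name below are assumptions, not from the original post):

```dockerfile
# Minimal sketch: base image and jar path are examples only.
FROM java:8-jre
COPY target/myapp-1.0.0-fat.jar /opt/myapp/app.jar
CMD ["java", "-jar", "/opt/myapp/app.jar"]
```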

Putting things together

Without an appropriate build pipeline backing them you won’t get all the nice things out of Fat Jars. Take a look at the following image.

The only manual task in this chain is the check-in to Git. After that, Jenkins takes over.

After passing unit and integration tests we have a code-analysis step (you are using SonarQube quality gates or something comparable, aren’t you?).

Now we use Docker to package everything together and deploy it to our Load Test Server to perform automated load tests. And that’s where we finally become fully integrated with our production environment.

The Load Test Server runs the same configuration as production, and Docker takes care that we get everything else in a specific version.

After that we could even deploy directly to other instances.

The End

Version management is one of the biggest problems in IT. Fat Jars are a good start to get versioning and updating under control. They are by far not the ultimate silver bullet, but combining them with Docker gives us tremendous insight into our environment.

The “need for speed” Uwe wrote about is highly dependent on automating every possible step and making things transparent to operations and development.

Fat Jars give us this transparency as they concentrate version information in a single place and make reproducing a system state as easy as checking out a branch/tag from Git.