The Java Virtual Machine (JVM) is designed for optimal performance in long-running processes, such as application servers, without any specific requirement for fast startup times.

Although performant on long-running tasks thanks to its Just-In-Time compilation capabilities, the JVM loses ground to native executables when it comes to startup time. And despite stack-specific recipes to mitigate this weakness, the JVM does not shine as the go-to solution for Functions-as-a-Service (FaaS) use-cases.

Still, the Java language benefits from a very rich ecosystem of libraries, frameworks, and tooling that would be valuable when developing structured FaaS services.

So, what if we could use the very same JVM languages that we love without the JVM? We can achieve that with Ahead-of-Time compilation of our Java bytecode.

Just-In-Time compilation vs. Ahead-of-Time compilation

Traditional compilation is the transformation of source files written in a programming language into a native executable for a specific platform. This is the realm of C, C++, and others.

On one hand, the final binary is close to the bare metal, so it starts quite fast. On the other hand, all possible optimizations are applied at compile time, without knowing anything about the final workload. That’s Ahead-of-Time compilation.

When the JVM was created, its designers took a radically different path: by inserting a platform - the JVM - between the bytecode and the operating system, code compilation can occur just before execution.

This Just-In-Time compilation allows the JVM to understand the workload over time, and to optimize the compiled code depending on its nature. That’s the reason why, over time, the JVM can become more performant than a native executable doing the same work.

Advantages of Just-In-Time compilation

Just-In-Time compilation also enables features like reflection, a powerful capability that allows querying any class at runtime for its members - attributes and methods.

Reflection is extremely useful when handling generic classes that are unknown to the framework at compile time. For example, Spring can create an instance of any class using different kinds of configuration. Notably, the legacy XML way of configuring beans relies heavily on reflection.

Likewise, Hibernate is able to map an instance’s attributes to a database table’s columns. It does so by inspecting the instance’s class at runtime and listing the class’ attributes. Then, for each attribute, it queries its value.
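As an illustration - and not Hibernate’s actual code - here is a minimal sketch of that technique, using the standard reflection API to list an object’s attributes and read their values; the User class is made up for the example:

```java
import java.lang.reflect.Field;

public class FieldDump {

    // Hypothetical entity, standing in for any mapped class
    static class User {
        String name = "alice";
        int age = 30;
    }

    public static void main(String[] args) throws Exception {
        User user = new User();
        // List every declared field, then read its value on the instance,
        // much like an ORM would do to fill a table row
        for (Field field : User.class.getDeclaredFields()) {
            field.setAccessible(true);
            System.out.println(field.getName() + " = " + field.get(user));
        }
    }
}
```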

However, using the reflection API incurs a performance hit compared to a standard call, an additional disadvantage for processes that need to start up fast, as in our FaaS use-case.
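To make the overhead concrete, here is a hedged sketch contrasting a direct call with its reflective equivalent - the reflective path performs a method lookup and access checks at runtime, which is where the extra cost comes from (the class and method names are invented for the example):

```java
import java.lang.reflect.Method;

public class ReflectiveCall {

    public String greet() {
        return "hello";
    }

    public static void main(String[] args) throws Exception {
        ReflectiveCall target = new ReflectiveCall();
        // Direct call: resolved at compile time, cheap
        String direct = target.greet();
        // Reflective call: runtime lookup of the method, then invocation
        Method method = ReflectiveCall.class.getMethod("greet");
        String reflective = (String) method.invoke(target);
        // Both paths produce the same result, at different costs
        System.out.println(direct.equals(reflective));
    }
}
```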

Ahead-of-Time compilation of Java applications

Features like reflection and proxies, heavily used by Dependency-Injection-centered frameworks such as Spring, pose a serious challenge for AoT compilation.

Although practical - they pave the way for listeners and interceptors - those advanced features decouple the relationships between classes. Those relationships must be declared manually in configuration, so that the compiler knows about them - you guessed it - ahead of time.
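For GraalVM’s native-image tool, for instance, such declarations take the form of a JSON reflection configuration file passed via the -H:ReflectionConfigurationFiles option; a minimal sketch - the class name is hypothetical - could look like:

```json
[
  {
    "name": "com.example.Foo",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```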

Recently, two new JVM frameworks appeared with this approach in mind: Micronaut and Quarkus.

Micronaut claims to:

- support Java, Groovy and Kotlin
- be compatible with GraalVM (more on that later)
- be fully reactive
- offer Aspect-Oriented Programming (AOP) capabilities
- integrate seamlessly with the Spring framework

Quarkus boasts to:

- be a first-class GraalVM citizen
- support both Imperative Programming and Reactive Programming
- integrate with other frameworks, libraries and platforms, such as Hibernate, RestEasy and Eclipse MicroProfile

Both are completely free - as in beer, and as in speech (Apache License 2.0).

Both frameworks implement solutions to transform runtime-dependent relationships into explicit declarations known ahead of time, at the cost of some development-time overhead.

Let’s see an example of this using Micronaut.

Enabling Ahead-of-Time compilation with Micronaut

Micronaut provides a command-line application to kickstart one’s project. However, in order to get into the details, we will create the project from scratch.

Let’s start with a simple codebase: two classes, Foo and Bar.

Foo is a simple class.

```java
public class Foo {

    public String foo() {
        return "Foo";
    }
}
```

Foo is a dependency of Bar.

```java
public class Bar {

    private final Foo foo;

    public Bar(Foo foo) {
        this.foo = foo;
    }

    public String bar() {
        return foo.foo() + "Bar";
    }
}
```

The Main class does the job of manually injecting a Foo instance into a Bar instance:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {

    private static final Logger LOGGER = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) {
        Foo foo = new Foo();
        Bar bar = new Bar(foo);
        LOGGER.info(bar.bar());
    }
}
```

It’s possible to create a standalone JAR out of this simple setup with a Maven project and the maven-shade-plugin:

```shell
$ mvn clean package
```

Executing the newly created standalone JAR yields the expected result:

```shell
$ java -jar target/micronaut-from-scratch-1.0-SNAPSHOT.jar
08:37:45.185 [main] INFO com.exoscale.syslog.micronaut.Main - FooBar
```

The manual dependency injection approach above works as expected, and should be the preferred way for simple applications. However, as mentioned, frameworks such as Spring would require a lot of configuration overhead to make such a setup compilable ahead of time.

Hence, let’s migrate our codebase to Micronaut. Micronaut relies on Java standards, namely JSR-330, which specifies the javax.inject package.

The first step is to actually add the Micronaut injection model as a dependency:

```xml
<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-inject</artifactId>
    <version>1.0.4</version>
</dependency>
```

The second step is to promote our classes to beans, so that Micronaut can create instances of them and add them to the application context. Since javax.inject is a transitive dependency of micronaut-inject, we can use its @Singleton annotation.

```java
@Singleton
public class Foo {
    // The rest of the class is unchanged
}

@Singleton
public class Bar {
    // The rest of the class is unchanged
}
```

Now, there is no longer any need to create instances manually as in the previous version: Micronaut takes care of that. The only required step is to kick-start the process in the main method:

```java
public static void main(String[] args) {
    ApplicationContext context = ApplicationContext.run();
}
```

To get a reference to the Bar instance, one needs to query the application context:

```java
Bar bar = context.getBean(Bar.class);
```

Last but not least, because Micronaut works its magic at compile time, it needs to hook into the compilation process. This is achieved through a compile-time annotation processor:

```xml
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.8.0</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <annotationProcessorPaths>
            <path>
                <groupId>io.micronaut</groupId>
                <artifactId>micronaut-inject-java</artifactId>
                <version>${micronaut.version}</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>
```

Changes are readily apparent when compiling the project, and checking the output:

```shell
$ mvn clean compile
$ ls -1 target/classes/com/exoscale/syslog/micronaut
$BarDefinition.class
$BarDefinitionClass$$AnnotationMetadata.class
$BarDefinitionClass.class
$FooDefinition.class
$FooDefinitionClass$$AnnotationMetadata.class
$FooDefinitionClass.class
Bar.class
Foo.class
Main.class
```

While Foo.class, Bar.class and Main.class are expected - and are no different from the previous versions - Micronaut’s enhanced compilation process created a bunch of additional Definition classes.

Those are used for injection, among other things, as is readily apparent in the $BarDefinition class, lines 31 to 35:

```java
public Bar build(BeanResolutionContext var1, BeanContext var2, BeanDefinition<Bar> var3) {
    Bar var4 = new Bar((Foo)super.getBeanForConstructorArgument(var1, var2, 0));
    var4 = (Bar)this.injectBean(var1, var2, var4);
    return var4;
}
```

This is where Micronaut takes care of getting a reference to an existing Foo instance and injecting it during the creation of a new Bar instance, clearing the way for AoT compilation of our bytecode.

Micronaut not only generates additional classes, it also provides a dedicated reflection-like API that takes advantage of them. As with the standard Java API, it allows querying a class’ structure at runtime.

However, the difference is that the data is already available in those generated classes, whereas standard reflection computes it at runtime.

This is the main benefit of the Java-AoT approach: improving the performance of reflection and its related features.

Bytecode compilation with GraalVM and Substrate VM

At this point, the application is still bound to the JVM’s JIT approach. Enter another kind of AoT compilation: compiling regular bytecode, meant to run inside the JVM, into native code aimed at a specific operating system.

Recently, Oracle published GraalVM, a “universal virtual machine” that allows bytecode compilation of different languages (e.g. Ruby, Python, JavaScript), seamless integration of those languages in the same project, and much more. Among its many features, it offers Substrate VM, a framework that enables AoT compilation of bytecode.

By processing the bytecode generated by Micronaut, we can create a native executable that starts up very fast, the primary requirement of our Functions-as-a-Service use-case.

The first step is to actually download GraalVM. It contains the native-image binary that transforms bytecode into native executables. Two packages are available:

- one is for evaluation purposes only
- the other is free, Open Source, and available on GitHub

This free version is more than enough for our needs.

Follow the instructions there. Whether you have other JDKs installed or not, don’t forget to point the JAVA_HOME environment variable to the relevant location, e.g. on macOS it’s $GRAALVM_HOME/Contents/Home. Barring that, you won’t have access to the native-image binary inside the build.
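Assuming, for example, that the archive was extracted to ~/graalvm - the actual folder name depends on the release you downloaded - the macOS setup could look like:

```shell
# Hypothetical extraction folder; adjust to your actual location
export GRAALVM_HOME=~/graalvm
# On macOS, the JDK home lives under Contents/Home
export JAVA_HOME=$GRAALVM_HOME/Contents/Home
# Makes native-image (and the bundled java) available on the PATH
export PATH=$JAVA_HOME/bin:$PATH
```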

Creating the native executable from our JAR

Once GraalVM has been downloaded, extracted and configured, it’s possible to create the native executable out of the existing JAR:

```shell
$ native-image --no-server -cp target/micronaut-from-scratch-2.1-SNAPSHOT.jar \
    -jar target/micronaut-from-scratch-2.1-SNAPSHOT.jar
```

With that, SubstrateVM works its magic by analyzing the existing JAR to create an executable:

```
[micronaut-from-scratch-2.1-SNAPSHOT:56326]    classlist:   2,893.12 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]        (cap):   1,322.01 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]        setup:   3,331.03 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]   (typeflow):  15,226.00 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]    (objects):  12,988.99 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]   (features):     402.79 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]     analysis:  29,118.55 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]     universe:     850.66 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]      (parse):   4,055.09 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]     (inline):   9,913.68 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]    (compile):  38,478.35 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]      compile:  53,921.30 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]        image:   2,895.77 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]        write:     650.03 ms
[micronaut-from-scratch-2.1-SNAPSHOT:56326]      [total]:  93,864.75 ms
```

Notice that there’s now a native executable named micronaut-from-scratch-2.1-SNAPSHOT at the root of the project!

While it’s possible to create the native executable manually each time, a build process worthy of the name should be reproducible and automated. Since the project uses Maven, it makes sense to configure the POM for that:

```xml
<plugin>
    <groupId>com.oracle.substratevm</groupId>
    <artifactId>native-image-maven-plugin</artifactId>
    <version>1.0.0-rc14</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>native-image</goal>
            </goals>
            <configuration>
                <mainClass>${exec.mainClass}</mainClass>
                <buildArgs>--no-server</buildArgs>
            </configuration>
        </execution>
    </executions>
</plugin>
```

Because the image generation now works with regular classes instead of an existing JAR:

- we can remove the maven-shade-plugin execution
- we need to explicitly configure the entry-point class, i.e. <mainClass>

At that point, the creation of the native image is part of the build process: every call to mvn package will trigger the native image generation. The output is pretty similar; the difference is that the generated executable is now target/com.exoscale.syslog.micronaut.main.

The DefaultEnvironment.determineCloudProvider error

Trying to execute the above executable yields an exception:

```shell
$ target/com.exoscale.syslog.micronaut.main
Exception in thread "main" com.oracle.svm.core.jdk.UnsupportedFeatureError: Accessing an URL protocol that was not enabled. The URL protocol http is supported but not enabled by default. It must be enabled by adding the --enable-url-protocols=http option to the native-image command.
	at com.oracle.svm.core.util.VMError.unsupportedFeature(VMError.java:102)
	at com.oracle.svm.core.jdk.JavaNetSubstitutions.unsupported(JavaNetSubstitutions.java:164)
	at com.oracle.svm.core.jdk.JavaNetSubstitutions.getURLStreamHandler(JavaNetSubstitutions.java:151)
	at java.net.URL.getURLStreamHandler(URL.java:60)
	at java.net.URL.<init>(URL.java:599)
	at java.net.URL.<init>(URL.java:490)
	at java.net.URL.<init>(URL.java:439)
	at io.micronaut.context.env.DefaultEnvironment.createConnection(DefaultEnvironment.java:901)
	at io.micronaut.context.env.DefaultEnvironment.isGoogleCompute(DefaultEnvironment.java:824)
	at io.micronaut.context.env.DefaultEnvironment.determineCloudProvider(DefaultEnvironment.java:806)
	at io.micronaut.context.env.DefaultEnvironment.deduceEnvironmentsAndPackage(DefaultEnvironment.java:688)
	at io.micronaut.context.env.DefaultEnvironment.getEnvironmentsAndPackage(DefaultEnvironment.java:610)
	at io.micronaut.context.env.DefaultEnvironment.<init>(DefaultEnvironment.java:184)
	at io.micronaut.context.DefaultApplicationContext$RuntimeConfiguredEnvironment.<init>(DefaultApplicationContext.java:560)
	at io.micronaut.context.DefaultApplicationContext.createEnvironment(DefaultApplicationContext.java:161)
	at io.micronaut.context.DefaultApplicationContext.<init>(DefaultApplicationContext.java:118)
	at io.micronaut.context.DefaultApplicationContextBuilder.build(DefaultApplicationContextBuilder.java:140)
	at io.micronaut.context.ApplicationContextBuilder.start(ApplicationContextBuilder.java:129)
	at io.micronaut.context.ApplicationContext.run(ApplicationContext.java:136)
	at io.micronaut.context.ApplicationContext.run(ApplicationContext.java:146)
	at com.exoscale.syslog.micronaut.Main.main(Main.java:12)
```

While the fix is pretty straightforward thanks to the detailed error message, the question is: why does the framework try to access a URL?

Like most modern frameworks, Micronaut tries as much as possible to alleviate the developer’s burden. For that reason - and because it’s Cloud-Native - it tries to infer which cloud provider it runs on. This is readily apparent in the stack trace:

```
at io.micronaut.context.env.DefaultEnvironment.determineCloudProvider(DefaultEnvironment.java:806)
```

So, instead of adding the --enable-url-protocols=http option to the plugin configuration, it’s much better to disable this sniffing altogether.

To achieve that, we need to hook into context creation by replacing ApplicationContext context = ApplicationContext.run(); with:

```java
ApplicationContext context = ApplicationContext
    .build()
    .deduceEnvironment(false)
    .start();
```

Running the executable now yields the expected output:

```shell
$ target/com.exoscale.syslog.micronaut.main
17:54:39.460 [main] INFO com.exoscale.syslog.micronaut.Main - FooBar
```

Final configuration nitpicking

The native-image executable accepts a lot of options; you may list them all with --expert-options-all. The SubstrateVM plugin doesn’t allow configuring them individually: they all need to be added on the same line inside the <buildArgs> tag. While this works, it becomes pretty unwieldy as their number increases.
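For example, with just three options the plugin configuration already degrades into a single hard-to-read line (the options below are just an illustration, not a recommended set):

```xml
<buildArgs>--no-server --enable-url-protocols=http -H:Name=micronaut-from-scratch</buildArgs>
```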

Micronaut offers a quite handy alternative in the form of a native-image.properties file:

- it has to be located in META-INF/native-image/${project.groupId}/${project.artifactId}
- its format must be Args = -H:Option1 -H:Option2=Value, just like the expert options

For example, to change the name of the native executable from com.exoscale.syslog.micronaut.main to the Maven artifact id, let’s create such a file:

```properties
Args = -H:Name=${project.artifactId}
```

Additionally, we need to enable Maven resource filtering:

```xml
<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>
    <!-- Plugins are there -->
</build>
```

At that point, the packaging creates the native executable with the configured name:

```shell
$ mvn package
$ ls -1 target/micronaut-from-scratch
target/micronaut-from-scratch
```

Using the JVM for Serverless is actually possible

The JVM platform benefits from years of optimization for long-running processes. Serverless has different requirements: a process needs to start as fast as possible, and stop as soon as it’s no longer necessary.

While it seems the JVM and Serverless don’t match, it would be a shame to discard the whole ecosystem surrounding the JVM. In particular, GraalVM - and more specifically Substrate VM - makes it possible to turn a standard JAR into a native executable.

Coupled with Micronaut, which reuses the standard JSR for Dependency Injection, it becomes possible to keep the current ecosystem while developing Serverless functions.

The sources for this project can be found on GitHub.