Introduction

Many years ago I was an avid user of Google’s App Engine. In many ways it was a serverless product before the serverless term existed. Today functions (or FaaS) are the technology everyone associates with serverless. The function space continues to evolve, e.g. allowing more flexibility and choice with offerings like custom AWS Lambda runtimes. But Google’s announcement of Cloud Run takes the flexibility to the next level: serverless for containers. The premise is simple: you give Google a container which accepts requests via an HTTP port and Google do everything else. And I do mean everything - scaling (from 0 to 1000 instances, or more if you need it), SSL, custom domain name support, logging, identity and access management, etc. And this is pay per use in the truest sense of the term, as Google only charge you while a request is being processed.

The simplicity is compelling and the possibilities are almost endless. Anything you can run in a container can run on Google Cloud Run. Of course, when operating on such elastic infrastructure with a stringent pay per use model, there is a huge incentive to reduce memory consumption and start-up time. Java is the language I know best, but its reputation in the lightweight container (or dare I say microservice) space suffers when compared to a natively compiled language like Go. The requirement to be lightweight and nimble while still using Java is why the Micronaut framework and GraalVM have piqued my interest. GraalVM can compile a Java application down to a native executable, and the Micronaut framework includes out of the box support for compiling to such a native image (so no messing around needed). It’s worth noting I am usually a Spring Boot kind of person and I’m not ready to abandon it just yet; accordingly I will look at support for native images in Spring Boot at a later point in time.

In this blog post I will try and introduce these technologies by building a basic container using a Micronaut application compiled to a native executable. Then I’ll show how ridiculously simple Google have made it to run such a container in Google Cloud Run.

Getting GraalVM operational

If you only want to build the native image of your Micronaut JAR you can skip this section and just use the GraalVM docker image which Micronaut pre-configures in its Dockerfile. However, I found it useful to have GraalVM up and running on my local machine too.

At the time of writing, GraalVM is only available for Linux and macOS. On my Mac I had a moment of enlightenment when I discovered SDKMAN!. SDKMAN! lets you manage multiple JDK versions from its command line interface, and it has out of the box support for GraalVM’s Community Edition. So after installing SDKMAN! using curl:

curl -s "https://get.sdkman.io" | bash

You simply tell it to install and use GraalVM:

sdk install java 1.0.0-rc-15-grl
sdk use java 1.0.0-rc-15-grl

And that’s it done:

$ java -version
openjdk version "1.8.0_202"
OpenJDK Runtime Environment (build 1.8.0_202-20190206132754.buildslave.jdk8u-src-tar--b08)
OpenJDK GraalVM CE 1.0.0-rc15 (build 25.202-b08-jvmci-0.58, mixed mode)

Thank you SDKMAN!

Micronaut framework sample app

As I said earlier, I am traditionally a Spring Boot person. But the appeal of the Micronaut framework is its super quick start-up times, with the added bonus of being able to easily compile to a native image. The native image means more speed and a lower memory footprint: key characteristics when using a pay per use cloud provider. The Micronaut documentation is very good, so I’ll try not to dwell too much on what I did to get up and running.

That said… now I’ve gained an addiction to SDKMAN! we can use it to install Micronaut’s command line tools and Gradle too:

sdk install micronaut
sdk install gradle

Then, using the Micronaut command line interface, a simple application with native image support can be bootstrapped:

mn create-app my-api-app --features graal-native-image

I added a simple REST controller:

package my.api.app;

import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

@Controller("/helloworld")
public class HelloWorldController {

    @Get(produces = MediaType.TEXT_PLAIN)
    public String index() {
        return "Hello PlanetJones World";
    }
}

And that’s enough coding for the purposes of this blog post. You can test it locally by doing a:

$ ./gradlew run

“Hello PlanetJones World” is returned if you access the endpoint at http://localhost:8080/helloworld
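A quick way to exercise the endpoint from a second terminal (assuming curl is installed; 8080 is Micronaut’s default port):

```shell
# hit the controller on the locally running app; prints "Hello PlanetJones World"
curl http://localhost:8080/helloworld
```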

The docker image can also be built locally using:

$ ./docker-build.sh

Don’t forget to assemble the project first so you get the Micronaut JAR in build/libs, which is where the docker build picks it up from:

$ ./gradlew assemble

Be warned - the native image compilation isn’t a fast process. On my underpowered and overworked MacBook Air it still hadn’t built the image after an hour of trying. Far better to use Google’s compute infrastructure to build it (more on that below).
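If the local build does eventually succeed, running the container is a one-liner. A sketch, assuming docker-build.sh tagged the image my-api-app (check the script for the actual tag) and the app listens on Micronaut’s default port 8080:

```shell
# run the native-image container locally and expose the HTTP port
docker run -p 8080:8080 my-api-app
```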

Finally the Cloud Run part

You need a Google Cloud Platform account with billing enabled (though there is a decent free tier for Cloud Run, so you can experiment for, more or less, nothing). Then you must create a project in Google Cloud Platform. And finally you need the excellent gcloud command line tool installed and operational. Once that’s done you can ask Google to build the Docker image for your Micronaut application by submitting it to Google Cloud Build:

gcloud builds submit --tag gcr.io/my-gcp-project-name/helloworld

While building the image I received this error:

Step 4/8 : RUN native-image --no-server -cp build/libs/my-api-app-*.jar
 ---> Running in b8f1f36fcc20
Error: Please specify class containing the main entry point method. (see --help)

By default gcloud builds submit skips files based on your .gitignore, which includes the build directory - so the error above means there was no JAR to create a native image from. You can configure this with a .gcloudignore file, but for expediency I just uploaded everything to Google (I think their servers can take it) by setting this config and then submitting the build again:

gcloud config set gcloudignore/enabled false
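Alternatively, rather than disabling gcloudignore globally, a .gcloudignore file in the project root controls exactly what is uploaded. A minimal hypothetical example that skips version control metadata and Gradle caches while leaving build/libs (and its JAR) in the upload:

```
# .gcloudignore - only skip VCS metadata and Gradle caches;
# everything else, including build/libs, is uploaded to Cloud Build
.git
.gitignore
.gradle
```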

Once you have the image built it gets added to Google’s Container Registry. And deploying it to Cloud Run is (again) ridiculously simple:

gcloud beta run deploy --image gcr.io/my-gcp-project-name/helloworld

Shortly afterwards you will get a URL back and your container will be ready to take HTTP requests, as the screenshot below demonstrates:

You can see the status of your instance(s) or view the logs at anytime via the Google Cloud Platform console:

When Google terminate your container you see the following in the logs:

Container terminated by the container manager on signal 9.

But worry not. I know my Micronaut application is ridiculously simple, but the F12 dev tools in Chrome showed no HTTP request taking longer than 1.4 seconds (though in a future post I would like to look at the performance in more detail and see how quickly Google can spin up the container from scratch and how much of a boost the Micronaut native image gives).

Your container gets 1 vCPU per instance and a default 256 MiB of memory. It will be allowed to serve up to 80 concurrent HTTP requests. Google have a very simple contract you need to adhere to, with the key detail being your container must be stateless, as there is no guarantee a request will hit the same container, or that the same container will even exist over requests.
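Those defaults can be overridden per service at deploy time. A sketch using the --memory and --concurrency flags of the beta command (do check gcloud beta run deploy --help for the current flag names; the values here are purely illustrative):

```shell
# deploy with explicit memory and per-instance concurrency limits
gcloud beta run deploy --image gcr.io/my-gcp-project-name/helloworld \
  --memory 512Mi --concurrency 40
```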

There’s not much more I can say on the Cloud Run part. You build a Docker image. You deploy it to Cloud Run. You profit. This is surely how serverless computing should be?

I will (I promise) follow this up with another post about how the source code can be added to Google Cloud Repositories and how you can get some Continuous Deployment going on with Google Cloud Build. But I hope this whets your appetite for Google Cloud Run and provides a basis for your experiments.

I committed the Micronaut application to GitHub (but please note that everything apart from HelloWorldController was created by the Micronaut command line tool).