The previous post described the implementation choices for the Crypto Service, whose task is to monitor both events and price fluctuations of a cryptocurrency. Built on the power of Akka Streams, the end result is concise and its complexity manageable. The goal now becomes to deploy it to a cloud service, and the first step is to embed our service in a container.

Dockerization

The brilliant tutorial from my friend Jeroen Rosenberg goes step-by-step through the process. The main idea is to use the SBT Native Packager plugin to generate Docker images instead of writing a Docker file.

addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.5")

There are a few things to set in your build.sbt file. The crucial ones enable the plugin and its Docker feature, name our app and declare a base image to package upon:

enablePlugins(JavaAppPackaging)

packageName in Docker := <NAME OF YOUR APP>

dockerBaseImage := "openjdk:jre-alpine" // a smaller JVM base image
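Putting these settings together, a minimal build.sbt sketch might look like the following. The app name is a placeholder, and the exposed port is a hypothetical addition you only need if your service listens on one:

```scala
// build.sbt — minimal sketch; adjust the name and port for your service
enablePlugins(JavaAppPackaging)

packageName in Docker := "my-app"
dockerBaseImage := "openjdk:jre-alpine" // a smaller JVM base image
dockerExposedPorts := Seq(8080)         // only needed if the service listens on a port
```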

To create an image in our local environment, we now simply invoke

sbt docker:publishLocal

…and this is where the fun begins. A Docker image in your local environment is of little use. We need to give it wings and fly it to some cloud.

Google Container Registry

I chose Google as my cloud infrastructure provider. I feel more at home there than in AWS, and I guess I’m saving Azure for the next side project. In order to make our container available for other services in the cloud (say, to run it), we need to upload it to Google’s container registry. The initial setup takes a few steps, mostly concerning authentication. Follow the official guide here.

The interesting part is that we will use the Google Cloud SDK (gcloud from the command line) to set up Docker and give it permission to upload to our registry. Assuming that we created an image named my-app and tagged it with latest, we can upload it with

docker push my-app:latest

As usual, reality is a little richer: I know that I will upload to Google Container Registry, whose name is eu.gcr.io, and that the ID of my project in Google Cloud is my-project-id, so I embed this information in the image name. The real command is then

docker push eu.gcr.io/my-project-id/my-app:latest
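If the image was built locally under a plain name, it needs to be re-tagged with the registry prefix before the push. A sketch of the full flow, using the project ID and image name assumed above:

```shell
# authenticate Docker against Google Container Registry (one-time setup)
gcloud auth configure-docker

# re-tag the locally built image with the registry prefix
docker tag my-app:latest eu.gcr.io/my-project-id/my-app:latest

# upload it
docker push eu.gcr.io/my-project-id/my-app:latest
```

Alternatively, sbt-native-packager can produce the fully qualified name directly by setting dockerRepository := Some("eu.gcr.io/my-project-id") in build.sbt, so that docker:publishLocal tags the image correctly from the start.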

Once our service is neatly packaged and uploaded to an online registry, it is finally available for use via some orchestration tool. The current de facto standard for the job is Kubernetes. I learned a lot about it from my friend Ádám Sándor, with whom I joined forces to prepare a talk about running Akka cluster solutions on Kubernetes.

Google Kubernetes Engine

The Run of the Waiters, 1933. Copyright Archivio Birra Peroni, Naples (Italy)

For me, the easiest way to think about Kubernetes is as a waiter in a restaurant: its role is to make sure that you get what you asked for.

The two steps we need to take for this section are:

Create a Kubernetes cluster

Write a list of orders for it to cater

Creating a cluster can be done via the command line or via the web console. You can choose how many (virtual) machines you need, among many other options. The ordering process is more interesting.
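For reference, a cluster can also be created from the command line with gcloud; the cluster name, size, and zone below are placeholders:

```shell
# create a three-node cluster (name and zone are examples)
gcloud container clusters create my-cluster --num-nodes=3 --zone=europe-west1-b

# fetch credentials so that kubectl talks to this cluster
gcloud container clusters get-credentials my-cluster --zone=europe-west1-b
```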

There are many ways to order your meal; here we focus on the most basic one, which is via yaml files. The basic unit of execution in Kubernetes is called a ‘Pod’. The simplest order that we can give our waiter is

I want you to run this container.

This means asking Kubernetes to fetch a container from our registry and start it once. Nothing more: if it stops, that will be it. Translated into YAML, it looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
  - image: eu.gcr.io/my-project-id/my-app:latest
    name: my-app-image
    env:
    - name: MY_ENV_VARIABLE
      value: the_value_of_my_env_variable

Such an order retrieves the Docker image from the registry and runs it, passing it the environment variable that we specified.
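On the service side, the variable set in the Pod spec arrives as an ordinary environment variable. In Scala it can be read with sys.env; here the variable name matches the spec above, while the fallback value is a hypothetical default for local runs:

```scala
// read the variable injected by Kubernetes, with a fallback for local runs
val myEnvVariable: String =
  sys.env.getOrElse("MY_ENV_VARIABLE", "local-default")
```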

After writing our order, we need to deliver it to the waiter. This is done via the kubectl command line tool. If we saved this file as my-order.yaml, we can apply it via

kubectl apply -f my-order.yaml

At this point our service is finally up and running in the cloud! To verify it, we can ask kubectl for the status of our cluster. The command is

kubectl get pods

and its output should be

NAME         READY     STATUS    RESTARTS   AGE
my-service   1/1       Running   0          1m

Whatever the service prints to STDOUT is accessible via

kubectl logs -f my-service
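Two more kubectl commands are handy at this stage: describing the Pod when something goes wrong, and deleting it when you are done. The Pod name matches the spec above:

```shell
# show events and detailed status for the Pod, useful when it fails to start
kubectl describe pod my-service

# remove the Pod when finished
kubectl delete pod my-service
```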

This is how I got to see the first Caterina service running in the cloud. This is very basic usage, though, and in practice using Pods directly is not recommended.

The next article in this series bites into some of the nicer Kubernetes features.