Using Phoenix with docker, Addendum: Exrm

This is an addendum to a three part series: Part 1 - Part 2 - Part 3

Contents

Bad ideas

I have had some bad ideas in the past - building a container from a Phoenix application the same way you would build one from a Rails application was probably not the best idea ever.

After all, you never ship your source code into production when using a compiled language.

Regardless, I have worked with Golang before - the ability to build a self-contained, static binary is great there. It is super useful if you want to use the scratch image that docker offers to build a very minimal image that does not exceed, let's say, 15MB.

Exrm

Elixir offers the possibility of using a library called Exrm, the Elixir release manager. It has the self-proclaimed goal of making Elixir deployments easy and delightful to work with.

Exrm builds a self-contained release artifact, which can be deployed more easily and integrates with a lot of features that Erlang already provides, like versioning or hot deployments.

Using Exrm with Kitteh

Let’s add it as a dependency:

# /mix.exs
# [...]
defp deps do
  [{:phoenix, "~> 1.0.2"},
   {:phoenix_ecto, "~> 1.1"},
   {:postgrex, ">= 0.0.0"},
   {:phoenix_html, "~> 2.1"},
   {:phoenix_live_reload, "~> 1.0", only: :dev},
   {:cowboy, "~> 1.0"},
   {:mogrify, "~> 0.2"},
   {:exrm, "~> 1.0"}]
end

and run mix deps.get.
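With the dependency in place, exrm adds a mix release task that cuts the release. A sketch of a typical build sequence on the build host (assuming exrm ~> 1.0; flags and defaults may differ slightly between versions):

```shell
# Fetch production dependencies, compile, digest the assets,
# then build the release with exrm's `mix release` task.
MIX_ENV=prod mix deps.get --only prod
MIX_ENV=prod mix compile
MIX_ENV=prod mix phoenix.digest
MIX_ENV=prod mix release

# The release lands under rel/kitteh/ by default.
```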

Since this will be a production release, we need to tweak the prod.exs configuration a bit:

# /config/prod.exs
# [...]
config :phoenix, :serve_endpoints, true

Note that for testing purposes I also changed the hostname to localhost throughout the configuration.

Building the docker image(s) - once more.

Ideally, we can now copy over the release into our docker container instead of copying everything over and then compiling it inside.

But wait - this also means that we now need a build environment to build our releases, compile our assets, etc. For simplicity, I am going to use the host machine (my laptop) as the build environment.

To reduce the overhead in the Dockerfile, I introduced a build-docker.sh script that builds the web application image as well as the attached images.

The docker-compose.yml will then pick up the images by the tags generated by this build file.
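The full build-docker.sh lives in the repository; a minimal sketch of what such a script could look like, assuming the image names kitteh/prod and kitteh/assets and version 0.0.1 (the actual file may differ):

```shell
#!/bin/sh
set -e

APP_VERSION="0.0.1"

# Build the release on the host, which acts as the build environment.
MIX_ENV=prod mix compile
MIX_ENV=prod mix phoenix.digest
MIX_ENV=prod mix release

# Build the web application image from the release artifact...
docker build -t kitteh/prod:$APP_VERSION .

# ...and the attached images, e.g. the assets image.
docker build -f Dockerfile.assets -t kitteh/assets:$APP_VERSION .
```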

The agony with build environments

To keep the image contents to a minimum I decided to use the msaraiva/alpine-erlang image as a base and work from that. There is an excellent description on how to use this image as a base for a Phoenix application in the repository for the image.

One problem I ran into was that I use Ubuntu with the latest Erlang environment (18.2), while the current base image uses erlang:18.1 as its environment, leading to side effects like:

{"init terminating in do_boot",{'cannot load',error_handler,get_files}}

when trying to run the created image.

Using alpine:edge resolves the problem.

Unfortunately, this is a major disadvantage in the build chain at the moment: the target container must provide the same Erlang environment as the build system, otherwise there is no guarantee that this will work.
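A quick way to check that build host and target image agree is to print the OTP release on both sides (the image name below is the one built later in this article):

```shell
# On the build host:
erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'

# Inside the target image:
docker run --rm kitteh/prod:0.0.1 \
  erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'
```

If the two numbers differ, you can expect boot errors like the one above.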

Here is the full Dockerfile :

FROM alpine:edge

RUN apk --update add erlang erlang-sasl erlang-crypto erlang-syntax-tools && \
    rm -rf /var/cache/apk/*

ENV APP_NAME kitteh
ENV APP_VERSION "0.0.1"
ENV PORT 4000

RUN mkdir -p /$APP_NAME

ADD rel/$APP_NAME/bin /$APP_NAME/bin
ADD rel/$APP_NAME/lib /$APP_NAME/lib
ADD rel/$APP_NAME/releases/start_erl.data /$APP_NAME/releases/start_erl.data
ADD rel/$APP_NAME/releases/$APP_VERSION/$APP_NAME.sh /$APP_NAME/releases/$APP_VERSION/$APP_NAME.sh
ADD rel/$APP_NAME/releases/$APP_VERSION/$APP_NAME.boot /$APP_NAME/releases/$APP_VERSION/$APP_NAME.boot
ADD rel/$APP_NAME/releases/$APP_VERSION/$APP_NAME.rel /$APP_NAME/releases/$APP_VERSION/$APP_NAME.rel
ADD rel/$APP_NAME/releases/$APP_VERSION/$APP_NAME.script /$APP_NAME/releases/$APP_VERSION/$APP_NAME.script
ADD rel/$APP_NAME/releases/$APP_VERSION/start.boot /$APP_NAME/releases/$APP_VERSION/start.boot
ADD rel/$APP_NAME/releases/$APP_VERSION/sys.config /$APP_NAME/releases/$APP_VERSION/sys.config
ADD rel/$APP_NAME/releases/$APP_VERSION/vm.args /$APP_NAME/releases/$APP_VERSION/vm.args

EXPOSE $PORT

CMD trap exit TERM; /$APP_NAME/bin/$APP_NAME foreground & wait

Results!

Using the same versions everywhere, the build should now result in a new docker image for our application:

docker images | grep kitteh

should yield:

kitteh/prod 0.0.1 a9c2a80abfe1 35 seconds ago 28 MB

Nice, that is a big improvement in image size.

Assets

Since the web application image now completely ignores any asset containers, we need to add these in the build process - more specifically, in the build-docker.sh:

# [...]
# assets
MIX_ENV=prod mix phoenix.digest
docker build -f Dockerfile.assets -t kitteh/assets:0.0.1 .

This will create another image called kitteh/assets. While we're at it, let's change the docker-compose.yml to make use of the images, which incidentally would allow us to go back to version 1 of the YAML syntax if we wanted to.
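A sketch of what the image-based docker-compose.yml could look like in version 1 syntax (service names, ports, and the postgres image tag are assumptions; the file in the repository may differ):

```yaml
web:
  image: kitteh/prod:0.0.1
  ports:
    - "4000:4000"
  links:
    - db
db:
  image: postgres:9.4
assets:
  image: kitteh/assets:0.0.1
```

The important change is that every service now references a prebuilt, tagged image instead of a build context.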

Wait! Migrations!

A downside of the compiled approach we're now taking is that migrations are no longer usable from the kitteh/web image: the priv/repo/migrations folder does ship with the release, but Mix - and therefore mix ecto.migrate - is no longer available to us.

One solution to this comes from a discussion in the exrm repository: Running the migrations whenever the server comes up using the Ecto.Migrator API directly.
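The idea from that discussion is to run pending migrations from application code during startup. A minimal sketch of what this could look like, assuming Ecto 1.x (the module name is hypothetical; the real call would live in the application's start callback, before the endpoint comes up):

```elixir
defmodule Kitteh.ReleaseTasks do
  # Runs all pending migrations for Kitteh.Repo using Ecto.Migrator
  # directly, so no Mix task is needed inside the release.
  def migrate do
    path = Application.app_dir(:kitteh, "priv/repo/migrations")
    Ecto.Migrator.run(Kitteh.Repo, path, :up, all: true)
  end
end
```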

A second option I found was proposed by GitHub user @mschu - using an Exrm plugin to generate an escript file that would in turn load an Elixir module to run the migrations - again, via Ecto.Migrator.

A demo for the second solution can be found as a gist here.

However, after toying with the second option, I found the first one to be the way to go (I couldn't get the escript solution to run).

The argument for the first solution is that migrations are idempotent anyway and can always be run on startup.

I disagree - running the migrations should be a separate task. You may want to control the timing precisely and not just migrate upon every restart. But for now it should suffice.

The problem with Mix.env

We’re not using Mix.env anymore in production, so shenanigans like:

Mix.env == :prod

will now fail. Luckily, we can check on the environment directly:

System.get_env("MIX_ENV") == "prod"

It’s a subtle thing, but running the application will fail because of that.
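One way to keep this check in a single place is a tiny helper module; a sketch (Kitteh.Env is a hypothetical name, not part of the codebase):

```elixir
defmodule Kitteh.Env do
  # Inside a release there is no Mix, so read the environment variable
  # instead of calling Mix.env/0. Defaults to "dev" when unset.
  def current, do: System.get_env("MIX_ENV") || "dev"

  def prod?, do: current() == "prod"
end
```

Callers then use Kitteh.Env.prod? and the release no longer depends on Mix being loaded.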

So, we’re going to ignore ImageMagick?

By now, looking at the latest commit, the application should run again.

We will run into a subtle crash in the background when we try to upload a picture, though:

web_1 | =CRASH REPORT==== 7-Mar-2016::16:21:31 ===
web_1 |   crasher:
web_1 |     initial call: Elixir.Kitteh.PageController:-create_sizes/1-fun-0-/0
web_1 |     pid: <0.1145.0>
web_1 |     registered_name: []
web_1 |     exception exit: {enoent,
web_1 |         [{'Elixir.System',cmd,
web_1 |              [<<"mogrify">>,
web_1 |               [<<"-resize">>,<<"90">>,
web_1 |                <<"/tmp/177012-DaintyTanCyprus.jpeg">>],
web_1 |               [{stderr_to_stdout,true}]],
web_1 |              [{file,"lib/system.ex"},{line,450}]},
web_1 |          {'Elixir.Mogrify',resize,2,
web_1 |              [{file,"lib/mogrify.ex"},{line,60}]},
web_1 |          {'Elixir.Kitteh.PageController',resize,3,
web_1 |              [{file,"web/controllers/page_controller.ex"},
web_1 |               {line,105}]},
web_1 |          {'Elixir.Kitteh.PageController',
web_1 |              '-create_sizes/1-fun-0-',3,
web_1 |              [{file,"web/controllers/page_controller.ex"},
web_1 |               {line,93}]},
web_1 |          {'Elixir.Task.Supervised',do_apply,2,
web_1 |              [{file,"lib/task/supervised.ex"},{line,89}]},
web_1 |          {proc_lib,init_p_do_apply,3,
web_1 |              [{file,"proc_lib.erl"},{line,240}]}]}
web_1 |       in function 'Elixir.Task.Supervised':exit/4 (lib/task/supervised.ex, line 120)

Yep, no ImageMagick in the base image.

Let’s fix that:

# Dockerfile
# [...]
RUN apk --update add erlang erlang-sasl erlang-crypto erlang-syntax-tools imagemagick && \
    rm -rf /var/cache/apk/*

After this change, the images should be properly resized in the background again, but we bloated our image to 37MB.

Conclusion

If you skipped directly to my conclusion, please see the tag 10-imagemagick-finally for the codebase at the end of this article.

So - I think we are a little bit better off with this approach:

Smaller image size for the main web image

Detached build process for the images (not completely, one could move the uploads nginx image over or merge it with the assets image)

Ability to downgrade docker-compose if we wanted to

There are some downsides though:

Running migrations is currently tied to the server directly

Configuration is not completely independent from the application - as of now, your admin needs to be able to read and write Elixir

The assets are immutable and have to be rebuilt each time; however, this allows for independent versioning of the assets

The app version is hard coded - one could extract it from mix.exs, if one's beard is long enough

Looking at this process in general, I must also say that running Elixir in docker containers does not really fit the microservice approach: the images all need to contain the full runtime environment, and it might be better to use the abstractions for applications and processes that the Erlang VM provides directly.

If you find yourself having to integrate a service based on Elixir or Erlang in an existing environment, this might be an option for you after all.