Continuously deploy Meteor apps to Azure

Setting up a continuous deployment of a Meteor app to Azure App Services with the help of Docker and Bitbucket Pipelines

A Meteor being deployed to Azure (Pixabay, RafaelMousob)

Recently, I helped colleagues who are developing a Meteor application deploy it to Azure. We wanted everything to be as automated as possible. The team uses Bitbucket to host their source code, and we wanted to deploy to Azure App Services. The goal of this article is to walk you through the whole process with a sample project.

Creating a Meteor sample application

Meteor is a relatively new (started in 2012, first stable version in 2017) web client and server JavaScript framework aimed at rapid prototyping. It can produce cross-platform applications.

First, you’ll need to install the framework: head to https://www.meteor.com/install and follow the instructions there. It will install a command-line interface (CLI) called meteor. We can use this CLI to scaffold one of the example applications offered by the Meteor team. I’ll use the todo sample, but you can use any application. Running the command meteor create --list will show you the list of all available samples. Let’s create the todo sample.

meteor create todo

You can then run the app to check that everything is working as expected. Beware: this will take some time, as Meteor downloads all of its packages and the runtime it needs. Once the process is finished, open your browser at http://localhost:3000 to see the app, and test it.

cd ./todo

meteor

Creating the docker image

There are a few options for creating ready-to-deploy Docker images for Meteor applications. The Meteor documentation recommends four different packages, some of which haven’t been maintained for a while. I decided to go with jshimko/meteor-launchpad because it seemed to be the one whose repository was the most active.

We need a Dockerfile to create the Docker image. A Dockerfile is a simple text document that contains all the commands a user would run to build a Docker image, which lets you automate the image’s creation. In our case, the Dockerfile will be very simple. Create a file called Dockerfile (without extension) at the root of the source code and add the following line:

FROM jshimko/meteor-launchpad:latest

Yep, that’s it. Dockerfiles are hierarchical, and this one simply does everything its parent does. If you are curious, check how the meteor-launchpad image is built; it’s very instructive. Basically, it makes use of the ONBUILD Docker instruction, which tells the image to run a command when a “child” image is being built. Therefore, just by adding that FROM line, the Docker build process will copy the source code, run npm, download Meteor, and do everything else needed to run the image. Once the image is built, it will start very quickly because everything will be in place and ready to run.
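To illustrate the mechanism, here is a simplified sketch of the ONBUILD pattern (this is not the actual meteor-launchpad Dockerfile, which does considerably more):

```dockerfile
# Simplified sketch of a parent image using ONBUILD. These instructions do
# nothing when this image itself is built; they run when a child image is
# built FROM it, which is why our one-line Dockerfile is enough.
FROM node:latest
ONBUILD COPY . /opt/app
ONBUILD RUN cd /opt/app && npm install
```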

Before we run the image build, an important step, especially if you ran your app locally, is to create a .dockerignore file. If you don’t, all your local files, including the local builds, will be copied into the Docker image and used in the build process, which can cause errors that are really annoying to debug. Simply create a file called .dockerignore and add all the files that are not needed to build your application. The most important entries are the node_modules folder and the .meteor/local folder. A good approach is to ignore all the files that are already ignored by your .gitignore file.

node_modules/

.meteor/local/
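If you want to follow the .gitignore approach, a small shell sketch (run from the project root) seeds .dockerignore from .gitignore and then appends the two Meteor-specific entries:

```shell
# Copy the existing ignore rules if a .gitignore is present,
# otherwise start from an empty file.
cp .gitignore .dockerignore 2>/dev/null || touch .dockerignore
# Always exclude local packages and Meteor's local build output.
printf 'node_modules/\n.meteor/local/\n' >> .dockerignore
```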

Now we’re going to build the Docker image locally. Since we have our Dockerfile ready, it’s just a simple call to docker build, setting a name for our newly created Docker image.

docker build -t todo_test .

This will build the image and tag it with the name todo_test. Wait for the whole process to complete (it can take around 10 minutes). You can then test the application by running it and checking in your browser that it works correctly.

docker run -d -p 80:3000 todo_test

You can see that I’m linking the container’s port 3000 (the default meteor port) to the host’s port 80. To access the application, open your browser and go to http://localhost:80.
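If you prefer checking from the terminal, here is a quick smoke test (assuming the container from the previous step is still running):

```shell
# Print only the HTTP status code returned by the app on the mapped port;
# a 200 means the Meteor app is answering.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:80
# Find the test container so you can stop it once you're done.
docker ps --filter ancestor=todo_test
```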

Great, this means that our image is ready to be published! But that’s not exactly what we want: we’d like this chore to be handled by continuous deployment. We’ll get to that, but first we need a place to store our image.

Creating an Azure container registry account

Since we are going to deploy our application to Azure, it makes sense to use Azure’s own container registry to store our images. This means we need to set up this container registry before we start building our continuous deployment.

Open the Azure portal, click More services and search for Container registries. Click the Add button and fill in the required fields. The settings are all pretty standard; you can keep the SKU at Basic.

Once the registry has been created, open it. You will see the Login server information, which should look like nameOfYourRegistry.azurecr.io; keep it somewhere, we’ll need it later. Then open the Access keys tab and store the username and password.
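If you have the Azure CLI installed, you can also read these values from the command line instead of the portal (yourRegistryName is a placeholder, and the registry’s admin account must be enabled for the credentials call to work):

```shell
# Print the login server (nameOfYourRegistry.azurecr.io).
az acr show --name yourRegistryName --query loginServer --output tsv
# Print the admin username and passwords used for docker login.
az acr credential show --name yourRegistryName
```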

If you want (this isn’t necessary), you can try to push an image to the registry from Docker’s command-line interface. The commands are as follows:

docker login yourRepoName.azurecr.io --username yourUsername --password yourPassword

docker tag todo_test yourRepoName.azurecr.io/todo_test:latest

docker push yourRepoName.azurecr.io/todo_test:latest

This should upload the previously created image called todo_test to the container registry. Check that everything is working correctly before going to the next step.
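To double-check the push from the command line, the Azure CLI can list what the registry now contains (again assuming the CLI is installed and yourRegistryName is your registry’s name):

```shell
# List the repositories stored in the registry; todo_test should appear.
az acr repository list --name yourRegistryName --output table
# List the tags pushed for the todo_test repository.
az acr repository show-tags --name yourRegistryName --repository todo_test --output table
```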

Setting up continuous deployment using Bitbucket Pipelines

First, we need to get our code into Bitbucket, so we need a git repository. That’s probably the first thing we should have done, but it’s OK, let’s do it now.

git init

git add --all

git commit -m "First commit, add existing project"

Now, log in to Bitbucket and create a new repository for this project. Then simply add your newly created repository’s URL as the origin remote and push.

git remote add origin <your repo URL>

git push -u origin master

All done? Great, now that the code is in the repository, we can set up Pipelines. Pipelines is Bitbucket’s continuous integration / deployment tool, and it’s based on Docker. When a trigger fires on your repository, for example when code is pushed to the main branch, Pipelines starts a container from a Docker image you select. Then, once the container is running, it executes a set of instructions in it which are used to deploy your code. The whole process can be tracked through a UI on Bitbucket’s website.

Pipelines expects a file called bitbucket-pipelines.yml at the root of your repository. This YAML file contains all the Pipelines settings needed to build and deploy the image. Let’s create a Pipelines config file.

# enable Docker for your repository
options:
  docker: true

pipelines:
  branches:
    master:
      - step:
          script:
            - export IMAGE_NAME=yourRepoName.azurecr.io/todo-app:$BITBUCKET_COMMIT
            # build the Docker image (this will use the Dockerfile in the root of the repo)
            - docker build -t $IMAGE_NAME .
            # tag the image with the latest flag
            - docker tag $IMAGE_NAME yourRepoName.azurecr.io/todo-app:latest
            # authenticate with the Azure container registry
            - docker login yourRepoName.azurecr.io --username $AZURE_CR_USERNAME --password $AZURE_CR_PASSWORD
            # push the new Docker image to the Docker registry
            - docker push $IMAGE_NAME
            - docker push yourRepoName.azurecr.io/todo-app:latest

The script is relatively straightforward; we’re doing all the steps we did manually before, with the small difference that we push the image twice: once tagged with the commit id, and once tagged latest. The goal is that production will always point to latest, but we want to keep a history of all the images, so we also push the unique commit-id tag.

You can also see that there are a few $VARIABLES. $BITBUCKET_COMMIT is automatically replaced by Pipelines with the id of the commit. The two variables $AZURE_CR_USERNAME and $AZURE_CR_PASSWORD must be set in Bitbucket’s Pipelines settings page. This keeps those secrets out of the scripts stored in the repository itself.

To set them, in Bitbucket, go to your repository, select Settings, then Pipelines/Environment variables. In this menu, add the two variables. For the password, don’t forget to check the Secured checkbox so that its value isn’t displayed when you revisit the settings.
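If you’d rather script this than click through the UI, Bitbucket’s 2.0 REST API exposes pipeline variables. The sketch below is an assumption-laden example: yourBitbucketUser, yourAppPassword, WORKSPACE and REPO_SLUG are all placeholders you must fill in, and the app password needs the appropriate repository scopes:

```shell
# Create a secured pipeline variable via Bitbucket's 2.0 REST API (sketch only;
# all credentials and path segments below are placeholders).
curl -X POST \
  -u "yourBitbucketUser:yourAppPassword" \
  -H "Content-Type: application/json" \
  -d '{"key": "AZURE_CR_PASSWORD", "value": "yourPassword", "secured": true}' \
  "https://api.bitbucket.org/2.0/repositories/WORKSPACE/REPO_SLUG/pipelines_config/variables/"
```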

While we’re in those settings already, go to the Pipelines/Settings page and enable pipelines.

Great, now commit your changes and push them to Bitbucket; this should trigger a Pipelines execution.

In Bitbucket, go back to your repository, select the Pipelines tab, and you should be greeted with the result of your run: Success.