Sometimes you just need to get a server and deployment flow up and running. My go-to solution is a small server running Docker, docker-compose, and a small loadbalancer. With the help of Gitlab’s excellent continuous integration tools, it’s super easy to automatically deploy your sites and applications. In this post I will show you how to set up a server with automatic deployments in five simple steps.

1. Get a server
2. Install Docker & docker-compose
3. Install the loadbalancer
4. Prepare your Gitlab project
5. Launch your first auto-deploy project

Step 1: Prerequisites

Before we can start we need the following things:

A server running Linux (I’m going to use Ubuntu 16.04, but feel free to use whatever you want, as long as it runs Docker and docker-compose; check the supported platforms @ Docker);

A Gitlab account, and a project to deploy to our new server. If you don’t have a project, you can clone this project @ Github;

A domain name to direct to your server;

Basic knowledge of Docker, Dockerfiles, and Docker-compose-files.

Rent a server

For this project I will rent a small server from Scaleway. With only a couple of clicks you can get a fast, reliable, and cheap server. I want to run my server in Amsterdam, and I’m going to pick the START1-M virtual server; 4 cores and 4 GB of RAM will be enough to run a couple of containers. I’ll get 50 GB of extra storage, on top of the 50 GB that comes with the server. I’ll choose Ubuntu Xenial (16.04) as the operating system, and keep all other settings as they are. We are ready to launch our VPS.

Secure your server

The next step is entirely up to you, but it is good practice to take some precautions to make your server more secure. This DigitalOcean post provides an overview of things you can do to secure your new server. In this guide I will create a non-root user, add that user to the sudo group, and disable SSH login for the root user.

Add a new user

Because I already have my SSH key in my Scaleway profile, I can run ssh root@$IP_ADDRESS and I’m connected to our new server. The first thing I’m going to do is add a new user.

To add a new user (in this case, me):

adduser wolthuis

Provide a password and some other information for your new user:

Adding user `wolthuis' ...
Adding new group `wolthuis' (1002) ...
Adding new user `wolthuis' (1001) with group `wolthuis' ...
Creating home directory `/home/wolthuis' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for wolthuis
Enter the new value, or press ENTER for the default
        Full Name []: Dirk H. Wolthuis
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n]

To add the user to the sudoers group:

usermod -aG sudo username

Disable ssh for root user

You can now SSH into your server with the new user. To disable SSH login for root, edit /etc/ssh/sshd_config and change PermitRootLogin yes (or PermitRootLogin without-password ) to PermitRootLogin no . Exit nano with ctrl + x and restart the SSH service. Below are the commands you need to run.

ssh username@ipaddress
sudo nano /etc/ssh/sshd_config
sudo service ssh restart
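If you prefer a scripted edit over nano, the same change can be made with sed. Here is a sketch run against a throwaway copy in /tmp; on the server the real target is /etc/ssh/sshd_config and you would add sudo:

```shell
# Demo on a throwaway file; on the server the target is
# /etc/ssh/sshd_config (run the sed with sudo there).
printf 'PermitRootLogin without-password\n' > /tmp/sshd_config_demo
sed -i 's/^PermitRootLogin .*/PermitRootLogin no/' /tmp/sshd_config_demo
cat /tmp/sshd_config_demo   # prints: PermitRootLogin no
```

Don't forget to restart the SSH service afterwards, as shown above.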

Step 2: Install Docker, docker-compose and the loadbalancer

So now that the boring part is done, we can focus on why we’re actually here: getting the server and services up and running. Let’s start by installing Docker. You can follow the Docker docs; below are the commands that need to be run.

sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
sudo apt-get update
sudo apt-get install docker-ce

To check that the installation was successful, run sudo docker ps -a ; if everything is OK it will return an empty table.

Install docker-compose

Same here, you can follow the docs.

sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version

Output:

docker-compose version 1.18.0, build 8dd22a9

Manage Docker as a non-root user

It’s a nightmare to have to run everything Docker-related with sudo, so we are going to fix that. Follow the Docker docs:

sudo groupadd docker
sudo usermod -aG docker $USER

Log out and back in, then run docker ps -a again; you’ll notice it now works without sudo.

Step 3: Get yourself a fancy loadbalancer

A loadbalancer sounds super fancy, but don’t worry: we only use our loadbalancer to connect our Docker containers to ports 80/443 and to automatically get SSL/HTTPS with Let’s Encrypt. In this guide we’ll use Traefik as the loadbalancer. It has built-in Let’s Encrypt support and is easy to use. We’ll roughly follow the Traefik quickstart.

mkdir /srv/docker
sudo chown -R wolthuis:docker /srv/docker/
mkdir /srv/docker/lb
mkdir /srv/docker/lb/data
touch /srv/docker/lb/docker-compose.yml
nano /srv/docker/lb/docker-compose.yml

Paste the following into the file:

version: '3'
services:
  traefik:
    image: traefik:1.7.3
    restart: always
    command:
      - --api
      - --docker
    ports:
      # Port 80 must be published too: the traefik.toml below redirects
      # HTTP to HTTPS and uses the http entrypoint for the ACME challenge.
      - 80:80
      - 443:443
      - 8080:8080
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data/traefik.toml:/traefik.toml
      - ./data/acme.json:/acme.json
    container_name: traefik
networks:
  web:
    external: true

A couple of important notes: the network we’re creating for Traefik matters for the other services too. The loadbalancer and your application need to be on the same Docker network, otherwise traffic can’t be routed to the proper container.

Create a network for the containers:

docker network create web

We just need to add some settings for the loadbalancer, and run the following commands:

touch /srv/docker/lb/data/acme.json && chmod 600 /srv/docker/lb/data/acme.json
touch /srv/docker/lb/data/traefik.toml
nano /srv/docker/lb/data/traefik.toml
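Traefik stores the Let’s Encrypt certificates in acme.json and will complain if the file is readable by others, which is why we chmod it to 600. You can sanity-check the mode with stat, sketched here on a throwaway path (use the real /srv/docker/lb/data/acme.json on your server):

```shell
# Demo on a throwaway path; on the server check
# /srv/docker/lb/data/acme.json instead.
touch /tmp/acme_demo.json && chmod 600 /tmp/acme_demo.json
stat -c '%a' /tmp/acme_demo.json   # prints: 600
```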

Paste the config into the file. Don’t forget to change the [acme] email field:

debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https","http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[retry]

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "my-awesome-app.org"
watch = true
exposedByDefault = false

[acme]
email = "your-email-here@my-awesome-app.org"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"

Ready to run the first docker container on your brand new server? Go ahead and run docker-compose up -d inside the /srv/docker/lb directory.

Cool. Visit the Traefik admin UI at $yourIP:8080 .

Step 4: Prepare your server for auto deployment

For the next two steps we need to do two things. First, we’ll write a .gitlab-ci.yml file that takes care of the build process on the Gitlab side. Second, we’ll write a docker-compose.yml file that we store on our server; that file takes care of the deployment. In short: the Gitlab build process reaches out to our server via SSH and runs docker-compose against the compose file. We’re going to create a special deploy user on our server that takes care of the deployment and has minimal rights.

Deploy user + SSH keys

sudo adduser deploy

And add the user to the docker group:

sudo usermod -aG docker deploy

Now we need to generate SSH keys for this user account, but we’re going to do this on our own machine. When generating SSH keys you get two files: a public and a private key. We need to copy the public key into the ~/.ssh/authorized_keys file on our server. The private key will be used in our Gitlab project; more on that in step 5.

So on our local machine we run:

ssh-keygen -f /Users/wolthuis/Desktop/deploy/id_rsa

We get two files; copy the content of id_rsa.pub . On the server we do the following:

su deploy
Password:
mkdir ~/.ssh
nano ~/.ssh/authorized_keys

Paste the content and exit nano with ctrl + x .

Docker-compose.yml

The next step is to prepare the docker-compose.yml file for our project. I know what kind of project I want to deploy, so I prepared a docker-compose.yml file.

mkdir /srv/docker/ikbendirk-v1
touch /srv/docker/ikbendirk-v1/.env
sudo chown deploy:docker /srv/docker/ikbendirk-v1/.env
nano /srv/docker/ikbendirk-v1/docker-compose.yml

And paste the following:

version: '3'
services:
  ikbendirk-v1:
    restart: always
    image: "${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_NAME}:${IMAGE_TAG}"
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:v1.ikbendirk.nl"
      - "traefik.port=80"
    networks:
      - web
networks:
  web:
    external:
      name: web

You just need to edit the service name (where I wrote ikbendirk-v1 ) and replace v1.ikbendirk.nl with the web address you want to use. Don’t worry about the other variables yet; we’ll take care of those on the Gitlab side of things.
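For example, if your service were named my-blog and your domain blog.example.com (both hypothetical), the lines to change would look like this:

```yaml
services:
  my-blog:                                            # your service name
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:blog.example.com" # your domain
      - "traefik.port=80"
```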

Step 5: Prepare your Gitlab project for auto deployment

In the last step we need to place a .gitlab-ci.yml file in our Gitlab project. I’m going to use my first portfolio website. (The first website I ever created. Be gentle.)

DNS records / domain setup

The first thing we need to do is edit the DNS records for the domain we want to use. I use TransIP (a Dutch domain/hosting company). I want to use v1.ikbendirk.nl as the domain, so I create an A record for v1.ikbendirk.nl that resolves to the IP address of my server. You can do the same for your domain.

Deploying with Gitlab

Deploying with Gitlab is super easy, but there are some things we need to keep in mind. The deployment flow looks as follows:

1. We have a project on Gitlab that we can push to and pull from;
2. When we push our commits, Gitlab looks for the .gitlab-ci.yml file inside the repository and runs it if the branches match. So if you specified that it needs to run when the master branch is updated, pushing to master triggers the .gitlab-ci.yml script;
3. In the .gitlab-ci.yml file we create two steps: a build step and a deploy step;
4. In the build step we use the included Dockerfile to create a Docker image, and push that image to the Gitlab image registry;
5. In the deploy step we log in to our server via SSH (we created the deploy user for this) and pull our image from the Gitlab image registry;
6. Then we run docker-compose up -d on the docker-compose file we created, to run the image we built.

This flow happens every time we push to the selected branch. I’m going to set up auto deployment on my master branch.

Create .gitlab-ci.yml file

.gitlab-ci.yml (in the root of the project):

image: docker:git
services:
  - docker:dind

variables:
  IMAGE_TAG: $CI_COMMIT_SHA

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build --build-arg NODE_ENV=prod -t $CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_COMMIT_REF_NAME:$IMAGE_TAG .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_COMMIT_REF_NAME:$IMAGE_TAG
  only:
    - master

deploy:
  stage: deploy
  before_script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H $DEPLOYMENT_SERVER >> ~/.ssh/known_hosts
  script:
    - echo -e "IMAGE_TAG=${IMAGE_TAG}\nCI_REGISTRY=${CI_REGISTRY}\nCI_PROJECT_NAMESPACE=${CI_PROJECT_NAMESPACE}\nCI_PROJECT_NAME=${CI_PROJECT_NAME}\nCI_COMMIT_REF_NAME=${CI_COMMIT_REF_NAME}" > .env
    - scp ./.env $DEPLOYMENT_USER@$DEPLOYMENT_SERVER:$DEPLOYMENT_LOCATION/.env
    - ssh $DEPLOYMENT_USER@$DEPLOYMENT_SERVER "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh $DEPLOYMENT_USER@$DEPLOYMENT_SERVER "cd $DEPLOYMENT_LOCATION && docker-compose stop"
    - ssh $DEPLOYMENT_USER@$DEPLOYMENT_SERVER "cd $DEPLOYMENT_LOCATION && docker-compose up -d"
  only:
    - master

There are a lot of variables in this file; some are auto-generated by Gitlab, and some must be entered in the Gitlab repository settings. We’ll take care of that in a later step. The file is pretty straightforward; the only magic happens during the deployment: a .env file is generated and copied to our server, with some variables pushed into it. Those variables are read by docker-compose when docker-compose up -d is run.
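To make the variable plumbing concrete, here is a sketch with made-up values showing the image reference that docker-compose resolves once the .env file is in place:

```shell
# Hypothetical values, mirroring what the deploy step writes into .env
IMAGE_TAG=3f2a1bc
CI_REGISTRY=registry.gitlab.com
CI_PROJECT_NAMESPACE=wolthuis
CI_PROJECT_NAME=ikbendirk-v1
CI_COMMIT_REF_NAME=master
# The image: line in our docker-compose.yml then expands to:
echo "${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}/${CI_COMMIT_REF_NAME}:${IMAGE_TAG}"
# prints: registry.gitlab.com/wolthuis/ikbendirk-v1/master:3f2a1bc
```

Because IMAGE_TAG is the commit SHA, every push produces a new image tag, and docker-compose up -d pulls it automatically.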

Create Dockerfile

Dockerfile (in the root of the project):

FROM exiasr/alpine-yarn-nginx:8.9.4
WORKDIR /usr/share/nginx/www
ADD ./ /usr/share/nginx/www
RUN yarn install
RUN yarn global add gulp
RUN gulp sass
RUN mv nginx/default.conf /etc/nginx/conf.d
EXPOSE 80

You can’t just copy-paste this Dockerfile for your project, but I wanted to show how simple a Dockerfile can be for a small project. Note that you don’t have to expose port 443 for HTTPS traffic: the loadbalancer takes care of SSL termination and routes the traffic to port 80 of the container.

Environment variables in your Gitlab project

In the .gitlab-ci.yml file we use some variables. Some are provided by Gitlab, but some need to be defined by us. In your Gitlab project go to Settings > CI / CD and expand the Variables tab. There we need to add the following variables:

DEPLOYMENT_SERVER (your server IP)

PRIVATE_KEY (the contents of the id_rsa file we created in the previous step)

DEPLOYMENT_USER (the name of the server user that deploys, so in our case deploy )

DEPLOYMENT_LOCATION (the directory in which we created our docker-compose.yml file in the previous step, so in my case /srv/docker/ikbendirk-v1 )

All we need to do now is push our files on the master branch to Gitlab. If everything is set up correctly, Gitlab will start processing your .gitlab-ci.yml file.

This is the build result on Gitlab:

This is what my website looks like:

This is what my Traefik backend looks like: