Setting up CI/CD

Setting up a Continuous Integration and Continuous Deployment (CI/CD) pipeline is straightforward in Gitlab. It’s baked into the Gitlab offering and is configured by adding a .gitlab-ci.yml file to the root of your project. A CI/CD pipeline is triggered when you push code to the Gitlab repo. The pipeline must run on a server, which is called a “Runner”. Runners can be virtual private servers, public servers, or anywhere else you can install the Gitlab runner client. In our use case, we are going to install a runner on the k8s cluster so that jobs are executed in pods. This also makes the setup scalable, since we can run multiple jobs in parallel.

To install the Gitlab runner client on the cluster, we will first need to install another tool named Helm. Helm is a package manager for Kubernetes that simplifies the installation of software. I like to think of Helm as being similar to Homebrew for Mac: both have a repo of software that can be installed onto a system.

Installing Helm

Installing Helm through Gitlab just requires you to click the Install button. Assuming everything was configured properly, it will take just a few seconds to install.

Gitlab — Install Helm tiller on the k8s cluster

After that has completed, let’s take a peek at the cluster to see what Gitlab has installed. Using the kubectl get ns command, we can see that Gitlab has created its own namespace, named gitlab-managed-apps.

➜ kubectl get ns
NAME                  STATUS   AGE
default               Active   1d
gitlab-managed-apps   Active   23s
kube-public           Active   1d
kube-system           Active   1d

If we run kubectl get pods we won’t see anything, because kubectl uses the default namespace when none is specified. To see pods in the Gitlab namespace, run kubectl get pods -n gitlab-managed-apps.

➜ kubectl get pods -n gitlab-managed-apps
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-7dd47f89cc-27cmt   1/1     Running   0          5m

Here we see that the “Helm tiller” has been created successfully and is running.

Installing the Gitlab Runner

As stated earlier, the Gitlab runner allows our CI/CD jobs to run in the k8s cluster. Installing the runner with Gitlab is simple: just click the Install button.

Gitlab — Install the runner on the k8s cluster

This took about 1 minute for my cluster. After it’s completed, take a look at the pods again and you should see a new pod for the Gitlab runner.

➜ kubectl get pods -n gitlab-managed-apps
NAME                                    READY   STATUS    RESTARTS
runner-gitlab-runner-5cffc648d7-xr9rq   1/1     Running   0
tiller-deploy-7dd47f89cc-27cmt          1/1     Running   0

You can verify that the runner is connected to your project by viewing the Settings ➜ CI/CD ➜ Runners section within Gitlab.

Gitlab — Kubernetes runner is activated for the project

Run a Pipeline

Great, so now we have a fully functional Gitlab project, connected to Kubernetes, with runners ready to execute our CI/CD pipelines. Let’s set up an example Golang project to see how these pipelines can be triggered. For this project we will run a simple HTTP server that returns the classic “Hello World”.

First, write the Go code:

// main.go

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc(
		"/hello",
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintf(w, "Hello World!")
		},
	)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

And then the Dockerfile to run it:

# Dockerfile

FROM golang:1.11
WORKDIR /go/src/app
COPY . .
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]

Next we will create a .gitlab-ci.yml file to define our CI/CD pipeline. The file is evaluated on every code push, and if the branch or tag matches any jobs, those jobs are executed automatically by one of the Gitlab runners that we configured earlier.

The first step in our pipeline will be to create a Docker image of our application whenever we push to the master branch. We can do so with the following configuration:

# Gitlab CI Definition (.gitlab-ci.yml)

stages:
  - build
  - deploy

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://localhost:2375

build_app:
  image: docker:latest
  stage: build
  only:
    - master
  script:
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_NAME} .
    - docker login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_NAME}

Let’s walk through each block in the file. The stages: block defines the order of stages in our pipeline. We only have 2 stages, build and then deploy.

The services: block includes the official Docker-in-Docker (or dind) image, which will be linked into all jobs. We need this because we will be building our application’s Docker image inside of the Gitlab CI Docker containers.

Next we have the build_app: job. This job name is specific to our project and can be anything you would like. The image: indicates we are using the latest Docker image from Docker Hub. The stage: tells Gitlab which stage this job belongs to. One neat thing to keep in mind is that jobs in the same stage will run in parallel. The only: tag indicates that we will only run this job on commits to the master branch. Finally, the script: is the meat of the job: it runs docker build to create our image, then docker login to authenticate against the Gitlab registry, and then docker push to push that image to our registry.
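To illustrate the parallelism, here is a hypothetical test_app job (not part of the project above) placed in the same build stage; Gitlab would run it alongside build_app:

```yaml
# Hypothetical companion job; same stage as build_app, so it runs in parallel
test_app:
  image: golang:1.11
  stage: build
  only:
    - master
  script:
    - go test ./...
```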

At this point we can commit and push the code and you should see a brand new image in the Gitlab registry.

After an image is built and saved in the registry, the next step is to deploy it. We need to define a deployment configuration that tells Kubernetes how we want to run the application. The following YAML file does exactly that:

# Deployment Configuration (deployment-template.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: registry.gitlab.com/thisiskj/example:$CI_COMMIT_TAG
          ports:
            - containerPort: 8080

This file defines a deployment with a single replica that will run the image from the project (registry.gitlab.com/thisiskj/example), with the $CI_COMMIT_TAG placeholder standing in for the image tag to deploy.

To trigger a deployment, I have configured the .gitlab-ci.yml file to deploy whenever the code repo is tagged. Here is the job definition:

deploy_app:
  image: thisiskj/kubectl-envsubst
  stage: deploy
  environment: production
  only:
    - tags
  script:
    - envsubst \$CI_COMMIT_TAG < deployment-template.yaml > deployment.yaml
    - kubectl apply -f deployment.yaml

This job runs the envsubst command to replace the $CI_COMMIT_TAG variable inside deployment-template.yaml with the name of the git tag that triggered the build. The $CI_COMMIT_TAG environment variable is set by the Gitlab runner, and we tell envsubst to essentially search and replace that variable within the file.

Viewing the Application

At this point everything is wired up and our deployment will run on every new tag.
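For example, creating and pushing a tag is what fires the deploy_app job. A sketch in a scratch repo (the path and tag name are examples):

```shell
# Create a throwaway repo so the tag commands can be demonstrated end to end.
git init -q /tmp/scratch-repo
cd /tmp/scratch-repo
git config user.email "you@example.com"
git config user.name "you"
git commit -q --allow-empty -m "initial commit"

# Create the release tag; in the real project this drives $CI_COMMIT_TAG.
git tag v1.0.0
git tag -l   # lists: v1.0.0

# In the real project, pushing the tag triggers the pipeline:
# git push origin v1.0.0
```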

We can see the running pod:

➜ kubectl -n example-10311640 get pods
NAME                                  READY   STATUS    RESTARTS
example-deployment-756c8f6dc5-jk85w   1/1     Running   0

Now, it’s great that the pod is running, but we cannot access the Go HTTP service externally. To allow external access, we can create a service of type LoadBalancer. Add the following spec to the deployment YAML to create a LoadBalancer on DigitalOcean:

---
kind: Service
apiVersion: v1
metadata:
  name: example-loadbalancer-service
spec:
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer

On the next deployment, we can monitor the creation of the LoadBalancer. It might take a few minutes for the external IP to appear.

➜ kubectl -n example-10311640 get services
NAME                           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
example-loadbalancer-service   LoadBalancer   10.245.40.9   <pending>     80:30897/TCP   4s

...

➜ kubectl -n example-10311640 get services
NAME                           TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
example-loadbalancer-service   LoadBalancer   10.245.40.9   157.230.64.204   80:30897/TCP   2m9s

We can also monitor the load balancer creation within the DigitalOcean console:

DigitalOcean — Networking console

Finally, we can view our application by navigating to the IP address in our browser: