While there are plenty of DevOps tools that can fulfill some of the functions of GitOps, GitLab is the only tool that can take your application from idea to code to deployment all in one collaborative platform. GitLab strategic account leader Brad Downey shows users how we make GitOps work in a three-part blog and video series. In part two, Brad demonstrates how infrastructure teams can use GitLab and Terraform to deploy their infrastructure as code to the cloud. Learn how GitLab powers GitOps processes in part one of our series.

When multiple teams use a Git repository, hosted on a platform such as GitLab, as the single source of truth for all infrastructure and application deployment code, they are following good GitOps practice.

Brad Downey, strategic account leader at GitLab, demonstrates how infrastructure teams can collaborate on code in GitLab and then deploy their code to multiple cloud services using Terraform for automation.

“I'm going to walk you through how we create three different Kubernetes clusters in three different public clouds – all using a common process and collaborating with my team, all within GitLab,” says Brad in the demonstration embedded below.

Building your infrastructure as code in GitLab

Getting Started

Begin by logging into the group where the project lives within GitLab. Brad created the gitops-demo group for this blog series. Next, open the README.md file, which describes the underlying structure of the gitops-demo group. It contains a few individual projects and two subgroups: infrastructure and applications. This demo focuses on infrastructure; we'll visit the application deployment project in the third blog post of the series.

Inside the infrastructure subgroup

There is a separate repository for each cloud (Azure, GCP, and AWS), as well as a repository for templates.

While similar files can be found in all three cloud repositories, Brad opens the AWS repository in this demo. All of the infrastructure files are written in Terraform to automate the deployment process, and a .gitlab-ci.yml file stored in the same repository provides the instructions for automation.

The backend file

We are using HashiCorp's Terraform Cloud as a remote location for our state file. This keeps the state file safe and in a central location where it can be accessed by any process. One advantage of Terraform Cloud is that it can lock the state so only one job runs at a time, which prevents multiple jobs from making conflicting changes. The code below says that we store the state file in Terraform Cloud, in an organization called gitops-demo and a workspace called aws.

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "gitops-demo"

    workspaces {
      name = "aws"
    }
  }
}
```

“This keeps our running state in the cloud provider, so anybody – well, anybody on my team, at least – can access this at any time,” says Brad.

The eks.tf file

The eks.tf file is another Terraform file; it leverages the terraform-aws-modules/eks module to create the Kubernetes cluster on AWS.

module "eks" { source = "terraform-aws-modules/eks/aws" cluster_name = "gitops-demo-eks" subnets = "${module.vpc.public_subnets}" write_kubeconfig = "false" tags = { Terraform = "true" Environment = "dev" } vpc_id = "${module.vpc.vpc_id}" worker_groups = [ { instance_type = "m4.large" asg_max_size = 5 tags = [{ key = "Terraform" value = "true" propagate_at_launch = true }] } ] }

In eks.tf we can define parameters such as which subnets to use, the worker instance type, and how many nodes to run.

Define the GitLab admin

“I need to create a GitLab admin user on the Kubernetes cluster,” explains Brad. “I want that done automatically as code and managed by Terraform. So I leveraged the Kubernetes provider to do this.”

Since the code contained in this file is longer, we’re just including a link to the gitlab-admin file rather than the full code excerpt.
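As a rough illustration of what that file does, a gitlab-admin service account managed through the Kubernetes provider might look something like the sketch below. This is a hypothetical reconstruction, not the repository's actual code; the resource and namespace names are assumptions.

```hcl
# Hypothetical sketch of a gitlab-admin service account; the real
# gitlab-admin file in the repo may differ.
resource "kubernetes_service_account" "gitlab_admin" {
  metadata {
    name      = "gitlab-admin"
    namespace = "kube-system"
  }
}

# Bind the service account to the built-in cluster-admin role so
# GitLab can manage workloads on the cluster.
resource "kubernetes_cluster_role_binding" "gitlab_admin" {
  metadata {
    name = "gitlab-admin"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind      = "ServiceAccount"
    name      = "gitlab-admin"
    namespace = "kube-system"
  }
}
```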

Register the cluster with GitLab

We just built a Kubernetes cluster! 🎉 Now, we must register the cluster with GitLab so we can deploy more code to the cluster in the future.

First, we use the GitLab Terraform provider to create a group cluster resource named aws_cluster.

data "gitlab_group" "gitops-demo-apps" { full_path = "gitops-demo/apps" } provider "gitlab" { alias = "use-pre-release-plugin" version = "v2.99.0" } resource "gitlab_group_cluster" "aws_cluster" { provider = "gitlab.use-pre-release-plugin" group = "${data.gitlab_group.gitops-demo-apps.id}" name = "${module.eks.cluster_id}" domain = "eks.gitops-demo.com" environment_scope = "eks/*" kubernetes_api_url = "${module.eks.cluster_endpoint}" kubernetes_token = "${data.kubernetes_secret.gitlab-admin-token.data.token}" kubernetes_ca_cert = "${trimspace(base64decode(module.eks.cluster_certificate_authority_data))}" }

The code contains the domain name, environment scope, and Kubernetes credentials.

“So after this runs, all of this will be deployed,” says Brad. “My cluster will be created in AWS and it will be automatically registered to my gitops-demo/apps group.”

Deploying our code using GitLab CI

Terraform template

Return to the infrastructure group and open the templates repository. Looking at the terraform.gitlab-ci.yml file, we can see how CI deploys your infrastructure code to the cloud using Terraform.

Inside the CI file we see a few different stages: validate, plan, apply, and destroy.

We use HashiCorp's Terraform base image to run a few different tasks.
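The top of the CI file, which selects that image and declares the stages, might look roughly like this. This is a sketch based on the stage names listed above; the exact image tag and the entrypoint override are assumptions, not the template's actual contents.

```yaml
# Hypothetical top of terraform.gitlab-ci.yml; the real template may differ.
image:
  name: hashicorp/terraform:light
  entrypoint: [""]   # override the image entrypoint so GitLab CI can run shell scripts

stages:
  - validate
  - plan
  - apply
  - destroy
```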

First, we initialize Terraform.

```yaml
before_script:
  - terraform --version
  - terraform init
  - apk add --update curl
  - curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/linux/amd64/kubectl
  - install kubectl /usr/local/bin/ && rm kubectl
  - curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/linux/amd64/aws-iam-authenticator
  - install aws-iam-authenticator /usr/local/bin/ && rm aws-iam-authenticator
```

Next, we validate that everything is correct.

```yaml
validate:
  stage: validate
  script:
    - terraform validate
    - terraform fmt -check=true
  only:
    - branches
```

We learned in the previous blog post that a good GitOps workflow has us create a merge request for our changes.

```yaml
merge review:
  stage: plan
  script:
    - terraform plan -out=$PLAN
    - echo \`\`\`diff > plan.txt
    - terraform show -no-color ${PLAN} | tee -a plan.txt
    - echo \`\`\` >> plan.txt
    - sed -i -e 's/  +/+/g' plan.txt
    - sed -i -e 's/  ~/~/g' plan.txt
    - sed -i -e 's/  -/-/g' plan.txt
    - MESSAGE=$(cat plan.txt)
    - >-
      curl -X POST -g -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}"
      --data-urlencode "body=${MESSAGE}"
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/merge_requests/${CI_MERGE_REQUEST_IID}/discussions"
  artifacts:
    name: plan
    paths:
      - $PLAN
  only:
    - merge_requests
```

The merge request

The merge request (MR) is the most important step in GitOps. This is the process to review all changes and see the impact of those changes. The MR is also a collaboration tool. Team members can weigh in on the MR and stakeholders can approve your changes before the final merge into master.

In the MR we define what will happen when we run the infrastructure as code. After the MR is created, the Terraform plan is uploaded to the MR.

After all changes have been reviewed and approved, we click the merge button, which merges the changes into the master branch. Once the code changes land in master, they are deployed to production.
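The apply job that performs that final deployment is not shown above. A minimal sketch, assuming an auto-approved apply that runs only on master, could look like this; the actual job in the template may differ.

```yaml
# Hypothetical apply job; the real terraform.gitlab-ci.yml may differ.
apply:
  stage: apply
  script:
    - terraform apply -auto-approve
  only:
    - master
```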

And that’s how we follow good GitOps procedure to deploy infrastructure as code using Terraform for automation and GitLab as the single source of truth (and CI). In part three of our blog series, we’ll show application developers how to deploy to any cloud service using GitLab.

Want more infrastructure as code? Read on to learn how GitLab works with Ansible to create infrastructure as code.

Big thank you to Brad Downey for recording the videos that are the basis for the content in this blog series.