Building containers and deploying to your clusters by hand can be very tedious.

In this article I will show how I built a pipeline for Shopgun on AWS using CodePipeline, CodeBuild, CloudWatch, ECR, DynamoDB, Lambda, some Python and Terraform.

It requires no running build server and builds & deploys automatically to the staging environment.

The pricing model is pay per minute while builds are running. You also get a bunch of free build minutes every month from Amazon ❤️👍

From here I’ll refer to the orchestration of these different pieces as “the pipeline”.

Staging & Production are Kubernetes clusters running in AWS.

You can read about them in my earlier article Kubernetes in production @ Shopgun.

Introduction

The purpose of the pipeline is to build our software projects, assemble the artefacts in docker images, store these images and deploy them to the staging environment.

Whenever a project release has been built and deployed to the staging environment, a developer can — with one or two invocations of CLI scripts — easily deploy that build to the production environment.

Trying to illustrate it, it looks something along these lines.

Without further ado, let’s dive into it!

CodePipeline

This is the glue that strings it all together in AWS.

AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software. You can quickly model and configure the different stages of a software release process. CodePipeline automates the steps required to release your software changes continuously.

We create a CodePipeline project that in several steps builds, tests & deploys code.

The software project pipelines are created by a Terraform module, which is shown verbatim below.

The CodePipeline project it creates has the following steps for the projects:

Source

This step fetches the source code from a Github repo and prepares it for usage in the pipeline.

Build

The source is sent to CodeBuild for building and then pushed to ECR.

Deploy

This step takes the artifacts from the build step and invokes a custom-written lambda function that handles deploying to the Kubernetes cluster.

Integration Tests

This step invokes a lambda function that sends a trigger to an internal service to run integration tests in the staging environment.

For most of them we use an in-house tool called “Blue Marlin” or “katt” that can report back via Slack & Github if it fails, like so:

cat tax included

To invoke these test suites we use Lambda functions to act as a “proxy” between CodePipeline and the test suite.

Its only purpose is to forward the event to the test suite, which runs a web server listening for “start” signals.
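Such a proxy can be very small; here is a minimal sketch. The endpoint URL and payload shape are assumptions on my part, and the real function would also report the job result back to CodePipeline:

```python
import json
import urllib.request

# Hypothetical endpoint of the internal test suite; the real URL is
# Shopgun-internal and not part of the article.
TEST_SUITE_URL = "http://test-suite.internal:8080/start"


def build_trigger_payload(event):
    """Extract the CodePipeline job id so the test suite can report back."""
    job = event["CodePipeline.job"]
    return {"job_id": job["id"], "signal": "start"}


def handler(event, context):
    """Forward the CodePipeline invocation as a "start" signal."""
    payload = build_trigger_payload(event)
    req = urllib.request.Request(
        TEST_SUITE_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```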

and the accompanying terraform code:

CodeBuild

AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artefacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for the most popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests.

In this pipeline we are only using CodeBuild to build Docker images but it comes with a collection of different build images you can use.

Each project has a buildspec.yml & deployspec.yml placed in the root of the project source folder, containing the build settings for the project as well as the deployment settings.

Install phase

The install phase obtains an ECR login so our build project can push its Docker image once it is built.

Build phase

Here we build our docker image and tag it latest and git-<commitref>.

After the image is built it’s pushed to the ECR repo using the credentials obtained in the install phase.
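As a sketch of these phases — not the article’s actual file, and with $REPO_URI standing in for the project’s ECR repository URI — a buildspec.yml could look like:

```yaml
version: 0.2

phases:
  install:
    commands:
      # Obtain ECR credentials so the built image can be pushed
      - $(aws ecr get-login --no-include-email --region eu-west-1)
  build:
    commands:
      # Tag with both "latest" and the git commit ref
      - docker build -t $REPO_URI:latest -t $REPO_URI:git-$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - docker push $REPO_URI:latest
      - docker push $REPO_URI:git-$CODEBUILD_RESOLVED_SOURCE_VERSION

artifacts:
  files:
    - deployspec.yml
    - deploy/**/*
```

`CODEBUILD_RESOLVED_SOURCE_VERSION` is the commit id CodeBuild exposes to the build environment.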

Artifacts

deployspec.yml — holds template values & settings for the deployment lambda function.

deploy/* — the jinja2 templated Kubernetes yaml files for the project; they are saved in the Github repo together with the Dockerfile and the project source code.

Buildlogs

Logs from CodeBuild are pushed to CloudWatch Logs.

Developers can get the logs directly in their terminal using a small script. It even comes with a nifty tail function 😃

We also output CodeBuild & CodePipeline status to Slack using a Lambda function that triggers on CloudWatch events.
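At its core such a function just reshapes the CloudWatch event and POSTs it to the hook. A minimal sketch — the event fields follow the CodePipeline state-change event, while the hook URL is a placeholder and the real function would decrypt it via KMS:

```python
import json
import urllib.request


def slack_message(event):
    """Reshape a CloudWatch CodePipeline state-change event into a Slack payload."""
    detail = event["detail"]
    return {
        "text": "Pipeline *{}* entered state `{}`".format(
            detail["pipeline"], detail["state"]
        )
    }


def handler(event, context):
    # Placeholder hook; the real function decrypts slack_hook_url with KMS
    hook_url = "https://hooks.slack.com/services/123/456/789"
    req = urllib.request.Request(
        hook_url,
        data=json.dumps(slack_message(event)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```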

The terraform code used to create the Lambda function:

Create a file called terraform.tfvars to hold your Slack hook url and KMS id.

slack_hook_url = "https://hooks.slack.com/services/123/456/789"

kms_id = "<your kms id>"

Deployspec.yml

This file contains the specification for what files & namespace the deployer should use as well as any custom templating values you would like to pass to the template renderer.

Envsecret

If set to true, a Kubernetes secret named <projectname>-<env> is created in the target cluster containing the application’s environment config.

The deployer fetches these from an S3 bucket: s3://<my-config-bucket>/<projectname>/config

Spec

This section specifies what resources to create when the deployer is executing.

Supported right now are:

configmap

service

deploy — this is for deployments

stateful — statefulset

ingress

cron — cronjob

pdb — pod disruption budget

servicemonitor
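Purely as an illustration — the exact schema lives in the deployer’s source, so the field layout here is my guess — a deployspec.yml could look along these lines:

```yaml
# Hypothetical deployspec.yml; section names follow the text above
namespace: my-project
envsecret: true
spec:
  configmap:
    - deploy/configmap.yaml
  service:
    - deploy/service.yaml
  deploy:
    - deploy/deployment.yaml
  ingress:
    - deploy/ingress.yaml
template_values:
  replicas: 3
```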

Templating

The names staging.domain.com & production.domain.com refer to the context names in the kubectl config, used by the Lambda function to know what cluster it’s executing against.

Any manifests listed under `spec` will be parsed through the jinja2 template renderer before getting deployed.

You map the values available to the renderer by adding them to the Deployspec.

Both trim_blocks and lstrip_blocks are enabled, see the docs.

Template values

The following are the built in template values:

cluster_name — the name of the cluster

deploy_env — current deploy environment

namespace — the namespace defined in deployspec.yml

deploy_image — the full URI to the docker image from the last build

docker_tag — the docker tag of the image, e.g. “git-asd123”

project — the project name

You use them in templates like so: {{cluster_name}}.
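To make the mapping concrete, here is a tiny stand-in for the jinja2 renderer with sample values (the real values come from the pipeline and deployspec.yml; the substitution below only handles plain placeholders, not full jinja2 syntax):

```python
import re

# Sample data shaped like the built-in template values listed above
values = {
    "cluster_name": "staging.domain.com",
    "deploy_env": "staging",
    "namespace": "my-project",
    "deploy_image": "123456789.dkr.ecr.eu-west-1.amazonaws.com/my-project:git-asd123",
    "docker_tag": "git-asd123",
    "project": "my-project",
}


def render(template, values):
    """Stand-in for the jinja2 renderer: replace {{name}} placeholders."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(values[m.group(1)]), template)


manifest = "image: {{deploy_image}}\nnamespace: {{ namespace }}"
print(render(manifest, values))
```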

ECR — Elastic Container Registry

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.

It is used to store the images that are produced in the pipeline by CodeBuild.

All Kubernetes nodes are also authorized to pull images from the private repos created there.

The terraform module to create pipelines also sets a few lifecycle policy rules to clean out old, unused images.
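The exact rules the module sets are not shown in the article, but a rule of that kind, in the JSON shape ECR expects, could look like this:

```python
import json

# Example lifecycle rule: expire untagged images after 14 days.
# The rule content here is an assumption, not the module's actual rules.
policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images after 14 days",
            "selection": {
                "tagStatus": "untagged",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 14,
            },
            "action": {"type": "expire"},
        }
    ]
}

# ECR takes the policy as a JSON string (e.g. via put_lifecycle_policy)
policy_text = json.dumps(policy)
```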

When CodeBuild builds and pushes images we tag them git-<commitref> and latest.

When deploying an image to an environment, the lambda function also adds the environment name to the image as a tag.

If the tag already exists it is simply moved to the new image.

DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

I use this as the “source of truth” for what version has been deployed where and when, for the deployer and CLI tools.
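The attribute names below are illustrative, not the actual table schema, but a deployment record along these lines would answer “what, where, when”:

```python
from datetime import datetime, timezone


def deployment_record(project, env, docker_tag, deploy_image):
    """Shape of a record the deployer could write to DynamoDB
    (attribute names are illustrative, not the real table schema)."""
    return {
        "project": project,
        "deploy_env": env,
        "docker_tag": docker_tag,
        "deploy_image": deploy_image,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    }
```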

Listing what’s deployed

I wrote a small bash script to list what’s deployed in our beloved terminals ❤

The sauce:

Python Lambda function

This is the heart of the pipeline: this lambda function accepts invocations from CodePipeline and from users via the sgn-deploy script.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume — there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service — all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

The source code for the lambda function can be found here https://github.com/roffe/k8s-deployer.

Builds and deploys to staging are triggered by commits to Github.

Deploying to production after an automated build has finished in staging is as simple as:

Then on slack we can see:

And some sauce for the above script:

Terraform

To create pipelines I wrote a small terraform module I called “Mr Robot”. It will:

Create necessary IAM permissions & roles

Create the CodePipeline project and steps

Create CodeBuild project

Create ECR repo and set image lifecycles

To use the module, call it with the necessary parameters:
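Purely as an illustration — the variable names here are made up; the real ones are defined in the module — a call could look like:

```hcl
# Hypothetical invocation of the "Mr Robot" module
module "my_project_pipeline" {
  source        = "./modules/mr-robot"
  project       = "my-project"
  github_repo   = "shopgun/my-project"
  github_branch = "master"
}
```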

The module code is as follows:

Bonus stuff

Mirroring images

I found it’s sometimes handy to mirror public images to the private ECR repo so you know for sure you are in control of the images.

It can also guard against those rare occasions where Docker Hub is down and no one can pull images 😅

Source

Managing env config files from CLI

A few nifty shell scripts to manage the env config files stored in an S3 bucket.

sgn-cfg-cat <project> <env>

This script will print out your config file.

sgn-cfg-edit <project> <env>

Opens the config in your preferred editor and asks you if you want to upload any changes when saving and exiting.

sgn-cfg-ls

Lists what applications you have in your config bucket.

Last words

The examples in this article might not be a 1:1 fit for everyone, but they should be a sufficient base to adapt into whatever needs one might have.

I hope you found the read worthwhile and that I have inspired you to use the different components in AWS in new creative ways!

Shopgun is on the lookout for new talents with interest in infrastructure & Docker/Kubernetes. If you feel that this is your cup of tea and maybe have a background as a software developer and want to take the next step and work with the whole stack from frontend & backend all the way down to the infrastructure, you should hit me up on joakim (a) roffe.nu.

See you next time! 👋 👋 👋