
Web Captioner now runs on AWS Elastic Container Service (ECS) and Fargate, Amazon services that let you deploy a Dockerized application without having to configure servers. This post explains how I deploy the Web Captioner application as an AWS Fargate task using GitLab.

The end result

With GitLab and AWS, I can make one-click deployments to my staging and production environments. I can independently deploy different branches of code to each environment. With minimal effort, it’s also possible for me to create an additional stack (load balancer and all), and deploy to that.

AWS also provides alerts if my application ever becomes unavailable (or the number of concurrent instances of my application falls below a threshold) and some cool graphs. Graphs are always cool.

AWS Fargate and Elastic Container Service

AWS Fargate is a new technology in the Amazon Web Services Elastic Container Service that allows you to run a Dockerized application without having to provision virtual servers. In conjunction with other AWS services, you can:

Configure multiple instances of your application to run concurrently for redundancy

Use a load balancer to distribute traffic among multiple instances

Run constant health checks on instances, and if an instance fails, start a new one in its place

Automatically scale the number of running instances up or down depending on CPU usage

I decided to use AWS ECS and Fargate for Web Captioner because of the redundancy and high availability it provides. It also abstracts away just enough of the work of server management so that I can spend more time on application development.
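As a concrete illustration of the last point, scaling on CPU usage is done with a target-tracking policy attached to the service. Here is a hedged sketch of such a policy; the target value and cooldowns are example numbers, not the settings Web Captioner actually uses:

```json
{
  "TargetValue": 75.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleInCooldown": 60,
  "ScaleOutCooldown": 60
}
```

You would pass a file like this to `aws application-autoscaling put-scaling-policy` with `--policy-type TargetTrackingScaling`, and ECS then adds or removes tasks to keep average CPU near the target.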

Preparing the AWS stack

The AWS stack includes the following resources, which you will need to set up:

- An Amazon ECS cluster
- A service that belongs to the cluster and runs Fargate-type tasks
- An Amazon Elastic Container Registry where Docker images will be stored
- A task definition that references a Docker image stored in your registry and defines CPU and memory requirements for that image. The service uses this task definition to start one to many running instances, called tasks.
- An application load balancer that redirects requests to healthy targets in a target group. My load balancer listens on ports 80 and 443 and redirects all traffic on those ports to one target group where one to many Web Captioner application tasks are running.
- A target group. My application is simple, so I only have one target group answering all types of requests. AWS Fargate abstracts away much of the work of dealing with a target group. Behind the scenes, Fargate tasks run on EC2 instances that are members of this target group.
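To make the relationships between these resources concrete, here is a heavily trimmed, hand-written CloudFormation sketch (not Web Captioner's actual template); it omits networking, the load balancer, and the target group, and the names are placeholders:

```json
{
  "Resources": {
    "Cluster": {
      "Type": "AWS::ECS::Cluster",
      "Properties": { "ClusterName": "webcaptioner-staging" }
    },
    "Service": {
      "Type": "AWS::ECS::Service",
      "Properties": {
        "Cluster": { "Ref": "Cluster" },
        "LaunchType": "FARGATE",
        "DesiredCount": 2,
        "TaskDefinition": "webcaptioner-staging"
      }
    }
  }
}
```

The service points at the cluster and a task definition; the registry and load balancer would be referenced from the task definition and the service's (omitted) load balancer configuration, respectively.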

Using the cluster creation wizard

A good way to get all this set up is to follow Amazon's cluster creation wizard in the AWS console. It'll create these resources and link them all together, but there are certain things you won't be able to change after they're created, like the names of some resources. To get around that, you can use CloudFormation to create the entire stack.

Making the entire stack with CloudFormation

It's difficult to recreate a stack you've made with the wizard (for example, if you want to have separate staging and production environments), so I use this Web Captioner CloudFormation stack template (JSON) to easily create an entire new instance of my stack.

Note that there are references to Web Captioner in here (search for "webcaptioner") that you will need to change. When you create this stack, it also asks for the name of an existing task definition, so you will need to have a task definition already created. You'll also need to set 80 and 443 (or other ports) as listening ports on your load balancer after the stack is created, and perhaps add a certificate from AWS Certificate Manager if you're going to use a domain name. If you use this template, treat it as a starting point and customize it to fit your needs.

webcaptioner-stack-template.json

You could also try creating your own CloudFormation stack based on the stack created by the cluster creation wizard, but you’ll have to do some tweaking to get it working in a repeatable way. The CloudFormation template above is the result of my tweaking to get something that works well for Web Captioner.
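Once you have a template that works, spinning up a new copy of the stack is a single `aws cloudformation create-stack` call. A sketch of the command, echoed rather than executed; the stack name and the `TaskDefinitionName` parameter key are examples, so check your own template's parameter names:

```shell
# Example values; substitute your own stack name and parameters
STACK_NAME="webcaptioner-staging"
TEMPLATE="file://webcaptioner-stack-template.json"

# Compose the command you would run against CloudFormation
CMD="aws cloudformation create-stack --stack-name $STACK_NAME --template-body $TEMPLATE --parameters ParameterKey=TaskDefinitionName,ParameterValue=webcaptioner-staging"
echo "$CMD"
```

Running the same command with a different stack name and parameters is how you get a second, independent environment.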

Deploying from GitLab

My application is a Node.js application in a Docker container that exposes itself on port 8080. AWS's application load balancer takes care of routing traffic from ports 80 and 443 to the container's port 8080. Before you continue, you'll want to make sure your application runs in a container and exposes itself on a single port. To keep things straight when configuring the load balancer, I expose a port that isn't 80 or 443.
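For reference, a minimal sketch of what such a container could look like for a Node.js app. This is not Web Captioner's actual Dockerfile; the base image version, file names, and start script are assumptions:

```dockerfile
# Base image version is an example; pin whatever your app needs
FROM node:8-alpine

WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm install --production

COPY . .

# The app listens on a single non-80/443 port;
# the application load balancer maps 80 and 443 to it
EXPOSE 8080
CMD ["npm", "start"]
```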

gitlab-ci.yml

My gitlab-ci.yml file looks like this:

gitlab-ci.yml

```yaml
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - deploy

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com

Build:
  stage: build
  script:
    - docker build --pull -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

Staging:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: staging
    url: https://staging.webcaptioner.com
  script:
    - source scripts/deploy.sh

Production:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: production
    url: https://webcaptioner.com
  script:
    - source scripts/deploy.sh
  when: manual
```

```yaml
image: docker:latest

services:
  - docker:dind
```

These lines let us use the docker-in-docker executor on GitLab.com, which gives us access to docker and docker-compose in our CI scripts.





```yaml
stages:
  - build
  - deploy
```

These define two pipeline stages, build and deploy; jobs in the build stage run before jobs in the deploy stage.

```yaml
before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
```

Before every job, we log into GitLab's container registry using the per-job token that GitLab provides.

```yaml
Build:
  stage: build
  script:
    - docker build --pull -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
```

The Build job builds the Docker image, tags it with the branch slug, and pushes it to the GitLab registry.

```yaml
Staging:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: staging
    url: https://staging.webcaptioner.com
  script:
    - source scripts/deploy.sh

Production:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: production
    url: https://webcaptioner.com
  script:
    - source scripts/deploy.sh
  when: manual
```

The Staging and Production jobs run the same deploy script; the only differences are the environment name and URL, and Production deploys only when triggered manually (when: manual).

scripts/deploy.sh

```shell
# Install AWS Command Line Interface
# https://aws.amazon.com/cli/
apk add --update python python-dev py-pip
pip install awscli --upgrade

docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

# Set AWS config variables used during the AWS get-login command below
export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY

# Log into the AWS Docker registry
# The `aws ecr get-login` command returns a `docker login` command with
# the credentials necessary for logging into the AWS Elastic Container Registry,
# made available with the AWS access key and AWS secret access key above.
# The command returns an extra carriage return at the end that needs to be stripped out.
$(aws ecr get-login --no-include-email --region $AWS_REGION | tr -d '\r')

# Push the updated Docker container to the AWS registry.
# Using the $CI_ENVIRONMENT_SLUG variable provided by GitLab, we can use this same script
# for all of our environments (production and staging). This variable equals the environment
# name defined for this job in gitlab-ci.yml.
docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG $AWS_REGISTRY_IMAGE:$CI_ENVIRONMENT_SLUG
docker push $AWS_REGISTRY_IMAGE:$CI_ENVIRONMENT_SLUG

# The AWS registry now has our new container, but our cluster isn't aware that a new version
# of the container is available. We need to create an updated task definition. Task definitions
# always have a version number. When we register a task definition using a name that already
# exists, AWS automatically increments the previously used version number for the task
# definition with that same name and uses it here. Note that we also define CPU and memory
# requirements here and give it a JSON file describing our task definition that I've saved
# to my repository in an aws/ directory.
aws ecs register-task-definition \
    --family webcaptioner-$CI_ENVIRONMENT_SLUG \
    --requires-compatibilities FARGATE \
    --cpu 256 \
    --memory 512 \
    --cli-input-json file://aws/webcaptioner-task-definition-$CI_ENVIRONMENT_SLUG.json \
    --region $AWS_REGION

# Tell our service to use the latest version of the task definition.
aws ecs update-service \
    --cluster webcaptioner-$CI_ENVIRONMENT_SLUG \
    --service webcaptioner \
    --task-definition webcaptioner-$CI_ENVIRONMENT_SLUG \
    --region $AWS_REGION
```
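To see why the `tr -d '\r'` is there: the output of `aws ecr get-login` ends with a carriage return, and evaluating it unmodified would pass that stray character along to `docker login`. A standalone illustration, using a fake login string rather than a real `get-login` call:

```shell
# A stand-in for the string `aws ecr get-login` prints
# (account ID and token are made up; note the trailing carriage return)
login_cmd="docker login -u AWS -p faketoken https://123456789012.dkr.ecr.us-east-1.amazonaws.com"$'\r'

# Strip carriage returns before evaluating the command
cleaned=$(printf '%s' "$login_cmd" | tr -d '\r')

printf '%s' "$cleaned"
```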

```yaml
Staging:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: staging
    url: https://staging.webcaptioner.com
  script:
    - source scripts/deploy.sh
```
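Because each job declares an environment name, the same deploy script produces a different ECR image tag per environment. A quick sketch of that tag composition with placeholder values (the account ID is made up, and in CI these variables come from GitLab, not from the script):

```shell
# Placeholder values; GitLab supplies these in a real pipeline
CI_ENVIRONMENT_SLUG="staging"
AWS_REGISTRY_IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/webcaptioner"

# The deploy script tags the image with the environment slug
deploy_tag="$AWS_REGISTRY_IMAGE:$CI_ENVIRONMENT_SLUG"
echo "$deploy_tag"
```

When the Production job runs instead, the slug is production, so the same script pushes a production-tagged image without any changes.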

aws/webcaptioner-task-definition-production.json

```json
{
  "volumes": [],
  "family": "webcaptioner",
  "executionRoleArn": "arn:aws:iam::096015811855:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/webcaptioner",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "portMappings": [
        {
          "hostPort": 8080,
          "protocol": "tcp",
          "containerPort": 8080
        }
      ],
      "cpu": 0,
      "memoryReservation": 300,
      "volumesFrom": [],
      "image": "096015811855.dkr.ecr.us-east-1.amazonaws.com/webcaptioner:production",
      "name": "webcaptioner",
      "environment": [
        {
          "name": "HUGO_BASE_URL",
          "value": "https://webcaptioner.com"
        }
      ]
    }
  ]
}
```



(The HTTPS warning here is due to the fact that my application redirects all HTTP requests to HTTPS and the DNS name provided by the load balancer does not have a certificate registered. In production you wouldn't distribute the ELB name; you would use your own DNS name and create an A record that points to your load balancer.)

Got questions about how I’m deploying from GitLab to AWS? Feel free to comment below or message me.

For questions about Web Captioner, the Help Center answers some commonly asked questions and the Web Captioner Users Group on Facebook is a great place to get help from the Web Captioner community. Like Web Captioner on Facebook to be notified of new updates and upcoming features. If you’ve got an idea for something you’d like to see Web Captioner do, let’s hear about it!