If you are new to hosting applications on AWS, or considering switching to AWS, then this article is for you. An array of AWS services comes into play when you deploy a production application for the world to use. It is difficult for a beginner to get a full picture of how these services work together, and overwhelming to sift through all the relevant AWS docs.

In this post we will specifically talk about EC2 Container Service (ECS), Application Load Balancer (ALB), and Auto Scaling Groups (ASG), and how they work together to get your application up and running. We are going to assume a basic understanding of Docker containers. If not, read this.

Foreword

At HyperTrack we started out hosting our API (Django) and Dashboard (Angular) applications on Heroku. It quickly became expensive to scale up Heroku dynos, databases, and other add-ons as traffic increased. Therefore, in Q1 of 2017 we moved our applications to AWS, where we now handle over a million API hits every day. Let’s dig into the details.

EC2 Container Service (ECS)

ECS is the core service used to deploy your application, update it, and keep the desired number of instances of your app running (known as tasks). More formally, it is a container management service that lets you run applications on a managed cluster of Amazon EC2 instances. Key terms to understand here are:

ECS cluster: An ECS cluster is a logical grouping of Amazon EC2 instances. You can use the grouping in any way you want. The most common use is to have separate clusters for prod, pre-prod, and staging environments.

Task definition: This is a recipe to execute your app. It tells ECS which Docker container to use, which deployment script to run, how much memory to reserve, what the environment variables are, and so on. ECS uses the task definition to spin up the desired number of instances of your app. Each such running instance is called a task.

Versioning: Once created, task definitions are immutable. The only way to change any field is to make a copy of the existing version, update the field, and save it as a new version. Typically you only keep the latest version active and update the ECS service with it.

ECS service: This represents your application in its entirety. Here you can define the number of tasks you want running at any time, which task definition version to use, what the rate of deployment should be, and what the task placement strategy should be.

EC2 Container Registry (ECR): This is a service that stores container images for your application.
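To make the terms above concrete, here is a sketch of what a minimal task definition payload might look like, expressed as the Python dict you would serialize and pass to register-task-definition. The family name, image URL, account ID, and environment values are hypothetical placeholders, not HyperTrack's actual configuration.

```python
# Minimal ECS task definition sketch. Every name and value here is a
# placeholder; adapt it to your own app and AWS account.
task_definition = {
    "family": "api",  # logical name shared by all versions of this definition
    "containerDefinitions": [
        {
            "name": "api",
            # Docker image previously pushed to ECR (account ID is fake)
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest",
            "memoryReservation": 512,  # soft memory limit in MiB
            "essential": True,  # if this container dies, the task is stopped
            "environment": [
                {"name": "DJANGO_SETTINGS_MODULE", "value": "app.settings.prod"},
            ],
            "portMappings": [
                {"containerPort": 8000, "hostPort": 8000},  # static mapping
            ],
        }
    ],
}
```

Registering this payload creates version 1 of the "api" family; since task definitions are immutable, each later change is a new numbered revision of the same family.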

Deployment workflow

Now let us look at the steps carried out in a typical deployment. You need not understand all the steps just to deploy your code, but it is important to understand them in order to debug an issue.

Once we are ready to deploy code changes, we locally perform a docker build to create a Docker image of the application with the latest code changes.

We upload this image to the EC2 Container Registry with docker push.

We create a new task definition version that is identical to the old one, except that it uses the recently uploaded Docker container image.

The new task definition version is marked as active and the ECS service is updated to use the new task definition. ECS performs a rolling deployment where some tasks are started with the new task definition. The requests on old tasks are drained and the old tasks are killed.
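The workflow above boils down to a handful of commands. As a sketch, here they are in order; the registry URL, image tag, cluster name, service name, and revision number are all hypothetical placeholders.

```python
# The four deployment steps, written out as the shell commands they
# correspond to. All names, URLs, and revision numbers are placeholders.
REGISTRY = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
IMAGE = f"{REGISTRY}/api:v42"

deploy_commands = [
    # 1. Build a Docker image of the app with the latest code changes.
    f"docker build -t {IMAGE} .",
    # 2. Push the image to EC2 Container Registry.
    f"docker push {IMAGE}",
    # 3. Register a new task definition revision pointing at the new image
    #    (taskdef.json is the updated definition from the previous step).
    "aws ecs register-task-definition --cli-input-json file://taskdef.json",
    # 4. Point the ECS service at the new revision; ECS then performs the
    #    rolling deployment, draining and killing old tasks.
    "aws ecs update-service --cluster prod --service api --task-definition api:42",
]

for command in deploy_commands:
    print(command)
```

Tools like ecs-deploy (mentioned below under best practices) essentially automate steps 3 and 4 for you.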

Application Load Balancer (ALB)

Once your app is deployed and running on ECS, you need to set up an external-facing load balancer to expose it for the world to use. You need to understand the following concepts to hook up your ECS service to an ALB:

Target Group: As the name suggests, this is a collection of instances (targets) of your app. The Application Load Balancer continually monitors the health of all targets registered with the target group and routes requests only to the registered targets that are healthy. Configure a single target group for each ECS service so that all tasks within that ECS service become targets of the target group.

ALB Routing: Configure ALB routing to distribute all incoming traffic to a target group. Optionally, traffic may be routed to different target groups based on the request URL (path-based routing) or domain name (host-based routing). You may set a combination of these rules for the ALB to evaluate when routing an incoming request. The screenshot shows the path-based routing we use to forward requests to different target groups.

Dynamic port mapping: Each task runs on a port inside a container, and this port is mapped to a specific port of the EC2 instance. Static port mapping specifies this mapping in the task definition. It has the limitation that only one task of a given service can run on a single EC2 instance, because every task tries to bind the same host port. Dynamic port mapping provides the solution: ECS dynamically assigns an available port to your task when you do not fix a host port in the task definition. That way, multiple tasks of the same service can run on a single machine if there is enough memory and CPU.
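The difference between static and dynamic port mapping comes down to one field in the task definition's portMappings. A minimal sketch, assuming the common convention of a host port of 0 meaning "let ECS pick any free port":

```python
# Static mapping: the host port is fixed, so a second task of the same
# service on the same instance would fail to bind it.
static_mapping = {"containerPort": 8000, "hostPort": 8000}

# Dynamic mapping: hostPort 0 tells ECS to assign any available host port
# at task launch; the ALB target group is told the assigned port when the
# task registers, so routing still works.
dynamic_mapping = {"containerPort": 8000, "hostPort": 0}
```

With the dynamic form, several tasks of the same service can land on one EC2 instance, each reachable on a different host port behind the same target group.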

Auto Scaling Groups (ASG)

An auto scaling group contains a collection of EC2 instances of the same instance type. This is treated as a logical grouping in order to scale and manage instances.

Launch configuration: This is a template used by the ASG to spin up EC2 instances. The launch configuration specifies the type of EC2 instances and the AMI to be used. You can also specify any shell commands that need to run at launch. The launch configuration makes sure that all instances in an ASG are identical. We create a 1:1 mapping between our ASGs and ECS clusters by setting the ECS_CLUSTER variable as part of the launch configuration, so any instance that spins up in an ASG is immediately available in the ECS cluster for placing tasks.

Dynamic scaling: You can specify policies to increase or decrease the number of instances based on chosen metrics, e.g. scale instances based on the number of incoming requests.
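The ECS_CLUSTER mapping is done through the launch configuration's user data, which writes the cluster name into the ECS agent's config file before the agent starts. A minimal sketch (the cluster name "prod" is a hypothetical example):

```python
# Build the user-data script that a launch configuration runs on boot.
# Writing ECS_CLUSTER into /etc/ecs/ecs.config makes the ECS agent on
# the new instance register with that cluster, so the instance is
# immediately available for task placement.
def ecs_user_data(cluster_name: str) -> str:
    """Return a user-data shell script joining the instance to an ECS cluster."""
    return (
        "#!/bin/bash\n"
        f"echo ECS_CLUSTER={cluster_name} >> /etc/ecs/ecs.config\n"
    )

print(ecs_user_data("prod"))
```

Because every instance in the ASG runs the same user data, they all join the same cluster, which is what gives you the 1:1 ASG-to-cluster mapping.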

Best practices

The following DevOps practices have helped us manage our overall infrastructure.

Extensively use the CloudWatch monitoring service. We set up alarms that hit our Slack channel in case of an anomaly, and CloudWatch dashboards that show continuously updated metrics graphs.

Use ecs-deploy to easily deploy to ECS. We created handy scripts to initiate deployments, change environment variables, and so on. This helps get the job done quickly, leaves less room for error, and saves us from going to the crappy AWS console UI for simple tasks.

Configure all ECS service logs to go directly to the Logstash endpoint (ELK setup) through the task definition. As a result, logs are available in Kibana without any logging configuration in the app itself.
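Routing logs this way is a container-level setting in the task definition. A sketch of what such a logConfiguration fragment could look like, assuming the Docker gelf log driver pointed at a Logstash GELF input; the driver choice and endpoint are illustrative assumptions, not the exact setup described in this post.

```python
# Hypothetical logConfiguration fragment for a container definition.
# The gelf driver ships container stdout/stderr to the given endpoint,
# so the application code needs no logging changes.
log_configuration = {
    "logDriver": "gelf",
    "options": {
        # Placeholder Logstash host; replace with your ELK endpoint.
        "gelf-address": "udp://logstash.internal.example.com:12201",
    },
}
```

This dict would sit under the "logConfiguration" key of a container definition in the task definition JSON.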

Configure Codeship to run the test suite after every code push and post the result to Slack. This is a good way to prevent bad code from getting into production.

Steps forward

Use AWS CloudFormation templates to codify how the whole stack is configured.

Set up a full CD pipeline so developers have one less thing to think about: deployments.

That’s all for now! Does this give you a good idea of tools and services needed to get your app running on AWS? Please post comments if you have any questions.

Interested in joining the HyperTrack platform team in San Francisco? Check out the Full stack engineer position on our website. If you are building applications with user movement, build with HyperTrack now.