How to deploy a Docker app to Amazon ECS using AWS Fargate

Deploy an app running on Docker that consists of an API service, worker, queue, and database without managing virtual machines

The Voting App was created to give developers an introductory course for becoming acquainted with Docker. The goal is to provide a demonstration and orientation to Docker, covering a range of concepts and tasks for building, deploying, and monitoring an application.

The Voting App is a great demonstration of how Docker can be used to containerize any process of a modern polyglot application — regardless of the programming language used and runtime environment needed for each one.

The application comprises a number of services, described under Voting App Components below.

AWS ECS using Fargate

Recipes will be provided for deploying the Voting App into various public cloud environments. The first cloud recipe outlined here will show how to deploy the Node.js version of the app to Amazon ECS using AWS Fargate.

The Voting App consists of a number of backend services, and the clever thing about using Fargate is that it saves us the trouble of having to manage EC2 infrastructure.

Rather than focus on deployment at the (EC2) machine level, we can focus on our application topology by:

packaging our services as Docker images

pushing them to the Amazon Elastic Container Registry

configuring our desired service performance characteristics

letting Fargate handle infrastructure provisioning and management

To configure the task definitions for our containers as well as other resources to assemble the application, we will leverage a few AWS CloudFormation templates from Nathan Peck’s blog series.

The use of templates gives us the benefit of reusable infrastructure-as-code, reinforcing an automation strategy. You can find more templates in his CloudFormation Templates for Fargate repo on GitHub.

ElastiCache and MongoDB

In addition to using Fargate and ECS, we will also leverage two other services for this particular deployment:

Amazon ElastiCache for Redis for message queuing

MongoDB Atlas for data queries and storage, hosted in the same AWS region

By the end of this recipe, this is what our physical deployment will look like:

Voting App Components

The application consists of the following components:

Vote Client

The command-line Vote Client application communicates with the Vote API service to cast votes and query voting results.

Vote API

Container instances of this service host the REST Vote API on the back end. When a vote is posted to the API, the service pushes it to a queue for subsequent, asynchronous processing by workers.

When a request is made for vote results, the service queries the database where votes are stored by workers processing the queue. Service instances push votes to Redis using the Voting App’s queue package and query MongoDB using the database package.

Worker

Container instances of the Vote Worker service watch the queue for votes and store them in the database. Workers pop votes from Redis using the Voting App’s queue package and store them in MongoDB using the database package.

Queue

The hosted Redis service — Amazon ElastiCache.

Database

The hosted MongoDB service — MongoDB Atlas.

How to Deploy the Voting App

If you want to, you can start by running the Voting App on your own computer first. There is nothing you need to install except Docker itself (for example, Docker for Mac or Docker for Windows). See the how-to in the Voting App wiki for various ways to start the app once you’ve installed Docker.

This is the beauty of Docker container technology! You don’t need to install Node.js, Redis, or MongoDB at all, let alone the specific versions needed for this application to work, just so you can test drive the application — or hack on it if you want to clone the Voting App repo.

We will deploy the Voting App to the cloud using a combination of AWS ECS, Amazon ElastiCache, and Atlas MongoDB to host our services. We will build and push our service images to Amazon ECR, use CloudFormation templates with Fargate to start our containers and wire up Redis, and then test the application with the command-line client app.

The awesome thing about Fargate is that you don’t need to provision and manage a cluster of machines; instead, you will declare the topological relationships and desired performance characteristics of your application services.

At this stage, we don’t have a fully automated delivery pipeline yet. The intention for this article is to understand the process of deploying the application. Therefore, we will walk through the following administrative tasks that will need to be performed to jump start the application.

Set up a MongoDB cluster with Atlas

Build our service images and push them to AWS ECR

Deploy our basic stack using a CloudFormation template

Deploy our Redis stack using a CloudFormation template

Deploy our ECS vote api and worker stacks using CloudFormation templates for Fargate

Use the voter command-line client to cast a few votes and query the results

Step 1: Set up a MongoDB cluster with Atlas

In keeping with the theme of this article, we want to avoid provisioning machine infrastructure for our services, and that includes our database. To be able to do this with the Voting App’s current database, we will use a hosted Database-as-a-Service, MongoDB Atlas, provided by MongoDB, Inc.

The main disadvantage of this approach is that we need to create an account with a third party, with separate billing and security management. On the other hand, Atlas does a superb job of managing the database cluster for us and runs it on AWS infrastructure in the same region as the rest of our services. This is a neat approach that gets us up and running quickly with minimal friction and excellent performance.

You will need to create an account on Atlas and perform the following steps:

Create an organization. Give it a name and choose MongoDB Atlas for Cloud Service.

Create a new project in the organization.

Build a new cluster. Choose AWS Free Tier in us-east-1. Name your cluster voteapp.

Important: make sure to expand Cluster Tier and select M0 for shared cluster. Click through and Atlas will deploy your new database cluster.

After the cluster has been deployed, you will need to finish the following steps to connect to the cluster.

Create a user. I created a user with admin privilege, named it service, and gave it the password Password1.

Select connect from anywhere or manually add 0.0.0.0/0 to the whitelist to be able to connect from any IP.

Copy the connection string for connecting your application using driver 3.6 or later. You will need this for step 5, below, when you deploy the vote api, so the service will be able to connect to your database cluster.
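A convenient way to keep the connection string handy is to store it in a shell variable. The sketch below is for illustration only: the user and password match the examples above, and the cluster host matches the one used later in this walkthrough — yours will differ.

```shell
# Assemble the Atlas connection string from the user and password created above.
# The cluster host below is the one used in this walkthrough; substitute your own.
MONGO_USER=service
MONGO_SECRET=Password1
MONGO_URI="mongodb+srv://${MONGO_USER}:${MONGO_SECRET}@voteapp-cluster-pg5mx.mongodb.net"
echo "$MONGO_URI"
```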

Step 2: Build our service images and push them to AWS ECR

Go to the ECS Repositories page.

If you don’t have any repos yet, you will see the following.

Click the Get Started button

Amazon ECR

At the Repositories page, click the Create repository button

Create a repo for each service image you will push; this walkthrough uses worker and voteapi

When finished, you should see something like the following:

After creating a repo in ECR

How to Get the ECR Login for Docker

As you can see from the previous image, steps 1 and 2 are required to ensure that you are logged into ECR. To make it easy to copy and paste, the steps are repeated below:

$ aws ecr get-login --no-include-email --region us-east-1

This command prints the docker login command you need with your credentials for logging into ECR. Copy and paste, then press enter to log in.

$ docker login -u AWS -p ...

Login Succeeded
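Rather than copying and pasting, you can also evaluate the printed command directly. A minimal sketch — the helper name ecr_login is our own:

```shell
# Evaluate the `docker login` command printed by `aws ecr get-login`
# so the credentials never need to be copied by hand.
ecr_login() {
  eval "$(aws ecr get-login --no-include-email --region us-east-1)"
}

# ecr_login   # prints "Login Succeeded" when your AWS credentials are valid
```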

Build the API and Worker Service Images

Save the repo root in an environment variable for the next steps.

$ VOTEAPP_ROOT=$PWD

Push to the ECR Repos

Build and push the worker service image

$ cd $VOTEAPP_ROOT/src/worker

$ docker build -t worker .

$ docker tag worker 654814900965.dkr.ecr.us-east-1.amazonaws.com/worker

$ docker push 654814900965.dkr.ecr.us-east-1.amazonaws.com/worker

Build and push the vote api service image

$ cd $VOTEAPP_ROOT/src/vote

$ docker build -t voteapi .

$ docker tag voteapi 654814900965.dkr.ecr.us-east-1.amazonaws.com/voteapi

$ docker push 654814900965.dkr.ecr.us-east-1.amazonaws.com/voteapi
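The build, tag, and push sequence is identical for both services, so it can be wrapped in a small helper. This is a sketch: the push_service helper and ECR_REGISTRY variable are our own names, and the registry value matches the account and region shown above.

```shell
# Build, tag, and push one service image to ECR.
# ECR_REGISTRY matches the account/region used in this walkthrough.
ECR_REGISTRY=654814900965.dkr.ecr.us-east-1.amazonaws.com

push_service() {
  name="$1"
  dir="$2"
  docker build -t "$name" "$dir" &&
  docker tag "$name" "${ECR_REGISTRY}/${name}" &&
  docker push "${ECR_REGISTRY}/${name}"
}

# push_service worker "$VOTEAPP_ROOT/src/worker"
# push_service voteapi "$VOTEAPP_ROOT/src/vote"
```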

Step 3: Deploy our basic stack using a CloudFormation template

The following command will create the initial stack with the essential cluster resources, including security groups, roles, and an HTTP load balancer.

$ aws cloudformation deploy --stack-name=voteapp --template-file=aws/cluster.yml --capabilities=CAPABILITY_IAM

Once the stack is deployed, we need to note the public address of the load balancer. Store it in a shell variable named VOTEAPI to use later when we supply it to the client.
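One way to capture the address is to query the stack outputs with the AWS CLI. A sketch, assuming the cluster template exposes the load balancer URL as a stack output — the output key ExternalUrl and the helper name are assumptions based on the templates used here:

```shell
# Read a named output value from a deployed CloudFormation stack.
get_stack_output() {
  stack="$1"
  key="$2"
  aws cloudformation describe-stacks --stack-name "$stack" \
    --query "Stacks[0].Outputs[?OutputKey=='${key}'].OutputValue" \
    --output text
}

# VOTEAPI=$(get_stack_output voteapp ExternalUrl)
```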

Step 4: Deploy our Redis stack using a CloudFormation template



$ aws cloudformation deploy --stack-name redis --template-file=aws/redis.yml



Step 5: Deploy the API and Worker Stacks Using CloudFormation Templates for Fargate

To deploy the worker stack, use the connection string you obtained in Step 1 for MongoUri. Make sure to replace USER and SECRET with the Mongo user and secret for your Atlas database cluster.

Important Note

Passing confidential information like SECRET in plaintext is not a good practice. For an AWS deployment, we would want to use the AWS Key Management Service to generate a data key that we would then use to encrypt the secret.

In the service implementation we would use the AWS SDK (for Node.js in this case) to decrypt the secret on the backend. This is beyond the scope of this article, but we will examine this approach in detail in a future post.

$ aws cloudformation deploy --stack-name worker --template-file=aws/worker.yml --parameter-overrides ServiceName=worker ImageUrl=654814900965.dkr.ecr.us-east-1.amazonaws.com/worker:latest DesiredCount=1 MongoUri="mongodb+srv://USER:SECRET@voteapp-cluster-pg5mx.mongodb.net"

To deploy the Vote API stack, use the connection string you obtained in Step 1 for MongoUri. Make sure to replace USER and SECRET with the Mongo user and secret for your Atlas database cluster.

$ aws cloudformation deploy --stack-name voteapi --template-file=aws/voteapi.yml --parameter-overrides ServiceName=voteapi ImageUrl=654814900965.dkr.ecr.us-east-1.amazonaws.com/voteapi:latest ContainerPort=3000 DesiredCount=1 MongoUri="mongodb+srv://USER:SECRET@voteapp-cluster-pg5mx.mongodb.net"

Step 6: Use the voter CLI to cast a few votes and query the results

We will need to use the VOTEAPI shell variable we set in Step 3. If you have the supported versions of Node and Yarn installed on your system, you can run the client as shown here:

$ cd $VOTEAPP_ROOT/src/voter

$ yarn

$ VOTE_API_URI=$VOTEAPI npm start vote

? What do you like better? cats

$ VOTE_API_URI=$VOTEAPI npm start results

Total votes -> cats: 1, dogs: 0 … CATS WIN!

But perhaps you don’t have Node or Yarn installed, or you don’t want to worry about ensuring you have the necessary version. You can simply run the client using Docker:

$ cd $VOTEAPP_ROOT/src/voter

$ docker build -t voter .

$ alias voter="docker run -it --rm -e VOTE_API_URI=$VOTEAPI voter"

$ voter vote

$ voter results

The Results

We successfully deployed the Voting App, consisting of an API service, worker, queue, and database, all running under Docker, to the cloud. We did not need to make any changes to our application, only to service configuration through the environment.

Amazon Fargate allowed us to specify our service requirements without having to manage EC2 infrastructure. It used the service images we built and pushed to ECR to launch our containers in ECS.

In future posts, we will continue to build on the Voting App as we look into automating our delivery pipeline, monitoring and scaling our services, and adding more detail, analysis, and guidance around best practices.

Stay tuned and also watch the repo for updates!

Thanks Nathan!

Credit to Nathan Peck for taking the time to review the article, suggesting we jump start the demo by using MongoDB Atlas, suggesting AWS Key Management Service to protect secrets, and sharing his CloudFormation templates.