(Some) Differences

I did not use Docker Compose. Docker Compose doesn’t play well with ECS, at least not the way I was using it.

Volumes were necessary so that NGINX and my Rails app could load files from a shared directory.

Rails App Setup

First, add a Dockerfile to your Rails app. Here's an example:

FROM ruby:2.6.1-alpine

RUN apk add --update \
    make \
    g++ \
    musl-dev \
    gcc \
    libc-dev \
    file \
    git \
    postgresql-dev \
    tzdata

# Install rails_application
ENV APPLICATION_NAME rails_application
RUN mkdir -p /var/www/rails_application
WORKDIR /var/www/rails_application

COPY Gemfile Gemfile.lock /var/www/rails_application/
RUN gem install bundler
RUN bundle install
COPY . .

VOLUME /var/www/rails_application/
VOLUME /etc/nginx/

RUN apk del make \
    g++ \
    musl-dev \
    gcc \
    libc-dev \
    file \
    git

EXPOSE 8080
RUN chmod +x app_start.sh
ENTRYPOINT ["sh", "app_start.sh"]

I used Alpine and removed the compilers after the bundle install to keep the image size low. Also, you'll notice I used a shell script to start the application; more on that later.
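On the same theme of keeping images small, a .dockerignore in the repo root keeps logs, temp files, and the Git history out of the build context. This is a sketch of typical Rails entries, not a file from the original setup:

```
# .dockerignore (hypothetical; adjust to your app)
.git/
log/
tmp/
node_modules/
```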

NGINX Setup

I set up my nginx sidecar application in a separate repo. This is what the file structure of that nginx repo looks like:

-rails_application-nginx
  -dockerfile
  -nginx.conf
  -nginx (directory)
    -application.conf

Here’s what my nginx dockerfile looks like:

# Base image
FROM nginx

RUN apt-get update -qq && apt-get -y install apache2-utils && apt-get install -y vim

# Copy Nginx config template
COPY nginx/application.conf /etc/nginx/conf.d/default.conf

VOLUME /var/www/rails_application/



Here’s the nginx.conf:

worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 14;
    proxy_http_version 1.1;
    include conf.d/*.conf;
}

And this is my application.conf:

server {
    listen 80;
    add_header X-Cache-Status $upstream_cache_status;
    underscores_in_headers on;

    error_page 404 /404.html;
    error_page 403 /403.html;
    error_page 500 502 503 504 /500.html;

    location /health-czech {
        return 200 'ok';
    }

    include /etc/nginx/mime.types;
    root /var/www/rails_application/public;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;

        if (-f $request_filename) {
            break;
        }

        if (!-f $request_filename) {
            proxy_pass http://127.0.0.1:8080;
            break;
        }
    }
}

Setting up ECS

The first step is pushing the applications to ECR (Elastic Container Registry). In your AWS account, navigate to ECR, or just click this link and double-check you're in the correct region: https://us-west-2.console.aws.amazon.com/ecr/repositories

Once you’ve created the new repo, if you have the AWS CLI installed, uploading via the “View push commands” button is an easy option. I have a little script I use for updating my repos (note: I set my region via a profile command):

#!/bin/bash

if (( "$#" != 1 ))
then
  echo "Usage: ./ecr_push [environment]"
  exit 1
fi

if [[ $@ != acceptance && $@ != staging && $@ != production ]] ;
then
  echo "Environment must be either acceptance, staging or production."
  exit 1
elif [[ $@ = staging || $@ = acceptance ]] ;
then
  AWS_ACCOUNT=XXXXXXXXXXXX
elif [[ $@ = production ]] ;
then
  AWS_ACCOUNT=XXXXXXXXXXXX
fi

REPO="$AWS_ACCOUNT.dkr.ecr.us-west-2.amazonaws.com/rails_application-$@"
IMAGE=$REPO:latest-$@

$(aws ecr get-login --no-include-email --region $AWS_REGION)
docker build -t $IMAGE .
docker push $IMAGE
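As a sanity check, this is the repo and image naming convention the script above produces for a given environment. The account ID here is a placeholder, just like the XXXXXXXXXXXX values in the script:

```shell
#!/bin/sh
# Reproduces the repo/image naming from the push script above,
# using a placeholder account ID and the "staging" environment.
AWS_ACCOUNT=123456789012
ENVIRONMENT=staging
REPO="$AWS_ACCOUNT.dkr.ecr.us-west-2.amazonaws.com/rails_application-$ENVIRONMENT"
IMAGE="$REPO:latest-$ENVIRONMENT"
echo "$IMAGE"
# prints: 123456789012.dkr.ecr.us-west-2.amazonaws.com/rails_application-staging:latest-staging
```

One caveat if you're on a newer AWS CLI: `aws ecr get-login` was removed in CLI v2 in favor of `aws ecr get-login-password` piped into `docker login`, so the login line in the script only works on CLI v1.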

Once your NGINX and Rails Application repos are added to ECR, you can set up the Task Definition. The first choice you'll be confronted with is FARGATE or EC2. One of the requirements I had to meet was being able to SSH into the machine running my Rails and NGINX apps, so I went with EC2, since you can't SSH into Fargate.

If you’ve used ECS before, you’ll likely already have a task role to choose from. The one that is auto-generated by AWS is called “ecsTaskExecutionRole”.

In order for your containers to communicate with each other on the same instance, you must select “Host” as the Network Mode. (With host networking, NGINX can reach the Rails app at 127.0.0.1:8080, which is why application.conf proxies there.)

Next, at the bottom, hit the “Add volume” button. Give the volume a Name, but adding the Path is not necessary.

Next, we’ll set up our containers. I intended to use a “t3.micro” instance to run the containers, so I specified CPU units and memory allocation accordingly. I’ll let the screenshots do most of the explaining.

Rails_Application Container

NGINX Container

Some big takeaways are the following:

• The rails_application container will use port 8080, and the nginx container is assigned port 80.

• Both containers will share the rails_application volume with /var/www/rails_application as the mount path.
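For reference, the relevant parts of the task definition JSON that the console generates would look roughly like this. The container and volume names are the ones used throughout this article, but the exact JSON is a sketch, not pulled from the actual task definition:

```json
{
  "networkMode": "host",
  "volumes": [
    { "name": "rails_application", "host": {} }
  ],
  "containerDefinitions": [
    {
      "name": "rails_application",
      "portMappings": [{ "containerPort": 8080 }],
      "mountPoints": [
        { "sourceVolume": "rails_application", "containerPath": "/var/www/rails_application" }
      ]
    },
    {
      "name": "nginx",
      "portMappings": [{ "containerPort": 80 }],
      "mountPoints": [
        { "sourceVolume": "rails_application", "containerPath": "/var/www/rails_application" }
      ]
    }
  ]
}
```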

Also, notice the start commands. NGINX uses:

nginx,-g,daemon off;

While the rails application uses the Entry point option and runs this script:

sh,app_start.sh

This script, which is part of the rails_application repo, is super basic:

#!/bin/sh

bundle exec rake assets:precompile

bundle exec puma -C config/puma.rb

This particular Rails app requires precompiling static assets and then starting the application via Puma. Other Rails applications that don’t require asset precompiling can use the Command field, rather than Entry point, in the task definition’s Environment section with this command:

bundle,exec,puma,-t,0:5,-w,4,-b,tcp://127.0.0.1:8080
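ECS’s Command and Entry point fields are comma-separated lists, where each element becomes one argument to the container. For a command like the one above, with no spaces inside any single argument, translating the commas to spaces recovers the ordinary shell form:

```shell
#!/bin/sh
# Convert the ECS comma-separated command into its shell-form equivalent.
ECS_COMMAND="bundle,exec,puma,-t,0:5,-w,4,-b,tcp://127.0.0.1:8080"
SHELL_COMMAND=$(echo "$ECS_COMMAND" | tr ',' ' ')
echo "$SHELL_COMMAND"
# prints: bundle exec puma -t 0:5 -w 4 -b tcp://127.0.0.1:8080
```

(For arguments that themselves contain spaces, like `daemon off;` in the NGINX command, the comma-splitting keeps them as one argument, so a plain `tr` wouldn’t round-trip those.)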

This is a nice option if you don’t want to deal with setting up your threads and workers via a puma.rb file.
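If you do go the puma.rb route, a minimal config equivalent to that command line might look like this (a sketch, not the app’s actual file):

```
# config/puma.rb -- equivalent to: puma -t 0:5 -w 4 -b tcp://127.0.0.1:8080
threads 0, 5
workers 4
bind "tcp://127.0.0.1:8080"
```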

Cluster Deployment

When setting up your cluster, you’re going to select the EC2 Linux + Networking template option.

Once you’ve configured your cluster, you should be able to just launch a new task. (Note: For this tutorial, I’m using the Tasks option rather than setting up Services, which is very useful, especially for autoscaling, but requires additional explanation.)

In the Run Task configuration page, select EC2 as the launch type, then select the Task Definition you just created. The cluster should be pre-selected and correct, so you can just hit the Run Task button on the bottom right side of the page. If you get a “Run tasks failed” error, it’s likely an issue between your Task Definition’s CPU/RAM allocation and the EC2 instance-type option.

Hopefully, you don’t hit any snags, and the “Last status” of your task is RUNNING!

Assuming you’ve set up your load balancer and target group, added the new ECS container instance to that target group, set that target group’s Health Check Path correctly (in my case it’s “/health-czech”), and properly configured your security groups, then your application should be up and accessible.

Happy ops-ing 😊