The Ultimate Cheat Sheet On Deployment Automation

A 2016 State of DevOps Report indicates that high-performing organizations deploy 200 times more frequently, with 2,555 times faster lead times, recover 24 times faster, and have three times lower change failure rates. Irrespective of whether your app is greenfield, brownfield, or legacy, high performance is possible due to lean management, Continuous Integration (CI), and Continuous Delivery (CD) practices that create the conditions for delivering value faster, sustainably.

And with AWS Auto Scaling, you can maintain application availability and scale your Amazon EC2 capacity up or down automatically according to conditions you define. Moreover, Auto Scaling allows you to run your desired number of healthy Amazon EC2 instances across multiple Availability Zones (AZs).

Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance, and decrease capacity during less busy periods to optimize costs.

The Scenario

We have an application at www.xyz.com, with web servers set up on Amazon Web Services (AWS). As part of the architecture, our servers use the AWS Auto Scaling service, which scales them according to the metrics and policies we specify. Every time a new feature is developed, we have to manually run the test cases before the code gets integrated and deployed, and then pull the latest code onto all the environment servers. Doing this manually brings several challenges.

The Challenges

The challenges of manually running the test cases before the code gets integrated and deployed are:

- Pulling and pushing code for deployment from a centralized repository
- Manually running test cases and pulling the latest code on all the servers
- Deploying code on new instances configured in AWS Auto Scaling
- Pulling the latest code on one server, taking an image of that server, and reconfiguring it with AWS Auto Scaling, since the servers are auto scaled
- Deploying builds automatically on instances in a timely manner
- Reverting to the previous build

The above challenges require a lot of time and human resources, so we have to find a technique that saves time and makes life easy by automating the whole process from CI to CD.

Here’s a complete guide on deployment automation using AWS S3, CodeDeploy, Jenkins & Code Commit.

To that end, we’re going to use:

Jenkins as CI tool

AWS CodeDeploy as CD tool

AWS CodeCommit as Application Repo to automate the code push process

Now, let's walk through the flow, how it works, and its advantages before we implement it all. When new code is pushed to a particular Git repo/AWS CodeCommit branch:

1. Jenkins runs the test cases (Jenkins listens to a particular branch through Git webhooks).
2. If the test cases fail, Jenkins notifies us and stops further post-build actions.
3. If the test cases pass, Jenkins proceeds to the post-build action and triggers AWS CodeDeploy.
4. Jenkins pushes the latest code, in zip file format, to AWS S3 on the account we specify.
5. AWS CodeDeploy pulls the zip file onto all the specified Auto Scaled servers. For the auto scaling servers, we can choose an AMI that has the AWS CodeDeploy agent running by default; this agent helps new instances launch faster and pull the latest revision automatically.
6. Once the latest code is copied to the application folder, it runs the test cases again. If they fail, it rolls the deployment back to the previous successful revision. If they pass, it runs post-deployment build commands on the server and ensures that the latest deployment does not fail.
7. If we want to go back to a previous revision, we can also roll back easily.
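The fail-fast flow above can be sketched as a small shell script. This is only an illustration: run_tests, package, push_to_s3, and deploy are placeholder functions standing in for the real test suite, zip, S3, and CodeDeploy calls.

```shell
#!/bin/bash
# Illustrative fail-fast pipeline: each stage is a placeholder echo, not a
# real Jenkins, S3, or CodeDeploy call.

run_tests()  { echo "running test cases"; }        # e.g. the application test suite
package()    { echo "zipping latest revision"; }   # e.g. zip -r app.zip .
push_to_s3() { echo "uploading revision to S3"; }  # e.g. aws s3 cp app.zip s3://...
deploy()     { echo "triggering AWS CodeDeploy"; } # e.g. aws deploy create-deployment

pipeline() {
    if ! run_tests; then
        echo "tests failed: notify admin and stop the build" >&2
        return 1
    fi
    package && push_to_s3 && deploy
}

pipeline
```

If run_tests returns a failure, none of the later stages execute, which mirrors how the Jenkins job chain stops at the first failed job.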

This way of automating with CI and CD strategies makes application deployment smooth, error tolerant, and faster.

The Workflow:

Here are the workflow steps of the above architecture:

1. The application code, along with the Appspec.yml file, is pushed to AWS CodeCommit. The Appspec.yml file includes the paths of the necessary scripts and commands, which help AWS CodeDeploy run the application successfully.
2. As the application and the Appspec.yml file get committed to AWS CodeCommit, Jenkins is automatically triggered by the Poll SCM function.
3. Jenkins pulls the code from AWS CodeCommit into its workspace (the path in Jenkins where all the artifacts are placed), archives it, and pushes it to the AWS S3 bucket. This can be considered Job 1.

Here’s the Build Pipeline

A Jenkins pipeline (previously called workflow) refers to running jobs in a specific order. Building a pipeline means breaking one big job into small individual jobs; if the first job fails, it triggers an email to the admin and stops the build process at that step, rather than moving on to the second job.

To achieve this pipeline, you need to install the pipeline plugin in Jenkins.

According to the above scenario, the work is broken into three individual jobs:

Job 1: When a commit lands in CodeCommit, Job 1 runs: it pulls the latest code from the CodeCommit repository, archives the artifact, and emails the status of Job 1 (successful or failed, along with the console output). If Job 1 builds successfully, it triggers Job 2.

Job 2: This job runs only when Job 1 is stable and has run successfully. In Job 2, the artifacts from Job 1 are copied to its workspace and pushed to the AWS S3 bucket. Once the artifacts have been sent to the S3 bucket, an email is sent to the admin, and Job 3 is triggered.

Job 3: This job is responsible for invoking AWS CodeDeploy, which pulls the code from S3 and pushes it to either a running EC2 instance or AWS Auto Scaling instances.

The below image shows the structure of the pipeline.

Conditions:

- If Job 1 executes successfully, it triggers Job 2, which pushes the successful build of the code to the S3 bucket and then triggers Job 3.
- If Job 2 fails, an email is again triggered with a job-failure message.
- When Job 3 is triggered, the archive file (application code along with Appspec.yml) is handed to the AWS CodeDeploy deployment service, where the CodeDeploy agent running on the instance executes the Appspec.yml file, which brings the application up and running.
- If at any point a job fails, the application is deployed with the previous build.
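The rollback condition can be pictured with a common release-directory pattern (an illustration of the idea, not the exact mechanism CodeDeploy uses internally): each build lives in its own directory, and a "current" symlink points at the live one, so reverting is just re-pointing the link. All paths here are illustrative.

```shell
#!/bin/bash
# Hypothetical layout: each build in its own release directory, with a
# "current" symlink marking the live revision.

APP_ROOT="$(mktemp -d)"   # stand-in for /var/www/xyz.com
mkdir -p "$APP_ROOT/releases/build-1" "$APP_ROOT/releases/build-2"

activate() {              # point "current" at the given build
    ln -sfn "$APP_ROOT/releases/$1" "$APP_ROOT/current"
}

activate build-1          # previous successful revision is live
activate build-2          # new deployment goes live

# Post-deployment checks failed? Roll back by re-pointing the link:
activate build-1

basename "$(readlink "$APP_ROOT/current")"   # prints: build-1
```

Because the switch is a single symlink update, rolling back is as fast as rolling forward, which is the property the pipeline above relies on.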

Below are the five steps necessary for deployment automation using AWS S3, CodeDeploy, Jenkins & CodeCommit.

Step 1: Set Up AWS CodeCommit in Development Environment

Create an AWS CodeCommit repository:

1. Open the AWS CodeCommit console at https://console.aws.amazon.com/codecommit.

2. On the welcome page, choose Get Started Now. (If a Dashboard page appears instead of the welcome page, choose Create new repository.)

3. On the Create new repository page, in the Repository name box, type xyz.com

4. In the Description box, type Application repository of http://www.xyz.com

5. Choose Create repository to create an empty AWS CodeCommit repository named xyz.com

Create a Local Repo

In this step, we will set up a local repo on our local machine to connect to our repository. To do this, we will select a directory on our local machine that will represent the local repo. We will use Git to clone and initialize a copy of our empty AWS CodeCommit repository inside of that directory. Then we will specify the username and email address that will be used to annotate our commits. Here's how you can create a local repo:

1. Generate SSH keys on your local machine (without a passphrase) and display the public key:

# ssh-keygen

$ cat ~/.ssh/id_rsa.pub

Copy the output and paste it into IAM User -> Security Credentials -> Upload SSH Keys, and note down the SSH Key ID. Then click on Create Access Keys and download the credentials file containing the Access Key and Secret Key.

2. Set the environment variables at the end of the bashrc file:

# vi /etc/bashrc

export AWS_ACCESS_KEY_ID=AKIAINTxxxxxxxxxxxSAQ

export AWS_SECRET_ACCESS_KEY=9oqM2L2YbxxxxxxxxxxxxzSDFVA

3. Set the config file inside .ssh folder

# vi ~/.ssh/config

Host git-codecommit.us-east-1.amazonaws.com

User APKAxxxxxxxxxxT5RDFGV

IdentityFile ~/.ssh/id_rsa <- your private key

# chmod 400 ~/.ssh/config

4. Configure the Global Email and Username

# git config --global user.name "username"

# git config --global user.email "emailID"

5. Copy the SSH URL to use when connecting to the repository and clone it

# git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/xyz.com

6. Now put the application code inside the cloned directory, write the Appspec.yml file, and you are ready to push.

7. Create the deployment scripts. Install_dependencies.sh includes:

#!/bin/bash

yum groupinstall -y "PHP Support"

yum install -y php-mysql

yum install -y httpd

yum install -y php-fpm

Start_server.sh includes

#!/bin/bash

service httpd start

service php-fpm start

Stop_server.sh includes

#!/bin/bash

isExistApp=`pgrep httpd`

if [[ -n $isExistApp ]]; then

service httpd stop

fi

isExistApp=`pgrep php-fpm`

if [[ -n $isExistApp ]]; then

service php-fpm stop

fi

Appspec.yml includes

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/xyz.com
hooks:
  BeforeInstall:
    - location: .scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: .scripts/start_server.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: .scripts/stop_server.sh
      timeout: 300
      runas: root
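Before Jenkins ships the archive to S3, it can be worth a quick sanity check that appspec.yml sits at the root of the revision bundle, since CodeDeploy looks for it there. The snippet below is an assumed extra check, not part of the original setup; it builds a throwaway stand-in bundle with tar just to demonstrate the idea.

```shell
#!/bin/bash
# Assumed sanity check: build a stand-in revision bundle and verify that
# appspec.yml is at its root, where CodeDeploy expects to find it.

WORKDIR="$(mktemp -d)"
cd "$WORKDIR" || exit 1

mkdir -p scripts                             # stand-in application layout
echo "version: 0.0" > appspec.yml
echo "#!/bin/bash"  > scripts/start_server.sh

tar -czf revision.tar.gz appspec.yml scripts

if tar -tzf revision.tar.gz | grep -qx 'appspec.yml'; then
    echo "bundle OK: appspec.yml is at the revision root"
else
    echo "bundle BROKEN: appspec.yml missing from the root" >&2
    exit 1
fi
```

The same listing check works on a zip archive (e.g. with unzip -l), which is the format Jenkins pushes in this setup.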

Now push the code to CodeCommit:

# git add .

# git commit -m "1st push"

# git push

8. Now we can see that the code has been pushed to CodeCommit.

Step 2: Setting Up Jenkins Server in EC2 Instance

1. Launch the EC2 instance (CentOS7/RHEL7) and perform the following operations:

# yum update -y

# yum install -y java-1.8.0-openjdk

2. Verify the Java installation and add the Jenkins repository:

# java -version

# wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

# rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key

3. Install Jenkins:

# yum install -y jenkins

4. Add Jenkins to system boot:

# chkconfig jenkins on

5. Start Jenkins:

# service jenkins start

6. By default, Jenkins starts on port 8080. This can be verified via:

# netstat -tnlp | grep 8080

7. Go to a browser and navigate to http://<your-server-ip>:8080. You will see the Jenkins dashboard.

8. Configure the Jenkins username and password, and install the AWS and GIT related plugins.

Here’s how to Setup a Jenkins Pipeline Job:

Under Source Code Management, click on Git.

Pass the Git SSH URL, and under Credentials click Add; then, for Kind, choose "SSH username with private key".

Note that the username will be the same as the one mentioned in the config file of the development machine where the repo was initiated, and we have to copy the private key from the development machine and paste it here.

In Build Triggers, click on Poll SCM and mention the schedule on which you want the build to start.

For the Post-build Actions, we have to archive the files, provide the name of Job 2 to trigger if Job 1 builds successfully, and trigger the email notification.
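The Poll SCM schedule uses Jenkins cron syntax. For example, the following (an illustrative schedule, not a required value) polls the repository roughly every five minutes:

```
H/5 * * * *
```

The H token spreads the polling start times across jobs so they do not all hit the repository at the same instant.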

For the time being, we can start building the job to verify that, when code is committed, Jenkins starts building automatically and is able to pull the code into its workspace folder. Before that, however, we have to create the S3 bucket and pass the credentials (Access Key and Secret Key) into Jenkins, so that when Jenkins pulls code from AWS CodeCommit it can push the archived build into the S3 bucket.

Step 3: Create S3 Bucket

After creating the S3 bucket, provide its details and the AWS credentials in Jenkins.

Now when we run Job 1 in Jenkins, it pulls the code from AWS CodeCommit and, after archiving, keeps it in the workspace folder of Job 1.

From the above console output, we can see that it pulls the code from AWS CodeCommit, archives it, and triggers the email. After that, it calls the next job, Job 2.

The above image shows that after Job 2 builds, Job 3 also gets triggered. Before triggering Job 3, though, we need to set up the AWS CodeDeploy environment.

Step 4: Launch the AWS CodeDeploy Application

Creating IAM Roles

Create an IAM instance profile, attach the AmazonEC2FullAccess policy, and also attach the following inline policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Create a service role named CodeDeployServiceRole: select the role type AWS CodeDeploy and attach the AWSCodeDeployRole policy, as shown in the screenshots below:
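For reference, selecting the AWS CodeDeploy role type sets up a trust relationship that lets the CodeDeploy service assume the role; the trust policy document looks like this:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codedeploy.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```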

Create an auto scaling group for a scalable environment.

Here are the steps:

1. Choose an AMI and select an instance type for it. Attach the IAM instance profile that we created in the earlier step.

2. Now go to Advanced Settings and type the following commands in “User Data” field to install AWS CodeDeploy agent on your machine (if it’s not already installed on your AMI)

#!/bin/bash

yum -y update

yum install -y ruby

yum install -y aws-cli

cd /home/ec2-user

aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1

chmod +x ./install

./install auto

3. Select Security Group in the next step and create the launch configuration for the auto scaling group. Now using the launch configuration created in the above step, create an auto scaling group.

4. After creating the Auto Scaling group, it's time to create the deployment group.

5. Click on AWS CodeDeploy and Click on create application.

6. Mention the application name and deployment Group Name.

7. In tag type, click on either EC2 instance or AWS AutoScale Group. Mention the name of EC2 instance or AWS Autoscale Group.

8. Select ServiceRoleARN for the service role that we created in the “Creating IAM Roles” section of this post.

9. Go to Deployments and choose Create New Deployment.

10. Select Application and Deployment Group and select the revision type for your source code.

11. Note that the IAM role associated with the instance or Auto Scaling group should be the same one configured for CodeDeploy, and the role ARN must have the CodeDeploy policy attached to it.

Step 5: Fill CodeDeploy Info in Jenkins and build it

1. Now go back to Jenkins Job 3, click on "Add Post-build Action", and select "Deploy the application using AWS CodeDeploy".

2. Fill in the AWS CodeDeploy Application Name, Deployment Group, Deployment Config, AWS Region, S3 Bucket, and Include Files fields, and click on Access/Secret key to fill in the keys for authentication.

3. Click on Save and build the project. After a few minutes, the application will be deployed on the Auto Scaling instances.

4. When Job 3 builds successfully, we get the console output below:

5. After this build, the corresponding changes take place in the AWS CodeDeploy deployment group.

6. Once you hit the DNS of the instance, you will see your application up and running.

To Wrap-Up

It's well established that teams and organizations that adopt continuous integration and continuous delivery practices significantly improve their productivity. And AWS CodeDeploy combined with Jenkins is an awesome combo when it comes to automating app deployment and achieving CI and CD.

Are you an enterprise looking to automate app deployment using CI/CD strategy? As a Premier AWS Consulting Partner, we at Minjar have your back! Do share your comments in the section below or give us a shout out on Twitter, Facebook or LinkedIn.