The traditional way of building AWS environments

When DevOps engineers need to build infrastructure on the AWS cloud, they tend to use CloudFormation. CloudFormation is AWS's native infrastructure-as-code service: you describe how your infrastructure should look and behave in JSON or YAML templates, and CloudFormation builds it for you (there is also a graphical designer for drawing templates).

But there are a number of advantages to using Ansible over CloudFormation:

You already know the tool, so why waste your time learning another one?

While CloudFormation will automate building the infrastructure, it will not deploy your application, create users, and so on. Ansible will do that.

Prerequisites

I am assuming that you are using a modern version of Linux like Ubuntu or CentOS, and that you have the latest version of Ansible installed.

The following procedure was not tested on Microsoft Windows or macOS.

We’re going to deploy Apache

You can build almost any sort of environment on AWS, no matter how simple or complex it gets. So, in order not to overwhelm you with too much information, we'll create one EC2 instance from scratch and use it to deploy the Apache web server. We're going to do the following:

Make an AWS account.

Create an IAM user and obtain your access and secret keys.

Generate a public/private key pair.

Then, using Ansible, we’ll create a playbook that will:

Create a security group for the environment and add the appropriate rules.

Launch an EC2 instance based on the type and region.

In the second part of the tutorial, we’ll modify the playbook to deploy Apache.

Another playbook will be used to shut down or destroy the environment.

Installing boto3

First things first: Ansible depends on the Python module boto3 to communicate with the AWS API, so boto3 needs to be installed on your machine. Issue the following command in your terminal:

pip install boto boto3

Both boto and boto3 packages are needed for this lab.

Storing your keys in Ansible vault

After creating the IAM user, we'll need to store the AWS keys. Since they are sensitive data, we should use Ansible vault for this:

ansible-vault create aws_keys.yml

Once open, add the following to it:

aws_access_key: AKIAJLHNMCBOITV643UA
aws_secret_key: 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

Once you save the file, all the content will be encrypted. Let's check that:

Ahmads-iMac:~ ahmad$ cat aws_keys.yml
$ANSIBLE_VAULT;1.1;AES256
63333038396266346466383037653433613336643164316566353030663162303434323339316330
3661333432313564646432333563343935323463346163630a656233303535333534346262616465
34373063393132336165313562613830306262646538656334643532303861366539336234363462
6438376165396638390a326334303263303530643965373539323239623931383839383539616631
38343534643061373361373239313264633562323936663130626537333164666262633636306464
39396531616335313563323339633237396131363938616262663536303664333065636334616163
35383631616231626661346532346666386338346336666535636263663334343364326237303366
61313764363331356330386639323666373433323733383636373635656335313234643364333832
3066

Setting up the hosts file

Next, we need to create/update the hosts file to handle our new EC2 instance, which is yet to be created. Add the following to the ./hosts file:

[local]
localhost

Part 1: Building the EC2 instance

OK, now let's edit our playbook file. Create a new file called aws_provisioning.yml and add the following:

---
- hosts: local
  connection: local
  gather_facts: False
  vars:
    instance_type: t2.micro
    security_group: webservers_sg
    image: ami-db710fa3
    keypair: fakharany
    region: us-west-2
    count: 1
  vars_files:
    - aws_keys.yml

Let's have a quick look at what each line of the file does:

First, you're limiting the scope of the playbook to the local hosts group. It contains localhost, and this is the way Ansible works with EC2 instances. Behind the scenes, Ansible connects to the boto library on the local machine and uses it to establish a connection with the AWS API and issue the necessary commands.

We need to set the connection to local so that Ansible won't attempt to establish an unnecessary SSH session with localhost. The variables section contains the options we intend to use with our instance:

The instance type is t2.micro, which is suitable for our lab. It's also eligible for the free tier, in which Amazon will not charge you for some services (including selected EC2 instance types) for a period of 1 year.

Then we specify the name of the security group that Ansible will create for us. A security group is like a virtual firewall that must be created for your EC2 instances. If you already have one created, you can associate it with the new EC2 instance. In our case, we'll be creating a new one from scratch.

The image specifies the AMI (Amazon Machine Image). AMIs are like templates that are used to spawn machine instances. If you've used Vagrant before, they serve the same purpose as a box. You can even create and use your own AMI images. Amazon provides a list of its own AMIs, which can be found here: https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#LaunchInstanceWizard. Notice that the list depends on your region of choice.

The keypair refers to the name of the public/private key pair that you created earlier.

The region is the region of your choice. If you receive huge volumes of traffic, it's advised that you choose a region that is geographically closest to where most of your customers are located. This is mainly to reduce network latency and enhance performance. However, if you are creating a lab, or if you are not expecting extremely high traffic volumes, you can choose the region based on the best pricing rates. Yes, each region may have different rates for the AWS services you consume.

The count variable is the number of instances you need to launch. All of them will share the same settings. In our case, we're only going to create one instance.
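As a small illustration of why the region matters mechanically: boto3 derives the API endpoint from it, so every call the playbook triggers targets region-specific infrastructure. A sketch (the endpoint pattern below holds for standard AWS regions; the helper function is ours, not part of any library):

```python
def ec2_endpoint(region):
    """Return the EC2 API endpoint for a standard AWS region (illustrative)."""
    return f"ec2.{region}.amazonaws.com"

print(ec2_endpoint("us-west-2"))  # ec2.us-west-2.amazonaws.com
```

Changing the region variable in the playbook therefore changes where the instance is created, not just a label.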

Task 1: Creating a security group

Now that we've defined the settings that will be used in the playbook, let's start adding the tasks:

  tasks:
    - name: Create a security group
      ec2_group:
        name: "{{ security_group }}"
        description: The webservers security group
        region: "{{ region }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 80
            to_port: 80
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 443
            to_port: 443
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: all
            cidr_ip: 0.0.0.0/0

Our first task will be to create a "Security Group" for our instance. As mentioned, a security group is nothing but a firewall that will selectively allow/deny traffic to and from your instances.

We use the ec2_group module provided natively by Ansible. The module needs a name for the security group. We passed the security_group variable. It also needs a region and a description.

Now comes the main part of the task: the rules. AWS security groups accept two types of rules: incoming (ingress) and outgoing (egress). We're more interested in what arrives at our instance than in what leaves it. So, we instruct our security group to allow:

SSH on port 22 (that's the only way you can remotely access your instance over the network). The security group can also filter the source IP address from which the traffic originates. This is controlled by the cidr_ip option. AWS recommends that you set it to the IP or the IP range of the machine(s) you will use to access the instance. If you want to, you can leave it at 0.0.0.0/0, which means accept traffic from anywhere in the world.

The web traffic that normally arrives at port 80. We also enabled port 443, as we will be adding HTTPS support later.

The rules_egress section controls the network traffic leaving your instance to the outside world. We are not placing any filters on this.
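The cidr_ip values above are CIDR blocks. To get a feel for what a given block covers, you can check membership with Python's standard ipaddress module (illustrative only; not part of the playbook, and 203.0.113.0/24 is just a documentation example range):

```python
import ipaddress

# 0.0.0.0/0 matches every IPv4 address -- i.e. traffic from anywhere.
anywhere = ipaddress.ip_network("0.0.0.0/0")

# A tighter rule, e.g. your office network only (example range from RFC 5737).
office = ipaddress.ip_network("203.0.113.0/24")

print(ipaddress.ip_address("8.8.8.8") in anywhere)     # True
print(ipaddress.ip_address("8.8.8.8") in office)       # False
print(ipaddress.ip_address("203.0.113.42") in office)  # True
print(office.num_addresses)                            # 256
```

A /24 covers 256 addresses, while /0 covers the whole IPv4 space, which is why 0.0.0.0/0 on port 22 is convenient for a lab but not recommended for production.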

Task 2: Creating and launching the EC2 instance

After creating the security group, our playbook may go ahead and create the instance itself. Add the following to the playbook file:

    - name: Launch the new EC2 Instance
      ec2:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        keypair: "{{ keypair }}"
        count: "{{ count }}"
      register: ec2

Nothing new here. We've just used a different Ansible module, ec2. Then, we passed the necessary parameters that it needs to create our instance:

The necessary credential keys

The security group name

The instance type

The AMI image id

The wait parameter instructs Ansible to wait for the instance to be created before reporting that the task is complete.

Then come the region, keypair, and count.

Notice that at the end of the task, we register a variable called ec2. We'll need the information inside this variable (like the instance id, the public IP, and so on) later on.

Task 3: Adding the newly created instance to the hosts file

Once the instance is created, we'll need to be able to contact it. The following task will add the instance(s) to a group called webservers:

    - name: Add the newly created host so that we can further contact it
      add_host:
        name: "{{ item.public_ip }}"
        groups: webservers
      with_items: "{{ ec2.instances }}"

The add_host module allows you to add one or more hosts to a group. The group will be created if it does not already exist. In our case, we are adding the instance to the webservers group.

Notice the use of with_items. It takes the instances list from the ec2 variable that we registered in the previous task. This is necessary if you are creating more than one instance, so that Ansible loops through all of them. Each instance can be referred to as item. So, item.public_ip will get the public IP address assigned by AWS to that specific instance in the list.
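To picture what this loop does, here is a plain-Python sketch of iterating over a mocked ec2.instances structure (the field names public_ip and id match what the ec2 module registers; the ids and addresses below are made up):

```python
# Mocked shape of the variable registered by the ec2 task (values are invented).
ec2 = {
    "instances": [
        {"id": "i-0abc123", "public_ip": "54.201.10.11"},
        {"id": "i-0def456", "public_ip": "54.201.10.12"},
    ]
}

# Equivalent of:  with_items: "{{ ec2.instances }}"  +  name: "{{ item.public_ip }}"
webservers = [item["public_ip"] for item in ec2["instances"]]
print(webservers)  # ['54.201.10.11', '54.201.10.12']
```

With count: 1 the list holds a single instance, but the same loop handles any number of them.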

Task 4: Tag the instance

AWS allows you to add tags to your instances. A tag consists of a name and a value. We will need to add at least one tag to our instance specifying its name. The reason we need this tag is to be able to identify our instances later on when we need to perform additional actions against them, including termination. You can add the following task to the playbook to tag the instance:

    - name: Add tag to Instance(s)
      ec2_tag:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        resource: "{{ item.id }}"
        region: "{{ region }}"
        state: "present"
      with_items: "{{ ec2.instances }}"
      args:
        tags:
          Type: webserver

The task is pretty simple: we use the ec2_tag module and specify the tags in the args parameter.

Task 5: Finishing up instance creation

Before starting to communicate with our machine to deploy Apache, we need to ensure that the creation process is complete and that the SSH daemon is ready to receive connections. This can be done with the following task:

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        state: started
      with_items: "{{ ec2.instances }}"

Here, we are making use of the wait_for Ansible module, which does nothing but pause playbook execution till a specific condition is met. In our case, it's port 22 (default SSH port) on our host coming up and accepting connections.
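Under the hood, this kind of check simply retries a TCP connection until it succeeds or a timeout expires. A minimal Python sketch of the same idea (a hypothetical helper, not the wait_for module's actual code):

```python
import socket
import time

def wait_for_port(host, port, timeout=300, interval=2):
    """Retry a TCP connect to host:port until it succeeds or timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True  # the port is up and accepting connections
        except OSError:
            time.sleep(interval)  # not ready yet; try again
    return False
```

The real module supports more (other states, banner matching, and so on), but this retry loop is the core behavior.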

Part 2: Deploying Apache

Now that our instance is up and running, let's use Ansible to deploy the Apache web server. Modify the playbook by adding the following:

- hosts: webservers
  remote_user: ubuntu
  become: yes
  gather_facts: no
  pre_tasks:
    - name: 'install python'
      raw: 'sudo apt-get -y install python'
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present
    - service:
        name: apache2
        state: started
        enabled: yes

Let's analyze the changes we made:

We specify the hosts directive to point to the group that we created earlier using the add_host module.

Ansible needs a user to connect to the remote host with. By default, the Ubuntu image has a user named ubuntu with admin privileges. We use the remote_user directive to specify it.

We need sudo to install Apache, so we set become: yes.

Notice that we are attempting to connect to a host where Python is not yet installed. This means that we need to use Ansible to install it first. Prior to installing Python, Ansible is extremely limited in what it can accomplish on the remote host. So, we set gather_facts to no to avoid using Python modules to collect information about the host.

Now, we need to ensure that Python gets installed before any other task runs. The perfect place for this is the pre_tasks section.

We use the raw module, which just executes the given command on the remote machine. Even the command or shell modules won't work at this stage, as Python is not installed yet. So we use raw to install Python.

Once Python is installed, we can use our tasks as normal. The first task installs Apache using the apt module. Then we ensure that Apache is started and enabled on system boot by using the service module.

Running the playbook

Before running the playbook, we need to configure the following:

The private key that Ansible will use to connect to the host.

Avoid displaying the host identification dialog that SSH shows whenever you want to connect to a host for the first time. This is necessary if you want to run the playbook unattended.

To do this, we need to override Ansible's default configuration file, ansible.cfg. It is located by default in /etc/ansible, but placing a file with the same name in the working directory will override the default one. Create a new file called ansible.cfg in the current working directory and add the following:

[defaults]
host_key_checking = False
private_key_file = /home/ahmad/.ssh/fakharany.pem

Now, we're ready to run the playbook by issuing the following command:

ansible-playbook -i hosts --ask-vault-pass aws_provisioning.yml

After the playbook finishes running successfully, you can check your AWS console for a new EC2 instance created and assigned the correct security group.

Further, you can fire up your browser and navigate to http://ec2-ip, where ec2-ip is the public IP address that AWS assigned to your instance. You should see the default Apache2 Ubuntu page.

Terminating the instance

Unless you are still in the free-tier period offered by Amazon, which lasts for 1 year, you are going to be charged for running the instance on a time basis. So, if you don't need the instance for the time being or at all, you should stop or terminate it.

The difference between stopping and terminating the instance

Stopping the instance is like issuing the shutdown command. You can start it up again without losing any data, and you will not be charged for a stopped instance. You may be charged, however, for other resources related to the instance, like storage. Terminating the instance, on the other hand, deletes it permanently; it cannot be started again.

The following playbook is very simple: it will grab all the instances with a specific tag and terminate them. Create a new file called ec2_down.yml and add the following:

- hosts: local
  connection: local
  vars:
    region: us-west-2
  vars_files:
    - aws_keys.yml
  tasks:
    - name: Gather EC2 facts
      ec2_instance_facts:
        region: "{{ region }}"
        filters:
          "tag:Type": "webserver"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
      register: ec2
    - debug: var=ec2
    - name: Terminate EC2 Instance(s)
      ec2:
        instance_ids: '{{ item.instance_id }}'
        state: absent
        region: "{{ region }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
      with_items: "{{ ec2.instances }}"

The playbook starts by declaring the required variables that will be used throughout the file, then it defines the tasks:

ec2_instance_facts: This task is responsible for collecting the instance facts. Don't confuse this with the traditional fact gathering that Ansible performs by default when it executes any playbook. Here, Ansible is collecting facts related to the presence of this instance on the AWS platform. Facts like the tags that were assigned to the instance are collected, which is what interests us.

ec2: Again, we use the ec2 module, but this time to terminate the instance. The state parameter can take values other than absent, depending on your requirements. For example, stopped will just shut down the instance, restarted will reboot it, and running will ensure that it is running (it will start the machine if stopped).

Did you enjoy this post? Enroll in my course “Learn Ansible on Vagrant and Amazon AWS” at a 90% discounted price. For a limited time, you can have this course for $10.99. Just use this coupon code on checkout: ZSAVE2018