Run Multiple Isolated Web Applications on Containers with a Single IP for Free

An introduction to the architectural logic, plus a step-by-step guide to building an automated NGINX reverse proxy container with one public IP address serving multiple container-based applications inside a software-defined LAN.


If you are like me, always building new applications and technical solutions for customers, having a cost-effective and efficient test environment to spin up new applications on demand is vital to delivering well-tested solutions. Test environments in the old days were quite costly, and I used to have racks and racks of servers at my office. Then came VMs and the cloud, which dramatically reduced the cost of test environments. More recently, containers have become extremely popular, both in replacing VMs completely and in running on top of VM instances in the cloud. Here’s an excellent comparison table from Google explaining the benefits of containers compared to VMs at a glance:

Above picture from: https://cloud.google.com/containers/

What I usually tell people unfamiliar with the concept is that containers are a new and improved way to do much of what VMs do, but with far fewer resources, so you can run many more “server instances” than if you were to use VMs. This is the reason we are able to do what I am about to walk through, because my T2-Micro VM would probably keel over and die if I tried to run more than 3 VMs on it. If you are still a bit confused about this whole container thing, I found that CIO Magazine has a great article that explains things comprehensively in layman’s terms.

Now there are plenty of ways to create a container-based environment, including some of the newer offerings like AWS ECS, AWS EKS, AWS Fargate, Microsoft AKS, GCP Kubernetes Engine, Alibaba Cloud’s Container Service, and so on. You will also notice the word “Kubernetes” popping up in some of these services; it is what they call “Production-Grade Container Orchestration”, essentially a sophisticated abstraction layer with a software-defined environment that is extremely useful for deploying and managing large clusters of containers, and in my opinion an absolute must for large-scale container deployments. However, while I am quite excited about most of these services, they are all beyond the scope of this topic. I intend to walk through building a super simple environment that lets you spin up containers behind an NGINX reverse proxy container, all residing within a small AWS EC2 T2-Micro instance. This way, you can host multiple small-scale web applications associated with multiple domains/sub-domains on a single external IP. As a side note, FWIW, I have a T2-Nano instance acting as an SSH/OpenVPN gateway in the same VPC so that I can block direct remote access into the container host for security reasons; I won’t cover those details here. The great news on monetary cost is that if you sign up for a new AWS account, all of the resources I just talked about are free for one year, with very generous limits before AWS starts charging for excess usage.

So here goes. But before we begin, let me show the diagram from up top again to help visualize the implementation:

On the left, you will see the AWS architecture while on the right, you will find the architecture inside that EC2 T2-Micro instance.

For the purpose of this walk-through, I am going to assume you are well versed enough to sign up for a new AWS account and launch a T2-Micro instance with the Ubuntu 16.04 image. I will begin the walk-through at the point where the instance is up and running and you have logged into it via SSH, regardless of whether you are ssh-ing into the instance directly or, like me, going through a gateway host (what some call a bastion host).

I have named my EC2 instance chimp and the domain I am using is my domain, dragon-ventures.com. Let’s get started!

First, let’s make sure that Ubuntu is up-to-date and then install Docker:

sudo apt-get update

sudo apt-get upgrade

sudo apt-get install docker.io

After it’s done installing, you can do the following to make sure it’s working:

ubuntu@chimp:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

ubuntu@chimp:~$

It’s normal that nothing shows up underneath the columns since we don’t have anything running yet. From here, you are ready to configure your container-based environment. That was super easy, right?
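If you ever want to script this sanity check before proceeding, something like the following works. This is just a sketch; the message strings are my own:

```shell
# Sanity check: is the docker CLI on the PATH? Records and prints a hint either way.
if command -v docker >/dev/null 2>&1; then
  DOCKER_STATUS="docker CLI found"
else
  DOCKER_STATUS="docker CLI not found - run: sudo apt-get install docker.io"
fi
echo "$DOCKER_STATUS"
```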

So let’s get the NGINX reverse proxy container running with its own network. Should we build things from scratch? Nope! There’s already an existing Docker image created by Jason Wilder that can deploy an automated reverse proxy container with pretty much one command line. Before we run it, let’s create a network called nginx-proxy where all the containers behind the proxy container will sit:

ubuntu@chimp:~$ sudo docker network create nginx-proxy

2ba0a9b09db49f1524b1b4b6cd3d5d16b8cdf38e61690a1301f4015c319e9e73

If you are successful, you will see a string of random characters, which is the unique ID of your network. After the network has been created, because we want both HTTP and HTTPS to work, prepare a local directory of your choice and place in it SSL certificates created for your domain by a provider such as ZeroSSL. Pick a directory you consider safe for storing SSL certs; for the purpose of this walk-through, I have chosen /home/ubuntu/ssl to make things easy:

ubuntu@chimp:~$ mkdir ssl

There are a few ways to upload your SSL cert/key to the server, which I assume you already know how to do. Here’s an example run from my SSH gateway instance (it could be your local computer), uploading dragon-ventures.com.crt and dragon-ventures.com.key (generated at zerossl.com) to the container host instance. NOTE that the cert and key must be named after the domain name used for the automation to work:

ubuntu@bastion:~$ scp -i aws.pem dragon-ventures.com.crt ubuntu@chimp.dragon-ventures.com:~/ssl/

ubuntu@bastion:~$ scp -i aws.pem dragon-ventures.com.key ubuntu@chimp.dragon-ventures.com:~/ssl/

Note that to host additional domains, all you have to do is place a cert and key named domain.com.crt and domain.com.key into the directory you are using (in my case /home/ubuntu/ssl), and the proxy will automatically use those certs for the associated containers.
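To make that naming convention concrete, here is a small sketch using a throwaway directory and a placeholder domain (example.com). In a real deployment you would copy your actual cert and key rather than creating empty files:

```shell
# Illustrates the file naming nginx-proxy expects: <domain>.crt and <domain>.key
# together in the certs directory. example.com and the temp dir are placeholders.
SSL_DIR=$(mktemp -d)
touch "$SSL_DIR/example.com.crt" "$SSL_DIR/example.com.key"
ls "$SSL_DIR"
```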

Now, we are ready to launch the automated NGINX reverse proxy container with our one liner:

ubuntu@chimp:~$ sudo docker run -d --name nginx-proxy --net nginx-proxy -p 80:80 -p 443:443 -e HTTPS_METHOD=noredirect -e HSTS=off -v /home/ubuntu/ssl:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

Unable to find image 'jwilder/nginx-proxy:latest' locally

latest: Pulling from jwilder/nginx-proxy

e7bb522d92ff: Pull complete

6edc05228666: Pull complete

cd866a17e81f: Pull complete

d9f2d6a1f8f6: Pull complete

e9c7e986c8c1: Pull complete

a51bcd518fd9: Pull complete

66df98413ed2: Pull complete

aff8c6473b42: Pull complete

1c91fd608be1: Pull complete

7319453a5fbe: Pull complete

Digest: sha256:41506b2095779e6e64f34e26ccba35cb3668ee56a735cd740ac8c183af583294

Status: Downloaded newer image for jwilder/nginx-proxy:latest

a02048ca47b59325fc9e4ea9d4f4717c2cb63a4d4a78c94c3709c5d3e4bbd012

Believe it or not, you now have an automated reverse proxy container on a custom network. Super quick, right? So next, let’s test by launching a new container. For simplicity’s sake, let’s say we want to build a new website for a customer on WordPress. Well, you guessed it: there’s already a WordPress container image available for us to use.

First, because the WordPress image by default only supports port 80 without SSL, we need to customize the image to support SSL on port 443.

The following steps are purely personal preference, for organizational purposes, so I can easily refer back to the different Dockerfiles I create (the files that define customized Docker images). The logic is that I make a top-level directory with a sub-directory for each container I customize. I also copy the SSL certs into an ssl sub-directory within the container’s sub-directory, because docker build can only read files from the directory (and its sub-directories) where the Dockerfile sits.

ubuntu@chimp:~$ mkdir -p containers/wordpress/ssl

ubuntu@chimp:~$ cp ssl/dragon-ventures.* containers/wordpress/ssl

ubuntu@chimp:~$ cd containers/wordpress/

ubuntu@chimp:~/containers/wordpress$ vim Dockerfile

Paste the following content into the file:

FROM wordpress:4.8.0-php7.1-apache

RUN apt-get update && \
    apt-get install -y --no-install-recommends ssl-cert && \
    rm -r /var/lib/apt/lists/* && \
    a2enmod ssl && \
    a2ensite default-ssl

ADD ssl/dv.crt /etc/ssl/certs/

ADD ssl/dv.key /etc/ssl/private/

EXPOSE 80

EXPOSE 443

Note that this container will be using a self-signed cert, but that shouldn’t matter too much in this situation as the reverse proxy container will be offloading SSL with a real cert. However, for my real test environment, I actually add a few more steps that make the container use a real cert as well, but I won’t cover them here.

We are now ready to build the custom image and I am labeling it dv/wordpress:

ubuntu@chimp:~/containers/wordpress$ sudo docker build -t dv/wordpress .

Sending build context to Docker daemon 2.048 kB

Step 1/6 : FROM wordpress:4.8.0-php7.1-apache

4.8.0-php7.1-apache: Pulling from library/wordpress

ad74af05f5a2: Pull complete

a1e75557f244: Pull complete

6ab4f72a86ad: Pull complete

55e3508d42ca: Pull complete

88792c88e1bc: Pull complete

1d8a48cffe59: Pull complete

0c30cf9b4233: Pull complete

37ec3cd3c9fb: Pull complete

1925fdff3f6a: Pull complete

f1a75ee98d0d: Pull complete

b9e0483f0c09: Pull complete

8c5d8b4070d7: Pull complete

ffd7c73efd91: Pull complete

31b2a59ece05: Pull complete

df9af0decc33: Pull complete

b0b6fe59a468: Pull complete

9c183d73d613: Pull complete

a3be0b191a8e: Pull complete

Digest: sha256:241f092e70d128d047f2fc162904e1dba89dbe4c2f0e225d5791ff596ed2c96f

Status: Downloaded newer image for wordpress:4.8.0-php7.1-apache

---> 56649ecf398a

Step 2/6 : RUN apt-get update && apt-get install -y --no-install-recommends ssl-cert && rm -r /var/lib/apt/lists/* && a2enmod ssl && a2ensite default-ssl

---> Running in 1bcd6bbeb433

Get:1 http://security.debian.org jessie/updates InRelease [63.1 kB]

Ign http://deb.debian.org jessie InRelease

Get:2 http://deb.debian.org jessie-updates InRelease [145 kB]

Get:3 http://deb.debian.org jessie Release.gpg [2434 B]

Get:4 http://deb.debian.org jessie Release [148 kB]

Get:5 http://security.debian.org jessie/updates/main amd64 Packages [631 kB]

Get:6 http://deb.debian.org jessie-updates/main amd64 Packages [23.1 kB]

Get:7 http://deb.debian.org jessie/main amd64 Packages [9064 kB]

Fetched 10.1 MB in 8s (1145 kB/s)

Reading package lists...

Reading package lists...

Building dependency tree...

Reading state information...

Suggested packages:

openssl-blacklist

The following NEW packages will be installed:

ssl-cert

0 upgraded, 1 newly installed, 0 to remove and 45 not upgraded.

Need to get 20.9 kB of archives.

After this operation, 104 kB of additional disk space will be used.

Get:1 http://deb.debian.org/debian/ jessie/main ssl-cert all 1.0.35 [20.9 kB]

debconf: delaying package configuration, since apt-utils is not installed

Fetched 20.9 kB in 1s (13.5 kB/s)

Selecting previously unselected package ssl-cert.

(Reading database ... 13657 files and directories currently installed.)

Preparing to unpack .../ssl-cert_1.0.35_all.deb ...

Unpacking ssl-cert (1.0.35) ...

Setting up ssl-cert (1.0.35) ...

debconf: unable to initialize frontend: Dialog

debconf: (TERM is not set, so the dialog frontend is not usable.)

debconf: falling back to frontend: Readline

Considering dependency setenvif for ssl:

Module setenvif already enabled

Considering dependency mime for ssl:

Module mime already enabled

Considering dependency socache_shmcb for ssl:

Enabling module socache_shmcb.

Enabling module ssl.

See /usr/share/doc/apache2/README.Debian.gz on how to configure SSL and create self-signed certificates.

To activate the new configuration, you need to run:

service apache2 restart

Enabling site default-ssl.

To activate the new configuration, you need to run:

service apache2 reload

---> 6c5155317d89

Removing intermediate container 1bcd6bbeb433

Step 3/6 : ADD ssl/dv.crt /etc/ssl/certs/

---> 732f061b3bf2

Removing intermediate container 909a2770fd40

Step 4/6 : ADD ssl/dv.key /etc/ssl/private/

---> 2b675e639d1a

Removing intermediate container b07c19546149

Step 5/6 : EXPOSE 80

---> Running in 3efc852f72b9

---> 50201abe9632

Removing intermediate container 3efc852f72b9

Step 6/6 : EXPOSE 443

---> Running in 723b660431bf

---> 5e4dac2c8e9c

Removing intermediate container 723b660431bf

Successfully built 5e4dac2c8e9c

Now the custom image is all done building. Let’s set up a MySQL server for the WordPress container:

ubuntu@chimp:~$ sudo docker run -d --name share-mysql --expose 3306 --net nginx-proxy -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql:latest

Unable to find image 'mysql:latest' locally

latest: Pulling from library/mysql

8176e34d5d92: Pull complete

17e372a8ec90: Pull complete

47b869561d3a: Pull complete

c90ab4483f28: Pull complete

d6af16572c5c: Pull complete

6d16794d04ac: Pull complete

aaf442a8fe75: Pull complete

7c6fa8f07ec4: Pull complete

ece17b689642: Pull complete

c55b06e76eaf: Pull complete

661fabfb4fc2: Pull complete

Digest: sha256:227d5c3f54ee3a70c075b1c3013e72781564000d34fc8c7ec5ec353c5b7ef7fa

Status: Downloaded newer image for mysql:latest

addbb43072a94d7d1e7ebced10083a3980f01af84b619621d267f8e05f436232
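As a side note, if you’d rather keep the root password out of your shell history, docker run can also read environment variables from a file via its --env-file flag. Here is a sketch of the idea; the file path is a throwaway temp file and the usage line in the comment is illustrative:

```shell
# Sketch: keep MYSQL_ROOT_PASSWORD off the command line by using an env file.
# docker run reads KEY=VALUE lines from the file passed to --env-file.
ENV_FILE=$(mktemp)
printf 'MYSQL_ROOT_PASSWORD=my-secret-pw\n' > "$ENV_FILE"
cat "$ENV_FILE"
# Usage would then look like:
#   sudo docker run -d --name share-mysql --net nginx-proxy --env-file "$ENV_FILE" mysql:latest
```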

Note that we have set the MySQL root password to “my-secret-pw” in our docker run command above. This will be important to remember later. Now, we can give our custom image a run for its money:

ubuntu@chimp:~$ sudo docker run -d --name share --expose 80 --expose 443 --net nginx-proxy --link share-mysql:mysql -e VIRTUAL_HOST=share.dragon-ventures.com -e VIRTUAL_PROTO=http -e VIRTUAL_PORT=80 -e VIRTUAL_PROTO=https -e VIRTUAL_PORT=443 -e HTTPS_METHOD=noredirect dv/wordpress

02dce41c606cb30bc5b1240606f0d83c2d5e6a80f31fc910069639637ce2276d

Note that I have specified that the WordPress container link to the share-mysql container, which gives it the required access to MySQL.

That’s it… REALLY! So let’s see if it really works by editing the hosts file and looking at the site in a browser:

sudo vim /etc/hosts

Insert the following into the hosts file:

[Your EC2 Instance Public IP] share.dragon-ventures.com
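If you prefer to script that edit, the idea looks like this. The sketch below writes to a temp file rather than the real /etc/hosts (which would need sudo), and 203.0.113.10 is a placeholder for your instance’s public IP:

```shell
# Sketch: appending the test entry. 203.0.113.10 is a placeholder IP and the
# temp file stands in for /etc/hosts in this demonstration.
PUBLIC_IP=203.0.113.10
HOSTS_FILE=$(mktemp)
echo "$PUBLIC_IP share.dragon-ventures.com" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```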

Then fire up the browser to check!

Bam! Looking good! However, the point of having a container-based environment with a reverse proxy is that you can host multiple applications (in a real use case, with majorly different server configurations, rather than two containers with the same configuration). So let me set up WordPress and give this site a different look, purely to differentiate it from the new site we will create next, as proof of concept that NGINX is in fact routing to multiple containers. To do that, we must first prepare the MySQL server. Let’s start by installing mysql-client:

sudo apt-get install mysql-client

Then, identify the IP address of the MySQL container by finding its container ID and running an inspect command on it, as below:

ubuntu@chimp:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

02dce41c606c dv/wordpress "docker-entrypoint..." 17 minutes ago Up 17 minutes 80/tcp, 443/tcp share

addbb43072a9 mysql:latest "docker-entrypoint..." 25 minutes ago Up 25 minutes 3306/tcp share-mysql

a02048ca47b5 jwilder/nginx-proxy "/app/docker-entry..." About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy

ubuntu@chimp:~$ sudo docker inspect addbb43072a9 |grep IPAddress

"SecondaryIPAddresses": null,

"IPAddress": "",

"IPAddress": "172.18.0.3",
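The grep approach works because docker inspect emits JSON. To see exactly what the grep matches, here is the same extraction run against a trimmed, hypothetical slice of inspect output saved to a temp file:

```shell
# A trimmed, hypothetical sample of `docker inspect` output, then the same
# grep used above to pull out the container's IP on the custom network.
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
{
    "NetworkSettings": {
        "IPAddress": "",
        "Networks": {
            "nginx-proxy": {
                "IPAddress": "172.18.0.3"
            }
        }
    }
}
EOF
grep IPAddress "$SAMPLE"
```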

We can now connect to the mysql database and prepare the database as follows:

ubuntu@chimp:~/ssl$ mysql -h 172.18.0.3 -u root -p

Enter password:

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 4

Server version: 5.7.21 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database sharedb;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON sharedb.* TO wpuser IDENTIFIED BY 'secret';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> \q

Then let’s go back to the browser and get the wordpress site going:

Reference for what to put in the setup config screen

Go through the installation motions, hit the address https://share.dragon-ventures.com, and you will end up here:

Now let’s make the second site, portal.dragon-ventures.com:

ubuntu@chimp:~$ sudo docker run -d --name portal-mysql --expose 3306 --net nginx-proxy -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql:latest

e73f179192f2ab4644a760833355673d528d79269eaff83d06e864912227b572

ubuntu@chimp:~$ sudo docker run -d --name portal --expose 80 --expose 443 --net nginx-proxy --link portal-mysql:mysql -e VIRTUAL_HOST=portal.dragon-ventures.com -e VIRTUAL_PROTO=http -e VIRTUAL_PORT=80 -e VIRTUAL_PROTO=https -e VIRTUAL_PORT=443 -e HTTPS_METHOD=noredirect dv/wordpress

f08fde106687b039996747bf78d8605581fca6c18384b252606995f9b427937a

Edit the hosts file again and add:

[Your EC2 Instance Public IP] portal.dragon-ventures.com

Then check on the browser and look! Even though portal.dragon-ventures.com points to the same IP, it goes to a different website!

Now let’s open a new browser window and try share.dragon-ventures.com again:

There you have it! You can keep adding more containers until you run out of capacity on that EC2 instance, or you can take containers down and spin them up super quickly to develop or showcase solutions only when you need them. Check out how few resources the 5 containers and the host together are using, even with a few computers hitting the websites:

Of course, this is meant for small-scale test environments. As I mentioned before, for high-traffic test environments and large production environments, in-house Kubernetes or one of the awesome cloud offerings is the way to go!

Happy Testing!