On September 24, 2015, ARIN (the IP address registry for North America) announced that its free pool of IPv4 addresses was exhausted. Already scarce IPv4 addresses became even harder to obtain, which prompted many web hosts and website developers to look at alternate ways to host new websites.

The recommended solution was to switch to IPv6, but application compatibility issues kept many service providers from adopting it right away. A popular alternative was to use a network gateway: applications run on private IP addresses, while a central gateway links them to the internet using a public IPv4 address.

Recently, a web application developer signed up for our server management services to set up and maintain a server virtualization infrastructure with a single-IP application gateway. The developer had been using a shared server to host multiple websites for their customers, but the single-server setup prevented hosting web applications with conflicting software dependencies. They were looking for a solution that could run more than 50 lightweight virtual servers on a single physical server, using just 5 public IPs.

We recommended a lightweight server virtualization solution called LXD, with an Nginx reverse proxy acting as a web application gateway. In this solution, LXD would be used to create independent server instances with very low resource usage. The virtual servers would be given private network IPs and linked to a central Nginx reverse proxy. The reverse proxy would use a public IP, and it would act as the link to the internet for all the virtual servers behind it.

This is the story of how we set up the LXC/LXD server virtualization infrastructure and the Nginx reverse proxy.

Setting up the LXD/LXC server virtualization

The basic setup was pretty straightforward. LXD is included in Ubuntu 15.04, so it was only a matter of running "apt-get install lxd" to get the hypervisor running.

Note: On older Ubuntu systems, the "ppa:ubuntu-lxc/lxd-stable" PPA needs to be added before LXD can be installed.
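Put together, the installation boils down to a couple of commands; the PPA lines are only needed on releases older than 15.04:

```shell
# Ubuntu 14.10 and earlier: add the stable LXD PPA first
sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
sudo apt-get update

# Ubuntu 15.04 and later: LXD is in the main archive
sudo apt-get install lxd

# sanity check: list containers (empty on a fresh install)
lxc list
```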

Setting up Nginx app gateway

Now we had an LXD server, but by default it assigned containers private IPs that were not visible from the internet. To be able to assign public IPs, the default network interface of the host server had to be bridged to the containers. For that, we converted the server's ethernet interface (eth0) into a bridge (br0), disabled USE_LXC_BRIDGE in /etc/default/lxc-net, and set lxc.network.link to br0 in the default LXC profile.
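The bridge conversion can be sketched as follows. The addresses and the Debian-style /etc/network/interfaces layout are illustrative assumptions, not the exact values from this server:

```shell
# /etc/network/interfaces -- eth0 folded into a bridge (illustrative addresses)
auto br0
iface br0 inet static
    address 203.0.113.2
    netmask 255.255.255.0
    gateway 203.0.113.1
    bridge_ports eth0    # the physical NIC now feeds the bridge
    bridge_fd 0

# /etc/default/lxc-net -- stop LXC from creating its own lxcbr0
USE_LXC_BRIDGE="false"

# /etc/lxc/default.conf -- attach new containers to br0
lxc.network.type = veth
lxc.network.link = br0
```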

This gave us the ability to assign public IPs to containers. We then created an LXC container, gave it a public IP (say, 203.0.113.5), and installed an Nginx server in it. Since this was an internet-facing server, we hardened the network and firewall settings to protect it from a slew of common attacks prevalent on the internet. The Nginx server was now visible from the internet.
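The hardening itself is site-specific, but as a rough illustration, a minimal iptables policy for the gateway container might look like this (the ports and rate limits are assumptions, not our exact ruleset):

```shell
# default-deny inbound, allow established traffic back in
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT

# allow web traffic, and rate-limit new SSH connections
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m limit --limit 5/min -j ACCEPT

# drop obvious junk: NULL and XMAS port scans
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
```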

Configuring Nginx as reverse proxy

At this point, everyone on the internet could reach the Nginx server. Next, we needed a way for Nginx to pass specific website requests on to the other LXC containers with private IPs.

To start off, we created 3 LXC containers and configured WordPress websites in them. These containers were created with the private IPs 172.17.20.1, 172.17.20.2 and 172.17.20.3, and were configured as "upstream" content servers in Nginx. For example, the container with the IP 172.17.20.1 hosted a domain called "mydomain.com", so 172.17.20.1 was configured as the upstream server for all requests to "mydomain.com". The DNS of "mydomain.com" was pointed to the Nginx server's public IP, 203.0.113.5.

The configuration setting for “mydomain.com” looked like this:

upstream wp01 {
    server 172.17.20.1:80;
}

server {
    listen 80;
    server_name mydomain.com;

    location / {
        proxy_pass http://wp01;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
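A quick way to verify this kind of routing without touching DNS is to send a request to the gateway's public IP with the Host header set by hand (the IPs here are the ones from the example above):

```shell
# ask the gateway for mydomain.com; Nginx should proxy it to 172.17.20.1
curl -I -H "Host: mydomain.com" http://203.0.113.5/

# a Host that matches no server_name falls through to Nginx's default server
curl -I -H "Host: unknown.example" http://203.0.113.5/
```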

Similar settings were configured for 172.17.20.2 and 172.17.20.3. The end result looked like this:

The domains in each container were able to interact with the outside world through a single front-end public IP, 203.0.113.5.

Once we confirmed everything worked fine, we migrated all websites (that included Magento, Joomla!, Drupal, etc.) to new containers in the LXD server.

Since the new system used lightweight virtual servers for each website, it provided greater flexibility in a few key areas:

Each virtual server could have custom server settings, such as different PHP versions, web server components, etc.

Containers could be quickly created using server templates, reducing the time for a new project setup.

Containers could be easily moved between servers, which allowed the developer to start with a low-spec server and move to larger servers as the number of customers grew.
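The template and migration workflows above map onto plain lxc commands; the container and remote names below are hypothetical:

```shell
# create a new site container by cloning a pre-built template container
lxc copy wp-template wp04
lxc start wp04

# later, migrate a container to a bigger host registered as remote "bigserver"
lxc stop wp04
lxc move wp04 bigserver:wp04
```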

This server now hosts 35 projects in parallel, and is all set to scale up as needed.