CloudBoost.io is a serverless platform and backend as a service (BaaS) that helps developers build apps in half the time by taking care of mundane tasks like authentication, notifications, emails, and managing and scaling your database, files, cache, and a whole lot more. We use MongoDB and Redis clusters as our data stores, and NodeJS powers most of our micro-services. CloudBoost is completely open source under the Apache 2.0 license, so you can modify it the way you like and install it on your own servers for free. You can check out our GitHub here: https://github.com/cloudboost/cloudboost (pull requests are a LOT appreciated!)

Before I dive into how we use Docker in production, let me explain, for readers who are new to containers, what a "container" actually is. If you already know about Docker and containers, please feel free to skip to the implementation part of this post.

Why Containers

Before containers, we used to install each piece of our application stack directly on a VM. The problem with that approach was that services could not share a machine. Say you needed MongoDB to power your app: you would have three (or more) machines running MongoDB, and you would literally name them by what was installed on each one (mongo-1, mongo-2, mongo-3). You could not install any other piece of your stack (say, Redis) on the same VM, because memory and compute were not isolated between the two services.

Think of containers as tiny, self-contained, isolated packages that have just enough software to run a particular service. For example, a MongoDB container has just enough software baked in to run MongoDB and nothing else. Containers can then be installed on a VM (or bare metal), and you can have as many containers as you want on a machine. Containers isolate these services from each other, so you can literally install multiple services on one machine.
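As a sketch of that idea, here is a minimal Docker Compose file (the image versions and port mappings are illustrative, not our production configuration) that runs MongoDB and Redis as two isolated containers on a single host:

```yaml
# docker-compose.yml (illustrative): two isolated services on one machine
version: "2"
services:
  mongo:
    image: mongo:3.4        # official MongoDB image; version is an example
    ports:
      - "27017:27017"       # expose MongoDB on the host
  redis:
    image: redis:3.2        # official Redis image; version is an example
    ports:
      - "6379:6379"         # expose Redis on the host
```

Running `docker-compose up -d` starts both containers side by side. Each one contains only the software its own service needs, yet they share the same underlying machine.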

A few advantages of containerization:

Consistency and Isolation: The biggest advantage is consistency and isolation. Whatever works in your test environment will work in staging and production, because you're literally using the same container image in all of those environments. This keeps the environment consistent across your entire ops pipeline. Containers are also isolated from each other, so you can safely run multiple containers on one machine.

Resource Utilisation: We had problems with resource utilisation before. For example, if MongoDB uses just 10% of the compute on a VM, the other 90% is wasted, which adds up to a LOT of compute and resources. Containers let us pack one or more services onto a machine while isolating them from each other, which helps us utilise the majority of the compute on a machine before scaling out.
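This packing relies on per-container resource limits. A minimal sketch using the Docker Compose v2 format (the limits below are made-up numbers for illustration, not our production values):

```yaml
# docker-compose.yml (illustrative): capping each service so several
# of them can safely share one machine's compute and memory
version: "2"
services:
  mongo:
    image: mongo:3.4
    mem_limit: 2g        # MongoDB may use at most 2 GB of RAM
    cpu_shares: 512      # relative CPU weight versus other containers
  redis:
    image: redis:3.2
    mem_limit: 512m      # Redis is capped at 512 MB
```

With limits like these, the scheduler (or you) can keep adding services to a host until its capacity is actually used, instead of dedicating a whole VM to one service.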

Before deploying our stack on containers (when we were on plain vanilla VMs), our average cluster utilisation was under 15%, which was a HUGE waste of compute.

Scale: Scale was hard. Really hard. We used tools like Chef to manage and install software on our VMs. The problem with these tools was that they were clunky, did not work well, and had a huge learning curve. Most of the time they needed multiple modules to be installed, configuring each one was tough, and even when we could configure them, they were not reliable. Installations failed a lot, and we had to retry over and over for them to succeed. Looking at the Chef forums and StackOverflow, a lot of other DevOps teams faced similar issues, not just us. Ultimately, working with Chef created too many internal and external conflicts for it to produce significant ROI for us.

The container ecosystem has far better tooling and is built for scale (for example, Kubernetes and Docker Swarm), which helped us scale our services smoothly across many machines. More on this later.
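As a rough sketch of what that tooling looks like with Docker Swarm (the service name, image reference, and replica count here are illustrative), scaling becomes a one-line declaration rather than a Chef run:

```yaml
# docker-compose.yml, Compose v3 format (illustrative)
version: "3"
services:
  api:
    image: cloudboost/cloudboost:latest   # example image reference
    deploy:
      replicas: 3     # Swarm schedules 3 copies across the cluster
```

After deploying this with `docker stack deploy`, Swarm spreads the replicas over the available machines, and `docker service scale` can raise or lower the replica count without reinstalling anything.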

Micro-services: Microservices are a very attractive DevOps pattern because they let teams deploy and scale each service independently, which tremendously increases speed to market. Engineers working on a few services do not need to know the code in the other services, which helps a new hire get started fairly quickly and keeps the codebase for each service small. With each micro-service developed, deployed, run, and maintained independently (often using different languages and technology stacks), this pattern allows companies like us to "divide and conquer" and scale teams and applications more efficiently. When the pipeline is not locked into a monolithic configuration of toolset, component dependencies, release processes, or infrastructure, you gain a unique ability to scale development and operations. It also helps companies easily determine which services don't need scaling, optimising resource utilisation and saving on cloud expenses.