Yo.

Containers came into our lives and changed the way we deploy and host web applications. I knew this before I first read Docker’s “Getting started” guide. I knew that, and yet I knew nothing, because it wasn’t until I started working with containers that I realised the full potential of the idea, and that they are much more than thin virtual machines to host one’s REST APIs. Today let’s talk about how containers could make your life better even if you’ve sworn to never deploy your precious cat-ranking app to anything but bare metal. Here are some less orthodox ways of using containers I’ve discovered in the past six months.

Container as a build/deployment server

This is where it all started for me.

The thing is — I love DevOps. At some point in my career I learnt about the dynamic duo of TeamCity and Octopus Deploy, which enchanted me with its sparkling-white magic of CI/CD. I was the only one in my team who was happy to contribute his time to writing scripts, maintaining dev servers and setting up new environments, and that was mostly fun for me. Except for those times when it sucked because of the limitations that came from the fact that all of our infrastructure was but a bunch of physical servers and Azure VMs, which I didn’t always have access to.

Skipping forward: not so long ago I started working on a new project which was 1000% cloud-native. One of our first agreements was to maintain our infrastructure as code and implement the best CI/CD practices we could. Then, since none of us wanted to be restrained by the boundaries of the company’s internal network, we decided that all of our Ops tech should also be cloud-based blah-blah-as-a-service, so that we could reach our docker, git and npm repos from anywhere in the world, provided the place has internet connectivity. With that settled, I rolled up my sleeves and started looking for the best candidate for a build server, with one annoying thought on my mind: “How on earth am I going to build a build server without a server?”. My primary fear was that without the ability to create whatever environment I wanted online, I would not be in control; I would be limited in the way I build, deliver and audit our artifacts. For a moment I was in a very dark place.

I didn’t panic though. First things first — I knew that we needed a cloud-based version control system, and after some discussions, despite my strong sympathies towards GitLab, we decided to use Atlassian’s Bitbucket (for reasons beyond the technical, I would say). Knowing that both GitLab and GitHub had their own neat offerings for continuous everything, I decided to check what Bitbucket had up its sleeve, only to discover a thing called Bitbucket Pipelines. And boy, I was not disappointed.

So the idea of pipelines is as follows: you provide a series of commands and rules for when and why these commands should be run — these are your so-called pipelines. Now, a pipeline may include a whole lot of operations, from building and running unit tests to deploying to prod and validating the deployment. The best part of the story is that the commands you use in pipelines are simply bash commands that are executed in a Docker container built from an image of your choice. The process, in short: you push your code, Bitbucket spins up a container from your chosen image, runs your script inside it, and throws the container away.
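As a rough sketch (the Node image and the script steps below are placeholders, not from any real project), a bitbucket-pipelines.yml can be as small as this:

```yaml
# bitbucket-pipelines.yml: every step runs inside a container built from "image"
image: node:18            # placeholder; use whatever your project needs

pipelines:
  default:                # runs for every push without a more specific rule
    - step:
        name: Build and test
        script:
          - npm ci
          - npm test
  branches:
    master:               # extra deployment step for the main branch
      - step:
          name: Build, test and deploy
          script:
            - npm ci
            - npm test
            - npm run deploy
```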

It was my first lesson in how containers can actually be used in real life. I personally believe that this idea of having your own on-demand build and deployment servers in the cloud is very powerful, and I can’t see many reasons why you wouldn’t use it for all your cloud deployments. The key advantages of this approach, for me, are as follows:

- If you create your images from dockerfiles, it becomes super-duper easy to track changes in your CI/CD infrastructure. Since your infrastructure becomes code, you can put it into your version control system, which makes it easier to find errors, pinpoint bits and pieces for improvement, and transfer knowledge to team members. It is much easier to make a change in a dockerfile than in a virtual machine.
- No physical infrastructure to maintain. I guess this is an obvious one.
- Cost efficiency. Money spent on a Bitbucket premium subscription and artifact storage is well justified by the lack of expenses on hardware, licenses and developer/Ops time spent wiring everything up and fixing it when it falls apart.

If any of this sounds appealing to you — give it a go. Although I’ve only mentioned Atlassian’s pipelines, there are other valid options out there, such as Jenkins or GitLab CI. Bitbucket’s entry-level account tier is free, and if you don’t fancy Atlassian for some reason, you can try the competitors’ options, which I believe will be mighty similar. Either way, let’s move on to the next one.

Container as a development workspace

Most of us developers have a lot of dev tools installed on our machines. We may not even remember some of them until the day we try to install them again and get a notification that the version of whatever thing we are about to install conflicts with the version already installed. What’s worse is the need to install everything again after our machine gets upgraded or replaced. Most of the time it takes forever to find all the tools, libraries and extensions we need, and sometimes it is a matter of trial and error. Like this: “Install this thing, install that plug-in, check if I can build the project. Oh, I cannot! Come on Alex, what’s missing? Ah, why don’t I buy a remote cucumber field instead and become a farmer!?”.

This becomes an even bigger problem when you regularly onboard new developers in your company, and you have a bunch of people going through the same process of wasting everybody’s time. Life would be a lot easier if we could configure and distribute encapsulated dev workspaces with a snap of a finger. Which we actually can do, if we use containers, that is. However, one blog post won’t be enough to cover the topic, so I’ll point you to this YouTube video instead: https://youtu.be/vE1iDPx6-Ok#t=34m35s. The bit that is relevant to today’s story starts at 34:35, but the rest of it is still worth watching.
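Just to give a taste of the idea (the image and the paths here are only an example), a throwaway workspace with your project mounted inside can be a single command:

```sh
# Start a disposable dev workspace with the current project mounted inside.
# node:18 is just an example image; pick one that matches your stack.
docker run -it --rm \
  -v "$PWD":/workspace \
  -w /workspace \
  node:18 bash
```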

I also can’t resist adding the following illustration of how I see this process.

Of course this concept is not for everybody, and you will most likely need to find a compromise between what you keep on your machine and what you run as a container, but hey, software development is all about finding compromises! More about this in the next section.

Container as an “isolated workspace module”

I must admit that despite my plans to implement a solution similar to the one described in the video, I’ve not done that yet. Partly because I am lazy, partly because we don’t have any new developers who would need it; it doesn’t matter. What matters, though, is that the talk has become a great source of inspiration for me.

I still don’t have a docker image that includes all the tools and infrastructure I need for development. I have, however, created a set of images containing stuff like the AWS CLI, Serverless etc. that I don’t want to install directly on my machine. I also host in containers all the types of servers my applications rely on, such as PostgreSQL, Redis and ES.
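For example (the image tags and the credentials setup below are placeholders), the AWS CLI and a disposable PostgreSQL can both live in containers instead of on the host:

```sh
# The AWS CLI straight from the official image, with the host's credentials mounted in
docker run --rm -it -v ~/.aws:/root/.aws amazon/aws-cli s3 ls

# A disposable PostgreSQL server for local development
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD=devpass \
  -p 5432:5432 \
  postgres:15
```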

Despite not being as efficient or effective as a fully automated and nicely packaged solution like the one from the previous section, this approach still has some obvious benefits:

- Cleaner system — every application I use inside a container is isolated within that container, and cannot cause any side effects throughout the host system.
- I still have some portability, since my containers are run with the same commands, which work the same way on any machine with a *nix shell.
- Potential for automation. I can create shell scripts bundling together different sets of containers, for example starting and stopping a Redis server, a MySQL server and a couple of Koa applications with just one command (there is a sketch of such a script right after this list).
- Ability to use a pipeline’s image to build locally, which means I can make sure that all the artifacts are created in exactly the same way on my local machine as they are on the “build server”. I can also add tags to my docker images and version them, to make sure I get the same result every time I build a certain revision of my code.
- Ability to test new or risky stuff. When it comes to professional IT hygiene, there are two anecdotes that I remember well. The first is a story about a Russian sysadmin who was so paranoid he opened all email attachments in a special virtual machine. The other is a Twitter thread I saw some two years ago (back then it was just a tweet and some replies underneath). A blogger/security researcher posted a shell script that, according to its description, should’ve played some AMAZING ASCII ANIMATION, but in reality was downloading a botnet client to the victim’s machine. In a couple of days this person had a robust botnet of tens of thousands of clients. That was just an exercise meant to demonstrate how dangerous some random code from the net can be. So my point is that, though we may not be as determined and cautious as the Russian sysadmin, we may still find a compromise in using isolated containers to experiment with new things, no matter how potentially harmful they may seem.
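Here is the kind of bundling script I mean for the automation point above. It is only a sketch: the container names, ports and the my-koa-app image are made-up examples.

```sh
#!/bin/sh
# dev-up.sh: bring the whole local stack up with one command
# (names, ports and the app image below are made-up examples)
docker run -d --name dev-redis -p 6379:6379 redis:7
docker run -d --name dev-mysql -e MYSQL_ROOT_PASSWORD=devpass -p 3306:3306 mysql:8
docker run -d --name koa-app -p 3000:3000 my-koa-app

# ...and a matching dev-down.sh to tear it all down again:
# docker rm -f dev-redis dev-mysql koa-app
```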

Containers are developers’ Mirror Dimensions 🤯 © Marvel Studios.

I must say I regret that I didn’t invest in learning at least the very basics of docker sooner, just to start using containers this way — like detached workspaces within my host system. That could’ve definitely helped me organise my working process better, but most importantly it would’ve saved me all that time I’ve spent cleaning up the mess created by yet another misconfigured blah-blah-server.

Below is an example of a dockerfile I use to build an image with the Serverless Framework, Newman, localstack and the AWS CLI, which I use to run serverless projects locally from my /git folder. The last line is the shell command I execute to run a container from this image in interactive mode (assuming I’ve called my image “my-dev-image-yo”).
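The gist of it looks roughly like this; the base image, the versions and the exact install commands are just one possible way to put it together, not a canonical setup:

```dockerfile
# A dev image with the Serverless Framework, Newman, localstack and the AWS CLI.
# The base image and install steps are one possible setup, not the only way to do it.
FROM node:18-bullseye

# AWS CLI and pip (localstack ships as a Python package)
RUN apt-get update && \
    apt-get install -y --no-install-recommends awscli python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Serverless Framework and Newman from npm, localstack from pip
RUN npm install -g serverless newman && \
    pip3 install localstack

# The host's /git folder gets mounted here at run time
WORKDIR /git

# The shell command to run a container from this image in interactive mode:
#   docker run -it --rm -v /git:/git my-dev-image-yo bash
```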

Kinda conclusion

If you’ve been asking yourself for a long time whether you should start spending your precious time learning about containers or not, then the answer is yes. You absolutely, totally should. Even if your company is stuck with COBOL and a Teletype-based system, you may still find good use for your docker skills. Containers are a big deal, even bigger than ̶j̶Q̶u̶e̶r̶y̶ <JS_framework_name>.

Docker’s official getting started guide is a good starting point. Don’t forget about the conference video I mentioned above, either. However, the best thing about Docker now is its gigantic and ever-growing community, so you can be sure that if you google “How do I run <it-thing-name> in a Docker container”, you won’t leave disappointed.

Bye.