A crash course on Docker — Learn to swim with the big fish

The quick start guide you are looking for.

If you’ve been following software development trends in the past year, Docker is a term you’ve probably grown tired of hearing. You may have felt overwhelmed by the vast number of developers talking about containers, isolated virtual machines, hypervisors and other DevOps-related voodoo magic. Today I’ll break it all down for you. It’s time to finally understand what Containers as a Service is and why you need it.

TL;DR

“Why do I need this?”

- Overview of all the key terms

- Why we need CaaS

Quick Start:

- Installing Docker

- Creating a container

Real-life scenario:

- Creating an nginx container to host a static website

- Learning to use build tools to automate Docker commands

“Why do I need this?”

I asked myself the same question not so long ago. After being a stubborn developer for way too long, I finally sat down and accepted the awesomeness of using containers. Here’s my take on why you should try it out.

Docker?

Docker is software for creating containerized applications. A container is meant to be a small, stateless environment for running a single piece of software.

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. — Official Docker website

Avoiding all the fancy words, it’s just a tiny virtual machine with the bare-bones features needed to run the application you put in it. Okay, but what’s a virtual machine?

Virtual machine?

A virtual machine (VM) is literally what the name says. A virtual version of a real machine. It simulates the hardware of a machine inside of a larger machine. Meaning, you can run many virtual machines on one larger server. Have you ever seen the movie Inception? Yeah, well somewhat like that. What enables the VMs to work is a cool piece of software called a Hypervisor.

Hypervisor?

I’m killing you with these terms. But, bear with me, it’s all for a reason. Virtual machines only work because of the Hypervisor. It’s a special software that enables a physical machine to host several different virtual machines. All of these VMs can run their own programs and will appear to be using the host’s hardware. However, it’s actually the Hypervisor that’s allocating resources to the VM.

Note: If you’ve ever tried installing software such as VirtualBox, only to have it fail miserably, it was most likely because hardware virtualization (Intel VT-x or AMD-V) wasn’t enabled in your computer’s BIOS. This has happened to me more times than I can remember. *nervous laugh*

If you’re a nerd like me, there are plenty of awesome write-ups on what hypervisors are.

Answering my own questions…

Why do we really need CaaS? We’ve been using virtual machines for so long, how come containers are so good all of a sudden? Well, nobody said virtual machines are bad, they’re just hard to manage.

DevOps is generally hard, and you usually need a dedicated person doing that work all the time. Virtual machines take up a lot of storage and RAM, and they are time-consuming to set up. Not to mention you need a fair share of experience to manage them the right way.

Instead of doing it twice, automate it

With Docker you can abstract away all the time-consuming configuration and environment setup, and focus on the coding instead. With Docker Hub, you can grab pre-built images and get up and running in a fraction of the time it would take with a regular VM.

But, the biggest advantage is creating a homogeneous environment. Instead of having to install a list of different dependencies to run your application, now you only need to install one thing, Docker. With it being cross platform, every single developer in your team will be working in the exact same environment. The same applies to your development, staging and production servers. Now, this is cool. No more “it works on my machine.”

Quick Start

Let’s get crackin’ with the installation. It’s awesome that you can have just one piece of software installed on your development machine, and still be sure everything will work just fine. Docker is, quite literally, all you need.

Installing Docker

Luckily the installation process is very easy. Let me show you how you do it on Ubuntu.

$ sudo apt-get update

$ sudo apt-get install -y docker.io

That’s all you need. To make sure it’s running you can run another command.

$ sudo systemctl status docker

It should print output like this.

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2018-01-14 12:42:17 CET; 4h 46min ago
     Docs: https://docs.docker.com
 Main PID: 2156 (dockerd)
    Tasks: 26
   Memory: 63.0M
      CPU: 1min 57.541s
   CGroup: /system.slice/docker.service
           ├─2156 /usr/bin/dockerd -H fd://
           └─2204 docker-containerd --config /var/run/docker/containerd/containerd.toml

If the system service is stopped, you can run a combo of two commands to spin it up and make sure it starts on boot.

$ sudo systemctl start docker && sudo systemctl enable docker

That’s it, you’re ready to go.

With the basic installation of Docker you’ll need to run the docker command as sudo. However, you can add your user to the docker group, and you’ll be able to run the command without sudo.

$ sudo usermod -aG docker ${USER}

$ su - ${USER}

Running these commands will add your user to the docker group. To verify this, run $ id -nG — if docker shows up in the list of groups, rest assured you did everything right.
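As a quick sanity check, here’s a small sketch (assuming a POSIX shell) that greps your group list for docker:

```shell
# List the current user's groups and look for "docker".
# Remember: the group change only applies to new login sessions,
# which is why the su - ${USER} step above is needed.
if id -nG | grep -qw docker; then
    echo "user is in the docker group"
else
    echo "user is NOT in the docker group yet"
fi
```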

But, what about Mac and Windows? Luckily the installation is just as easy. You download a simple file that starts an installation wizard. Doesn’t get any easier than that. Check out the official installation guides for Mac and for Windows.

Spin up a container

With Docker installed and running, we can go ahead and play around for a bit. The four first commands you need to get up and running with Docker are:

- create — Creates a container from an image.

- ps — Lists running containers; add the optional -a flag to list all containers.

- start — Starts a created container.

- attach — Attaches the terminal’s standard input and output to a running container, literally connecting you to the container as you would to any virtual machine.

Let’s start small. We’ll grab an Ubuntu image from the Docker Hub and create a container from that.

$ docker create -it ubuntu:16.04 bash

We’re adding -it as an option to give the container an integrated terminal, so we can connect to it, while also telling it to run the bash command, so we get a proper terminal interface. By specifying ubuntu:16.04 we pull the Ubuntu image, with the version tag of 16.04, from the Docker Hub.

Once you’ve run the create command go ahead and verify the container was created.

$ docker ps -a

The list should look somewhat like this.

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

7643dba89904 ubuntu:16.04 "bash" X min ago Created name

Awesome, the container is created and ready to be started. Running the container is as simple as just giving the start command the ID of the container.

$ docker start 7643dba89904

Once again check if the container is running, but now without the -a flag.

$ docker ps

If it is, go ahead and attach to it.

$ docker attach 7643dba89904

Did you see that? The cursor changes. Why? Because you just entered the container. How cool is that. You can now run any bash command you’re used to in Ubuntu, just as if it was an instance running in the cloud. Go ahead and try one.

$ ls

It’ll work just fine, and list all directories. Heck, even $ ll will work. This simple little Docker container is all you need. It’s your own little virtual playground, where you can do development, testing or whatever you want! There’s no need to use VMs or heavy software. To prove my point, go ahead and install whatever you like in this little container. Installing Node will work fine, be my guest and try it out.

Or, if you want to exit the container, all you need to do is literally type exit . The container will stop, and you can list it again by typing $ docker ps -a .

Note: Every Docker container runs as the root user by default, so the sudo command isn’t needed (and usually isn’t even installed). Every command you run automatically has root privileges.
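As a sketch of what that looks like in practice, here’s one way (among several) to install Node inside the attached Ubuntu container, using Ubuntu’s own package repositories — note that the 16.04 package installs the binary as nodejs, not node:

```shell
# Inside the attached container -- you're already root, so no sudo.
apt-get update
apt-get install -y nodejs

# Ubuntu 16.04 ships the interpreter as "nodejs" rather than "node".
nodejs --version
```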

Real Life Scenario

Time to get into some real stuff. This is what you’ll be using in real life for your own projects and production applications.

Containers are stateless?

I mentioned above that every container is isolated and stateless, meaning once you delete a container, the contents will be deleted forever.

$ docker rm 7643dba89904

Okay, this is a problem right? How do you persist data in such a case?

Now’s when shit gets real. Have you ever heard of volumes? Let me tell you. Volumes let you map directories on your host machine to directories inside of the container. Here’s how.

$ docker create -it -v $(pwd):/var/www ubuntu:latest bash

While creating a new container, add the -v flag to specify which volume to create. This command will bind the current working directory on your machine to the /var/www directory inside of the container.

Once you start the container with the $ docker start <container_id> command you’ll be able to edit the code on the host machine and see the changes immediately in the container. Giving you the ability to persist data for various use cases, from keeping images to storing database files, and of course for development purposes where you need live reload capabilities.
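To convince yourself the mapping works, try a round trip through the shared directory — a sketch assuming the -v $(pwd):/var/www mapping from the create command above:

```shell
# On the host, in the directory you mounted:
echo "hello from the host" > hello.txt

# Then, inside the container (after docker attach <container_id>):
cat /var/www/hello.txt
# The change shows up instantly -- no restart, no copying.
```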

Note: Let me tell you a secret. You can also run the create and start commands in one with the run command.

$ docker run -it -d ubuntu:16.04 bash

The only addition is the -d flag which tells the container to run detached, in the background, meaning you can go ahead and attach to it right away.
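This means the create/start pair from the volumes example above collapses into a single command — a sketch, reusing the same /var/www mapping:

```shell
# Create and start a detached container with a mounted volume in one go.
# docker run prints the new container's ID on stdout.
docker run -it -d -v "$(pwd)":/var/www ubuntu:16.04 bash
```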

Why am I talking about volumes this much?

Indulge me for a bit longer. Let me show you why. We can create a simple nginx web server for hosting a static website in a couple of simple steps.

Create a new directory, name it whatever you like, I’ll name mine myapp for convenience. All you need is to create a simple index.html file in the myapp directory, and paste this in.

<!-- index.html -->
<html>
<head>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet" integrity="sha256-MfvZlkHCEqatNoGiOXveE8FIwMzZg4W85qfrfIFBfYc= sha512-dTfge/zgoMYpP7QbHy4gWMEGsbsdZeCXz7irItjcC3sPUFtf0kuFbDz/ixG7ArTxmDjLXDmezHubeNikyKGVyQ==" crossorigin="anonymous">
<title>Docker Quick Start</title>
</head>
<body>
<div class="container">
<h1>Hello Docker</h1>
<p>This means the nginx server is working.</p>
</div>
</body>
</html>

We have a generic web page, with some heading text. What’s left is to run an nginx container.

$ docker run --name webserver -v $(pwd):/usr/share/nginx/html -d -p 8080:80 nginx

Here you can see we’re grabbing an nginx image from Docker Hub, giving us an instantly configured nginx server. The volume configuration is similar to what we did above, only this time we pointed it at the default directory where nginx hosts HTML files. What’s new is the --name option, which we set to webserver , and the -p 8080:80 option, which maps the container’s port 80 to port 8080 on the host machine. Of course, don’t forget to run the command while in the myapp directory.
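If everything went well, the page is now served on the host’s port 8080. A quick check from the host (assuming curl is installed):

```shell
# Fetch the page through the mapped port; the response should
# contain the "Hello Docker" heading from index.html.
curl http://localhost:8080

# Or just check the HTTP status code, expecting 200:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
```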