How can you take a pragmatic approach to Docker and look beyond the hype?

Docker has been getting a lot of hype recently, and it’s easy to understand why. Shipping code is challenging. Container technology has traditionally been messy with lots of requirements and templates involved. Docker gives you a simple way to create containers in a repeatable manner. It’s generally faster, more comfortable, and easier to understand than other container and code shipping methods out there. Hence the hype! But with hype comes misunderstandings and misconceptions. As always, don’t believe the hype. Taking a pragmatic look at Docker will help you understand if it’s right for your needs.

In this post, we cover five common Docker misconceptions, plus the Java angle on Docker. But first, a bit of background. To learn more, we looked for someone with deep Docker experience and set up a chat with Avishai Ish-Shalom from Fewbytes, who is also an organizer of the DevOps Days conference. His insights and hands-on experience helped us build this list of misconceptions.

The Main Misconceptions

1: Docker is a lightweight VM

This is a primary misconception for people looking for a high-level grasp of Docker, and it's an understandable one: Docker does look a bit like a VM. The Docker website even compares the two. However, Docker is better understood as an evolution of Linux containers (LXC) than as a lightweight VM. They are different animals, and treating Docker containers like lightweight VMs will lead to problems.

There are several crucial differences between Docker containers and VMs that must be understood before you use Docker.

Resource Isolation: Docker does not provide the same level of resource isolation that VMs do. VMs isolate resources strongly by design, while Docker has several kinds of shared resources that it can't isolate and protect, such as page caches and entropy pools. (Just a side note: entropy pools are really interesting. If you're unfamiliar, an entropy pool collects and stores random bits generated by system activity; the kernel draws on it whenever randomness is needed, such as for cryptography.) Should a Docker container use up a shared resource, other processes will have to wait until it's freed and refilled.
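As a quick illustration of the shared-resource point, the kernel exposes the current entropy pool level through procfs. A minimal sketch, assuming a Linux host (every container on that host draws from this same pool):

```shell
# Read the kernel's available entropy; containers on this host all
# share this one pool, which is exactly the isolation gap described above.
ENTROPY=$(cat /proc/sys/kernel/random/entropy_avail)
echo "available entropy: $ENTROPY bits"
```

Watching this number while cryptography-heavy containers run is a simple way to see the contention in action.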

Overhead: Most people know that VMs provide near-native performance for CPU and RAM but come with a lot of IO overhead. By forgoing the guest OS that VMs carry, Docker makes for a smaller package with less storage overhead. This doesn't mean that Docker is immune to overhead problems, however: there is still enough IO overhead with Docker containers to require your attention, albeit not as much as with VMs.

Kernel Usage: Docker containers and VMs use kernels very differently. Each VM runs its own kernel, while all Docker containers on a host share one kernel. Sharing a kernel creates some efficiencies, but at the cost of reliability and redundancy: a kernel panic in a VM kills only that VM, whereas a kernel panic on a Docker host takes down all of your containers.

2: Docker makes your app scalable

Because Docker makes it easy to deploy your code to many servers in a short time frame, it's a small jump to assume that Docker will make your application itself scalable. Unfortunately for all of us, this isn't the case. Scalability is a property of your code, and Docker doesn't rewrite your code. It's still up to you, like it's always been. Using Docker won't make your code more scalable automatically, just easier to deploy across servers.

3: Docker is widely used in production

As a result of all the hype and conversation, many people assume that Docker must be widely deployed in production. In reality, that’s not really the case. Some people forget how new Docker is! It’s still young and growing, meaning that there are annoying bugs and missing features throughout. There’s nothing wrong with being excited about the technology, but you would be best served to understand the right use cases and tradeoffs involved with it. Today, Docker is very easy to adopt for development. Docker makes it simple to set up many different environments (or at least, it gives you the feeling of setting up different environments), which is an excellent benefit for development.

On the production side, many of the growing pains of a young technology put too large a damper on widespread use. For example, Docker lacks straightforward support for monitoring and multi-machine networking, which limits its usefulness in production today. There is plenty of potential, however. Shipping the same package from development to production is a huge benefit, and several of Docker's runtime features have value in production environments. But overall, the limitations outweigh the gains in many production use cases today. That doesn't mean you can't use it successfully in production, just that you can't expect it to be fully mature and complete today.

4: Docker is OS Agnostic

Another misconception is that Docker works universally across any and all operating systems and environments. It makes sense if you think about the metaphor Docker uses of physical shipping containers, but software and operating systems aren’t as straightforward as ocean ports.

In reality, Docker is a Linux-only technology. And since certain elements of Docker rely on particular kernel features, you'll want to make sure you're running a recent kernel version too. Because of the differences between operating systems, using features that aren't in the lowest common denominator across them can get you into trouble. Some of these issues may arise in only 1% of cases, but as you scale up, 1% becomes significant.
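Checking your kernel before adopting Docker is a one-liner. A sketch (the 3.8+ minimum mentioned in the comment is an assumption based on common recommendations at the time; check the Docker docs for your version):

```shell
# Print the running kernel version; Docker has historically required a
# reasonably recent kernel (3.8+ was a common recommendation -- an
# assumption here, verify against the docs for your Docker release).
KERNEL=$(uname -r)
echo "running kernel: $KERNEL"
```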

While Docker only runs natively on Linux, there are ways to use it on OS X and Windows. boot2docker runs a Linux VM locally on your OS X or Windows machine and takes care of most of the Docker setup.

5: Docker makes your application more secure

There’s a misconception that using Docker will improve the security of your code and your shipments. Perhaps the gap between physical containers and software containers comes into play here again. Docker wraps containerization in tooling and orchestration, but Linux containers themselves have several known security vulnerabilities that can be attacked, and Docker does not add security layers or patches on top of them. There’s no giant locked metal box surrounding your application.

The Java Angle

Docker has been getting a lot of play among Java developers. There are elements of Docker that make it easy to build environments that can scale. Unlike an uber-jar, with Docker you can really pack ALL of your dependencies (including the JVM!) together in one ready-to-ship image. This is a great benefit that can make Docker worth kicking the tires on. However, it comes with some downsides. Generally, you’re going to want to interact with your code in different ways: monitor it, debug it, connect to it, tune it. All of these present extra hoops or challenges when working with Docker.
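A minimal sketch of what that packaging can look like. The base image tag, jar name, and paths below are illustrative assumptions, not from the post, and we only print the build command since no Docker daemon is assumed here:

```shell
# Write an illustrative Dockerfile that ships the JVM together with the
# application jar in one image; names and tags are assumptions.
cat > Dockerfile.example <<'EOF'
# Base image that already contains a JDK
FROM java:8
# Copy the application jar into the image
COPY app.jar /opt/app/app.jar
# Run the application with the bundled JVM
CMD ["java", "-jar", "/opt/app/app.jar"]
EOF
# Building the ready-to-ship image would then be one step:
echo "docker build -t myorg/myapp ."
```

The resulting image carries the JVM and every dependency with it, which is exactly the "everything in one package" benefit described above.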

For example, say we want to use jconsole, which relies on JMX features that require networking, since they use RMI. We won’t be able to use it in a straightforward way with Docker and will need some tricks to open the appropriate ports. What initially piqued our interest here was that when we were building OverOps’s Docker installer, we had to find a way to run a daemon process alongside the JVM in the container. You can check out the solution we assembled on GitHub.
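A rough sketch of the kind of trick involved: pin the JMX port (and its RMI counterpart) to a known value, then publish that port from the container. The port number and image name are assumptions, and the `docker run` command is printed rather than executed, since this sketch assumes no Docker daemon:

```shell
# Pin JMX and its RMI registry to one known port so it can be published
# from the container; 9010 and "myimage" are illustrative assumptions.
JMX_PORT=9010
JAVA_FLAGS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false"
# Print the invocation that publishes the JMX port to the host:
echo "docker run -p ${JMX_PORT}:${JMX_PORT} myimage java ${JAVA_FLAGS} -jar app.jar"
```

jconsole would then attach to `dockerhost:9010`; in practice you may also need `-Djava.rmi.server.hostname` set to the Docker host's address so the RMI stubs resolve correctly.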

Another critical issue is that performance tuning becomes even more difficult with Docker containers. When you use containers, you don’t really know how much memory will be allocated to each of them. If you have, say, 20 containers, memory is allocated across them based on factors you may not be aware of. This presents a challenge when attempting to tune your heap size with flags like -Xmx, since communicating with JVMs inside a Docker container requires some automation for understanding how much memory has been allocated to you. Not knowing how much memory you have makes fine tuning performance extremely tricky.
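One hedged sketch of the automation this requires: read the container's own memory limit from the cgroup filesystem and derive `-Xmx` from it. The cgroup v1 path and the 75% heap fraction are assumptions, and the fallback value is only for illustration:

```shell
# Derive a container-aware heap size from the cgroup memory limit.
LIMIT_FILE=/sys/fs/cgroup/memory/memory.limit_in_bytes   # cgroup v1 path
if [ -r "$LIMIT_FILE" ]; then
    LIMIT_BYTES=$(cat "$LIMIT_FILE")
else
    LIMIT_BYTES=536870912   # illustrative fallback: 512 MB
fi
# Give the heap roughly 75% of the limit, leaving headroom for the JVM's
# own overhead (divide into MB first to avoid 64-bit overflow).
HEAP_MB=$((LIMIT_BYTES / 1048576 * 3 / 4))
echo "java -Xmx${HEAP_MB}m -jar app.jar"
```

Something along these lines has to run inside the container at startup, since the right `-Xmx` can't be known until the container's actual allocation is.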

Conclusion

Docker is a very interesting technology, with several real and valid use cases. And as a young technology, there is still plenty of time and potential for addressing its missing features and bugs. But boy, is there a lot of hype right now. Hype does not automatically equal success, of course (Betamax says hi), but nor does it automatically signify failure. Using Docker well is a matter of understanding what it actually is and what you can do with it; disillusionment comes from treating it like magic. We hope clarifying some of these misconceptions gives you a better picture of Docker if you decide to take it out for a spin. If you’ve already been playing around with it, have you noticed any other misconceptions that we didn’t mention above? Let us know in the comments.

Many thanks to Avishai Ish-Shalom for agreeing to talk with us for this post.