We’ve talked about docker in a few of my more recent posts, but we haven’t really tackled how docker does networking. We know that docker can expose container services through port mapping, but that brings some interesting challenges along with it.

As with anything related to networking, our first challenge is to understand the basics — in particular, what our connectivity options are for the devices we want to connect to the network (in this case, docker containers). So the goal of this post is to review docker’s networking defaults. Once we know what our host connectivity options are, we can move quickly into more advanced container networking.

So let’s start with the basics. In this post, I’m going to be working with two docker hosts, docker1 and docker2. They sit on the network like this…



So nothing too complicated here. Two basic hosts with a very basic network configuration. Let’s assume that you’ve installed docker and are running with a default configuration. If you need instructions for the install see this. At this point, all I’ve done is configure a static IP on each host, configure a DNS server, run a ‘yum update’, and install docker.

Default Inter-host network communication

So let’s go ahead and start a container on each docker host and see what we get. We’ll run a copy of the busybox container on each host with this command…

docker run -it --rm busybox

Since we don’t have that container image locally, docker will go and pull busybox for us. Once it’s running, it drops us right into the container’s shell. Now let’s see what IP addresses the containers are using…
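If you want to check for yourself, busybox ships the usual tools. A quick sketch (the exact address will vary, but a default install typically hands out the same first address on each host):

```shell
# Run inside each busybox container
ip addr show eth0        # or: ifconfig eth0
# On a default install, both containers will typically report
# an address in 172.17.0.0/16, e.g. 172.17.0.2
```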

So as you can see, the containers on both hosts are using the same IP address. This highlights an important aspect of docker networking: by default, all container networks are hidden from the real network. If we exit our containers and examine the iptables rules on each host, we’ll see the following…
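A sketch of that inspection (run as root; the exact chain layout and counters vary by docker version):

```shell
# On each docker host, dump the NAT table's POSTROUTING chain
iptables -t nat -L POSTROUTING -n
# Expect a masquerade rule covering the container subnet, roughly:
# MASQUERADE  all  --  172.17.0.0/16  !172.17.0.0/16
```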



There’s a masquerade (hide NAT) rule for all container traffic. This allows all of the containers to talk to the outside world (AKA the real network) but doesn’t allow the outside world to talk back to the containers. As mentioned earlier, this can be worked around by mapping container ports to ports on the host’s network interface. For example, we can map port 8080 on the host to port 80 on the busybox container with the following run command…

docker run -it --rm -p 8080:80 busybox

If we run that command, we can see that iptables creates an associated NAT rule that forwards traffic destined for 8080 on the host (10.20.30.100 in this case) to port 80 on the container…
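A sketch of what that rule looks like (the container IP shown is just an example, and the chain layout varies by docker version):

```shell
# With the port mapping in place, dump the full NAT table
iptables -t nat -L -n
# Expect a DNAT rule roughly like:
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:172.17.0.2:80
```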



So in the default model, if the busybox container on docker1 wants to talk to the busybox container on docker2, it can only do so through a port exposed on the host’s network interface. With that in mind, our network diagram in this scenario would look like this…



We have three very distinct network zones in this diagram. We have the physical network of 10.20.30.0/24 as well as the networks created by each docker host for container networking. These networks live off of the docker0 bridge interface on each host. By default, this network will always be 172.17.0.0/16, and the docker0 bridge interface itself will always have an IP address of 172.17.42.1. We can see this by checking out the interfaces on the physical docker host…
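As a sketch, on docker1 (docker2 looks the same apart from its eth0 address):

```shell
# On the docker host itself
ip addr show
# Expect something like:
#   lo       127.0.0.1/8
#   eth0     10.20.30.100/24   (the host's address on the physical network)
#   docker0  172.17.42.1/16    (the bridge the containers hang off of)
```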



Here we can see the docker0 bridge, the eth0 interface on the 10.20.30.0/24 network, and the local loopback interface.

Default Intra-host network communication

This scenario is slightly more straightforward. Any containers living on the same docker0 bridge can talk to each other directly via IP. There’s no need for NAT or any other network tricks to allow this communication to occur. There is, however, one interesting docker enhancement that applies to this scenario. Container linking often gets grouped under the ‘networking’ chapter of docker, but it doesn’t really have anything to do with networking at all. Let’s take this scenario for example…



Let’s assume that we want to run both busybox instances on the same docker host. Let’s also assume that these two containers will need to talk to each other in order to deliver a service. I’m not sure if you’ve picked up on this yet, but docker isn’t really designed to provide specific IP address information to a given container (note: there are some ways this can be done but none of them are native to docker that I know of). That leaves us with dynamically allocated IP address space. However, that makes consuming services on one container from another container rather hard to do. This is where container linking comes into play.

To link a container to another container, you simply pass the ‘--link’ flag to the docker run command. For instance, let’s start a busybox container called busybox1…

docker run --name busybox1 -p 8080:8080 -it busybox

This will start a container named busybox1 running the busybox image and mapping port 8080 on the host to port 8080 on the container. Nothing too new here besides the fact that we’re naming the container. Now, let’s start a second container and call it busybox2…

docker run --name busybox2 --link busybox1:busybox1 -it busybox

Note that I included the ‘--link’ flag. This is what tells the container busybox2 to ‘link’ to busybox1. Now that busybox2 is up, let’s look at what’s going on in the container. First off, we notice that we can ping busybox1 by name…

If we look at the ‘/etc/hosts’ file, we see that docker created a host entry on busybox2 with the correct IP address for the busybox1 container. Pretty slick, huh? Also, if we check the container’s ENV variables, we see some interesting items there as well…
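A sketch of what that looks like from inside busybox2 (the IP is just an example):

```shell
# Inside busybox2
ping -c 1 busybox1   # resolves via the entry docker wrote for us
cat /etc/hosts
# Expect a line like:
# 172.17.0.2  busybox1
```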



Interesting — so I now have a bunch of ENV variables that tell me about the port mapping configuration on busybox1. Not only is this pretty awesome, it’s pretty critical when we start thinking about abstracting services in containers. Without this kind of info it would be hard to write applications that could make full use of docker and its dynamic nature. There’s a whole lot more info on linking and what ENV variables get created over here at the docker website.
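A sketch of the sort of variables linking injects (the names follow the link alias; the values here are illustrative):

```shell
# Inside busybox2
env | grep BUSYBOX1
# Typical link variables for the alias 'busybox1':
# BUSYBOX1_NAME=/busybox2/busybox1
# BUSYBOX1_PORT=tcp://172.17.0.2:8080
# BUSYBOX1_PORT_8080_TCP_ADDR=172.17.0.2
# BUSYBOX1_PORT_8080_TCP_PORT=8080
# BUSYBOX1_PORT_8080_TCP_PROTO=tcp
```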

Last but not least, I want to talk about the ‘ICC’ flag. I mentioned above that if any two containers live on the same docker host, they can by default talk to one another. This is true, but only because the default setting for the ICC flag is also true. ICC stands for Inter-Container Communication. If we set this flag to false, then ONLY containers that are linked can talk to one another, and even then ONLY on the ports the container exposes. Let’s look at a quick example…

Note: Changing the ICC flag on a host is done differently based on the OS you’re running docker on. In my case, I’m running CentOS 6.5.

First off, let’s set the ICC flag to false on the host docker1. To do this, edit the file /etc/sysconfig/docker. It should look like this when you first open it…



Change the ‘other_args’ line to read like this…
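On CentOS 6 the change amounts to adding --icc=false to the daemon’s startup options, so the edited line should read roughly like:

```shell
# /etc/sysconfig/docker
other_args="--icc=false"
```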



Save the file and restart the docker service. In my case, this is done with the command ‘service docker restart’. Once the service comes back up ICC should be disabled and we can continue with the test.

On host docker1 I’m going to download an apache container and run it exposing port 80 on the container to port 8080 on the host…



Here we can see that I started a container called web, mapped the ports, and ran it as a daemon. Now, let’s start a second container called DB and link it to the web container…
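The two run commands would look roughly like this (httpd is just one example of an apache image, and the container names are illustrative):

```shell
# Start the web container as a daemon, publishing container port 80
# on host port 8080
docker run -d --name web -p 8080:80 httpd

# Start the db container and link it to web
docker run -it --name db --link web:web busybox
```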



Notice that the DB container can’t ping the web container. Let’s try and access the web container on the port it is exposing…
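A sketch of the test from inside the db container, assuming the setup above:

```shell
# Inside the db container, with icc=false on the host:
ping -c 1 web           # fails -- ICMP between containers is blocked
wget -qO- http://web/   # succeeds -- port 80 is exposed by the web container
```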



Aha! So it’s working as expected. I can only access the linked container on its exposed ports. Like I mentioned, though, container linking has very little to do with networking. ICC is as close as we get to actual network policy, and even that doesn’t have much to do with what actually happens when we link containers.

In my upcoming posts I’m going to cover some of the non-default options for docker networking.