Networking with Docker Containers

As you build a distributed application, the services it is composed of will need to communicate with each other. These services, running in containers, might live on a single host, on multiple hosts, or even across data centers, which makes container networking a critical part of any Docker-based distributed application.

Networking is an important feature of Docker: it allows users to define their own networks and connect containers to them. Using the docker network command, you can create a network on a single host or one that spans multiple hosts.

In this article, we will cover some basic and advanced networking tools you can use to manage Docker containers.

Requirements

An Ubuntu 14.04 server with Docker installed

A non-root user account with sudo privilege set up on your server

Docker Default Network

For each container, Docker creates a pair of virtual Ethernet interfaces and assigns the container an IP address from a private subnet not already in use on the host system. When you install Docker, it creates three networks automatically.

You can list these networks by simply running the following command:

sudo docker network ls

You should see the following output:

NETWORK ID          NAME                DRIVER
869479a5b4ce        bridge              bridge
82f7b88ba977        none                null
f5c76b57fe4f        host                host

The bridge network represents the docker0 network present in all Docker installations. The none network adds a container to a container-specific network stack. The host network adds a container to the host's network stack.
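To see the details behind any of these entries, you can inspect a network by name. A minimal sketch; the template filter assumes a Docker version whose network inspect output exposes an IPAM.Config field, as modern releases do:

```shell
# Show the bridge network's full configuration: subnet, gateway, and
# any containers currently attached to it
sudo docker network inspect bridge

# Or pull out just the subnet with a Go template (field layout can
# vary between Docker versions)
sudo docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' bridge
```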

Find the Docker Interface

By default, Docker creates a bridge interface named docker0 on the host system when the Docker daemon starts. In this setup, the bridge interface docker0 holds the address 172.17.0.2/16, and its subnet supplies the IP addresses of all running containers.

You can easily find the Docker bridge interface and its IP address by running the following command:

sudo ip a

You should see output like this:

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: wlan0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 4c:bb:58:9c:f5:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.105/24 brd 192.168.0.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::4ebb:58ff:fe9c:f555/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:25:2f:fd:5d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global docker0
       valid_lft forever preferred_lft forever

Basic Container Networking

When Docker starts a container, it creates a virtual interface on the host system with a unique name like vethef766ac, and assigns the container an IP address within the bridge's subnet.

This new interface is connected to the eth0 interface inside the container, and iptables rules allow networking between containers. A NAT rule is used to forward traffic to external hosts, and the host machine must be configured to forward IP packets.

You can see the iptables NAT rules by running the following command on the host system:

sudo iptables -t nat -L

The output looks something like this:

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16       anywhere

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Note the POSTROUTING chain in the output above: it contains a rule that masquerades all traffic originating from 172.17.0.0/16 (the Docker bridge network). This masquerade rule is what allows containers to reach the outside world.
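The masquerade rule only works if the host is actually forwarding IP packets between interfaces. A quick sanity check, sketched below:

```shell
# Containers can only reach external hosts through the MASQUERADE rule
# if the host forwards packets between interfaces; 1 means enabled
sysctl net.ipv4.ip_forward

# If it prints 0, enable forwarding for the current boot
# (add net.ipv4.ip_forward=1 to /etc/sysctl.conf to persist it):
# sudo sysctl -w net.ipv4.ip_forward=1
```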

Creating Your Own Network Bridge

If you want to assign custom static IP addresses to your containers, you will need to create a new bridge interface, br0, on the host system.

To do this, run the following commands on the host machine running Docker.

First, stop the Docker service:

sudo service docker stop

Add the br0 interface:

sudo ip link add br0 type bridge

Assign an address from the range you want to use and bring the br0 interface up:

sudo ip addr add 192.168.1.1/24 dev br0

sudo ip link set br0 up

After creating the bridge, you will need to add a line to the /etc/default/docker file. Open it with:

sudo nano /etc/default/docker

Add the following line at the end of the file:

DOCKER_OPTS="-b=br0"

Save and close the file, then start the Docker service:

sudo service docker start

You should now see the new bridge interface br0 by running the following command on the host machine:

ifconfig

br0       Link encap:Ethernet  HWaddr b2:31:08:29:92:3d
          inet addr:192.168.1.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::b031:8ff:fe29:923d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:64 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:10425 (10.4 KB)

docker0   Link encap:Ethernet  HWaddr 02:42:25:2f:fd:5d
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1941 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1941 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:198438 (198.4 KB)  TX bytes:198438 (198.4 KB)

wlan0     Link encap:Ethernet  HWaddr 4c:bb:58:9c:f5:55
          inet addr:192.168.0.105  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::4ebb:58ff:fe9c:f555/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17817 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18678 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:13667483 (13.6 MB)  TX bytes:2759827 (2.7 MB)

In the output above, br0 holds the address 192.168.1.1, so Docker will now assign containers IP addresses from the bridge's 192.168.1.0/24 subnet.
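You can confirm that new containers really draw their addresses from the br0 range. The sketch below assumes an ubuntu image is available locally; the in_subnet helper is just an illustrative prefix check for 192.168.1.0/24:

```shell
#!/bin/sh
# Illustrative helper: true when an address sits in the 192.168.1.0/24 prefix
in_subnet() {
  case "$1" in
    192.168.1.*) return 0 ;;
    *) return 1 ;;
  esac
}

# Start a throwaway container and read back the address Docker gave it
cid=$(sudo docker run -itd ubuntu)
ip=$(sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$cid")

in_subnet "$ip" && echo "container got $ip from the br0 subnet"
```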

Retrieve a Docker Container's IP Address

You can use the docker inspect command with a container ID to find the IP address of a running container.

To do this, run the following command, which prints detailed information about your container, including its internal IP address:

sudo docker inspect "container ID"

You should see the following output:

"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "eecb33f2612bf4ab3f726e51cb3f6abb763194abbf0c2673abb05bf12f1cce55",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {},
    "SandboxKey": "/var/run/docker/netns/eecb33f2612b",
    "SecondaryIPAddresses": null,
    "SecondaryIPv6Addresses": null,
    "EndpointID": "d144ed794bec7d603e3c5a89ad05fd2560a3e7c476a5976941d1c4d45e9514a5",
    "Gateway": "192.168.1.1",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "192.168.1.2",
    "IPPrefixLen": 24,
    "IPv6Gateway": "",
    "MacAddress": "02:42:c0:a8:01:02",
    "Networks": {
        "bridge": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": null,
            "NetworkID": "848b570ed482f2dad8b7d95482cb1b6f51700ea3b070059454781326ac6a34d2",
            "EndpointID": "d144ed794bec7d603e3c5a89ad05fd2560a3e7c476a5976941d1c4d45e9514a5",
            "Gateway": "192.168.1.1",
            "IPAddress": "192.168.1.2",
            "IPPrefixLen": 24,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "02:42:c0:a8:01:02"

If you want to get the container's IP address value only, run:

sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' "container ID"

You should see the following output:

192.168.1.2
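With the same template you can print the address of every running container in one pass, which is handy once several containers are up. A sketch:

```shell
# Print "name address" for each running container
for cid in $(sudo docker ps -q); do
  name=$(sudo docker inspect -f '{{ .Name }}' "$cid")
  addr=$(sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$cid")
  # Docker stores names with a leading slash; strip it for readability
  echo "${name#/} $addr"
done
```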

Depending on the operating system running inside your container, you can also retrieve its IP address with the ifconfig command:

sudo docker exec -it "container ID" /sbin/ifconfig eth0

The output looks something like this:

eth0      Link encap:Ethernet  HWaddr 02:42:c0:a8:01:02
          inet addr:192.168.1.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:c0ff:fea8:102/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:50 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:7803 (7.8 KB)  TX bytes:648 (648.0 B)

Docker Single Host Networking

You can create a new network with the docker network create command. In this example, we'll create a network called net1 and run an Ubuntu container inside it:

sudo docker network create net1

You can list the networks by running the following command:

sudo docker network ls

You should see the following output:

NETWORK ID          NAME                DRIVER
01fd54caa4cb        bridge              bridge
5afe75f128a9        none                null
36f9ac8fb08a        host                host
cc69e97309e5        net1                bridge

Now you can run an Ubuntu container attached to the net1 network:

sudo docker run -itd --net=net1 ubuntu
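Containers attached to the same user-defined network can reach each other directly, and on user-defined networks Docker resolves container names for you. A sketch that starts two containers on net1 and pings one from the other; the names web1 and web2 are just examples, and the image is assumed to include the ping utility:

```shell
# Start two named containers on the net1 network
sudo docker run -itd --net=net1 --name=web1 ubuntu
sudo docker run -itd --net=net1 --name=web2 ubuntu

# From web1, reach web2 by container name
sudo docker exec web1 ping -c 2 web2
```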

You can disconnect a container from a network and delete the network at any time by running the following commands. Note that docker network disconnect takes both the network name and the container's name or ID:

sudo docker network disconnect net1 "container ID"

sudo docker network rm net1

Docker Multi-Host Networking

In this example, we will create three Docker hosts on VirtualBox using Docker Machine. One host runs Consul, and the other two hosts share network information through the Consul service-discovery container on the first host.

Before creating the Docker machines, you will need to download the docker-machine binary.

To do so, run the following command:

sudo curl -L https://github.com/docker/machine/releases/download/v0.7.0/docker-machine-$(uname -s)-$(uname -m) > /usr/local/bin/docker-machine

sudo chmod +x /usr/local/bin/docker-machine

Now, create a Docker machine named host1:

sudo docker-machine create -d virtualbox host1

You should see the following output:

Creating CA: /root/.docker/machine/certs/ca.pem
Creating client certificate: /root/.docker/machine/certs/cert.pem
Running pre-create checks...
(host1) Image cache directory does not exist, creating it at /root/.docker/machine/cache...
(host1) No default Boot2Docker ISO found locally, downloading the latest release...
(host1) Latest release for github.com/boot2docker/boot2docker is v1.11.1
(host1) Downloading /root/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v1.11.1/boot2docker.iso...
(host1) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(host1) Copying /root/.docker/machine/cache/boot2docker.iso to /root/.docker/machine/machines/host1/boot2docker.iso...
(host1) Creating VirtualBox VM...
(host1) Creating SSH key...
(host1) Starting the VM...
(host1) Check network to re-create if needed...
(host1) Found a new host-only adapter: "vboxnet0"
(host1) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env host1

You can launch a Consul container on host1 using the following docker run command:

sudo docker $(docker-machine config host1) run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap

You should see the following output:

Unable to find image 'progrium/consul:latest' locally
latest: Pulling from progrium/consul
c862d82a67a2: Pull complete
0e7f3c08384e: Pull complete
0e221e32327a: Pull complete
09a952464e47: Pull complete
60a1b927414d: Pull complete
4c9f46b5ccce: Pull complete
417d86672aa4: Pull complete
b0d47ad24447: Pull complete
fd5300bd53f0: Pull complete
a3ed95caeb02: Pull complete
d023b445076e: Pull complete
ba8851f89e33: Pull complete
5d1cefca2a28: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
55bdd87b9551416b2d02b11452960578f9c9c9bb4d0afb74ec1957ca1d210d2e

You can verify the status of the running container using the following command:

sudo docker $(docker-machine config host1) ps

You should see the running container in the following output:

CONTAINER ID        IMAGE               COMMAND                  CREATED                  STATUS              PORTS                                                                            NAMES
55bdd87b9551        progrium/consul     "/bin/start -server -"   Less than a second ago   Up 2 minutes        53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp   cocky_franklin
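Apart from docker ps, you can check that Consul itself is answering on port 8500 through its HTTP API; the /v1/status/leader endpoint returns the address of the current leader. A sketch:

```shell
# A non-empty reply means the Consul server is up and has elected a leader
curl -s "http://$(docker-machine ip host1):8500/v1/status/leader"
```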

Now, launch the second Docker machine with engine options that register it with the Consul service running on host1.

To do so, run the following command:

sudo docker-machine create -d virtualbox --engine-opt="cluster-store=consul://$(docker-machine ip host1):8500" --engine-opt="cluster-advertise=eth1:0" host2

You should see the following output:

Running pre-create checks...
Creating machine...
(host2) Copying /root/.docker/machine/cache/boot2docker.iso to /root/.docker/machine/machines/host2/boot2docker.iso...
(host2) Creating VirtualBox VM...
(host2) Creating SSH key...
(host2) Starting the VM...
(host2) Check network to re-create if needed...
(host2) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env host2

Next, launch the third Docker machine by running the following command:

sudo docker-machine create -d virtualbox --engine-opt="cluster-store=consul://$(docker-machine ip host1):8500" --engine-opt="cluster-advertise=eth1:0" host3

At this point, the two new hosts only have the default networks, which can be used for single-host communication only.

To get a multi-host network, you need to create an overlay network from host2. You can do this by running the following command:

sudo docker $(docker-machine config host2) network create -d overlay myapp

Now, if you check the networks on host3, you will also see the overlay network we created on host2: both hosts are registered with Consul, and network information is shared among all the hosts registered with it.

To check the network on host2 and host3, run the following command:

sudo docker $(docker-machine config host2) network ls

The output looks like the following:

NETWORK ID          NAME                DRIVER
5740258f612b        myapp               overlay
870643d6c7d5        bridge              bridge
3766f811e564        none                null
6703f168bf19        host                host

sudo docker $(docker-machine config host3) network ls

The output looks like the following:

NETWORK ID          NAME                DRIVER
5740258f612b        myapp               overlay
0edc1acf9412        bridge              bridge
2c582786337d        none                null
44306c99cd11        host                host

Now, if you launch containers on different hosts, you will be able to connect to them by container name. Let's test it by launching an Nginx container on host2 and fetching the default Nginx page from host3 using a busybox container.

You can launch an Nginx container on host2, attached to the myapp network we created:

sudo docker $(docker-machine config host2) run -itd --name=webfront --net=myapp nginx

You should see the following output:

Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
8b87079b7a06: Pull complete
a3ed95caeb02: Pull complete
31c7abf879e0: Pull complete
4ef177b369db: Pull complete
Digest: sha256:46a1b05e9ded54272e11b06e13727371a65e2ef8a87f9fb447c64e0607b90340
Status: Downloaded newer image for nginx:latest
0367c64912557d6cdc5449fc068fa9696a474cacf70eb7702c005cbdeceec70e

Verify the running container with the following command:

sudo docker $(docker-machine config host2) ps

The output looks like the following:

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
0367c6491255        nginx               "nginx -g 'daemon off"   12 minutes ago      Up 15 minutes       80/tcp, 443/tcp     webfront
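You can also confirm the attachment from the network's side: inspecting myapp on host2 lists the containers joined to the overlay. A sketch:

```shell
# The Containers section of the output should list webfront
sudo docker $(docker-machine config host2) network inspect myapp
```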

Now, launch a busybox container on host3 that downloads the home page of the Nginx server running on host2:

sudo docker $(docker-machine config host3) run -it --rm --net=myapp busybox wget -qO- http://webfront

You should see the following output:

Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
385e281300cc: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:4a887a2326ec9e0fa90cce7b4764b0e627b5d6afcb81a3f73c85dc29cea00048
Status: Downloaded newer image for busybox:latest
<title>Welcome to nginx!</title>
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br /> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>

Congratulations! The command returns HTML, which means containers on different hosts can reach each other over the overlay network you created.
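When you are finished experimenting, you can tear the whole setup down. A sketch; docker-machine rm deletes the underlying VirtualBox VMs, so only run it when you no longer need them:

```shell
# Remove the test container and the overlay network
sudo docker $(docker-machine config host2) rm -f webfront
sudo docker $(docker-machine config host2) network rm myapp

# Delete the three VirtualBox machines (-y skips the confirmation prompt)
sudo docker-machine rm -y host1 host2 host3
```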