This post consolidates all our LXC networking guides and also explores some advanced container networking that has limited use but is interesting nonetheless, hence the Flockport Labs moniker. Experimental containers will now be posted under this label in our container section.

We previously looked at basic LXC container networking: bridging, NAT, static IPs, public IPs etc, and then at connecting LXC containers across hosts with GRE tunnels or secure Tinc or IPsec VPNs.

We also covered basic failover and load balancing with Keepalived and Nginx, and with LVS. These networking guides apply to LXC and to VM networking in Linux in general, with KVM or Xen for instance.

This would be a good time to brush up. This guide explores a few advanced LXC networking possibilities that depend on a fair understanding of LXC and VM networking.

We will cover extending layer 2 across remote LXC hosts with L2TPv3 or Ethernet over GRE in Part I. In Part II we will use LXC's support for multiple network interfaces to explore using a container as a router, and touch on using VMs of software routers like Vyatta, VyOS or pfSense to route your container or VM networks.

Jump directly to Extending Layer 2 across LXC hosts if you are up to date on LXC networking.

LXC Networking Refresher

The default LXC installation creates what is known as a NAT bridge. This is a standalone software bridge created on the host (a software bridge is like a switch, and is basic functionality provided by the Linux kernel).

Your containers or VMs connect to this bridge and get IPs in a private subnet. The routing is done by some iptables rules.

The default lxcbr0 is this kind of bridge. Bridging, DHCP and basic routing are configured by the lxc-net script. The virbr0 bridge used by virt-manager for KVM is similar.

Take a look at the /etc/init.d/lxc-net script (in /etc/init/lxc-net in Ubuntu). Here is what the script does, in short:

1. brctl addbr lxcbr0 ----- adds the bridge
2. ifconfig lxcbr0 10.0.3.1 netmask 255.255.255.0 up ----- gives the bridge an IP and brings it up
3. Starts a dnsmasq instance bound to the lxcbr0 interface with the DHCP range 10.0.3.2-10.0.3.254
4. iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE ----- adds an iptables masquerading rule for lxcbr0 so containers can access the net
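If you ever need to recreate this kind of NAT bridge by hand, the steps above map to a few commands. Here is a minimal sketch using the iproute2 tools; the bridge name natbr0 and the 10.0.4.0/24 subnet are just example values:

# create the bridge, give it an IP and bring it up (run as root)
ip link add natbr0 type bridge
ip addr add 10.0.4.1/24 dev natbr0
ip link set natbr0 up

# let the host route traffic for the containers
sysctl -w net.ipv4.ip_forward=1

# serve DHCP leases on the bridge
dnsmasq --interface=natbr0 --bind-interfaces --dhcp-range=10.0.4.2,10.0.4.254

# masquerade container traffic leaving the subnet
iptables -t nat -A POSTROUTING -s 10.0.4.0/24 ! -d 10.0.4.0/24 -j MASQUERADE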

In the default lxcbr0 network, containers are isolated in a private 10.0.3.0/24 subnet within the host and can only be accessed by each other and the host.

To access the containers from beyond the host you would need to use port forwarding, ie forward port 80 of the host to port 80 of the container to, for instance, make a web server in the container available on the network. You can of course forward any number of host ports to various containers, but you cannot forward the same host port to multiple containers.
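Such a port forward is a single iptables DNAT rule on the host. A minimal sketch, assuming the container running the web server has the IP 10.0.3.100 and the host's external interface is eth0:

# forward port 80 arriving on the host to the container
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.3.100:80

# make sure forwarded traffic to the container is allowed
iptables -A FORWARD -p tcp -d 10.0.3.100 --dport 80 -j ACCEPT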

If you are on an internal network you can use basic routing to connect containers across hosts with the ip route utility. A typical command, run on 192.168.1.5, to reach the 10.0.4.0/24 container network on a host with IP 192.168.1.10 would look like this.

ip route add 10.0.4.0/24 via 192.168.1.10
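For the containers on both hosts to reach each other, each host needs a route to the other's container subnet. A minimal sketch, assuming 192.168.1.5 hosts the 10.0.3.0/24 containers and 192.168.1.10 hosts 10.0.4.0/24:

# on 192.168.1.5 ----- reach the containers on the other host
ip route add 10.0.4.0/24 via 192.168.1.10

# on 192.168.1.10 ----- the mirror-image route back
ip route add 10.0.3.0/24 via 192.168.1.5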

To make this kind of routing work you need to ensure container subnets are different across hosts.
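The subnet lxc-net uses can be changed per host. A minimal sketch, assuming a Debian/Ubuntu style /etc/default/lxc-net file, moving the second host to 10.0.4.0/24:

# /etc/default/lxc-net on the second host
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.4.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.4.0/24"
LXC_DHCP_RANGE="10.0.4.2,10.0.4.254"
LXC_DHCP_MAX="253"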

You can set static IPs inside the container using the /etc/network/interfaces file (depending on how your container OS configures networking), or via the dnsmasq instance configured for the lxcbr0 network on the host, by associating specific containers with IPs in the /etc/lxc/dnsmasq file.
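The dnsmasq route is one line per container. A minimal sketch, assuming a container named web1 and that LXC_DHCP_CONFILE in the lxc-net configuration points at /etc/lxc/dnsmasq:

# /etc/lxc/dnsmasq ----- always hand web1 the same lease
dhcp-host=web1,10.0.3.100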

In a NAT type network there is no way to assign a public IP directly to a container. If your host has a public IP you can associate that IP with a container with a 1-1 NAT mapping, or if you have 2 public IPs, use basic IP aliasing and associate the second public IP with a container via NAT mapping.
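A 1-1 NAT mapping is a DNAT and SNAT pair on the host. A minimal sketch, assuming a second public IP 203.0.113.10 aliased onto eth0 and a container at 10.0.3.100:

# alias the spare public IP onto the host's interface
ip addr add 203.0.113.10/32 dev eth0

# inbound: everything sent to the public IP goes to the container
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.3.100

# outbound: the container's traffic leaves as the public IP
iptables -t nat -A POSTROUTING -s 10.0.3.100 -j SNAT --to-source 203.0.113.10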

LXC Host Bridge

That was the NAT bridge. You can also create a different kind of network for LXC containers, in which the containers are on the same network as your host. This is a direct bridge, created by bridging your physical interface, usually eth0, to a bridge, say br0, which containers and VMs then connect to.

If the host is 192.168.1.5 the containers will be in the same 192.168.1.0/24 subnet. This is a flat network that is easier to work with, as there is no NAT layer between the containers and the network. Containers connecting to this interface get their IPs and networking services directly from the router your host is connected to.
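Setting up such a bridge is a few lines of network configuration on the host, plus pointing the container at br0. A minimal sketch for a Debian/Ubuntu host using ifupdown; the interface and container names are examples:

# /etc/network/interfaces ----- move eth0's configuration onto the bridge
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_fd 0

# the container's config (eg /var/lib/lxc/web1/config)
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up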

If this is a public network you can easily associate public IPs with the containers, and they can be directly accessed from the internet. If you similarly bridge eth0 to br0 on the other hosts on this network and connect their containers to their br0 interfaces, all your hosts and containers will be on the same network and thus directly accessible by all containers and hosts.

If you have 2 network interfaces in the host you can bridge eth1 to br1, for instance, to put containers across hosts in their own network via the br1 interface. Static IPs can be configured inside the containers or at the router.
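LXC lets a container have more than one interface, so a container can have a leg on both bridges at once. A minimal sketch for a hypothetical container db1, with one interface on the default lxcbr0 and a second on br1:

# /var/lib/lxc/db1/config ----- first interface on the NAT bridge
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up

# second interface on the br1 bridge
lxc.network.type = veth
lxc.network.link = br1
lxc.network.flags = up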

Container networking in the cloud

For cloud VPS instances you can use either of the above methods, depending on the cloud provider. You can use a private NAT network with port forwarding to access resources in the containers, associate public IPs via NAT mapping, or, if the cloud provider allows you to bridge or gives you private networks, get more creative in building your container network.

At this point it's important to remember that a lot of cloud, VPS and server providers may not support bridging, and most do not support multicast, so services like Keepalived or LVS, or an overlay protocol like VXLAN that uses multicast, may not work in these networks unless they support unicast operation.

Connect containers across several hosts over layer 3

We already showed you how to connect containers across hosts on the same network with a simple routing rule. You can connect containers across several remote hosts with IPsec VPNs, plain GRE tunnels, or the awesome Tinc tool for mesh networks and VPNs. Containers and VMs that you connect across hosts need to be on different subnets.
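A plain GRE tunnel shows the idea in its simplest form. A minimal sketch connecting two hosts with example public IPs 198.51.100.1 and 198.51.100.2, with each side routing the other's container subnet over the tunnel:

# on host A (198.51.100.1, containers on 10.0.3.0/24)
ip tunnel add gre1 mode gre local 198.51.100.1 remote 198.51.100.2 ttl 255
ip addr add 172.16.0.1/30 dev gre1
ip link set gre1 up
ip route add 10.0.4.0/24 via 172.16.0.2

# on host B (198.51.100.2, containers on 10.0.4.0/24) ----- the mirror image
ip tunnel add gre1 mode gre local 198.51.100.2 remote 198.51.100.1 ttl 255
ip addr add 172.16.0.2/30 dev gre1
ip link set gre1 up
ip route add 10.0.3.0/24 via 172.16.0.1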

We have detailed guides on these in our News and Guides section.

You can think of these as overlay networks. But remember that building VPNs across the public internet carries a performance penalty: the encryption of packets, latencies between your hosts and MTU issues. Still, these are tried and tested methods to build resilient networks and offer distributed services.