Setting Up an Overlay Network on Docker Without Swarm

Edit: An updated version of this post can be found here

In this walkthrough, we will create a Docker overlay network across multiple hosts without using Docker Swarm. To create this walkthrough, I referenced the official Docker documentation, as well as the DockerCon EU 2015 Hands-On Labs at https://github.com/docker/dceu_tutorials (if that repository is unavailable, try https://github.com/nigelpoulton/dceu_tutorials)

Prerequisite

You should at least know the overall architecture of Docker. If you are not yet familiar with it, you can read about it here:

Part 1: Understanding Docker Overlay Network

To understand Docker’s overlay network, here are two links for background reading

  • Docker Documentation: Understanding Docker Networks (https://docs.docker.com/engine/userguide/networking/)

  • Docker Documentation: Get Started with Multi-Host Networking (https://docs.docker.com/engine/userguide/networking/get-started-overlay/)

Part 2: Setting Up Hosts

In this part, we will set up the VMs that represent the different Docker Hosts.

For this walkthrough, I created four 64-bit Lubuntu 15.04 Virtual Machines (VMs) on VirtualBox, each with a Bridged Adapter attached to the host's network card. The VMs will be referred to as Docker Hosts 1 to 4 from this point forward.

Docker Host 1 will run the Key-Value store, while the others will connect to the overlay network as regular Docker Hosts.

On each Docker Host, I installed Docker Engine by following Docker’s official documentation. The current version I am using is 1.11.1. You can check your Docker Engine version by typing the following command:

$ docker info

Part 3: Configure Docker Daemon Options

In this part, we will configure the Docker daemons to use default settings.

On each host, edit the file /etc/default/docker to ensure that the DOCKER_OPTS line is commented out:

# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

IP addresses 8.8.8.8 and 8.8.4.4 are Google’s DNS servers which the Docker daemon uses to resolve domain names. We comment these out so that the daemon will use the default local DNS to resolve hosts.

If you had to comment out the DOCKER_OPTS line, restart the docker daemon:

$ sudo service docker restart

Part 4: Set Up a Key-Value Store

In this part, we will set up the Key-Value store used by the Docker daemons.

An overlay network requires a Key-Value store. The key-value store holds information about the network state which includes discovery, networks, endpoints, IP addresses, and more. Docker supports Consul, Etcd, and ZooKeeper key-value stores. This example uses Consul.

As mentioned earlier, we have designated Docker Host 1 as the Key-Value store. In Docker Host 1, we will start the Consul container:

$ docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap

Recall that the -p flag binds a container port to a host port, meaning that Docker Host 1's port 8500 is mapped to port 8500 of the consul container. Double-check that the port binding of the consul container was successful:

$ docker port consul
8500/tcp -> 0.0.0.0:8500
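The mapping line above reads as container port/protocol on the left and host bind address on the right. As a side note, it can be picked apart with plain shell parameter expansion; this sketch hard-codes the sample line rather than calling docker:

```shell
# Sample line as printed by `docker port consul`.
line="8500/tcp -> 0.0.0.0:8500"

container_port=${line%% ->*}   # left-hand side:  "8500/tcp"
host_bind=${line##*-> }        # right-hand side: "0.0.0.0:8500"

echo "$container_port"   # 8500/tcp
echo "$host_bind"        # 0.0.0.0:8500
```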

Part 5: Set Docker Daemon to use Key-Value Store for Clustering

In this part, we will configure the Docker daemons on the remaining Docker Hosts to use the Key-Value store for network clustering.

Method #1: Configuring the Docker Daemon via Command Line

In this method, we will run the Docker daemon directly, passing all the necessary parameters on the command line. The disadvantage of this method is that if a Docker Host restarts, its Docker daemon reverts to the default settings, without the Key-Value store options.

First, stop the Docker Daemon with the following command:

$ sudo service docker stop

Then, run the Docker Daemon with the following commands:

$ sudo /usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise {this host's network interface}:2375 --cluster-store consul://{Docker Host 1 IP address}:8500[/{path}]

Note that --cluster-advertise takes the current host's own network interface, while --cluster-store points to the Key-Value store on Docker Host 1. The path at the end is optional. In my setup, I used the following command after replacing the text in the curly braces { }:

$ sudo /usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise enp0s3:2375 --cluster-store consul://172.20.10.2:8500

Hint: Each unique cluster-store IP address and path is a separate store. This means that you can have one key-value store at 172.20.10.2:8500, and a separate one at 172.20.10.2:8500/mynetwork.
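To make that distinction concrete, a --cluster-store URL splits into a store address and an optional path, and everything after the first slash names a separate store. A minimal plain-shell sketch (the URL here is a made-up example):

```shell
# A hypothetical --cluster-store value with a path component.
url="consul://172.20.10.2:8500/mynetwork"

rest=${url#consul://}     # strip the scheme: "172.20.10.2:8500/mynetwork"
address=${rest%%/*}       # store address:    "172.20.10.2:8500"
path=${rest#"$address"}   # optional path:    "/mynetwork" (empty if absent)

echo "$address"   # 172.20.10.2:8500
echo "$path"      # /mynetwork
```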

Method #2: Set the DOCKER_OPTS Environment Variable

The advantage of this method is that even if the Docker Host restarts, the new Docker Engine daemon will still be configured to use the Key-Value store for the network state.

On non-Debian/Ubuntu distros, the Docker Engine daemon reads these custom variables from the file /etc/default/docker.

Open the /etc/default/docker file with your favourite text editor, and add the following line:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise {this host's network interface}:2375 --cluster-store consul://{Docker Host 1 IP address}:8500[/{path}]"

As in Method #1's command, --cluster-advertise takes this host's own network interface, --cluster-store points to Docker Host 1, and the path is optional. In my setup, I used the following line after replacing the text in the curly braces { }:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise enp0s3:2375 --cluster-store consul://172.20.10.2:8500"

Save and close the file.

If you are using a non-Debian/Ubuntu distro, restart the Docker Engine daemon:

$ sudo service docker restart 
$ docker info
...
Cluster store: consul://172.20.10.2:8500
Cluster advertise: 172.20.10.5:2375

If you are using a Debian/Ubuntu distro, do note that it uses systemd to manage the Docker Engine daemon. More information can be found at https://docs.docker.com/engine/admin/systemd/

To ensure that the daemon options take effect, we are going to add the configuration file at /etc/default/docker into the Docker systemd file.

Open the Docker Engine daemon configuration file with your favourite text editor, which can be found at /lib/systemd/system/docker.service.

Find the [Service] section, and the line which declares the ExecStart variable. Above this line, add the /etc/default/docker path as an EnvironmentFile.

For the ExecStart line, append the environment variable $DOCKER_OPTS. The [Service] section should then look something similar to the following screenshot:

[Image: docker-overlay-network-1.jpg]
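For reference, the relevant part of docker.service might then look something like the sketch below. The exact ExecStart line varies between Docker versions, so treat this as an illustration; the two additions are the EnvironmentFile line (the leading "-" tells systemd not to fail if the file is missing) and the $DOCKER_OPTS reference:

```ini
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS
```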

Save and close the file. Reload the systemd state by running the following command:

$ systemctl daemon-reload

Restart the Docker Engine daemon:

$ sudo service docker restart
$ docker info
...
Cluster store: consul://172.20.10.2:8500
Cluster advertise: 172.20.10.5:2375

Part 6: Create an Overlay Network

In this part, we will create the Docker overlay network.

On any Docker Host connected to the Consul Key-Value store (i.e. Docker Host 2 to 4), create the overlay network by using the overlay driver, and define your own subnet and subnet mask.

$ docker network create -d overlay --subnet={subnet IP range}/{subnet mask bits} {overlay network name}

where 1) overlay is the network driver, 2) the subnet IP range is the range of IP addresses that containers will receive, with the number after the slash being the number of subnet mask bits, and 3) the overlay network name is a custom name for the network. Items 2 and 3 can be customized to your needs.

One example will be as follows:

$ docker network create -d overlay --subnet=192.168.3.0/24 my-overlay
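As a quick sanity check on the subnet size: the /24 in the example leaves 8 host bits, i.e. 254 usable addresses once the network and broadcast addresses are excluded (Docker additionally reserves one of these for the network gateway). The arithmetic in plain shell:

```shell
# Usable host addresses in a /24 subnet:
# 32 - 24 = 8 host bits -> 2^8 = 256 addresses,
# minus the network and broadcast addresses.
prefix=24
hosts=$(( (1 << (32 - prefix)) - 2 ))

echo "$hosts"   # 254
```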

The network should be seen from all Docker Engine daemons that have been configured to use the Consul Key-Value store for network state (i.e. Docker Hosts 2 to 4). Use the following command to see the overlay network on any of the Docker Hosts connected to the Key-Value store:

$ docker network ls

Part 7: Add Containers to Overlay Network

In this part, we will add containers to the overlay network, with each container on a different Docker Host.

On each of Docker Hosts 2 to 4, run a busybox container and add it to the overlay network:

$ docker run -itd --name containerX --net my-overlay busybox

where X is a unique identifier for each container (e.g. a running letter). For this tutorial, I created containerA on Docker Host 2, containerB on Docker Host 3, and containerC on Docker Host 4.

Check that all containers are added to the overlay network by running the following command on any Docker Host 2 to 4:

$ docker network inspect my-overlay

[Image: docker-overlay-network-2]

Part 8: Ping Containers Across the Overlay Network

In this part, we will test that communication across the overlay network is working.

Have the busybox container in Docker Host 2 ping the busybox container in Docker Host 4 by passing in its unique container name:

$ docker exec containerA ping -w 5 containerC

Output:
[Image: docker-overlay-network-3.jpg]

Note that the Key-Value store helps to maintain the state of the overlay network, including containers that are added or removed.

Part 9: Create Another Overlay Network

In this part, we will create another overlay network with a different name, and populate it with two hosts.

Docker documentation states that: “You can create multiple networks. You can add containers to more than one network. Containers can only communicate within networks but not across networks. A container attached to two networks can communicate with member containers in either network.”

Let’s try this out.

Create another overlay network by referring to Part 6 of this walkthrough. One example will be as follows:

$ docker network create -d overlay --subnet=192.168.4.0/24 my-overlay-new

In Docker Host 3, add the currently running busybox container (i.e. containerB) to the new overlay network:

$ docker network connect my-overlay-new containerB

In Docker Host 4, create a new busybox container (e.g. containerD) and add it to the new overlay network.

$ docker run -itd --name containerD --net my-overlay-new busybox

Now to test the connections. The busybox container in Docker Host 3 (i.e. containerB) should be able to communicate with containerA and containerC via the my-overlay network, and with containerD via the my-overlay-new network.

The busybox container in Docker Host 4 (i.e. containerD) should only be able to communicate with containerB, via the my-overlay-new network.

The following ping commands from Docker Host 3 should be successful:

  • docker exec containerB ping -w 2 containerA

  • docker exec containerB ping -w 2 containerC

  • docker exec containerB ping -w 2 containerD

Output:
[Image: docker-overlay-network-4.jpg]

The following ping commands from Docker Host 4 should be successful:

  • docker exec containerC ping -w 2 containerA (via first overlay network)

  • docker exec containerC ping -w 2 containerB (via first overlay network)

  • docker exec containerD ping -w 2 containerB (via second overlay network)

Output:
[Image: docker-overlay-network-5.jpg]

The following ping commands from Docker Host 4 should fail because of the separate overlay networks:

  • docker exec containerD ping -w 2 containerA

  • docker exec containerD ping -w 2 containerC

Output:
ping: bad address 'containerA'
ping: bad address 'containerC'

It is interesting to note that even though containerC and containerD are on the same Docker Host 4, they are unable to communicate with each other. Therefore, Docker networking can support multi-tenancy by separating containers/applications into different networks.

And now you have finished setting up an overlay network without using Docker Swarm.
