Multi-host Docker network without Swarm – Dots and Brackets: Code Blog

Docker has several types of networks, but one of them is particularly interesting. An overlay network can span host boundaries, so your web application container at HostA can easily talk to a database container at HostB by name. It doesn’t even have to know where that container is.

Unfortunately, you can’t just create an overlay network and hope it magically finds out about all participating hosts. There has to be one more component to make that happen.

Of course, we could use Docker in Swarm mode and the problem’s solved. But we don’t have to. Configuring a multi-host Docker network without Swarm is actually quite easy.

Prerequisites

As we’ll have to deal with several hosts, here’re some additional tools we’ll use:

  1. VirtualBox – to run virtual hosts.
  2. docker-machine – to create and provision those hosts. If you’re running Docker on Mac or Windows, most likely it’s already installed. But if it isn’t, the installation instructions are short and simple.

The plan

As individual Docker hosts don’t know about each other, it would be tricky for them to share anything, especially something as complex as a network. But if we introduced a special service, whose sole job would be keeping a list of participating hosts, as well as network configuration in general, and then told Docker engines to use it, that would probably do the trick.

Out of the box Docker can work with several discovery services. I’ll use Consul, but ZooKeeper and etcd would also work. After it’s up and running, we’ll create a few Docker hosts and configure them to use the one with Consul as cluster configuration storage. Then it’ll be trivial to create an overlay network that spans these hosts.

Installing discovery service

So we need to create a host with Consul installed on it. Easy peasy. First, let’s create a host for that:

Create host for Consul

docker-machine create -d virtualbox keyvalue
It basically tells docker-machine to create a host named keyvalue using virtualbox as the driver. The host it creates will have a fully configured Docker engine, so we can use it to pull and run the Consul image.

One more piece of magic: as the docker executable itself is just a client to the Docker engine, we can tell it to connect to the engine on another machine and send our local commands there! Of course, we could just docker-machine ssh keyvalue and do everything directly there, but c’mon, it’s not nearly as cool.

docker-machine config keyvalue provides the settings for connecting to the Docker engine inside the newly created host, so all we need to do is pass those to the docker client:

docker $(docker-machine config keyvalue) \
  run -d -p 8500:8500 progrium/consul -server -bootstrap
Having keyvalue‘s IP address, we can actually point a browser at port 8500 and see if anything responds:

docker-machine ip keyvalue
#192.168.99.104

Consul at :8500
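Besides the browser, Consul’s HTTP API gives a quick, scriptable health check. For instance, its catalog endpoint should list the single server node (the IP here stands in for whatever docker-machine ip keyvalue returned):

```shell
# Ask Consul which nodes it knows about; a small JSON array should come back
curl -s http://192.168.99.104:8500/v1/catalog/nodes
```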

Configuring hosts with Docker engines

Now we’ll need to create two hosts with regular Docker engines that know about the discovery service we just created. The Docker engine has two properties for cluster mode:

  1. cluster-store to point to the discovery service and
  2. cluster-advertise to specify a door to the current Docker engine for others. For docker-machine created hosts it’s usually eth1:2376.

In the common scenario we’d go to the Docker configuration file to set those, but as we’re using docker-machine, we can actually specify those settings during host creation:

docker-machine create -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip keyvalue):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  node-0

And for the second node:

docker-machine create -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip keyvalue):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  node-1
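For the record, on a host that wasn’t created by docker-machine the same two settings would go into the Docker daemon’s own configuration. A sketch of what that could look like in a daemon.json-style file (the IP stands in for $(docker-machine ip keyvalue); the exact file location varies by distro):

```
{
  "cluster-store": "consul://192.168.99.104:8500",
  "cluster-advertise": "eth1:2376"
}
```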

The magic: part 1

Now let’s SSH into the first host and create an overlay network in it. The one that’s supposed to span across host boundaries, remember?

docker-machine ssh node-0
#….
docker@node-0:~$ docker network create -d overlay multi-host-net
#e1d516c32c3c525d5e8a9a73663e4ec7dd951ecbb8e5faab657c0eb957220d5a

Now, after we created the network called multi-host-net, exit that host, head to the second one and check out what networks it knows about:

docker-machine ssh node-1
#…
docker@node-1:~$ docker network ls
#NETWORK ID          NAME                DRIVER              SCOPE
#42131dca03d6        bridge              bridge              local
#46793b96eaef        host                host                local
#e1d516c32c3c        multi-host-net      overlay             global
#c68e3ec2231b        none                null                local

Behold! The mighty gods of Docker made the multi-host-net network available at that host as well.
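To peek under the hood, docker network inspect on either host shows the network’s global scope and subnet, and, once containers join, which of them sit on it:

```shell
# Run on node-0 or node-1; both see the same overlay network
docker network inspect multi-host-net
```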

The magic: part 2

Let’s try one more thing. While we’re still at node-1, let’s start an nginx container attached to the overlay network we just created:

docker run -d -p 80:80 --net=multi-host-net --name=webserver nginx

I called it webserver so it’s easier to refer to over the network. Just out of curiosity, let’s type curl localhost to confirm the server is up and running, and then head back to node-0:

docker@node-1:~$ curl localhost
#…
#<title>Welcome to nginx!</title>
#…
docker@node-1:~$ exit
$ docker-machine ssh node-0

I’m going to start a regular ubuntu container in it and launch a command line browser to see if I can access webserver:

docker@node-0:~$ docker run -ti --net=multi-host-net ubuntu bash
$ apt-get update && apt-get install -y elinks
$ elinks

elinks: welcome

elinks: navigate

A drumroll…

webserver container accessed via overlay network

Voilà! A container running at node-0 was able to reach the webserver container at node-1 just by its name.
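If installing a whole text browser feels heavy, a throwaway container with busybox’s built-in wget does the same connectivity check in one line (assuming the busybox image):

```shell
# A one-off container on the overlay network fetches the page by container name
docker run --rm --net=multi-host-net busybox wget -qO- http://webserver
```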

Summary

Today we checked out the simplest way to build a multi-host Docker network without involving Swarm mode. Surprisingly, it was very easy. Firstly, we need a discovery service, which can be set up with just a few keystrokes, no configuration involved. And secondly, we need to tell the participating Docker hosts to use it. Piece of cake. After that you can create an overlay network, attach a few containers to it, and they will be able to talk to each other with no idea, or intention to know, where they physically are. Pure magic.