Docker Networks – Bridge driver network

docker network ls

Before looking at the default Docker networks, we may want to remove any unused networks we built while playing with Docker. We can do that with:

$ docker system prune

Now, let’s see the default networks on our local machine:

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c48c8f37fc21        bridge              bridge              local
b83022b50e7a        host                host                local
500e68f4bd59        none                null                local

As we can see from the output, Docker provides three default networks. The bridge network is the one this post focuses on.

If we want to get more info about a specific network, for example, the bridge:

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "c48c8f37fc21c05a0c46bff6991d6ca31b6dd2907c4dcc74592bfb02db2794cf",
        "Created": "2018-12-05T04:54:33.509358655Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Note that this network has no containers attached to it yet.

When we start Docker, a default bridge network is created automatically, and newly-started containers connect to it.

In Docker, a bridge network uses a software bridge that allows containers connected to the same bridge network to communicate, while providing isolation from containers that are not connected to that bridge network. The Docker bridge driver automatically installs rules on the host machine so that containers on different bridge networks cannot communicate directly with each other.

By default, the Docker daemon creates and configures an Ethernet bridge device on the host system, docker0.

Docker will attach all containers to a single docker0 bridge, providing a path for packets to travel between them.
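
On a Linux host we can inspect this bridge device directly. A minimal sketch, assuming the iproute2 tools are available on the host:

$ ip addr show docker0      # the bridge itself, holding the gateway address 172.17.0.1
$ bridge link show          # one veth interface per running container, attached to docker0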

Container Network Model (CNM)

The Container Network Model (CNM) is an open-source container network specification. CNM defines sandboxes, endpoints, and networks.

[Image: container-and-cnm.png – picture from Docker Networking]

Libnetwork is Docker’s implementation of the CNM. Libnetwork is extensible via pluggable drivers.
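
We can ask the daemon which network drivers it has loaded. A minimal sketch (output varies by Docker version, but on a typical install the list includes bridge, host, macvlan, null, and overlay):

$ docker info --format '{{.Plugins.Network}}'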


Let’s create a container and see how the bridge network works.

$ docker run -it alpine sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
4fe2ade4980c: Already exists 
Digest: sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528
Status: Downloaded newer image for alpine:latest

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN qlen 1
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
87: eth0@if88: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "c48c8f37fc21c05a0c46bff6991d6ca31b6dd2907c4dcc74592bfb02db2794cf",
        "Created": "2018-12-05T04:54:33.509358655Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "34f1ccc8888138aff6665ae106ae6037de2f244d7f79f721f72d0517227c5892": {
                "Name": "awesome_proskuriakova",
                "EndpointID": "5abb16d7bb8748e34cbdcd76e784cff8f23b38bdfbb956e4840e8071a046cd9e",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Now we can see we have a new container attached to the bridge network.
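
Back in the alpine shell, we can confirm the container reaches the bridge gateway (172.17.0.1, from the inspect output above); a quick sanity check:

/ # ping -c 2 172.17.0.1    # expect replies from the docker0 gateway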

[Image: docker0-bridge.png]

Creating a custom (user-defined) Docker bridge network and attaching a container to it

When we install Docker Engine, it creates a bridge network automatically. This network corresponds to the docker0 bridge. When we launch a new container with docker run, it automatically connects to this bridge network.

Though we cannot remove this default bridge network, we can create new ones with the docker network create command. We can attach a new container to a network by specifying the network name when the container is created.

$ docker network create -d bridge my-bridge-network
43539acb896e522a9872b9e1a29716baee9c31448de6109c774437f048829f6c

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c48c8f37fc21        bridge              bridge              local
b83022b50e7a        host                host                local
43539acb896e        my-bridge-network   bridge              local
500e68f4bd59        none                null                local

We see we’ve just created a new bridge network!
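
Docker picked the next free address range for this network (172.20.0.0/16, as we’ll see below). If we want control over the addressing, docker network create also accepts explicit options; a minimal sketch with illustrative values (my-custom-net, the subnet, and the gateway are made up for this example):

$ docker network create -d bridge \
    --subnet 172.25.0.0/16 \
    --gateway 172.25.0.1 \
    my-custom-net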

Note that each of the two bridge networks is isolated and not aware of each other.

To illustrate the communication between containers created in different bridge networks, we’ll create two nginx servers.

Let’s launch a new container from nginx:alpine named “nginx-server1” in detached mode. This container will use the default bridge network (the docker0 bridge).

$ docker run -d -p 8088:80 --name nginx-server1 nginx:alpine

We can check that the container has been attached to the bridge network:

$ docker inspect nginx-server1
[
...
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "c48c8f37fc21c05a0c46bff6991d6ca31b6dd2907c4dcc74592bfb02db2794cf",
                    "EndpointID": "9685480777bb2b3dfb6d07d602d1d77efe284e90cdacffc98a1b74aab7b7585a",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }
        }
    }
]
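
If we only need the container’s IP address rather than the full JSON, docker inspect accepts a Go template; a minimal sketch:

$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx-server1
172.17.0.3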

Go to “localhost:8088”:

[Image: nginx-8088.png]

Then we launch a second container from nginx:alpine named “nginx-server2” in detached mode. This container will use the custom bridge network, my-bridge-network, that we’ve just created.

$ docker run -d -p 8788:80 --network="my-bridge-network" --name nginx-server2 nginx:alpine

We can check that the container has been attached to my-bridge-network:

$ docker inspect nginx-server2
[
...
            "Networks": {
                "my-bridge-network": {
                    "Links": null,
                    "Aliases": [
                        "5340fce28a3c"
                    ],
                    "NetworkID": "43539acb896e522a9872b9e1a29716baee9c31448de6109c774437f048829f6c",
                    "EndpointID": "1bf3eaf7f60791eab2ae0a42c9d6b02c4095d5aa83473d4d3a38e6c0a66eb3af",
                    "Gateway": "172.20.0.1",
                    "IPAddress": "172.20.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:14:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

Let’s access the 2nd container with “localhost:8788”:

[Image: nginx-8788.png]

Let’s navigate into the 2nd container:

$ docker exec -it nginx-server2 sh

/ # ping nginx-server1
ping: bad address 'nginx-server1'
/ # 

The 2nd server cannot reach the 1st server by name since they are on different networks. To make sure, let’s also try the 1st server’s IP address (172.17.0.3, from the inspect output above):

# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes

No replies come back, so even direct IP connectivity is blocked between the two bridge networks. Compare this with a self-ping:

# ping 172.20.0.2
PING 172.20.0.2 (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.148 ms
64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.123 ms
...

Let’s remove the 1st server and recreate it on the custom bridge network.
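
The name nginx-server1 must be freed before we can reuse it, so force-remove the running container first:

$ docker rm -f nginx-server1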

$ docker run -d -p 8088:80 --network="my-bridge-network" --name nginx-server1 nginx:alpine

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
40af30a944d0        nginx:alpine        "nginx -g 'daemon of…"   4 seconds ago       Up 3 seconds        0.0.0.0:8088->80/tcp   nginx-server1
5340fce28a3c        nginx:alpine        "nginx -g 'daemon of…"   21 minutes ago      Up 21 minutes       0.0.0.0:8788->80/tcp   nginx-server2

Let’s go into the 2nd container and check if we can ping to the 1st container:

$ docker exec -it nginx-server2 sh

/ # ping nginx-server1
PING nginx-server1 (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.120 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.138 ms
64 bytes from 172.20.0.3: seq=2 ttl=64 time=0.140 ms
64 bytes from 172.20.0.3: seq=3 ttl=64 time=0.165 ms
...

We can see they are now talking to each other and the hostname is resolved, since they are on the same network! (Automatic DNS resolution of container names works on user-defined networks, but not on the default bridge network.)
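
Recreating the container is not the only option: a running container can be attached to an additional network in place with docker network connect, since a container can belong to several networks at once. A minimal sketch of that alternative:

$ docker network connect my-bridge-network nginx-server1
$ docker exec nginx-server2 ping -c 2 nginx-server1    # now resolvable over the shared network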

The following picture from Microservices Network Architecture 101 illustrates how the communications between different bridge networks (docker0 and docker1) work:

[Image: docker0-docker1.png]

[Image: Bridge-Driver-Network.png – picture from Docker Networking]

Remove – docker network rm

Let’s delete the user-defined network. A network cannot be removed while containers are still attached to it, so we first remove the two nginx servers:
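
$ docker rm -f nginx-server1 nginx-server2

Now the network itself can be removed: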

$ docker network rm 43539acb896e

Now, we’re back to the default networks:

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c48c8f37fc21        bridge              bridge              local
b83022b50e7a        host                host                local
500e68f4bd59        none                null                local

Before the Docker network feature, we could use the Docker link feature to allow containers to discover each other. With the introduction of user-defined Docker networks, containers can discover each other by name automatically.

Unless we absolutely need to continue using links, it is recommended to use user-defined networks for communication between containers.

Here is a simplified docker-compose.yaml for WordPress using the legacy link feature:

wordpress:
  image: wordpress
  links:
    - wordpress_db:mysql
wordpress_db:
  image: mariadb

Because docker-compose creates its own bridge network for the project, the containers on that network can talk to each other by service name. So we don’t need the legacy link, as shown in the following docker-compose.yaml file:

version: '3.3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
volumes:
    db_data: {}

Run docker-compose:

$ docker-compose up -d
Creating network "wordpress_default" with the default driver
Creating wordpress_db_1 ... done
Creating wordpress_wordpress_1 ... done

Let’s check if the network has been created:

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c48c8f37fc21        bridge              bridge              local
b83022b50e7a        host                host                local
500e68f4bd59        none                null                local
a86637c93977        wordpress_default   bridge              local
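
The service names act as hostnames on this network, so the wordpress container can reach the database simply as db. A quick check, assuming getent is available in the image (the Debian-based wordpress image ships it):

$ docker-compose exec wordpress getent hosts db    # resolves to the db container’s address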

[Image: WordPress-localhost-8000.png]

We can remove the network:

$ docker-compose down
Stopping wordpress_wordpress_1 ... done
Stopping wordpress_db_1        ... done
Removing wordpress_wordpress_1 ... done
Removing wordpress_db_1        ... done
Removing network wordpress_default

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
c48c8f37fc21        bridge              bridge              local
b83022b50e7a        host                host                local
500e68f4bd59        none                null                local

With the compose file, a default bridge network for the project was created for us. But note that we can specify our own bridge network in the compose file:

version: '3.3'
  
services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress
     networks:
       - my_bridge_network

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
     networks:
       - my_bridge_network

volumes:
    db_data: {}

networks:
    my_bridge_network:

$ docker-compose up -d
Creating network "wordpress_my_bridge_network" with the default driver
Creating wordpress_db_1 ... done
Creating wordpress_wordpress_1 ... done

$ docker network ls
NETWORK ID          NAME                          DRIVER              SCOPE
c48c8f37fc21        bridge                        bridge              local
b83022b50e7a        host                          host                local
500e68f4bd59        none                          null                local
3cceb09613bc        wordpress_my_bridge_network   bridge              local

[Image: WordPress-localhost-8000-login.png]

Network driver summary

The following summary is from https://docs.docker.com/network/. Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default and provide core networking functionality:

  1. User-defined bridge networks are best when we need multiple containers to communicate on the same Docker host.
  2. Host networks are best when the network stack should not be isolated from the Docker host, but we want other aspects of the container to be isolated (see the sketch after this list).
  3. Overlay networks are best when we need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
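
As an example of the host driver, the sketch below runs nginx directly on the host’s network stack, so it binds the host’s port 80 with no -p mapping (Linux only; this command is illustrative, not part of the walkthrough above):

$ docker run --rm -d --network host nginx:alpine
$ curl -s localhost:80 | head -n 4    # served straight from the host’s port 80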
