Docker containers cannot reach the internet in bridge network mode

TL;DR
How can I fix the docker0 bridge so that my containers can reach the internet when running in bridge network mode?

There was a power outage a couple of weeks ago that apparently broke the server's network configuration, and for some time the DHCP server wasn't assigning the correct IP to the machine. I managed to fix that by configuring the network with netplan (I can't confirm that's how it was set up before the outage, since it was managed by another team).
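
For reference, the netplan file I applied is roughly along these lines (file name and exact contents are approximate, not a verbatim copy of my config):

# /etc/netplan/01-netcfg.yaml (approximate)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp67s0f0:
      addresses: [192.168.1.32/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]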

However, Docker containers cannot reach the internet when the network mode is bridge.

From the host, I can ping google.com and DNS resolution works fine; everything else seems to be working too. However, when I start a container (for instance: docker run -it --rm python:3.6.1 /bin/bash), ping no longer works.

Here are the things I have checked:

  1. ping within the container does not work:

ping google.com just hangs:

root@85deb9b2ae95:/# ping google.com
^C

ping 8.8.8.8 loses all packets:

root@85deb9b2ae95:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C--- 8.8.8.8 ping statistics ---
9 packets transmitted, 0 packets received, 100% packet loss
  2. /etc/resolv.conf in the Docker container looks OK:
root@85deb9b2ae95:/# cat /etc/resolv.conf 
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 8.8.8.8
nameserver 1.1.1.1

(It’s the same as the host’s one)

  3. docker network inspect bridge looks fine to me:
my_user@my_host:~$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "ba15db4d28312e1e7147704d4b04ed12d5acf3ffc03d5308a61e88171ff21b59",
        "Created": "2022-09-30T12:04:54.442845526Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "85deb9b2ae951810887faf6a89b1980bd4ae35d6a61a1639982a3a6558f8100f": {
                "Name": "strange_fermi",
                "EndpointID": "181fede143c82cee3aff9feec8bfff10daf9fbcbd1e70a69bd99ee51f28ed5e4",
                "MacAddress": "xx:xx:xx:xx:xx:xx",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
  4. ifconfig looks fine to me (docker0 shows RUNNING because the container is running):
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 xx::xx:xx:xx:xx  prefixlen 64  scopeid 0x20<link>
        ether xx:xx:xx:xx:xx:xx  txqueuelen 0  (Ethernet)
        RX packets 660  bytes 36996 (36.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 263  bytes 26836 (26.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  5. I don't see anything weird in the output of networkctl:
my_user@my_host:~$ networkctl
IDX LINK        TYPE     OPERATIONAL SETUP      
  1 lo          loopback carrier     unmanaged  
  2 enp67s0f0   ether    routable    configured 
  3 enp67s0f1   ether    no-carrier  configuring
  4 wlp70s0     wlan     no-carrier  unmanaged  
 60 docker0     bridge   routable    unmanaged

The only odd thing is the interface enp67s0f1 stuck in "configuring", which I think might be related to the way I configured the network with netplan.

  6. The kernel routing table looks fine:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.1.1     0.0.0.0         UG    0      0        0 enp67s0f0
172.17.0.0      0.0.0.0         255.255.255.0   U     0      0        0 enp67s0f0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp67s0f0
  7. ip route looks fine:
default via 192.168.1.1 dev enp67s0f0 proto static 
172.17.0.0/24 dev enp67s0f0 proto kernel scope link src 172.17.0.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
192.168.1.0/24 dev enp67s0f0 proto kernel scope link src 192.168.1.32
  8. iptables looks OK to me (a sketch of some extra forwarding/NAT checks I can run follows this list):
my_user@my_host:~$ sudo iptables --list --table nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  172.17.0.0/16        anywhere            

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere
  9. These are my current Docker networks:
NETWORK ID     NAME      DRIVER    SCOPE
ba15db4d2831   bridge    bridge    local
938ad254f4d2   host      host      local
72ca52dfdedb   none      null      local
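
In case it helps, these are the extra checks I'm planning to run to verify that forwarding and masquerading actually happen (only a sketch; I can post the full output if useful):

# check that the kernel forwards packets between interfaces
sysctl net.ipv4.ip_forward
# check that the FORWARD chain is not dropping container traffic
sudo iptables -L FORWARD -v -n
# while running `ping 8.8.8.8` inside the container, watch both sides of the NAT
sudo tcpdump -ni docker0 icmp
sudo tcpdump -ni enp67s0f0 icmp and host 8.8.8.8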

Finally, if I run the container in host mode (for instance: docker run -it --rm --net=host python:3.6.1 /bin/bash), DNS resolution works.

Simply appending --net=host to my docker commands is not an option, because this server runs CI/CD pipelines whose containers need to reach the internet, resolve domain names, and so on.

So the main question remains: how can I fix the docker0 bridge so that my containers can reach the internet in bridge network mode?
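
One thing I have considered but not tried yet (so this is only a sketch, and I'm not sure it addresses the root cause) is moving the default bridge to a different subnet in case 172.17.0.0/16 overlaps with something else on the host. As far as I understand, the contents of /etc/docker/daemon.json would be something like:

{
  "bip": "172.26.0.1/16"
}

followed by sudo systemctl restart docker.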

I have tried several things, among them restarting Docker, reinstalling Docker, and removing the docker0 interface and forcing Docker to recreate it.
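
For that last attempt, these are roughly the commands I used to remove docker0 and let Docker recreate it (from memory, so treat them as approximate):

sudo systemctl stop docker
sudo ip link set docker0 down
sudo ip link delete docker0
# docker0 is recreated when the daemon starts again
sudo systemctl start docker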

Any help, feedback or comments on how to solve or troubleshoot this will be much appreciated!