Docker tutorial: Get started with Docker networking
A common use case for Docker is networked services, and Docker has its own networking model that lets containers talk both to each other and to the outside world.
Originally, Docker containers had to be networked together by hand, or exposed manually to the outside world. The current networking model lets containers find each other automatically on the same host (or across different hosts), and be exposed to the world at large in a more controlled way.
There are four basic ways that Docker supplies developers with networking for containers. The first two, bridge and overlay networks, cover the most common use cases in production. The other two, host and Macvlan networks, exist to cover less common cases.
Docker networking: Bridge networks
Bridge networks let containers running on the same Docker host communicate with each other. A new instance of Docker comes with a default bridge network named bridge, and by default all newly started containers connect to it.
The bridge network comes with many convenient out-of-the-box defaults, but they might need fine-tuning in production. For example, containers on bridge automatically have all ports exposed to each other, but none to the outside world. That’s useful when you need to test communication between containers, but not for deploying a live service.
For the best results, create your own bridge network. User-defined bridges have many features the default bridge does not:
- DNS resolution works automatically between containers on a custom bridge, so you don’t need to use raw IP addresses to communicate between them as you do on the default bridge. Containers can locate other containers via DNS using the container name.
- Containers can be added to and removed from a custom bridge while they’re running.
- Environment variables can be shared between containers on a custom bridge.
In short, you can start tinkering with containers using the default bridge, but for any serious production work you’ll want to create a custom bridge.
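Here’s a minimal sketch of that workflow; the network name app-net, the container name web, and the nginx and alpine images are arbitrary choices for illustration. A custom bridge is created, a container is started on it, and a second container reaches the first by name:
$ docker network create --driver bridge app-net
$ docker run -d --name web --network app-net nginx
$ docker run --rm --network app-net alpine ping -c 1 web
Because both containers sit on the user-defined app-net bridge, the ping resolves the name web through Docker’s built-in DNS, with no IP addresses needed.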
Docker networking: Overlay networks
Bridge networks are for containers on the same host. Overlay networks are for containers running on different hosts, such as those in a Docker swarm. This lets containers across hosts find each other and communicate, without you having to worry about how to set that up for each individual participating container.
Docker’s swarm mode orchestrator automatically creates an overlay network, ingress. By default, any services on the swarm attach themselves to ingress. But as with the default bridge, this isn’t the best choice for a production system, because the defaults may not be appropriate. Your best bet is to create a custom overlay network, with or without a swarm, and attach nodes to it as needed.
If you want to use an overlay network with containers not running in a swarm, that’s another use case for creating a custom overlay network. Note that each Docker host on an overlay network must have the proper ports open to its peers to be seen, and without swarm mode each node needs access to a key-value store of some kind.
Also note that overlay networks, by default, allow only 256 distinct IP addresses. You can raise this limit, but Docker recommends using multiple overlays instead.
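As a sketch, assuming a swarm has already been initialized with docker swarm init (the network, service, and image names here are arbitrary), you could create a custom overlay network and attach a service to it. The --attachable flag also allows standalone containers to join the network later:
$ docker network create --driver overlay --attachable my-overlay
$ docker service create --name web --network my-overlay nginx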
Docker networking: Host networking
The host networking driver lets a container share the host’s network stack directly, rather than getting an isolated stack of its own. A web server on port 80 in a host-networked container is available from port 80 on the host itself.
The biggest boon of host networking is speed. If you need to give a container port access and you want to make it as close to the underlying OS as possible, this is the way to go. But it comes at the cost of flexibility: if a host-networked container listens on port 80, no other container can use that port on the host.
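A minimal sketch on a Linux host, using the nginx image purely as an example workload:
$ docker run --rm -d --network host nginx
$ curl http://localhost:80/
No -p port mapping is needed; the container’s port 80 is the host’s port 80.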
Docker networking: Macvlan networking
A Macvlan network is for applications that work directly with the underlying physical network, such as network-traffic monitoring applications. The macvlan driver doesn’t just assign an IP address to a container, but a physical MAC address as well.
Note that this type of Docker networking comes with many of the same caveats you’d face if, say, you were creating virtual MAC addresses using VMs. In short, Macvlan should be reserved for applications that won’t work unless they have a physical network address.
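Here’s a sketch of creating a Macvlan network. The subnet, gateway, and parent interface (eth0) are placeholders that must match your actual physical network:
$ docker network create --driver macvlan \
    --subnet 192.168.1.0/24 \
    --gateway 192.168.1.1 \
    -o parent=eth0 \
    my-macvlan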
Docker networking: Creating and managing networks
All network management in Docker is done using the docker network command. Many of its subcommands are similar to other Docker commands; for example, docker network ls displays all the configured networks on the current Docker instance:
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
2e0adaa0ce4a   bridge    bridge    local
0de3da43b973   host      host      local
724a28c6d86d   none      null      local
To create a network, use the create subcommand along with the --driver flag to indicate which driver to use (bridge, overlay, or macvlan):
$ docker network create --driver bridge my-bridge
Host-networked containers don’t require a network to be created for them. Instead, launch the container with the --network host flag. Any processes in the container listen on their preconfigured ports, so make sure those are set first.
The options for creating a network also include specifying its subnet, IP address range, and network gateway, much as would be the case for creating a network using other means.
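For instance, a sketch with arbitrary address values (any private range that fits your environment would do):
$ docker network create --driver bridge \
    --subnet 10.10.0.0/16 \
    --ip-range 10.10.40.0/24 \
    --gateway 10.10.0.1 \
    my-custom-net
Containers on my-custom-net receive addresses from the 10.10.40.0/24 slice of the larger subnet.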
Containers run by default on the bridge network. To use a particular network, just pass the --network flag when launching the container and specify the network name.
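For example, assuming the my-bridge network created above, with the nginx image standing in for your own:
$ docker run -d --network my-bridge nginx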
You can also connect a running container to a network:
$ docker network connect bridge my_container
This attaches my_container to the bridge network, while preserving any existing network connections it already has.
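The reverse operation, docker network disconnect, detaches a running container from a network in the same fashion:
$ docker network disconnect bridge my_container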
When a container is spun down, any networks associated with it are left intact. If you want to remove networks manually, you can do so with the docker network rm <network_name> command, or use docker network prune to remove all networks no longer in use on the host.
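For example, to remove the custom bridge created earlier and then sweep up anything else unused:
$ docker network rm my-bridge
$ docker network prune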
Docker networking and Kubernetes networking
If you’re eyeing Kubernetes as an orchestration solution, but already have a fair amount of work sunk into a Docker networking setup, you won’t be thrilled to hear there’s no one-to-one correspondence between how Docker and Kubernetes handle networking.
The details are described in the Kubernetes documentation, but the short version is that they have fundamentally different models for how network resources are allocated and managed. So, you’ll need to devise a Kubernetes-specific network setup for your application.
One possible halfway-house approach is to use a Kubernetes Container Network Interface (CNI) plugin that works with Docker’s own networking controls. But this is an interim solution at best; at some point, you’ll need to build your Kubernetes projects using its own networking metaphors from the inside out.