Docker Network Drivers Overview | Networking in Docker #3

Learn how Docker Network Drivers allow easy Container network configuration by implementing the Container Network Model

What are Network Drivers in Docker? How do they enable easy network configuration for Containers?

This blog will try to answer that as simply as possible.

Introduction

This blog is the third one in a series on Docker Networking.

If you are looking to learn more about the basics of Docker, I recommend checking out the Docker Made Easy series.

Here’s what we will discuss today:

  • The Container Network Model
  • Network Drivers: none, host, bridge & overlay

Okay.

Firstly, it’s good to know that networking in Docker is made possible by the…

Container Network Model

The Container Network Model (CNM) formalizes the steps required to provide networking for containers while providing an abstraction that can be used to support various types of networks.

CNM has 3 main components: Sandbox, Endpoint, and Network.

  1. Sandbox: It contains the configuration of a container’s network stack, i.e., the container’s network interfaces, DNS settings, routing tables, etc. A Sandbox may contain many Endpoints from multiple Networks.
  2. Endpoint: It joins a Sandbox to a Network. An Endpoint can belong to only one Network and, when connected, to only one Sandbox.
  3. Network: It is formed by a group of Endpoints that are able to communicate with each other directly.
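
These components aren’t just abstract ideas; you can spot them on a real Docker host. Here’s a small sketch (demo-net and demo-ctr are placeholder names I’ve picked, and nginx is just a convenient test image):

    # Create a user-defined network and attach a container to it
    docker network create demo-net
    docker run -d --name demo-ctr --network demo-net nginx

    # The "Containers" section of the output lists the Endpoint that joins
    # the container's Sandbox to this Network (EndpointID, IP, MAC address)
    docker network inspect demo-net

    # On the container's side, the Sandbox appears as a network namespace
    docker inspect --format '{{.NetworkSettings.SandboxKey}}' demo-ctr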

You can think of the Container Network Model as an abstract class that defines the required interface, whereas network drivers correspond to concrete classes implementing the interface.

This is, of course, an oversimplification so feel free to learn more about the design of CNM here.

As users of Docker, rather than the implementation details of CNM, we should focus on how to properly use…

Network drivers

Network drivers are pluggable interfaces that provide the actual network implementations for Docker Containers.

Docker comes with several drivers out-of-the-box that provide core networking functionality for many use cases — like service discovery, encryption, multi-host networking, etc.

There are also third-party drivers (provided as plugins) available for more specialized use cases.

Lastly, one can even build a custom driver if the available ones don’t suffice (although that will rarely be required).

The 4 out-of-the-box network drivers are:

  1. none
  2. host
  3. bridge
  4. overlay
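
Each of the first three gets a default network of the same name created automatically. You can list them with docker network ls; on a fresh single-host install, the output looks roughly like this (the IDs will differ, and note that the none driver is reported as null):

    $ docker network ls
    NETWORK ID     NAME      DRIVER    SCOPE
    9f6a1b2c3d4e   bridge    bridge    local
    f1e2d3c4b5a6   host      host      local
    0a1b2c3d4e5f   none      null      local

An overlay network only shows up after you initialize Swarm mode.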

The driver can be specified with the --network option of the docker run command, like this:
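
    docker run -d --network host nginx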

This command runs an nginx container using the host driver in the background (specified by the -d flag).

It’s interesting to note that a Container is generally unaware of the network driver it uses, except when using the none driver.

So, how does each of these drivers differ?

Reminder: a Docker host is a host/computer running the Docker daemon. You can learn more about Docker’s architecture here.

None Driver

The none driver simply disables networking for a container: the container is left with only a loopback interface, completely isolating it from other containers, the Docker host’s network, and the outside world.
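
You can verify this yourself; here’s a quick sketch (alpine is just a convenient small image whose busybox build includes the ip tool):

    # List the network interfaces inside a none-networked container;
    # only the loopback interface (lo) should appear
    docker run --rm --network none alpine ip addr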