Network Configuration – Proxmox VE

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. Different goals
can be achieved with it, such as making the network fault-tolerant,
increasing performance, or both.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. With link aggregation, two NICs can
appear as one logical interface, roughly doubling the available
bandwidth (a single connection is still limited to the speed of one
link). This is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to the
remaining cable in case of network trouble.

Aggregated links can reduce live-migration delays and increase the
speed of data replication between Proxmox VE Cluster nodes.

There are 7 modes for bonding (a quick way to check the active mode
follows the list):

  • Round-robin (balance-rr): Transmit network packets in sequential
    order from the first available network interface (NIC) slave through
    the last. This mode provides load balancing and fault tolerance.

  • Active-backup (active-backup): Only one NIC slave in the bond is
    active. A different slave becomes active if, and only if, the active
    slave fails. The single logical bonded interface’s MAC address is
    externally visible on only one NIC (port) to avoid confusing the
    network switch. This mode provides fault tolerance.

  • XOR (balance-xor): Transmit network packets based on [(source MAC
    address XOR’d with destination MAC address) modulo NIC slave
    count]. This selects the same NIC slave for each destination MAC
    address. This mode provides load balancing and fault tolerance.

  • Broadcast (broadcast): Transmit network packets on all slave
    network interfaces. This mode provides fault tolerance.

  • IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP): Creates
    aggregation groups that share the same speed and duplex
    settings. Utilizes all slave network interfaces in the active
    aggregator group according to the 802.3ad specification.

  • Adaptive transmit load balancing (balance-tlb): Linux bonding
    driver mode that does not require any special network-switch
    support. The outgoing network packet traffic is distributed according
    to the current load (computed relative to the speed) on each network
    interface slave. Incoming traffic is received by one currently
    designated slave network interface. If this receiving slave fails,
    another slave takes over the MAC address of the failed receiving
    slave.

  • Adaptive load balancing (balance-alb): Includes balance-tlb plus receive
    load balancing (rlb) for IPv4 traffic, and does not require any
    special network switch support. The receive load balancing is achieved
    by ARP negotiation. The bonding driver intercepts the ARP Replies sent
    by the local system on their way out and overwrites the source
    hardware address with the unique hardware address of one of the NIC
    slaves in the single logical bonded interface such that different
    network-peers use different MAC addresses for their network packet
    traffic.
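
Once a bond is active, the kernel exposes its state through
procfs. As a quick check (assuming the bond is named bond0, as in the
examples below), the following command prints the configured mode and
the link status of each slave:

# show the bonding mode, hash policy and per-slave link state
cat /proc/net/bonding/bond0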

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode.

If you intend to run your cluster network on the bonding interfaces, then you
have to use active-backup mode on the bonding interfaces; other modes are
unsupported.
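
As a minimal sketch, an active-backup bond for such a cluster network
could look like this (the interface names eno1/eno2 and the address
10.10.15.2/24 are placeholders; adapt them to your setup):

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        # placeholder address for the cluster network
        address 10.10.15.2/24
        bond-miimon 100
        bond-mode active-backup
        # optional: prefer eno1 as the active slave whenever it is up
        bond-primary eno1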

The following bond configuration can be used as the distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.

Example: Use bond with fixed IP address

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

# LACP bond over eno1 and eno2, used directly for the storage network
auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

# management bridge on the remaining NIC
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
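
After editing /etc/network/interfaces, the new configuration can be
applied without a reboot, assuming ifupdown2 is in use (the default on
current Proxmox VE installations):

# apply the changed network configuration live
ifreload -a

The bond status can then be verified via /proc/net/bonding/bond0, as
shown above.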

Figure: default network setup with a bond (default-network-setup-bond.svg)

Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

Example: Use a bond as bridge port

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

# LACP bond over eno1 and eno2, used as the bridge port below
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

# guest bridge with the bond as its only port
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
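
To confirm that the bridge is actually using the bond, the interfaces
enslaved to it can be listed with iproute2:

# bond0 should show up as a port of vmbr0
ip link show master vmbr0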