Proxmox VE Cluster: LACP Bonding, VLANs & VM Networking
Configure Proxmox VE cluster network with LACP bonding on 25G fiber for Corosync/migration, separate management VLAN on vmbr1, VLAN-aware vmbr2 for VMs. Full /etc/network/interfaces examples, testing, no host routing needed.
Proxmox VE Cluster Network Configuration: Bonding LACP Fiber Ports for Cluster Traffic, Separate Management VLAN, and VLAN-Aware VM Networking on Copper Ports
I have three identical Proxmox servers, each with 2 fiber optic ports (25 Gbit/s) and 2 copper ports. Here’s my network configuration using /etc/network/interfaces:
Cluster Network (vmbr0 on bonded fiber ports - bond0):
auto eno1 eno2
iface eno1 inet manual
iface eno2 inet manual
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 10.10.10.101/24 # 102 and 103 on other servers
gateway 10.10.10.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
Switch ports (2 per server, 6 total) are configured as LACP access ports.
Questions:
- Is vmbr0 the correct network to specify when creating the Proxmox cluster? Will live VM migration work over this network? Is 25 Gbit/s bandwidth sufficient for cluster traffic (Corosync, migration)?
Management Network (vmbr1 on first copper port - eno3):
auto vmbr1
iface vmbr1 inet static
address 10.20.20.101/24 # etc. for other servers
bridge-ports eno3
bridge-stp off
bridge-fd 0
Connected to switch access port with a separate management VLAN for Proxmox web UI access.
- Is this management network setup correct?
VM Network (vmbr2 on second copper port - eno4, no IP address):
Switch port configured in trunk mode with required VLANs. For VMs, I enable ‘VLAN aware’ in network settings and specify the desired VLAN tag.
- Is this correct for providing VMs network access across different VLANs and external access?
- With vmbr0 (cluster), vmbr1 (management), and vmbr2 (VMs), is any routing configuration required at the Proxmox host level?
Yes, vmbr0 on your LACP-bonded fiber ports (bond0 in 802.3ad mode) works for Proxmox cluster creation and Corosync traffic, with live VM migrations defaulting to this high-speed 25 Gbit/s network for excellent performance. Your separate management VLAN on vmbr1 (eno3 access port) is spot-on for secure Proxmox web UI access, while the VLAN-aware vmbr2 on eno4 trunks multiple VLANs perfectly for VMs. No host-level routing is needed across vmbr0, vmbr1, and vmbr2—these isolated bridges keep traffic segmented without extra config.
Contents
- Proxmox Cluster Network with LACP Bonding
- Management VLAN on Dedicated Bridge
- VLAN-Aware VM Networking Setup
- No Host Routing Required
- Performance Tuning and Testing
- Full Example Configuration
- Sources
- Conclusion
Proxmox Cluster Network with LACP Bonding
Your bond0 setup using the two 25 Gbit/s fiber ports looks solid at first glance. Proxmox officially supports LACP (802.3ad mode) if your switch handles it properly—think dynamic link aggregation with load balancing via layer2+3 hashing. But here’s the catch for Proxmox cluster traffic: Corosync (the heartbeat keeping your three nodes in sync) prefers stability over speed. The official Proxmox Network Configuration wiki recommends active-backup mode unless you’re certain LACP won’t flap during failovers.
To answer your first question directly: yes, specify vmbr0's network when creating the Proxmox cluster via the GUI or pvecm create. It will carry Corosync traffic just fine (unicast UDP via kronosnet on current Proxmox VE releases). Live VM migrations? Absolutely, they default to the cluster network, as noted in the Cluster Manager docs. With 25 Gbit/s per link aggregated under LACP (real-world users report around 26 Gbit/s on similar Mellanox cards), you'll smoke through even large VMs. Just ensure the switch ports are LACP access ports (not trunk) for this vmbr0, matching your config.
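A minimal sketch of that bootstrap step, assuming a placeholder cluster name of pve-cluster; the --link0 option pins Corosync link 0 to the vmbr0 address from your config:
# on node 1 (10.10.10.101), pin Corosync link 0 to the cluster network
pvecm create pve-cluster --link0 10.10.10.101
# confirm the single-node cluster is quorate before adding the others
pvecm status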
One note: leave bridge-vlan-aware off on vmbr0 (off is the default), since this bridge only carries untagged cluster traffic. Test quorum post-setup with pvecm status. Community threads warn LACP can cause quorum loss if one link drops oddly; if issues pop up, fall back to active-backup (bond-mode active-backup), as sketched below.
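The fallback is a small edit to the bond stanza only; a sketch of the active-backup variant, with bond-primary as an optional extra to prefer one link while it is healthy:
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode active-backup
bond-primary eno1 # optional: prefer eno1 while it is up
If you switch modes, remove the LACP channel on the switch side as well; active-backup expects plain access ports, not an aggregation group.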
Management VLAN on Dedicated Bridge
Spot on with vmbr1. Dedicating eno3 (first copper port) to a static IP in the 10.20.20.0/24 management subnet keeps your Proxmox web UI (HTTPS 8006) isolated. Connect that switch port as access mode in the management VLAN—no tagging needed on the host side, since the switch handles untagged traffic.
This answers question 2: yes, it's correct and a best practice. The Proxmox Network docs endorse VLAN segregation for management. You'll access nodes via https://10.20.20.101:8006 from that VLAN only. Pro tip: put your workstation in that VLAN or use a VLAN-aware router. No gateway on vmbr1? That's fine if management stays local; if you need to reach it from elsewhere, add a static route via the management VLAN's router rather than a second default gateway (the host already has its default route on vmbr0).
Users in forums confirm this setup dodges common pitfalls, like tangled routing when management shares a bridge with VMs.
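If you ever need to reach the web UI from another admin subnet, a post-up route keeps the single default gateway on vmbr0 untouched. A minimal sketch, assuming a hypothetical admin subnet 10.30.30.0/24 and a management-VLAN router at 10.20.20.1 (swap in your real addresses):
auto vmbr1
iface vmbr1 inet static
address 10.20.20.101/24
bridge-ports eno3
bridge-stp off
bridge-fd 0
# hypothetical: remote admin subnet reached via the management VLAN's router
post-up ip route add 10.30.30.0/24 via 10.20.20.1 dev vmbr1
post-down ip route del 10.30.30.0/24 via 10.20.20.1 dev vmbr1 || true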
VLAN-Aware VM Networking Setup
Nailed it for question 3. vmbr2 on eno4 (second copper) with no host IP, switch port in trunk mode allowing your VLANs—this screams VLAN-aware bridging. In Proxmox GUI, edit vmbr2: check “VLAN aware”, set bridge-ports eno4. VMs then tag their own traffic (e.g., VLAN 10 for prod, 20 for dev).
Why does this rock for multi-VLAN VMs? It trunks everything without host involvement, per Virtualization HowTo's guide. The switch carries the tagged VLANs and your core router handles any inter-VLAN routing. VMs get external access seamlessly; the trunk passes their tagged frames out to the rest of the network.
Your /etc/network/interfaces snippet would look like:
auto vmbr2
iface vmbr2 inet manual
bridge-ports eno4
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
Apply with ifreload -a (no reboot required), and you're golden. Forum examples abound with this exact pattern.
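Attaching a guest to a specific VLAN is then a one-liner per VM, in the GUI or via qm; a quick sketch with a hypothetical VMID 100 and VLAN 10:
# give VM 100 a virtio NIC on vmbr2, tagged into VLAN 10
qm set 100 --net0 virtio,bridge=vmbr2,tag=10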
No Host Routing Required
Straight no to question 4. With vmbr0 (cluster fiber), vmbr1 (mgmt copper), and vmbr2 (VM trunk copper), Proxmox treats them as isolated Linux bridges. No inter-bridge routing by default—traffic stays segregated, which is what you want for security.
Don’t enable it. As Unix StackExchange experts point out in this routing discussion, making the host route between networks invites pain: ARP issues, firewall holes, and performance hits. Keep a single default gateway (vmbr0's 10.10.10.1 here) and let the external router handle VLAN-to-VLAN and internet traffic.
If a VM on vmbr2 VLAN 10 needs cluster access? Route at L3 (your switch/router), not here.
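You can verify the host really isn't forwarding between those bridges; on a stock Proxmox install IP forwarding is normally off unless something like a NAT setup turned it on:
sysctl net.ipv4.ip_forward # 0 means the host will not route between bridges
ip route show # expect only the connected subnets plus the single default via 10.10.10.1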
Performance Tuning and Testing
25 Gbit/s on bond0? Overkill in the best way for Corosync (low bandwidth) and migrations (high throughput). Tune datacenter.cfg for limits:
bwlimit: migration=1250000
Per the datacenter.cfg manual these limits are given in KiB/s, so 1250000 KiB/s caps migrations at roughly 10 Gbit/s, leaving headroom on the bond instead of saturating it.
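While you're in datacenter.cfg, you can also pin migration traffic to the cluster subnet explicitly instead of relying on the default; a sketch (the secure type keeps migration traffic encrypted):
migration: secure,network=10.10.10.0/24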
Test: pvecm status for quorum, cat /proc/net/bonding/bond0 for LACP health, migrate a test VM and watch iftop or nload. Forums report 3+ GiB/s easy on dedicated links.
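For a raw throughput check before trusting migrations to the bond, run iperf3 between two nodes (it's a separate package: apt install iperf3). A sketch; note that with layer2+3 hashing all traffic between the same pair of hosts lands on one link, so expect roughly single-link (~25 Gbit/s) figures per node pair:
# on node 101
iperf3 -s
# on node 102: 4 parallel streams toward the cluster address for 30 seconds
iperf3 -c 10.10.10.101 -P 4 -t 30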
Switch side: LACP active, no STP on those ports, MTU 9000 if jumbo.
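If you do go jumbo, mirror MTU 9000 on the bond members, the bond, and vmbr0, then prove it end to end with a don't-fragment ping (8972 bytes of ICMP payload plus headers comes out to exactly 9000). A sketch:
# add to the eno1, eno2, bond0 and vmbr0 stanzas in /etc/network/interfaces
mtu 9000
# from node 101, verify jumbo frames actually pass to node 102
ping -M do -s 8972 10.10.10.102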
Full Example Configuration
Here’s your polished /etc/network/interfaces for all nodes (adjust IPs):
auto lo
iface lo inet loopback
auto eno1 eno2 eno3 eno4
iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad # Or active-backup for Corosync safety
bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 10.10.10.101/24
gateway 10.10.10.1 # Cluster gateway
bridge-ports bond0
bridge-stp off
bridge-fd 0
auto vmbr1
iface vmbr1 inet static
address 10.20.20.101/24
bridge-ports eno3
bridge-stp off
bridge-fd 0
auto vmbr2
iface vmbr2 inet manual
bridge-ports eno4
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
Run ifreload -a, then create the cluster on node 101 and join nodes 102 and 103, as sketched below.
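A sketch of the join and health checks, assuming the cluster was created on 10.10.10.101 with link 0 on the cluster network:
# on node 102 (repeat on 103 with its own --link0 address)
pvecm add 10.10.10.101 --link0 10.10.10.102
# once all three nodes are in: quorum and per-link Corosync health
pvecm status
corosync-cfgtool -s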
Sources
- Network Configuration - Proxmox VE
- Cluster Manager - Proxmox VE
- Manual: datacenter.cfg - Proxmox VE
- LACP Bonding Questions | Proxmox Forum
- Cluster LACP | Proxmox Forum
- Proxmox VLAN Configuration Guide
- Two Gateways on Bridges | Unix StackExchange
Conclusion
Your Proxmox setup—LACP bonding on fiber for cluster traffic, dedicated management VLAN bridge, and VLAN-aware trunk for VMs—is production-ready with minor tweaks like bond mode testing and bwlimit tuning. It delivers blazing migrations, ironclad isolation, and flexible VM networking without host routing hassles. Roll it out, monitor with pvecm and corosync-cfgtool, and scale confidently.