
Fix OpenLB KubeSphere LoadBalancer Pending No Speaker Layer2

Resolve OpenLB KubeSphere LoadBalancer pending status, no EXTERNAL-IP assignment, 'no registered speaker layer2' EIP error, StrictARP config, Layer-2 mode enablement, and speaker registration issues in bare-metal clusters.


OpenLB LoadBalancer service stays in pending status and no EXTERNAL-IP is assigned in KubeSphere. I installed OpenLB, set strictARP to true, and created an IP pool, but the service remains pending.

Speaker logs show repeated errors:

  • E0126 13:41:34.929104 1 controller.go:329] "Reconciler error" err="no registered speaker:[layer2] eip:[eip-pool]" controller="EIPController" …
  • The same error repeats for eip-pool on every reconcile.
  • Warnings about the deprecated v1 Endpoints API.

How to troubleshoot and resolve ‘no registered speaker layer2’ error for EIP in OpenLB?

If your OpenLB KubeSphere LoadBalancer service is stuck in pending status with no EXTERNAL-IP assigned, it's often due to Layer-2 speakers failing to register, triggering errors like "no registered speaker:[layer2] eip:[eip-pool]" in the EIPController logs. You've already set strictARP to true and created an IP pool, but the speakers aren't announcing themselves to the controller over gRPC, which is common in bare-metal setups. Fix it by enabling Layer-2 mode explicitly on the openelb-speaker DaemonSet, verifying the EIP configuration (correct protocol and interface), and restarting kube-proxy so its ARP handling picks up strictARP.


Understanding OpenLB LoadBalancer Pending in KubeSphere

Picture this: You’ve installed OpenLB via KubeSphere’s App Store, created a LoadBalancer service, and… nothing. It hangs in “pending” with no EXTERNAL-IP, even after tweaking StrictARP. Frustrating, right? This hits hard in Layer-2 mode on edge or bare-metal clusters where OpenLB (now OpenELB) handles IP assignment without cloud providers.

The core issue? Speakers—those DaemonSet pods on every node—aren’t registering with the controller. Without them, the EIP (Elastic IP) can’t bind IPs from your pool to services. Logs scream “Reconciler error” for eip-pool because no Layer-2 speakers check in via gRPC on port 50051. Deprecated v1 Endpoints warnings are a red herring; they’re just noise from Kubernetes evolving.

OpenLB shines here for KubeSphere users needing LoadBalancer support sans MetalLB complexity. But Layer-2 demands same-subnet nodes and ARP mastery. Let’s diagnose.


Root Causes of No Registered Speaker Layer2 Error

That “no registered speaker:[layer2] eip:[eip-pool]” error? It’s the EIPController in openlb-controller complaining—no speakers announced availability for your EIP. Dig into OpenELB’s Layer-2 concepts: Speakers broadcast ARP replies for service IPs, but only if Layer-2 is enabled and they’re on the right interface.

Common triggers:

  • Layer-2 disabled by default: Fresh installs default to BGP; speakers ignore Layer-2 until flagged.
  • StrictARP mismatch: Kube-proxy needs strictARP: true in ConfigMap, but speakers must align.
  • EIP misconfig: Protocol must be layer2; missing interface: eth0 (or your NIC) blocks binding.
  • Network isolation: Firewalls block the speakers' gRPC traffic (TCP 50051); nodes not in the same L2 subnet.

Speaker logs repeat this because the controller reconciles endlessly. Check kubectl logs -n openelb-system ds/openelb-speaker --tail=50 and look for "layer2 mode disabled" or registration failures.

Deprecated v1 Endpoints? Kubernetes 1.22+ warns on them; OpenLB uses them internally but functions fine. Ignore unless services flake.


Verify OpenLB Installation and Pods in KubeSphere

First things first: Is OpenLB even breathing? Head to KubeSphere dashboard > App Store > Installed and confirm the OpenLB (OpenELB) app is deployed and current.

Run these:

kubectl get pods -n openelb-system

Expect openelb-controller-xxx Running and an openelb-speaker-xxx pod on every node (kubectl get ds openelb-speaker -n openelb-system -o yaml | grep nodeSelector should be empty).

kubectl get eips.network.kubesphere.io

No EIPs? That's your smoking gun. Pods up but speakers not ready? kubectl describe pod openelb-speaker-xxx -n openelb-system for crashes.

KubeSphere install via official guide auto-sets namespaces. If tainted, edit DaemonSet tolerations.
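If node taints are keeping speaker pods off some nodes, a toleration along these lines on the DaemonSet helps (a sketch; match the key to your cluster's actual taints):

```yaml
# spec.template.spec of the openelb-speaker DaemonSet (illustrative fragment)
tolerations:
- key: node-role.kubernetes.io/control-plane   # common control-plane taint
  operator: Exists
  effect: NoSchedule
```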

Ever scaled nodes post-install? Speakers redeploy automatically, but verify:

kubectl get nodes -o wide   # compare the INTERNAL-IP column against your L2 subnet

All nodes same L2 subnet? Good. Mismatch = no ARP magic.


Enable Layer-2 Mode and StrictARP in OpenLB Speaker

Here’s the fix 80% of users miss: Layer-2 isn’t on by default. Edit the speaker DaemonSet.

kubectl edit ds openelb-speaker -n openelb-system

Add to spec.template.spec.containers[0].args:

```yaml
args:
- --enable-layer2=true
```

The NIC used for ARP announcements is set per Eip via spec.interface; check yours with ip link.

Save. Pods roll out. Why? Speakers now listen for Layer-2 EIPs.
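For orientation, the edited section of the speaker DaemonSet ends up looking roughly like this (abridged; only the --enable-layer2 flag comes from the step above, the surrounding structure is illustrative context):

```yaml
# openelb-speaker DaemonSet, abridged
spec:
  template:
    spec:
      containers:
      - name: openelb-speaker
        args:
        - --enable-layer2=true
```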

Next, StrictARP—you set it, but confirm kube-proxy:

kubectl edit cm kube-proxy -n kube-system

In the config.conf data, under the ipvs section (strictARP only takes effect when kube-proxy runs in IPVS mode):

```yaml
ipvs:
  strictARP: true
```

Restart:

kubectl rollout restart ds kube-proxy -n kube-system

Per OpenELB Layer-2 usage, strictARP stops every node from answering ARP for service VIPs bound to kube-ipvs0, so only the elected speaker replies. Test ARP post-restart: ip neigh show on nodes.

Logs should shift: kubectl logs ds/openelb-speaker -n openelb-system --tail=20 | grep registered.


Create and Configure EIP for Layer2 Protocol

No speakers without an EIP pulling them in. Create one for your pool.

Sample eip-layer2.yaml:

```yaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool            # Eip is cluster-scoped, so no namespace
spec:
  address: 192.168.1.100-192.168.1.110  # your reserved range, same subnet as the nodes
  protocol: layer2
  interface: eth0           # critical: the node's outbound NIC
```

Apply: kubectl apply -f eip-layer2.yaml

Watch: kubectl get eip eip-pool -w

Status evolves to something like speakers: node1,node2 once registration succeeds. Per OpenELB issue examples, a missing interface field blocks binding: nodes can't answer ARP for the pool.

Controller logs clear up: No more “no registered speaker layer2”. Pulled IPs? Ensure pool from your DHCP/reserved range, no overlaps.
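For reference, the address field is flexible: besides a range, a single IP or a CIDR block appears in OpenELB's Eip examples. A minimal single-address pool might look like this (name and addresses are placeholders):

```yaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-single            # hypothetical name
spec:
  address: 192.168.1.120      # one address; a CIDR such as 192.168.1.112/28 also works
  protocol: layer2
  interface: eth0
```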


Annotate Services and Troubleshoot EXTERNAL-IP Pending

Service ready? Annotate for Layer-2:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-pool   # your Eip name
spec:
  type: LoadBalancer
  selector:
    app: my-app          # match your backend pods
  ports:
  - port: 80
    targetPort: 8080
```

kubectl apply -f svc.yaml

Run kubectl get svc my-lb-svc -w and the EXTERNAL-IP should assign within ~30s.

Still pending? kubectl describe svc my-lb-svc for EIP bind fails. Check speaker status in EIP: kubectl get eip eip-pool -o jsonpath='{.status.speakers}'.

Service still stuck in pending? Tail the controller: kubectl logs deploy/openelb-controller -n openelb-system | grep eip-pool.


Common Pitfalls: Network Prereqs, Logs, and Deprecated Endpoints

L2 needs basics:

  • Same broadcast domain: ping between nodes.
  • ARP/NDP open: No firewalls on eth0; sysctl net.ipv4.ip_nonlocal_bind=1.
  • gRPC port: TCP 50051 allowed node-to-node.

| Pitfall | Symptom | Fix |
| --- | --- | --- |
| Wrong interface | Speakers list empty | Check ip route for the default NIC; update the Eip interface |
| nodeSelector set | Speakers missing on some nodes | Edit the DaemonSet: nodeSelector: {} |
| externalTrafficPolicy: Local | Uneven assignment | Use Cluster, or run pods on every node (issue #416) |
| Deprecated Endpoints | Warnings only | Harmless; upgrade Kubernetes or ignore |

Speaker logs: grep "layer2\|registered". Controller stuck in reconcile errors? Restart it: kubectl rollout restart deploy/openelb-controller -n openelb-system.

Community posts note that on some releases the Service annotation protocol.openelb.kubesphere.io/v1alpha1: layer2 is still required.


Verification Steps and BGP Alternative

All set? Verify:

  1. kubectl get svc -A—EXTERNAL-IP from pool.
  2. curl <assigned-ip>—hits backend.
  3. Node: ip neigh show <ip>—speaker MAC replies.
  4. kubectl get eip -o yaml—speakers populated.

Scale test: Deploy nginx, load test.

Production? Layer-2 announces from one node at a time (leader election), so failover isn't instant. For HA, switch to BGP mode: set the Eip protocol: bgp and peer the speakers with your router (OpenELB configures BGP through its BgpConf and BgpPeer resources). The Helm chart in the OpenELB repo simplifies reinstalling with the mode you want.
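As a rough sketch of the BGP route, assuming OpenELB's BgpConf/BgpPeer resources with placeholder ASNs and addresses (verify field names and values against your OpenELB version and router before applying):

```yaml
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpConf
metadata:
  name: default
spec:
  as: 50001               # cluster-side ASN (placeholder)
  routerId: 192.168.1.5   # cluster router ID (placeholder)
  port: 17900             # non-privileged BGP listen port used in OpenELB examples
---
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpPeer
metadata:
  name: bgppeer-sample
spec:
  conf:
    peerAs: 50000                 # router ASN (placeholder)
    neighborAddress: 192.168.1.1  # your router (placeholder)
```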

KubeSphere UI shows EIPs under Networking. Done.


Sources

  1. Use OpenELB in Layer-2 Mode — Official troubleshooting for speaker registration and StrictARP config: https://openelb.io/docs/getting-started/usage/use-openelb-in-layer-2-mode/
  2. Layer-2 Mode Concepts — Explains speaker gRPC registration and L2 network requirements: https://openelb.io/docs/concepts/layer-2-mode/
  3. Install OpenELB on KubeSphere — Guide for App Store installation and pod verification: https://openelb.io/docs/getting-started/installation/install-openelb-on-kubesphere
  4. OpenELB GitHub Repository — Source code for EIPController and Helm values like layer2=true: https://github.com/openelb/openelb
  5. OpenELB EIP Example Issue — Sample Layer-2 EIP YAML and status checks: https://github.com/openelb/openelb/issues/220
  6. OpenELB Issues Tracker — Community reports on externalTrafficPolicy and node detection: https://github.com/openelb/openelb/issues
  7. OpenELB Documentation Hub — Overview of concepts, FAQs, and KubeSphere integration: https://openelb.io/docs/

Conclusion

Resolving a pending OpenLB (OpenELB) LoadBalancer in KubeSphere boils down to Layer-2 enablement, a properly configured Eip (protocol and interface), strictARP sync, and L2 network checks: speakers register, IPs assign, services come up. The most common traps are the default BGP mode and NIC mismatches. Test thoroughly, monitor logs, and move to BGP for HA. Your cluster's ready for real traffic now. Questions? Dive into those docs or ping OpenELB Slack.
