In this post I’m sharing my approach for exposing Kubernetes services and workloads to my local network(s). It’s an addition to my simple Kubernetes setup, which is documented here.
Kubernetes doesn’t natively support network load balancers (Services of type LoadBalancer) for bare-metal clusters. The built-in load balancer solutions are designed to work with cloud platforms like GCP, AWS, and Azure, and rely on specific integrations with those IaaS providers. For clusters not running on these platforms, LoadBalancers remain in a perpetual “pending” state.
For bare-metal environments, Kubernetes operators are limited to two alternatives — NodePort and externalIPs services. Both come with their own set of limitations, however, making them less than ideal for production workloads. This leaves bare-metal clusters at a disadvantage within the Kubernetes ecosystem.
Enter MetalLB — a project designed to level the playing field by providing a network load balancer for bare-metal clusters. MetalLB integrates with standard network infrastructure, enabling external services to function seamlessly on bare-metal setups, just like they do on cloud-based environments.
Prerequisites
Check which mode kube-proxy is using (iptables or IPVS). The default is iptables, but IPVS is far more suitable for larger deployments, although it requires some additional configuration.
So, let’s check that out:
jmaas@k8s$ kubectl describe configmap -n kube-system kube-proxy | grep ^mode:
mode: ""
Apparently I’m using the default iptables mode, which is perfectly fine for my environment.
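For completeness: if you do run kube-proxy in IPVS mode, the MetalLB docs note that strict ARP must be enabled, otherwise the announcements won’t work. A minimal sketch of that extra configuration, editing the kube-proxy ConfigMap:
jmaas@k8s$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # MetalLB requires strict ARP when kube-proxy runs in IPVS mode
  strictARP: true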
Installation
MetalLB can be installed through various means (manifest, Kustomize, Operator, Helm); I opted for the Helm approach.
First create a dedicated namespace for MetalLB:
jmaas@k8s$ kubectl create namespace metallb-system
namespace/metallb-system created
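The Helm chart lives in MetalLB’s own chart repository; if you haven’t added it yet, do that first (the repo alias metallb is my assumption, use whatever you prefer):
jmaas@k8s$ helm repo add metallb https://metallb.github.io/metallb
jmaas@k8s$ helm repo update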
The next step is to install MetalLB in its own namespace:
jmaas@k8s$ helm install metallb metallb/metallb -n metallb-system
NAME: metallb
LAST DEPLOYED: Sun Jan 26 11:56:49 2025
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.
Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
Installation is almost always easy :)
Configuration
Before we can actually use the MetalLB-powered LoadBalancer, we first need to configure a couple of things.
IP address pool
Obviously, exposing services on a network requires IP addresses ;). For my home environment a simple range of addresses will be sufficient, but you can use entire subnets too (both IPv4 and IPv6).
Let’s define the IPAddressPool object:
jmaas@k8s$ cat mlb-ip-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: sx-lan-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.xx.yy.64-172.xx.yy.128
Apply the configuration:
jmaas@k8s$ kubectl apply -f mlb-ip-pool.yaml
ipaddresspool.metallb.io/sx-lan-pool created
Let’s verify if the configuration was applied properly:
jmaas@k8s$ kubectl get IPAddressPool -n metallb-system
NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES
sx-lan-pool true false ["172.xx.yy.64-172.xx.yy.128"]
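As mentioned, whole subnets work as well. A hypothetical pool using CIDR notation (pool name and addresses made up for illustration) could look like this:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: sx-subnet-pool   # hypothetical pool name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.0/24            # an entire IPv4 subnet
  - fc00:f853:0ccd:e799::/124  # an IPv6 range, if your network routes it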
L2 advertisement
The IP addresses from the pool will only work if MetalLB answers ARP (IPv4) and NDP (IPv6) requests for them on the local network. To enable this behavior, we configure an L2Advertisement object.
jmaas@k8s$ cat mlb-l2-config.yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: sx-l2-config
  namespace: metallb-system
spec:
  ipAddressPools:
  - sx-lan-pool
Apply the configuration:
jmaas@k8s$ kubectl apply -f mlb-l2-config.yaml
l2advertisement.metallb.io/sx-l2-config created
Check the configuration:
jmaas@k8s$ kubectl get L2Advertisement -n metallb-system
NAME IPADDRESSPOOLS IPADDRESSPOOL SELECTORS INTERFACES
sx-l2-config ["sx-lan-pool"]
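In recent MetalLB versions the L2Advertisement can be scoped further, for instance to a specific NIC. A sketch of that (eth0 is a placeholder for your actual interface name):
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: sx-l2-config
  namespace: metallb-system
spec:
  ipAddressPools:
  - sx-lan-pool
  interfaces:
  - eth0   # placeholder: only announce on this interface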
Enabling the Speaker
Since this is a single-node cluster, the (control-plane) node is labeled with node.kubernetes.io/exclude-from-external-load-balancers by default, which prevents the Speaker component of MetalLB from announcing the MAC address for the service IPs to the local network.
Let’s verify whether this node carries the offending label:
jmaas@k8s$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s Ready control-plane 70d v1.31.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,openebs.io/nodeid=k8s,openebs.io/nodename=k8s
Remove the label and verify the operation:
jmaas@k8s$ kubectl label nodes k8s node.kubernetes.io/exclude-from-external-load-balancers-
node/k8s unlabeled
jmaas@k8s$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s Ready control-plane 70d v1.31.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,openebs.io/nodeid=k8s,openebs.io/nodename=k8s
Exposing the Kubernetes service
Before dealing with any workloads you probably want to expose Kubernetes itself to the local network. One way to do this is by editing the service manifest:
jmaas@k8s$ kubectl edit svc kubernetes
Make the following changes:
spec:
  allocateLoadBalancerNodePorts: false
  type: LoadBalancer
  loadBalancerIP: 172.31.xx.yy
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
You should now be able to connect to the Kubernetes service from another machine in the network (e.g. telnet 172.31.xx.yy 443).
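From the cluster side you can also check that the EXTERNAL-IP column of the kubernetes service now shows the configured address:
jmaas@k8s$ kubectl get svc kubernetes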
This concludes the MetalLB installation!
Using the LoadBalancer
You should now be able to use the LoadBalancer type in the service definition of any Kubernetes workload. Something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: 172.xx.yy.zz # use this if you want a predictable IP address
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
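A side note: spec.loadBalancerIP is deprecated upstream in Kubernetes, and MetalLB documents an annotation as the preferred way to request a specific address. A sketch of the same service using that approach (omit both and MetalLB simply auto-assigns an address from the pool):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # request a specific address via the MetalLB annotation instead of spec.loadBalancerIP
    metallb.universe.tf/loadBalancerIPs: 172.xx.yy.zz
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080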
That’s it for today!