Another quick note on my very simple (single-node) Kubernetes setup in my home lab.

Hardware & OS

I’m using Intel NUCs for my home lab and decided to run Kubernetes in a VM initially. The NUC runs the latest Ubuntu LTS (24.04) and I’m using the KVM hypervisor.

Virtual Machine

The designated VM was created with the following specs:

  • CPU: 8 cores
  • RAM: 32GB
  • Disk: 1TB (virtio)
  • Network: bridge (virtio)
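
For reference, a VM with roughly these specs could be created on the KVM host with a virt-install one-liner along these lines. This is just a sketch: the host prompt, bridge name br0, ISO path and os-variant value are assumptions for my environment and will need adjusting.

root@nuc# virt-install --name k8s --vcpus 8 --memory 32768 --disk size=1000,bus=virtio --network bridge=br0,model=virtio --os-variant ubuntu24.04 --cdrom /path/to/ubuntu-24.04-live-server-amd64.iso  # bridge, ISO and os-variant are examples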

Operating System

I used to be a Red Hat guy for all things server, but after the CentOS debacle I basically gave up on them. So for this project I (again) chose the latest Ubuntu LTS (24.04) for running Kubernetes.

Storage setup

In the Ubuntu installer I defined one partition to boot from (/boot) and a volume group (VG) that uses the remainder of the disk. After that, only a root filesystem was defined as a logical volume (LV) named lv_root. Kubernetes doesn’t like swap, so make sure not to create a swap partition/LV.

After the OS installation I manually created additional storage volumes (using LVM) for holding Kubernetes-related data: a 50GB volume for container images and a 500GB volume for the actual Kubernetes workloads. Using LVM, these volumes can easily be extended later (see the example at the end of this section).

Creating the logical volumes:

root@k8s# lvcreate -L 50G -n lv_containerd /dev/ubuntu-vg
root@k8s# lvcreate -L 500G -n lv_k8s /dev/ubuntu-vg

Creating XFS filesystems:

root@k8s# mkfs.xfs /dev/ubuntu-vg/lv_containerd
root@k8s# mkfs.xfs /dev/ubuntu-vg/lv_k8s

Create the mountpoints:

root@k8s# mkdir -p /data/{containerd,k8s}

Now we need to add the new filesystems to /etc/fstab so they get mounted on boot. I prefer to use UUIDs for the storage volumes as they are the most stable identifiers. You can figure out the device names with ls -l /dev/disk/by-id/; in my case the following entries were added to the fstab:

/dev/disk/by-id/dm-uuid-LVM-EnSGO770OgzWcKrAKRhAi4IlQbGCro52IbNe10FltD3GvEFRLhPNWpCCsVnwwBHr /data/containerd xfs defaults 0 1
/dev/disk/by-id/dm-uuid-LVM-EnSGO770OgzWcKrAKRhAi4IlQbGCro524cukFKzectzh34NGCZwiNIZVQRayCdlu /data/k8s xfs defaults 0 1

Next, apply the filesystem configuration by executing the following commands:

root@k8s# systemctl daemon-reload
root@k8s# mount -a

Verify using df -h that the storage layout resembles the table below.

VG          LV             Mountpoint        Filesystem  Size
-           -              /boot             ext4        2GB
ubuntu-vg   lv_root        /                 xfs         16GB
ubuntu-vg   lv_containerd  /data/containerd  xfs         50GB
ubuntu-vg   lv_k8s         /data/k8s         xfs         500GB
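
Should one of these volumes run out of space later, growing it is a two-step operation. This is a sketch, assuming the volume group still has free extents; the +100G amount is just an example, and xfs_growfs operates on the mounted filesystem:

root@k8s# lvextend -L +100G /dev/ubuntu-vg/lv_k8s
root@k8s# xfs_growfs /data/k8s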

The next step is to prepare the OS for running Kubernetes and its associated components.

Containerd

Kubernetes requires a container runtime in order to function; my choice is containerd from the standard Ubuntu repository.

Install:

root@k8s# apt-get install containerd

I use a mostly default configuration with a single tweak: enabling the systemd cgroup driver.

root@k8s# mkdir -p /etc/containerd
root@k8s# containerd config default > /etc/containerd/config.toml
root@k8s# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

Move the data directories created by the package to our dedicated logical volume:

root@k8s# mv /var/lib/containerd/* /data/containerd/
root@k8s# rm -rf /var/lib/containerd
root@k8s# ln -s /data/containerd /var/lib/containerd
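
Restart containerd so it picks up the systemd cgroup setting and the relocated data directory:

root@k8s# systemctl restart containerd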

Containerd requires a couple of kernel modules, which need to be loaded at boot:

root@k8s# cat /etc/modules-load.d/containerd.conf 
overlay
br_netfilter
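
The file above is only read at boot time; to load the modules right away without rebooting you can run:

root@k8s# modprobe overlay
root@k8s# modprobe br_netfilter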

Networking

Some configuration changes to the network stack are required for Kubernetes networking.

root@k8s# cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Activate the above configuration:

root@k8s# sysctl --system
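
A quick sanity check that the settings are active:

root@k8s# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1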

Installing Kubernetes

There are several options available for installing a Kubernetes cluster. I’ve tried a few of them, including microk8s and minikube, but I have settled on kubeadm for now.

Using kubeadm, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use kubeadm to set up a cluster that will pass the Kubernetes Conformance tests.

Packages

The Kubernetes project provides packages for what I need, so let’s add their key and repository:

root@k8s# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
root@k8s# cat /etc/apt/sources.list.d/kubernetes.list                    
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /

Now we’re ready to install the three packages we need from the Kubernetes repository:

root@k8s# apt-get update
root@k8s# apt-get install kubeadm kubelet kubectl
root@k8s# apt-mark hold kubeadm kubelet kubectl

We used apt-mark to pin the packages to the current version so that we don’t upgrade our Kubernetes environment when deploying regular OS patches.
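
Whenever you do want to upgrade Kubernetes deliberately, the hold can be lifted (and re-applied afterwards); note that upgrading a kubeadm cluster involves more than just the packages, so follow the kubeadm upgrade documentation:

root@k8s# apt-mark showhold
root@k8s# apt-mark unhold kubeadm kubelet kubectl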

Again, we need to move the data directory to the dedicated logical volume:

root@k8s# mv /var/lib/kubelet /data/k8s
root@k8s# ln -s /data/k8s/kubelet /var/lib/kubelet

Configuration

This command initializes a Kubernetes control plane node.

root@k8s# kubeadm init
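
While the control plane is coming up you can watch its pods directly using the admin kubeconfig (regular-user access is configured in the next step):

root@k8s# kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system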

Next we need to set up access to the cluster for a regular user, in this case the jmaas user.

root@k8s# mkdir -p /home/jmaas/.kube
root@k8s# cp -i /etc/kubernetes/admin.conf /home/jmaas/.kube/config
root@k8s# chown -R jmaas:jmaas /home/jmaas/.kube

Drop privileges and see if kubectl is working properly:

jmaas@k8s:~$ kubectl get nodes
NAME   STATUS   ROLES           AGE   VERSION
k8s    Ready    control-plane   29m   v1.31.2

Finally, since we’re setting up a single-node cluster, we need to remove the control-plane taint so the scheduler will place regular workloads on this node:

jmaas@k8s$ kubectl taint nodes k8s node-role.kubernetes.io/control-plane:NoSchedule-
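
Confirm that the taint was removed:

jmaas@k8s$ kubectl describe node k8s | grep Taints
Taints:             <none>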

Networking

The network model is implemented by the container runtime on each node. The most common container runtimes use Container Network Interface (CNI) plugins to manage their network and security capabilities. Many different CNI plugins exist from many different vendors. After toying around with a couple of plugins I’ve settled on Cilium for now.

Cilium is an open source, cloud native solution for providing, securing, and observing network connectivity between workloads, fueled by the revolutionary Kernel technology eBPF.

Download software

It’s pretty simple: just follow the installation instructions in the Cilium documentation.

For completeness’ sake, these are the commands that I ran:

jmaas@k8s$ CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
jmaas@k8s$ CLI_ARCH=amd64
jmaas@k8s$ if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
jmaas@k8s$ curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
jmaas@k8s$ sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
jmaas@k8s$ sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
jmaas@k8s$ rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Cilium is provided as a statically linked binary — an old-school approach that remains highly convenient ;)

Installation

Now that the Cilium CLI has been dropped onto the filesystem in /usr/local/bin, we can start the actual installation into Kubernetes.

jmaas@k8s$ cilium install

Now check if the installation was successful:

jmaas@k8s$ cilium status

Check the output to see if Cilium and Operator are in the OK state.

Cilium provides a connectivity test; let’s run it to get a better sense of what’s available (this takes a while…).

jmaas@k8s$ cilium connectivity test
<snip>
[cilium-test-1] All 57 tests (233 actions) successful, 48 tests skipped, 1 scenarios skipped.

Helm

Helm is a package manager for Kubernetes. It simplifies the deployment, management, and sharing of Kubernetes applications by using charts.

Helm is the best way to find, share, and use software built for Kubernetes.

You’ll find it’s commonly used, so let’s install it:

jmaas@k8s$ curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
jmaas@k8s$ sudo apt-get install apt-transport-https --yes
jmaas@k8s$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
jmaas@k8s$ sudo apt-get update
jmaas@k8s$ sudo apt-get install helm
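
A quick check that the Helm client was installed correctly:

jmaas@k8s$ helm version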

Storage

The last missing piece - which is also the hardest part in my mind - is persistent storage! I’ve settled on OpenEBS as it provides some cluster storage options next to a good option for single node setups. Hopefully the OpenEBS ecosystem will provide me a nice upgrade path whenever I move to a multi-node cluster setup.

Let’s add the OpenEBS Helm repository:

jmaas@k8s$ helm repo add openebs https://openebs.github.io/openebs
"openebs" has been added to your repositories
jmaas@k8s$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "openebs" chart repository
Update Complete. ⎈Happy Helming!⎈

Now we can install OpenEBS; please note that I disable the replicated Mayastor engine:

jmaas@k8s$ helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace

You can monitor the progress or check the status of the installation:

jmaas@k8s$ kubectl get pods -n openebs

The last step is to set the default storageclass to openebs-hostpath:

jmaas@k8s$ kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
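
Verify that openebs-hostpath is now marked as (default):

jmaas@k8s$ kubectl get storageclass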

That’s all for now!