Hello Keepalived
My Kubernetes cluster has been pretty flaky, with lots of errors from the API server and Metrics Server regularly going unavailable. I was also frequently unable to get logs from my pods. Turns out I should've read the k0s documentation more carefully.
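The fix, and the subject of this post, is putting a stable virtual IP in front of the control plane with Keepalived. Here's a minimal sketch of `/etc/keepalived/keepalived.conf`; the interface name, router ID, and VIP are all assumptions you'd adjust for your network:

```
vrrp_instance CONTROL_PLANE {
  state BACKUP
  interface eth0            # assumption: NIC facing the cluster network
  virtual_router_id 51      # must match on every node sharing this VIP
  priority 100              # highest priority wins the election
  advert_int 1
  virtual_ipaddress {
    192.168.1.50/24         # assumption: the VIP clients use for the API server
  }
}
```

Run the same config (with differing priorities) on each controller, and point kubeconfigs at the VIP instead of any single node.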
Well, it's been a few days, and now we want to be able to check for updates on our host systems and patch them. After all, we can't deploy workloads until we've reached some baseline of operational maturity. While there are cluster-native ways to orchestrate patching, I'm happy to let my legacy tooling (Ansible) handle it as a slightly simpler workflow.
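A patching playbook for Ubuntu hosts can be sketched like this; the playbook name and the `all` host group are assumptions, and you'd want to serialize this across nodes in a real cluster:

```yaml
# patch.yml -- hypothetical playbook; host group and file names are assumptions
- hosts: all
  become: true
  tasks:
    - name: Update the apt cache and apply all available upgrades
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check whether Ubuntu is asking for a reboot
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot only when the upgrade requires it
      ansible.builtin.reboot:
      when: reboot_required.stat.exists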
Perfect: we've got storage sorted out, and now we can run real applications. Let's deploy monitoring to our cluster so we can see how it's performing and check that everything is working correctly. We'll be deploying the Prometheus Stack, as it's the most popular and best-supported monitoring solution for Kubernetes. Any guesses how this starts?
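Once the kube-prometheus-stack chart is installed, it's worth pointing Prometheus at persistent storage so metrics survive pod restarts. A sketch of a Helm values fragment for that; the storage class name and sizes are assumptions for this cluster:

```yaml
# values fragment for the kube-prometheus-stack chart
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: nfs-client   # assumption: your cluster's storage class
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 20Gi              # assumption: adjust for your retention
```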
Ok, now we have a problem: we haven't told Kubernetes where it can find storage for our apps. In this example we're going to use NFS, but there are loads of other options, like OpenEBS and Gluster. I'm going to assume you've already configured your NFS server of choice and know the appropriate mount options.
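The simplest way to hand Kubernetes an NFS export is a static PersistentVolume. A minimal sketch; the server address, export path, and mount options are assumptions for your environment:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv        # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany           # NFS supports many writers
  nfs:
    server: 192.168.1.10      # assumption: your NFS server's address
    path: /export/k8s         # assumption: your export path
  mountOptions:
    - nfsvers=4.1             # assumption: match what your server supports
```

A PersistentVolumeClaim that requests `ReadWriteMany` and a matching size will then bind to this volume; a dynamic provisioner is the next step once static PVs get tedious.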
Ok, we need an easy way to connect services running in our cluster to the external network. The method I chose is MetalLB, because it's a very simple and flexible way to expose services without relying on an external load balancer. MetalLB plays well with kube-proxy and the default CNI (Kube-Router) that ship with k0s, so we don't need to do anything special to prepare for the installation. I already have Helm installed on my administration system, so deploying MetalLB is really simple. First we need to add the repository to Helm.
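After the chart is installed, MetalLB needs to be told which addresses it may hand out to `LoadBalancer` services. A minimal sketch using its CRDs; the pool range is an assumption for your LAN:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # assumption: unused range on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement              # announce the pool via ARP on the local segment
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```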
Ok, we've laid down a base operating system, and now we need to get Kubernetes running on top. I selected the k0s distribution because it balances being a mostly standard Kubernetes stack with some really easy tooling for deploying and maintaining the cluster.
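That tooling is `k0sctl`, which drives the whole install over SSH from a single config file. A minimal sketch of a `k0sctl.yaml`; the cluster name, addresses, user, and key path are assumptions:

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: homelab               # hypothetical cluster name
spec:
  hosts:
    - role: controller
      ssh:
        address: 10.0.0.1     # assumption: controller node IP
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 10.0.0.2     # assumption: worker node IP
        user: root
        keyPath: ~/.ssh/id_rsa
```

With that in place, `k0sctl apply --config k0sctl.yaml` bootstraps (or later upgrades) the whole cluster.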
Well, first things first: if we're going to commission some hardware, it needs an operating system. I've standardized on Ubuntu 20.04 for everything, so that's what I'm running here. These nodes were built offline from the Server Live ISO, to ensure a minimal install, and then connected to the network.
Staring down the end of life of my existing vSphere cluster, I decided I wanted to refresh onto something a little lighter-weight and more modern. My need to run traditional VMs has all but evaporated; a good chunk of my fleet was VMs that existed simply to host a container or set of containers.
Copyright © 2025 Alex Conner