Host your GKE, EKS, or AKS at home ☁️
Introduction
This blog will show you how to set up Kubernetes on your home network and expose your workloads with MetalLB. This tutorial may be helpful to individuals working on side projects, or even to startups looking to expose applications running On-Premises.
Prerequisites
- Two home servers/computers with a Unix-based OS (recommended)
- An internet connection
Kubernetes with K3s
I used K3s, a.k.a. Lightweight Kubernetes, to set up my home Kubernetes cluster because of the simplicity of its installation.
Let us say that our master node has the following IP address: 192.168.1.15.
Install master node
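K3s ships with an official installation script. A minimal sketch, assuming you want to disable the bundled servicelb (Klipper) and Traefik at install time:

```bash
# Install K3s on the master node without Klipper (servicelb) and Traefik.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable servicelb --disable traefik" sh -
```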
The command above installs K3s on your master node without K3s's dedicated load balancer service (Klipper) and without Traefik.
Klipper is disabled because it is not compatible with MetalLB (ref: https://metallb.universe.tf/configuration/k3s/).
Once K3s is installed, you can check if your cluster is running by typing the following command:
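A minimal check, assuming you use the kubectl bundled with K3s (its kubeconfig is root-owned by default, hence the sudo):

```bash
# List Services in the default namespace.
sudo kubectl get services
```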
This should return the built-in kubernetes ClusterIP Service:
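Something along these lines (the ClusterIP and age will differ on your cluster):

```
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   2m
```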
Bare-metal Load Balancer — MetalLB
The issue with Kubernetes On-Premises
While a managed Kubernetes cluster on a public cloud lets you expose applications in a matter of seconds, it can be quite tough to do so On-Premises.
By default, you will not be able to expose your application using a Service of type LoadBalancer: the Service will stay stuck on <pending> indefinitely.
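For example, creating a LoadBalancer Service on a bare cluster typically leaves the external IP like this (the ClusterIP and node port shown here are illustrative):

```
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   LoadBalancer   10.43.15.200   <pending>     80:31234/TCP   5m
```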
Otherwise, you are free to use a Service of type NodePort or externalIP-based approaches, but those are not recommended for production environments.
MetalLB
That is where MetalLB comes into the picture. MetalLB allows you to do the same as you would with a Kubernetes cluster on a public cloud.
Basically, you provide MetalLB with a range of IPv4 addresses to hand out to your workloads; when you create a Service of type LoadBalancer to expose a workload, MetalLB picks an external IPv4 address from that range and assigns it to your application.
Disclaimer: This article is not meant to describe MetalLB's inner workings.
Set up MetalLB
Type the following command to install the latest version of MetalLB from the official GitHub repository:
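A sketch using the plain-manifest installation from the MetalLB documentation (pin the version to whatever is current when you run this; v0.13.12 is only an example):

```bash
# Install MetalLB by applying its manifest; this creates the
# metallb-system Namespace, CRDs, controller, and speaker DaemonSet.
sudo kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
```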
Once installed, you should see a number of resources running in the metallb-system Namespace. Let us check them by typing the following command:
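For example:

```bash
# The controller handles IP assignment; the speaker pods (one per node)
# announce the assigned addresses on the local network.
sudo kubectl get pods -n metallb-system
```

You should see a controller Deployment and a speaker DaemonSet with one pod per node, all in the Running state.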
Provide the IPv4 address range
As mentioned earlier, in order to make the load balancer work, you need to provide it with a range of IPv4 addresses.
To do so, create the following file on your server:
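A minimal sketch using the IPAddressPool and L2Advertisement resources from recent MetalLB releases (older releases configured this through a ConfigMap instead). The 192.168.1.100-192.168.1.110 range is only an example; it must be a set of free, unused addresses on your home network:

```yaml
# metallb-pool.yaml: layer-2 address pool for MetalLB
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.100-192.168.1.110
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
```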
Apply it to your cluster by typing sudo kubectl apply -f <filename>.
The resource above will create an IPv4 address pool named default. According to your needs, you can create other pools with different ranges to properly manage your workloads.
Now that everything is ready, let’s test our setup! 🚀
Deploying our app
In order to test whether the load balancer is working, we will run a Deployment of Nginx with 3 replicas (Pods). In addition, we will add a Service of type LoadBalancer to expose our app through MetalLB.
First things first, create a file with the following content:
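A minimal sketch of what such a manifest could look like (the names, labels, and nginx image tag are illustrative):

```yaml
# nginx.yaml: Nginx Deployment with 3 replicas, exposed through MetalLB
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer   # MetalLB assigns an external IP from the pool
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```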
To create those resources, type the following command: sudo kubectl apply -f <filename>.
Now that we have created the resources using the YAML file, we should be able to see them in the default Namespace. Using sudo kubectl get all, your output should be similar to the following:
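Roughly like this (abridged; Pod hashes, IPs, node ports, and ages will differ, and the external IP comes from the MetalLB pool):

```
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-xxxxxxxxxx-xxxxx   1/1     Running   0          1m
pod/nginx-xxxxxxxxxx-xxxxx   1/1     Running   0          1m
pod/nginx-xxxxxxxxxx-xxxxx   1/1     Running   0          1m

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.43.0.1      <none>          443/TCP        30m
service/nginx        LoadBalancer   10.43.120.18   192.168.1.100   80:30080/TCP   1m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   3/3     3            3           1m
```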
As you can see, service/nginx has been provided an external IP! If you point your web browser at http://192.168.1.100, you should get the default Nginx welcome page.
Now you can consider yourself lucky: you have your own Kubernetes cluster, similar to the cloud ☁️ managed ones (GKE, EKS, or AKS).
Thank you for reading! Please reach out if you run into any issues along the way.
References
- MetalLB documentation: https://metallb.universe.tf/
- K3s: https://k3s.io/
- Rancher K3s documentation: https://rancher.com/docs/k3s/latest/en/quick-start/