# Setting Up a Kubernetes Cluster in Your Homelab
Running Kubernetes in your homelab is an excellent way to learn container orchestration, test applications, and build production-ready skills. In this guide, we'll deploy a K3s cluster on Proxmox with multiple nodes and high availability.
## Why K3s?
K3s is a lightweight Kubernetes distribution perfect for homelabs:
- **Lightweight**: Runs in as little as 512MB of RAM per node
- **Easy Installation**: A single binary with no complex dependencies
- **Full Kubernetes**: A CNCF-certified distribution, conformant with upstream Kubernetes
- **Built-in Features**: Ships with Traefik, CoreDNS, and a local-path storage provisioner
## Architecture Overview
We'll create a 3-node cluster:
- **1 Control Plane Node**: Manages the cluster
- **2 Worker Nodes**: Run application workloads
```
      ┌─────────────────┐
      │  Control Plane  │
      │  192.168.1.10   │
      └────────┬────────┘
               │
          ┌────┴────┐
          │         │
      ┌───▼───┐ ┌───▼───┐
      │Worker1│ │Worker2│
      │  .11  │ │  .12  │
      └───────┘ └───────┘
```
## Prerequisites
- Proxmox VE installed and configured
- 3 VMs or LXC containers (Ubuntu 22.04 recommended)
- At least 2GB RAM per node (4GB recommended)
- 2 CPU cores per node
- 20GB storage per node
## Preparing the Nodes
### Create VMs in Proxmox
For each node, create a VM with:
- **CPU**: 2 cores
- **RAM**: 4GB
- **Disk**: 20GB
- **Network**: Bridge to `vmbr0`
### Install Ubuntu Server
1. Boot from Ubuntu Server ISO
2. Complete the installation wizard
3. Install OpenSSH server
4. Update the system:
```bash
sudo apt update && sudo apt upgrade -y
```
### Configure Static IP Addresses
Edit `/etc/netplan/00-installer-config.yaml`:
```yaml
network:
  version: 2
  ethernets:
    ens18:
      addresses:
        - 192.168.1.10/24   # Change for each node
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]
```
Apply configuration:
```bash
sudo netplan apply
```
### Disable Swap
Kubernetes requires swap to be disabled:
```bash
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```
### Configure Hostnames
Set unique hostnames for each node:
```bash
# Control plane
sudo hostnamectl set-hostname k3s-master
# Worker nodes
sudo hostnamectl set-hostname k3s-worker1
sudo hostnamectl set-hostname k3s-worker2
```
Update `/etc/hosts` on all nodes:
```
192.168.1.10 k3s-master
192.168.1.11 k3s-worker1
192.168.1.12 k3s-worker2
```
## Installing K3s
### Install Control Plane
On the master node, run:
```bash
curl -sfL https://get.k3s.io | sh -s - server \
  --write-kubeconfig-mode 644 \
  --disable traefik \
  --node-name k3s-master
```
**Note**: We disable Traefik to install it manually later with custom configuration.
Verify installation:
```bash
sudo kubectl get nodes
```
Get the node token for workers:
```bash
sudo cat /var/lib/rancher/k3s/server/node-token
```
Save this token; you'll need it for worker nodes.
### Install Worker Nodes
On each worker node, run:
```bash
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 \
  K3S_TOKEN=YOUR_NODE_TOKEN \
  sh -s - agent \
  --node-name k3s-worker1   # Change for each node
```
Replace `YOUR_NODE_TOKEN` with the token from the master node.
### Verify Cluster
On the master node:
```bash
kubectl get nodes
```
You should see all three nodes in "Ready" state:
```
NAME          STATUS   ROLES                  AGE   VERSION
k3s-master    Ready    control-plane,master   5m    v1.27.3+k3s1
k3s-worker1   Ready    <none>                 2m    v1.27.3+k3s1
k3s-worker2   Ready    <none>                 2m    v1.27.3+k3s1
```
## Configure kubectl Access
### Local Machine Access
Copy the kubeconfig from the master node:
```bash
# On master node
sudo cat /etc/rancher/k3s/k3s.yaml
```
On your local machine:
```bash
mkdir -p ~/.kube
# Paste the content and update the server IP
nano ~/.kube/config
```
Change `server: https://127.0.0.1:6443` to `server: https://192.168.1.10:6443`.
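Alternatively, because we installed the server with `--write-kubeconfig-mode 644`, you can fetch the file over SSH and rewrite the server address in one step. A sketch, assuming your SSH user on the master is `ubuntu`:

```bash
# Fetch the kubeconfig from the master and point it at the node's LAN address
scp ubuntu@192.168.1.10:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/192.168.1.10/' ~/.kube/config
```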
Test connection:
```bash
kubectl get nodes
```
## Installing Essential Components
### Helm Package Manager
Install Helm on your local machine:
```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
### MetalLB Load Balancer
For bare-metal LoadBalancer support:
```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
```
Create IP pool configuration:
```yaml
# metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default
```
Apply configuration:
```bash
kubectl apply -f metallb-config.yaml
```
### Traefik Ingress Controller
Install Traefik with Helm:
```bash
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik \
--namespace traefik \
--create-namespace \
--set service.type=LoadBalancer
```
Get Traefik LoadBalancer IP:
```bash
kubectl get svc -n traefik
```
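With Traefik exposed through a MetalLB address, ordinary Kubernetes Ingress objects route HTTP traffic through that IP. A minimal sketch; `nginx-service` is the backend created in the test deployment later in this guide, and `demo.homelab.local` is a hypothetical hostname you would point (via DNS or `/etc/hosts`) at the Traefik LoadBalancer IP:

```yaml
# demo-ingress.yaml -- routes demo.homelab.local through Traefik to nginx-service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
spec:
  ingressClassName: traefik
  rules:
    - host: demo.homelab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
```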
### Cert-Manager
For automatic TLS certificates:
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.3/cert-manager.yaml
```
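cert-manager issues certificates based on Issuer or ClusterIssuer resources. A minimal sketch of a Let's Encrypt ClusterIssuer using HTTP-01 validation through Traefik; the email address is a placeholder, and HTTP-01 only succeeds if the cluster is reachable from the internet:

```yaml
# letsencrypt-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com              # placeholder -- use your own address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: traefik
```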
### Longhorn Storage
Distributed block storage for Kubernetes:
```bash
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
--namespace longhorn-system \
--create-namespace
```
Access Longhorn UI:
```bash
kubectl -n longhorn-system port-forward svc/longhorn-frontend 8080:80
```
Visit `http://localhost:8080`
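Longhorn registers a StorageClass (named `longhorn` by default), so replicated volumes can be requested with an ordinary PersistentVolumeClaim. A minimal sketch:

```yaml
# longhorn-pvc.yaml -- requests a 5Gi replicated volume from Longhorn
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
```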
## Deploying a Test Application
Create a test deployment:
```yaml
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```
Deploy:
```bash
kubectl apply -f nginx-deployment.yaml
kubectl get svc nginx-service
```
Access the application using the LoadBalancer IP.
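To confirm it is reachable, grab the external IP that MetalLB assigned and curl it; the JSONPath below assumes a single assigned address:

```bash
# Read the LoadBalancer IP assigned by MetalLB and hit the nginx welcome page
EXTERNAL_IP=$(kubectl get svc nginx-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://$EXTERNAL_IP"
```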
## Monitoring with Prometheus
Install kube-prometheus-stack:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace
```
Access Grafana:
```bash
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
```
Default credentials: `admin` / `prom-operator`
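You can also read the admin password straight from the secret the chart manages; the secret name assumes the release name `prometheus` used above:

```bash
# Decode the Grafana admin password from the chart-managed secret
kubectl get secret -n monitoring prometheus-grafana \
  -o jsonpath='{.data.admin-password}' | base64 -d
```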
## Best Practices
### Resource Limits
Always set resource requests and limits:
```yaml
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
```
### Namespace Organization
Use namespaces to organize workloads:
```bash
kubectl create namespace production
kubectl create namespace development
```
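To avoid passing `-n` on every command, you can point your current kubectl context at a namespace, for example:

```bash
# Make development the default namespace for the current kubectl context
kubectl config set-context --current --namespace=development
```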
### Backup Strategy
Schedule regular backups with Velero (note that Velero also needs a backup storage location, such as an S3-compatible bucket, configured before backups can run):
```bash
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm install velero vmware-tanzu/velero \
--namespace velero \
--create-namespace
```
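Once a backup storage location is configured, backups are driven by the `velero` CLI (installed separately); for example, using the `production` namespace created above:

```bash
# One-off backup of the production namespace
velero backup create production-backup --include-namespaces production

# Daily backup at 02:00, retained for 7 days (168h)
velero schedule create daily-production --schedule "0 2 * * *" \
  --include-namespaces production --ttl 168h
```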
## Troubleshooting
### Pod Not Starting
```bash
kubectl describe pod POD_NAME
kubectl logs POD_NAME
```
### Node Not Ready
```bash
kubectl describe node NODE_NAME
sudo systemctl status k3s
sudo journalctl -u k3s -f
```
### Network Issues
```bash
kubectl get pods -n kube-system
kubectl logs -n kube-system COREDNS_POD
```
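If CoreDNS looks healthy but names still fail to resolve, a throwaway pod makes it easy to test DNS from inside the cluster; one quick check (busybox is just a commonly used debug image):

```bash
# Run a temporary pod and query the in-cluster DNS service
kubectl run -it --rm dnscheck --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```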
## Next Steps
- Implement GitOps with ArgoCD
- Set up CI/CD pipelines
- Configure horizontal pod autoscaling
- Implement network policies
- Deploy service mesh (Istio/Linkerd)
## Conclusion
You now have a fully functional Kubernetes cluster running in your homelab! This setup provides a solid foundation for learning and experimenting with cloud-native technologies.
## Resources
- [K3s Documentation](https://docs.k3s.io/)
- [Kubernetes Official Docs](https://kubernetes.io/docs/)
- [Helm Charts](https://artifacthub.io/)