Building a Kubernetes Dev Cluster

Kubernetes - κυβερνήτης, Greek for “helmsman”

Intro

The first Kubernetes post has arrived. Recently I’ve been doing a good amount of work in Kubernetes (K8s), using minikube on my local machine for dev work. While it’s fine for small workloads, larger apps really bog down minikube and my computer.

Since I’ve got access to an XCP-ng cluster, I decided I should build a Kubernetes dev cluster to work from! This means no more minikube running locally, and no paying for a small EKS cluster ($0.10 per hour for EKS, plus the cost of compute) to run on.

For this implementation I’m using Ubuntu 18.04 as the cluster’s OS. This install guide is pretty much the same for 20.04; in the future I’ll reinstall the cluster on 20.04. Kubeadm will be used for the Kubernetes installation. Two VMs have been provisioned with 8 vCPUs and 8 GiB of memory each, which should be good enough for this dev cluster.

This is meant to help you get a dev cluster up and running quickly. If you’re new to Kubernetes, I recommend further reading on the topic in addition to this guide.

Pre-Flight

Run through the pre-flight checks on each server you’ll be installing Kubernetes on.

  • Set your hostname(s)
sudo hostnamectl set-hostname k8smaster.int.mikemiller.tech
sudo hostnamectl set-hostname k8snode.int.mikemiller.tech
  • Set up your hosts files
$ cat /etc/hosts
127.0.0.1 localhost k8smaster k8smaster.int.mikemiller.tech
10.0.0.244 k8snode k8snode.int.mikemiller.tech
  • Disable Swap
sudo swapoff -a

# Comment this line out in /etc/fstab (a sed one-liner for this is sketched after this list)
$ grep swap /etc/fstab
#/swap.img none swap sw 0 0
  • Remove nano, if you don’t want it :)

sudo apt remove nano

  • Remove snap

We shouldn’t have snap on these systems.

# Remove any of the default snaps
snap list
sudo snap remove --purge package-name
sudo apt purge snapd

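If you’d rather not open an editor for the swap step above, a one-liner like this should do it (assuming your swap entry is the /swap.img line shown in the grep output; adjust the pattern if yours differs):

sudo sed -i 's|^/swap.img|#/swap.img|' /etc/fstab
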
Reboot

Docker

Using Docker as the container runtime for this implementation.

sudo apt update

sudo apt install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update

sudo apt install docker-ce

Make sure /etc/docker/daemon.json looks like this

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker: sudo systemctl restart docker
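
To confirm Docker actually picked up the systemd cgroup driver, a quick optional check:

sudo docker info | grep -i cgroup

It should report systemd as the Cgroup Driver.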

Kubernetes

Yes, we’re still using the Xenial repos, as that’s what’s available.

sudo apt update && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list && sudo apt update

sudo apt install -y kubeadm kubelet kubernetes-cni
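
Optionally, hold these packages so a routine apt upgrade doesn’t move your Kubernetes version out from under you:

sudo apt-mark hold kubeadm kubelet kubernetes-cni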

Helm

curl -s https://baltocdn.com/helm/signing.asc | sudo apt-key add -
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt update
sudo apt install helm
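
A quick sanity check that Helm is installed and on your PATH:

helm version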

Install K8s

Master

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<SERVER_IP>

Get your kubeconfig. Move this to any other server or machine you want to control the cluster from.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
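
With the kubeconfig in place, kubectl should now be able to talk to the cluster. The master will likely show NotReady until the CNI (Calico, below) is installed:

kubectl get nodes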

Allow the master to run pods

kubectl taint nodes --all node-role.kubernetes.io/master-

Worker

Once kubeadm init finishes on the master, it should output a command we can use to join the worker. It’ll look something like this

sudo kubeadm join 10.0.0.245:6443 --token gg7rod.hux88t845aqlk5yd --discovery-token-ca-cert-hash sha256:123456789b1904e3e698610b00782b1b041ead66cf992c611cece61b3eeafaee
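
If you’ve lost that output, you can regenerate the join command on the master:

sudo kubeadm token create --print-join-command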

Label the worker

kubectl label node k8snode.int.mikemiller.tech node-role.kubernetes.io/worker=worker

Install Calico

Calico handles networking and network security for the cluster. Read more about it here.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
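
Once the Calico pods come up, the nodes should flip to Ready. Something like this should confirm it (the label selector assumes the stock calico.yaml manifest):

kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes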

metallb

Now we need to install metallb so that we can have a load balancer available to our cluster. Read more about metallb here.

Run kubectl edit configmap -n kube-system kube-proxy

And set the following

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
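
kube-proxy only reads that config at startup, so restart its pods to pick up the change:

kubectl rollout restart daemonset kube-proxy -n kube-system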

Let’s configure metallb with the IPs it’s allowed to hand out.

vim metallb.yaml

configInline:
  address-pools:
  - name: default
    protocol: layer2
    addresses:
    - 10.0.0.250-10.0.0.255

Install it

helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -f metallb.yaml --namespace metallb --create-namespace
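
A quick check that the metallb controller and speaker pods are up (pod names vary by chart version):

kubectl get pods -n metallb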

nginx controller

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
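
If metallb is doing its job, the ingress controller’s LoadBalancer service should pick up an external IP from the 10.0.0.250-255 pool:

kubectl get svc -n ingress-nginx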

Test it

Let’s deploy something to test this. I’ve got a small app put together for testing.

helm install k8s-flask-react k8s-flask-react/

$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
k8s-flask-react default 1 2021-09-30 20:35:24.872675 -0600 MDT deployed k8s-flask-react-0.1.0 1.0.0

$ helm test k8s-flask-react
NAME: k8s-flask-react
LAST DEPLOYED: Thu Sep 30 20:35:24 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: test-connection-react
Last Started: Sat Oct 2 23:58:39 2021
Last Completed: Sat Oct 2 23:58:45 2021
Phase: Succeeded
TEST SUITE: test-connection-webapp
Last Started: Sat Oct 2 23:58:45 2021
Last Completed: Sat Oct 2 23:58:51 2021
Phase: Succeeded
NOTES:
1. Get the application URL by running these commands:
http://k8s.mikemiller.tech/?(.*)
http://k8s.mikemiller.tech/api/?(.*)

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-8b449c7c9-g98xv 1/1 Running 0 2d3h
react-55584c7bc4-8s2bw 1/1 Running 0 2d3h
react-55584c7bc4-kphnf 1/1 Running 0 2d3h
webapp-6fff8764cc-86r99 1/1 Running 0 2d3h
webapp-6fff8764cc-fjmgs 1/1 Running 0 23h
webapp-6fff8764cc-rl4dk 1/1 Running 0 2d3h
webapp-6fff8764cc-s28nf 1/1 Running 0 2d3h

$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
k8s-flask-react <none> k8s.mikemiller.tech 10.0.0.251 80 2d3h

$ curl -XPOST http://k8s.mikemiller.tech/api/notes -d '{
"Document": {
"Name": "Sampson",
"Note": "This is another note"
}
}' -H 'Content-Type: application/json'

$ curl http://k8s.mikemiller.tech/api/notes
[{"Name":"Mike","Note":"This is a note"},{"Name":"Sampson","Note":"This is another note"}]

This shows our small app is working on our Kubernetes lab cluster! 🎉

Monitoring

Let’s add some bare-bones monitoring. Take note that this configuration has no persistence; that’s for another episode.

Let’s get our values for Prometheus.

Prometheus.yaml

pushgateway:
  persistentVolume:
    enabled: false
alertmanager:
  enabled: false
server:
  persistentVolume:
    enabled: false

Install Prometheus

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus --namespace monitoring --create-namespace -f Prometheus.yaml

Install Grafana

helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana --namespace monitoring

Update the grafana service to be of the LoadBalancer type. This is so we don’t have to do any K8s proxy magic.
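
One way to do that without opening an editor (kubectl edit svc grafana -n monitoring works just as well):

kubectl patch svc grafana -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'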

Then we get the admin password

kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Now we can check it out

kubectl get svc -n monitoring | grep -B1 grafana
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana LoadBalancer 10.109.90.155 10.0.0.252 80:31346/TCP 44m

[Screenshot: Grafana login page]

Set up the data source
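
For the Prometheus URL, the in-cluster service address should work; assuming the chart defaults used above, that’s something like:

http://prometheus-server.monitoring.svc.cluster.local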

[Screenshot: adding Prometheus as a Grafana data source]

Have fun with dashboards!

[Screenshot: Grafana dashboards]

Thoughts

Beyond being able to start doing dev work on this cluster, there’s further Grafana/Prometheus configuration to take care of. But as mentioned at the beginning, this was about getting up and running fast.

Have fun!