Setting up a local Kubernetes cluster
Introduction
I recently decided to set up a Kubernetes cluster on my PC. I wanted a multi-node setup, so Minikube and MicroK8s were not an option, as they run a single node. I found quite a few tutorials on setting up a Kubernetes cluster, but all of them were missing information about certain aspects of my particular setup. My requirements were the following:
- Kubernetes nodes and master running as VMs in Hyper-V on Windows 10
- I can manage the cluster from my Windows 10 host (no access needed from other machines)
Setting up the network switch for VMs
The first thing to do is to create a new internal switch. Execute the following in PowerShell:
New-VMSwitch -name k8sSwitch -SwitchType Internal
Next we need to configure a static IP address and an address range. In the snippet below I’ve chosen to use a network with the 172.17.101.0/24 IP range:
$adapter = Get-NetAdapter | ? {$_.Name -eq "vEthernet (k8sSwitch)" }
$adapter | Remove-NetIPAddress -AddressFamily IPv4 -Confirm:$false
$adapter | New-NetIPAddress -AddressFamily IPv4 -IPAddress 172.17.101.1 -PrefixLength 24
We will need to be able to access the internet from our cluster, so we need to set up NAT for the IP range above:
New-NetNat -Name k8sNATNetwork -InternalIPInterfaceAddressPrefix 172.17.101.0/24
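At this point you can sanity-check the host side of the network. Both cmdlets below are standard Windows networking cmdlets; the first should show 172.17.101.1 on the switch adapter and the second should list the NAT network we just created:
Get-NetIPAddress -InterfaceAlias "vEthernet (k8sSwitch)" -AddressFamily IPv4
Get-NetNat -Name k8sNATNetwork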
Creating the VMs
I decided to use the Ubuntu Server 18.04 image for my cluster; it can be downloaded from the official Ubuntu website. Once downloaded you can start creating your VMs.
For each VM, make sure to select k8sSwitch as the network adapter and to manually assign a static IP when the installation prompts you to do so. For example, for the master node you could enter the following:
- Subnet: 172.17.101.0/24
- Address: 172.17.101.2
- Gateway: 172.17.101.1 (this must match the IP we used in the New-NetIPAddress command)
- Name servers: 172.17.101.1 (same address as the gateway)
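If you ever need to change these settings after installation, they live in a netplan file on the VM. Below is a minimal sketch of the equivalent configuration; the file name and the interface name eth0 are assumptions, so check yours with ls /etc/netplan and ip link:
# /etc/netplan/01-netcfg.yaml (file and interface names are examples; adjust to your system)
network:
  version: 2
  ethernets:
    eth0:
      addresses: [172.17.101.2/24]
      gateway4: 172.17.101.1
      nameservers:
        addresses: [172.17.101.1]
Apply any changes with sudo netplan apply.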
Once you have the VMs up and running you will have to switch off the swap partition on all of them, as Kubernetes does not support swap. Run:
sudo swapoff -a
Then, to prevent swap from being turned on again after a reboot, comment out the swap line in the /etc/fstab file:
sudo nano /etc/fstab
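If you prefer to script this step, a sed one-liner works too. This is a rough sketch that comments out every fstab line mentioning swap, so review the file afterwards:
# Comment out any line containing the word swap, then confirm the Swap row shows 0B
sudo sed -i '/\bswap\b/ s/^#*/#/' /etc/fstab
free -h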
I created three VMs: one master and two nodes. After creating them I stopped them and changed their MAC addresses from dynamic to static, just to be safe (select the VM, then choose Settings -> Network Adapter -> Advanced Features to change it).
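This can also be scripted from PowerShell; a sketch, where the VM name and the MAC address are placeholders (00-15-5D is the OUI prefix Hyper-V assigns to dynamic MACs):
# Assign a fixed MAC to the VM's network adapter; run once per VM with its own name and address
Set-VMNetworkAdapter -VMName "k8s-master" -StaticMacAddress "00155D380101"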
Now we are ready to install Kubernetes.
Installing Kubernetes
All nodes
First, you need to install Docker:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo apt-mark hold docker-ce
Verify Docker is up and running:
sudo systemctl status docker
Then you will need to install kubeadm, kubelet and kubectl:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
sudo apt-mark hold kubelet kubeadm kubectl
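You can confirm that the pinned versions were installed correctly:
kubeadm version
kubectl version --client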
Master node only
Initialize the cluster with kubeadm. The --pod-network-cidr value below is the one the Flannel network plugin (which we install later) expects:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Take note of the output of this command; you will need the kubeadm join line it prints to join the child nodes to the master node.
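If you lose that output, you can generate a fresh join command on the master node at any time:
sudo kubeadm token create --print-join-command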
When it is done, set up kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify that all is well:
kubectl version
Child nodes
Run the join command from the kubeadm init output on both child nodes; it will look something like this:
sudo kubeadm join $some_ip:6443 --token $some_token --discovery-token-ca-cert-hash $some_hash
Now go back to the master node and run:
kubectl get nodes
After some time both child nodes should appear in the list, though they will likely stay in the NotReady status until we set up cluster networking in the next step.
Setup cluster networking
Enable iptables to see bridged traffic on all three nodes:
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
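You can confirm the setting took effect (it should print 1). If sysctl reports that the key does not exist, the br_netfilter kernel module is not loaded; Docker normally loads it, but you can load it manually:
sysctl net.bridge.bridge-nf-call-iptables
# Only needed if the key above was missing:
sudo modprobe br_netfilter
sudo sysctl -p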
Next, run this on the master node only:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
Make sure everything is working:
kubectl get nodes
After a few moments all nodes should be in the Ready status.
Configuring access to cluster from host PC
At this point we can only use kubectl from the master node. It will be easier to work with the cluster if we configure access to it from the host machine.
To do so we will need to create a service account. Go to the master node and run the following (the snippet uses jq, which you may need to install first with sudo apt-get install -y jq):
kubectl create sa shrek
kubectl create rolebinding shrek-admin-binding --clusterrole=admin --user=system:serviceaccount:default:shrek
secret=$(kubectl get sa shrek -o json | jq -r .secrets[].name)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -d > ca.crt
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -d)
c=`kubectl config current-context`
name=`kubectl config get-contexts $c | awk '{print $3}' | tail -n 1`
endpoint=`kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}"`
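Print both values so they are easy to copy over to the host machine:
echo "endpoint:   $endpoint"
echo "user_token: $user_token"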
Note down the values of the endpoint and user_token variables and copy the ca.crt file created above to the host machine.
Download kubectl on the host machine (ideally a version within one minor release of the cluster's, per the kubectl version skew policy):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe
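Check that the binary runs:
.\kubectl.exe version --client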
Then run the following, replacing $endpoint and $user_token with the values you noted down:
kubectl config set-cluster k8s.local --embed-certs=true --server=$endpoint --certificate-authority=./ca.crt
kubectl config set-credentials shrek --token=$user_token
kubectl config set-context shrek/k8s.local/default --cluster=k8s.local --user=shrek --namespace=default
kubectl config use-context shrek/k8s.local/default
Verify that you can use kubectl from host:
kubectl get all
Setting up access to Kubernetes dashboard
If you are like me, you like dashboards, and Kubernetes has quite a decent one.
First we will need some additional permissions. From the master node run:
kubectl create clusterrolebinding shrek-cluster-admin-binding --clusterrole=cluster-admin --user=system:serviceaccount:default:shrek
Deploy the dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Then run:
kubectl proxy
Now you can access the dashboard at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
When it asks you to log in you can either use a token or simply point it to the location of the kube config file (%HOMEPATH%/.kube/config).
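If you opt for token login, the user_token value captured earlier is what the dashboard expects; assuming the $secret variable from before is still set, you can print it again on the master node:
kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -d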
Quick test
We can now quickly test the cluster by deploying a sample ASP.NET Core web app. In this sample I’ve chosen a Deployment with three replicas to check whether the pods get distributed over the nodes correctly.
Save the following manifest into aspnet.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
spec:
  selector:
    matchLabels:
      run: aspnetapp
  replicas: 3
  template:
    metadata:
      labels:
        run: aspnetapp
    spec:
      containers:
      - name: aspnetapp
        image: "mcr.microsoft.com/dotnet/core/samples:aspnetapp"
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetapp-service
spec:
  type: NodePort
  selector:
    run: aspnetapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
Then execute:
kubectl apply -f aspnet.yaml
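You can wait for all three replicas to come up with:
kubectl rollout status deployment/aspnetapp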
Once it is applied you can go to the IP of your master node at port 30000 to see the app; if the master node has IP 172.17.101.2 then the website will be available at http://172.17.101.2:30000 (a NodePort service is exposed on every node's IP, so any node's address works).
To see how the pods are distributed, you can run:
kubectl describe pods
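A more compact alternative that shows which node each pod landed on is the wide output format:
kubectl get pods -o wide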
That is it. The setup is still somewhat basic, but it is a good start.