0. Cluster Setup and Management - REPEAT
Time to Complete
Planned time: ~30 minutes
Explanation
- This is the hands-on exercise from the Admin course
- We repeat it here as the starting point for the Security Training
Minimal Kubernetes Cluster Setup on Ubuntu (1 Master/Worker Node + 1 Worker-Only Node)
Official Kubernetes documentation
In this lab, we will set up a basic Kubernetes cluster consisting of two Ubuntu nodes. Don’t worry if you’re new to Kubernetes – we’ll walk you through everything step by step!
- Node 0: Acts as control plane (master) and worker
- Node 1: Acts as a worker-only node
We will use kubeadm for cluster initialization and assume both nodes run Ubuntu 22.04. This tool simplifies the process of bootstrapping Kubernetes clusters by hand.
This lab will help you understand the steps required to bootstrap a functional cluster without external tooling (like Rancher or Minikube) and provide a practical base for multi-node Kubernetes setups.
Danger
If the installation gets stuck at any point or the whole setup is broken, reset both nodes as follows:
sudo apt purge containerd kubelet kubeadm kubectl
# remove all folders mentioned in the purge command, e.g.
# rm -rf /var/lib/containerd /var/lib/kubelet
# additionally remove
sudo rm -rf /etc/containerd /etc/kubernetes $HOME/.kube
# then restart both nodes
sudo reboot
Then run through the installation process again from the beginning.
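As a lighter-weight alternative, you can first try kubeadm's own reset before purging packages; it reverts most of the changes made by kubeadm init / kubeadm join, with the full purge above as the fallback:
sudo kubeadm reset -f
# kubeadm reset does not clean up CNI configuration, so remove it manually
sudo rm -rf /etc/cni/net.d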
1. Preparing the Hosts
Make sure you have two Ubuntu machines ready (physical, VMs, or cloud VMs). Both nodes must be able to reach each other via private IP.
Assign roles:
Node 0: controlplane-node (Master)
Node 1: worker-node
Connect to both nodes via ssh. Opening two terminal tabs lets you manage both machines easily during setup:
ssh azuser@NODE
# e.g. if your user is 'c5' and your workshop is 's2', it will be
ssh azuser@c5-s2-admin-0 # controlplane -> NODE 0
ssh azuser@c5-s2-admin-1 # worker -> NODE 1
The password is Train@Thinkport
On both nodes:
Update system and install dependencies:
sudo apt update && sudo apt upgrade -y
sudo apt install -y apt-transport-https curl containerd
sudo mkdir -p /etc/containerd
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
# verify it is running
sudo systemctl status containerd
sudo swapoff -a
# run this as a separate copy&paste; it comments out the swap entry in /etc/fstab so swap stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
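A quick sanity check that swap is really off and will stay off after a reboot:
# the Swap line should show 0B
free -h
# the swap entry in fstab should now be commented out
grep swap /etc/fstab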
Enable kernel modules:
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
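To verify that the modules are loaded and the sysctl values took effect:
lsmod | grep -E 'overlay|br_netfilter'
# all three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward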
Add Kubernetes APT repo:
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | gpg --dearmor | sudo tee /etc/apt/keyrings/kubernetes-apt-keyring.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
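Optionally, list the package versions the new repo offers; this is one way to double-check the VERSION value used in the next step:
apt-cache madison kubeadm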
Install Kubernetes components:
VERSION=1.32.0-1.1
sudo apt install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
sudo apt-mark hold kubelet kubeadm kubectl
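A quick check that the right versions landed and that the packages are pinned against upgrades:
kubeadm version -o short
kubelet --version
# should list kubelet, kubeadm and kubectl
apt-mark showhold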
2. Initialize the Cluster
Now initialize the cluster on the control-plane node (Node 0).
We’ll use the Calico CNI with pod CIDR 192.168.0.0/16:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# NOTE: at the end, this prints a join command of the following form,
# which you will need later:
kubeadm join <CONTROLPLANE-IP>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
Once done, configure kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/calico.yaml
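Calico rolls out as pods in the kube-system namespace; a quick way to watch the rollout (the calico-node and calico-kube-controllers pods should reach Running within a minute or two):
watch kubectl get pods -n kube-system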
3. Join the Worker Node
On the worker node (Node 1), copy & paste and run the kubeadm join command that was printed at the end of kubeadm init on the control-plane node:
sudo kubeadm join <CONTROLPLANE-IP>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
HINT
If you lost it, you can recreate the join command on the control-plane node:
kubeadm token create --print-join-command
Verify cluster status on controlplane:
watch kubectl get nodes
You should see both nodes in the Ready state. This might take ~1 minute.
4. Allow Scheduling on the Control-Plane Node
To let the control-plane node also act as a worker, we need to allow scheduling on it by removing the NoSchedule taint (the trailing '-' in the command removes the taint):
kubectl taint nodes NODE-NAME node-role.kubernetes.io/control-plane:NoSchedule-
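To confirm the taint is gone, you can inspect the node spec; empty output means no taints remain:
kubectl get node NODE-NAME -o jsonpath='{.spec.taints}'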
5. Optional: Label as Worker Node
kubectl label node NODE-0 node-role.kubernetes.io/worker=worker
kubectl label node NODE-1 node-role.kubernetes.io/worker=worker
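Afterwards, the ROLES column should list the new role:
# Node 0 should show control-plane,worker; Node 1 should show worker
kubectl get nodes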
6. Test Deployment
Deploy nginx to test:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
Find the port:
kubectl get svc nginx
Then access nginx via the nodeport:
SERVER_IP=$(kubectl get po -l app=nginx -ojsonpath='{.items[*].status.hostIP}')
NODEPORT=$(kubectl get svc nginx -ojsonpath='{.spec.ports[*].nodePort}')
curl "http://${SERVER_IP}:${NODEPORT}"
You should see the nginx welcome page.
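Once the test succeeds, you can optionally remove the demo resources again so they don't carry over into the security exercises:
kubectl delete service nginx
kubectl delete deployment nginx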
7. Add the cluster to kubeconfig
Now we add the newly created cluster to the existing kubeconfig as follows.
Back on your code VM:
# IMPORTANT: run on code vm
## first we back up the aks config
mv ~/.kube/config ~/.kube/config-aks
## then we fetch the newly created kubeconfig from the controlplane NODE-0
ssh azuser@NODE-0 kubectl config view --raw > ~/.kube/config-kubeadm
## and merge the two
KUBECONFIG=~/.kube/config-aks:~/.kube/config-kubeadm kubectl config view --merge --flatten > ~/.kube/config
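Check that both contexts made it into the merged file:
kubectl config get-contexts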
Rename the context of the cluster created by kubeadm in ~/.kube/config as follows:
###
- context:
cluster: kubernetes
user: kubernetes-admin
name: cluster-kubeadm # <- here
###
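Alternatively, instead of editing the file by hand, kubectl can rename the context directly (assuming kubeadm's default context name kubernetes-admin@kubernetes):
kubectl config rename-context kubernetes-admin@kubernetes cluster-kubeadm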
So when you run kx, the output looks as follows:
kx
cluster-kubeadm
c<x>-s<y> # e.g. c5-s2, which is your AKS cluster
Recap
You have:
- Provisioned two Ubuntu hosts
- Installed Kubernetes components
- Initialized the cluster with kubeadm
- Joined a second node
- Installed Calico as CNI
- Deployed and exposed an app via NodePort
End of Lab