Bootstrapping - Deploying a K8S cluster
This follows the official docs; we can script it later into a fully automated deployment.
The goal of this deployment is to follow the official procedure, understand how K8S is put together, and prepare for the CKA along the way.
This is a guide to bootstrapping your first Kubernetes cluster on Ubuntu 24.x. Let's build a standard architecture consisting of one Control Plane node (often called the master) and three Worker nodes.
We will use kubeadm, the official cluster bootstrapping tool provided by the Kubernetes project. This tool automates the complex configuration of the control plane components while adhering to security best practices.
Prerequisites
Before we begin, make sure all four machines meet these requirements for a stable cluster:
- OS: Ubuntu 24.x (Debian-based distributions are supported).
- Hardware:
  - Control Plane: 2 GB RAM, 2 CPUs.
  - Workers: 2 GB RAM recommended.
- Networking: Full network connectivity between all machines.
- Unique Identifiers: Ensure every node has a unique hostname, MAC address, and product_uuid (a quick check follows this list).
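A quick way to verify these identifiers on each host; the paths below are the standard locations on Ubuntu, so treat this as a sketch rather than a required step:

```bash
# Hostname of the node
hostname
# MAC addresses of the network interfaces
ip link
# The machine's product_uuid
sudo cat /sys/class/dmi/id/product_uuid
```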
Phase 1: Preparation (Run on ALL Nodes)
We must prepare the underlying operating system on all four servers (Control Plane + 3 Workers) to host Kubernetes components.
1. Disable Swap
Kubernetes requires swap memory to be disabled. If swap is active, the kubelet (the agent running on every node) will fail to start by default. This is done to ensure performance predictability.
```bash
sudo swapoff -a
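# A minimal sketch to persist the change: comment out any swap entries in
# /etc/fstab (adjust the pattern if your fstab layout differs)
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab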
# Alternatively, remove the swap entry from /etc/fstab by hand to keep it disabled after reboot
```

2. Enable IPv4 Packet Forwarding
Kubernetes networking relies on bridging traffic between pods across different nodes. We must configure the Linux kernel to allow IPv4 packet forwarding.
```bash
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply the changes
sudo sysctl --system
```

3. Install a Container Runtime
Kubernetes does not run containers directly; it orchestrates a Container Runtime that manages the container lifecycle. We will use a runtime compliant with the Container Runtime Interface (CRI), such as containerd.
Note: You must install containerd using your package manager (e.g., apt install containerd) or from binaries.
Critical Configuration: Cgroup Drivers
On Linux, control groups (cgroups) constrain the resources allocated to processes. It is critical that both the kubelet and your container runtime use the same cgroup driver. On Ubuntu (which uses systemd), you must configure your runtime to use the systemd driver to avoid instability.
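A minimal sketch of both steps, assuming the containerd package from the Ubuntu repositories (other installation methods may lay out the config differently):

```bash
# Install containerd from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y containerd

# Generate the default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null

# Switch the runc cgroup driver to systemd (SystemdCgroup = true)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart containerd so the new configuration takes effect
sudo systemctl restart containerd
sudo systemctl enable containerd
```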
4. Install Kubeadm, Kubelet, and Kubectl
We will install the three core binaries:
- kubeadm: The command to bootstrap the cluster.
- kubelet: The component that runs on all machines and starts pods.
- kubectl: The command line utility to talk to your cluster.
Step A: Install dependencies and add the Kubernetes repository key
As of September 2023, you must use the pkgs.k8s.io community-owned repositories.
```bash
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Create the keyrings directory if it doesn't exist (Standard on Ubuntu 24.04)
sudo mkdir -p -m 755 /etc/apt/keyrings
# Download the public signing key for the specific Kubernetes version (e.g., v1.30)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```

Step B: Add the repository and install
```bash
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
```

Step C: Pin the versions
We mark these packages to prevent apt-get upgrade from automatically updating them. Kubernetes upgrades require a specific manual workflow; accidental updates can break the cluster.
```bash
sudo apt-mark hold kubelet kubeadm kubectl
```

Phase 2: Initialize the Control Plane (Run on Master ONLY)
We will now initialize the control plane. This node will run the API Server, Scheduler, and Controller Manager.
1. Initialize the Cluster
We use kubeadm init. We must specify a --pod-network-cidr because the networking plugin (CNI) we will install later requires a specific IP range.
```bash
# Initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

Note: 10.244.0.0/16 is the default range for the Flannel CNI plugin. If you choose a different plugin, adjust this CIDR accordingly.
This process runs pre-flight checks and installs the control plane components. This may take several minutes.
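For reference, kubeadm can also show what this step would do without applying any changes, which can be useful when rehearsing on a fresh node (optional):

```bash
# Print the actions kubeadm would take without applying them
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --dry-run
```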
2. Configure Kubectl
Once initialization finishes, the output will provide commands to set up your local user configuration. Run these as a regular user (not root) to administer your cluster:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

3. Install a Pod Network Add-on (CNI)
Your pods cannot communicate with each other yet. DNS (CoreDNS) will not start until a network is installed. We must apply a Container Network Interface (CNI) plugin.
We will use kubectl apply to install a plugin (example using Flannel):
```bash
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```

4. Save the Join Command
The end of the kubeadm init output provides a command starting with kubeadm join. Copy this command now; it contains a token and a discovery hash that workers need to authenticate with the control plane.
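If the output scrolled away or the token has expired, you can print a fresh join command from the control plane at any time:

```bash
# Re-generate a bootstrap token and print the full join command
sudo kubeadm token create --print-join-command
```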
Phase 3: Join the Workers (Run on 3 Worker Nodes)
On each of your three worker nodes, execute the join command you copied from the Master node.
```bash
sudo kubeadm join <control-plane-host>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```

This command performs the TLS bootstrapping for the node. The node will:
- Connect to the API server.
- Authenticate using the token.
- Sign its certificate.
- Join the cluster as a worker.
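To sanity-check a worker after the join completes, you can confirm the kubelet service came up (a quick optional check, not part of the official flow):

```bash
# On the worker node: the kubelet should be active after a successful join
sudo systemctl status kubelet --no-pager
```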
Phase 4: Validation
Return to your Control Plane (Master) node to verify the cluster status.
Check the nodes:
```bash
kubectl get nodes
```

Note
You should see all four nodes listed. Initially, they may report NotReady. Once the CNI plugin initializes on every node, the status should change to Ready.
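One way to watch that rollout from the control plane (pod names vary with your CNI choice, so treat this as a sketch):

```bash
# CoreDNS stays Pending until the CNI pods are up on each node
kubectl get pods -n kube-system -o wide
# Re-check node status once the network pods are Running
kubectl get nodes
```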
You now have a functioning Kubernetes cluster running on Ubuntu 24.x.