How do I upgrade a Kubernetes cluster from 1.34 to 1.35 using kubeadm?
Upgrading a Kubernetes cluster with kubeadm is a highly orchestrated, sequential process — control plane first, workers last. Skipping steps or upgrading out of order will break your cluster. This guide covers the 1.34 to 1.35 upgrade path including the breaking changes that will stop you cold if you ignore them.
Breaking Changes — Read Before You Touch Anything
Version Skew Constraints
You cannot skip MINOR versions. Your cluster must be running 1.34.x before upgrading to 1.35.x. The kubeadm binary version must match the target control plane version exactly. The kubelet can lag behind but must stay within the supported skew (1.32 through 1.35).
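As a quick sanity check, the skew rule can be sketched in plain shell. The version strings below are illustrative placeholders, not read from a live cluster — substitute the output of `kubeadm version` and `kubelet --version` from your own nodes:

```shell
# Minimal sketch of the version-skew check described above.
# Extract the MINOR component from a "1.35.x" / "v1.35.x" style string.
minor() { echo "${1#v}" | cut -d. -f2; }

target="1.35.0"    # control plane version you are upgrading to (placeholder)
kubelet="1.34.2"   # current kubelet version on a node (placeholder)

skew=$(( $(minor "$target") - $(minor "$kubelet") ))
if [ "$skew" -ge 0 ] && [ "$skew" -le 3 ]; then
  echo "kubelet is $skew minor version(s) behind: within supported skew"
else
  echo "unsupported skew: upgrade the kubelet first" >&2
fi
```

The `-le 3` bound mirrors the 1.32–1.35 window stated above: the kubelet may trail the control plane by up to three minor versions.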
Cgroup v1 Deprecation
The most significant breaking change in 1.35 is that cgroup v1 is now rejected by default: the `FailCgroupV1` feature gate defaults to true, so if your nodes are still on cgroup v1, the kubelet will fatally exit during initialization after the upgrade.
Check your cgroup version first:
```bash
stat -fc %T /sys/fs/cgroup/
```
- `cgroup2fs` → you're on cgroup v2, you're fine
- `tmpfs` → you're on cgroup v1, you must migrate before upgrading
Migration path from cgroup v1:
- Upgrade your OS to one that enables cgroup v2 by default — Ubuntu 22.04+, Debian 11+, RHEL 9+
- Ensure Linux kernel 5.8 or later
- Upgrade container runtime — containerd v1.4+ or CRI-O v1.20+
- Configure both the kubelet and the container runtime to use the `systemd` cgroup driver
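The kernel requirement above can be checked with a small shell sketch. `check_kernel` is a hypothetical helper written for this guide, not a standard tool; the 5.8 threshold comes from the list above:

```shell
# Hypothetical helper: succeeds if a kernel version string
# (e.g. from `uname -r`) is 5.8 or later, per the cgroup v2 requirement.
check_kernel() {
  local v="${1%%-*}"        # strip "-generic" style suffixes
  local maj="${v%%.*}"      # major version component
  local min
  min=$(echo "$v" | cut -d. -f2)
  (( maj > 5 || (maj == 5 && min >= 8) ))
}

check_kernel "$(uname -r)" && echo "kernel OK for cgroup v2" \
  || echo "kernel too old: need 5.8+" >&2
```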
If you absolutely cannot migrate before the upgrade, you can temporarily override by setting `FailCgroupV1: false` in the kubelet configuration — but this is a stopgap, not a solution.
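For reference, that override is expressed as a feature gate in the kubelet's `KubeletConfiguration` (on kubeadm-provisioned nodes this typically lives at /var/lib/kubelet/config.yaml; restart the kubelet after editing). A minimal fragment, assuming `FailCgroupV1` remains exposed as a feature gate in 1.35:

```yaml
# Stopgap only: lets the kubelet start on cgroup v1 hosts after the upgrade.
# Remove this override once the node is migrated to cgroup v2.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  FailCgroupV1: false
```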
Package Repository Migration
The legacy `apt.kubernetes.io` and `yum.kubernetes.io` repositories are frozen. You must be using the community-owned repositories at `pkgs.k8s.io`. Update your repository configuration to point to the v1.35 channel before proceeding:
```bash
# Update the kubernetes.list source to v1.35
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
```
Pre-Upgrade: Back Up etcd
kubeadm upgrade modifies etcd internals. A backup is not optional — it is the only thing standing between you and a full cluster rebuild if something goes wrong.
```bash
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/etcd-backup-pre-1.35.db

# Verify the snapshot is valid
ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup-pre-1.35.db --write-out=table
```
Production tip: Before running `kubeadm upgrade apply`, you can gracefully drain in-flight API requests by stopping the API server a few seconds first: `killall -s SIGTERM kube-apiserver`. This prevents requests from being hard-cut mid-flight when etcd restarts during the upgrade.
Understanding the Two Upgrade Commands
The upgrade splits into two distinct commands with very different scopes:
`kubeadm upgrade apply` — runs only on the primary control plane node. This does the heavy lifting:
- Upgrades static Pod manifests for the API server, controller-manager, and scheduler
- Applies new CoreDNS and kube-proxy manifests
- Generates new RBAC rules where needed
- Automatically renews certificates expiring within 180 days
`kubeadm upgrade node` — runs on all other nodes (secondary control plane and workers):
- On secondary control plane nodes: fetches the updated `ClusterConfiguration`, upgrades local static Pod manifests and the kubelet config
- On worker nodes: fetches the `ClusterConfiguration` and updates the kubelet config only — does not touch control plane manifests
Phase 1: Primary Control Plane Node
```bash
# 1. Unhold and upgrade kubeadm
sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.35.x-*
sudo apt-mark hold kubeadm

# Verify the version
kubeadm version

# 2. Plan — read this output carefully before proceeding
sudo kubeadm upgrade plan
```
The plan output will confirm: API server reachability, control plane health, image availability for v1.35.x, and a dry-run summary of what will change. If anything shows red here, stop and fix it before continuing.
```bash
# 3. Apply the upgrade
sudo kubeadm upgrade apply v1.35.x

# 4. Upgrade the CNI plugin manually
# kubeadm does NOT manage CNI addons — upgrade Flannel/Calico/Cilium separately
# Follow your CNI provider's specific upgrade documentation

# 5. Drain the control plane node
kubectl drain <cp-node-name> --ignore-daemonsets

# 6. Upgrade kubelet and kubectl
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.35.x-* kubectl=1.35.x-*
sudo apt-mark hold kubelet kubectl

# 7. Restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# 8. Bring the node back
kubectl uncordon <cp-node-name>
```
Phase 2: Secondary Control Plane Nodes
Repeat for each additional control plane node, one at a time. The only difference is that you run `kubeadm upgrade node` instead of `kubeadm upgrade apply` — do not run `plan` or `apply` here.
```bash
sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.35.x-*
sudo apt-mark hold kubeadm

sudo kubeadm upgrade node

kubectl drain <cp-node-name> --ignore-daemonsets

sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.35.x-* kubectl=1.35.x-*
sudo apt-mark hold kubelet kubectl

sudo systemctl daemon-reload
sudo systemctl restart kubelet

kubectl uncordon <cp-node-name>
```
Phase 3: Worker Nodes
Upgrade one worker at a time — or in small batches — to protect workload capacity. Draining a node evicts its pods onto remaining nodes, so if you drain too many at once you risk hitting resource limits.
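A back-of-envelope way to bound the batch size, as a sketch with made-up numbers (a real cluster should check allocatable CPU and memory against pod requests, not just pod counts):

```shell
# Illustrative capacity check: how many workers can be drained at once
# while the REMAINING workers can still hold every running pod?
nodes=6            # total worker nodes (placeholder)
pods_per_node=110  # per-node pod capacity (the kubelet's default maxPods)
used=480           # pods currently scheduled across all workers (placeholder)

batch=0
# Grow the batch while the nodes left over can absorb the whole workload.
while (( (nodes - batch - 1) * pods_per_node >= used )); do
  batch=$((batch + 1))
done
echo "safe drain batch size: $batch"
```

With these numbers only one node can be drained at a time — which is exactly why the guide upgrades workers sequentially.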
```bash
# On the worker node
sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.35.x-*
sudo apt-mark hold kubeadm

# Update local kubelet config
sudo kubeadm upgrade node

# From the control plane — drain the worker
kubectl drain <worker-node-name> --ignore-daemonsets --delete-emptydir-data

# Back on the worker node — upgrade kubelet and kubectl
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.35.x-* kubectl=1.35.x-*
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# From the control plane — bring the worker back
kubectl uncordon <worker-node-name>
```
Repeat for each worker node before moving to the next.
Post-Upgrade Verification
```bash
# Confirm all nodes are Ready and on v1.35.x
kubectl get nodes

# Verify control plane pods are running the new version
kubectl get pods -n kube-system

# Check kubelet is healthy on each node
systemctl status kubelet
journalctl -xeu kubelet   # if something looks wrong

# Verify etcd health
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
```
CoreDNS rebalancing — because nodes are drained and upgraded sequentially, CoreDNS pods tend to accumulate on the first control plane node. Force a rebalance after all nodes are upgraded:
```bash
kubectl -n kube-system rollout restart deployment coredns
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
# Confirm pods are now distributed across multiple nodes
```
Quick Reference
| Phase | Node | Command |
|---|---|---|
| 1 | Primary CP | `kubeadm upgrade apply v1.35.x` |
| 2 | Secondary CP | `kubeadm upgrade node` |
| 3 | Workers | `kubeadm upgrade node` |
The drain → upgrade kubelet → restart → uncordon sequence is identical across all three phases. The only thing that changes is the kubeadm command in the middle.
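That shared sequence can be captured in one shell function — a sketch, not a hardened script: the node name, kubeadm subcommand, and apt version pin are parameters, and the kubectl steps assume the machine running this has cluster access (in practice the drain/uncordon often happen from a separate admin host):

```shell
# Sketch of the per-node sequence described above.
# Usage: upgrade_node <node-name> "<kubeadm subcommand>"
#   e.g.  upgrade_node cp-1     "upgrade apply v1.35.x"
#         upgrade_node worker-3 "upgrade node"
upgrade_node() {
  local node="$1" kubeadm_cmd="$2"
  sudo apt-mark unhold kubeadm
  sudo apt-get install -y "kubeadm=1.35.x-*"
  sudo apt-mark hold kubeadm
  sudo kubeadm $kubeadm_cmd          # unquoted: subcommand has two words
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  sudo apt-mark unhold kubelet kubectl
  sudo apt-get install -y "kubelet=1.35.x-*" "kubectl=1.35.x-*"
  sudo apt-mark hold kubelet kubectl
  sudo systemctl daemon-reload
  sudo systemctl restart kubelet
  kubectl uncordon "$node"
}
```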