What are the core components of Kubernetes, and how do the control plane and worker nodes interact?

A production Kubernetes cluster is architecturally divided into two distinct functional areas: the Control Plane, which makes global decisions about the cluster, and the Worker Nodes, which provide the runtime environment for applications.

This architecture relies on a "hub-and-spoke" API pattern where the API server acts as the central hub for all communication.
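To make the hub-and-spoke pattern concrete, here is a minimal sketch in Go using the official client-go library. It lists the cluster's nodes by issuing a single request to the API server; the kubeconfig path is an illustrative assumption.

```go
// A minimal sketch of the hub-and-spoke pattern: every client, human or
// machine, talks to the API server rather than to other components directly.
// The kubeconfig path is an assumption for illustration.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client config from a local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// This is an HTTP GET against the API server, the central hub;
	// no node or other component is contacted directly.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}
```

Whether the caller is kubectl, an internal controller, or a program like this one, the request shape is the same: a call to the API server, never to another component directly.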

1. The Control Plane (The "Brain")

The control plane components manage the overall state of the cluster, detecting and responding to events such as scheduling requirements or node failures.

  • kube-apiserver: This is the core component and the "front end" of the Kubernetes control plane. It exposes the Kubernetes HTTP API. All communication between components—whether from nodes, internal controllers, or external users (via kubectl)—terminates at the API server. It is designed to scale horizontally by running multiple instances.
  • etcd: This is a consistent, highly available key-value store used as the backing store for all cluster data. Because etcd persists the entire state of the cluster, a robust backup plan for it is critical.
  • kube-scheduler: This component watches for newly created Pods that have not yet been assigned to a node. It selects the optimal node for the Pod to run on based on resource requirements, hardware/software constraints, affinity specifications, and data locality.
  • kube-controller-manager: This component runs controller processes. While logically each controller (like the Job controller or Node controller) is a separate process, they are compiled into a single binary and run in a single process to reduce complexity. These controllers continuously compare the current state of the cluster to the desired state (a minimal sketch of this reconcile loop appears after this list).
  • cloud-controller-manager (Optional): In cloud environments, this component links the cluster into the cloud provider's API. It separates the components that interact with the cloud platform (like setting up load balancers or checking if a node has been deleted from the cloud) from components that only interact with the cluster.
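The following is a hypothetical reconcile loop in Go illustrating the current-versus-desired-state comparison that every controller performs. The types and field names (ClusterState, DesiredReplicas, and so on) are invented for illustration and are not the actual controller-manager code.

```go
// A hypothetical reconcile loop: observe current state, compare it to the
// desired state, and act to converge. All names here are illustrative.
package main

import (
	"fmt"
	"time"
)

type ClusterState struct {
	DesiredReplicas int // what the user declared (stored via the API server)
	RunningReplicas int // what is actually running on the nodes
}

func reconcile(s *ClusterState) {
	switch {
	case s.RunningReplicas < s.DesiredReplicas:
		s.RunningReplicas++ // in reality: create a Pod via the API server
		fmt.Println("scaled up to", s.RunningReplicas)
	case s.RunningReplicas > s.DesiredReplicas:
		s.RunningReplicas-- // in reality: delete a Pod via the API server
		fmt.Println("scaled down to", s.RunningReplicas)
	default:
		// Current state matches desired state: nothing to do.
	}
}

func main() {
	state := &ClusterState{DesiredReplicas: 3, RunningReplicas: 1}
	for i := 0; i < 5; i++ {
		reconcile(state) // controllers run this comparison continuously
		time.Sleep(100 * time.Millisecond)
	}
}
```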

2. Worker Nodes (The "Muscle")

Nodes are the worker machines (virtual or physical) that host the Pods which form the application workload. Every cluster needs at least one worker node.

  • kubelet: An agent that runs on every node in the cluster. Its primary responsibility is to ensure that containers are running in a Pod. It takes a set of PodSpecs (provided by the API server) and ensures the defined containers are running and healthy. It does not manage containers that were not created by Kubernetes (a simplified sketch of this sync loop follows the list).
  • kube-proxy: A network proxy running on each node that implements the Kubernetes Service concept. It maintains network rules on the node, allowing network communication to Pods from inside or outside the cluster. It uses the OS packet filtering layer (like iptables) if available, or forwards traffic itself.
  • Container Runtime: The software responsible for actually running the containers. Kubernetes supports runtimes such as containerd, CRI-O, or any implementation of the Kubernetes Container Runtime Interface (CRI).
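Below is a hypothetical sketch of that kubelet sync loop in Go. The Runtime interface stands in for a real CRI implementation such as containerd or CRI-O; every name in this sketch is illustrative, not the actual kubelet code.

```go
// A hypothetical sketch of the kubelet's core responsibility: given the
// PodSpecs assigned to this node, make sure each container is running.
package main

import "fmt"

type PodSpec struct {
	Name  string
	Image string
}

// Runtime abstracts the container runtime behind a CRI-like boundary.
type Runtime interface {
	IsRunning(pod string) bool
	Start(pod, image string) error
}

type fakeRuntime struct{ running map[string]bool }

func (r *fakeRuntime) IsRunning(pod string) bool { return r.running[pod] }
func (r *fakeRuntime) Start(pod, image string) error {
	fmt.Printf("pulling %s and starting %s\n", image, pod)
	r.running[pod] = true
	return nil
}

// syncPods is the kubelet-style loop: ensure every assigned PodSpec has a
// running container; anything Kubernetes did not create is left alone.
func syncPods(rt Runtime, specs []PodSpec) {
	for _, s := range specs {
		if !rt.IsRunning(s.Name) {
			_ = rt.Start(s.Name, s.Image)
		}
	}
}

func main() {
	rt := &fakeRuntime{running: map[string]bool{}}
	syncPods(rt, []PodSpec{{Name: "web", Image: "nginx:1.27"}})
}
```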

3. Interaction Between Control Plane and Nodes

The interaction between the control plane and nodes is designed to be secure and resilient, relying heavily on the API server as an intermediary.

Node Registration and Heartbeats

Nodes must be registered with the API server, either manually or by the kubelet self-registering. Once registered, the kubelet sends periodic "heartbeats" to the control plane to prove availability.

  • Leases: Kubernetes uses Lease objects in the kube-node-lease namespace to communicate these heartbeats; each node has a corresponding Lease that its kubelet renews (see the sketch after this list).
  • Failure Detection: If the control plane stops receiving heartbeats, the Node controller marks the Node's Ready condition as Unknown and, after a grace period, may trigger eviction of the Pods on that node.
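Here is a minimal client-go sketch that reads those per-node Lease objects; a node whose lease has not been renewed recently is a candidate for the Unknown status described above. The kubeconfig path is an illustrative assumption.

```go
// A minimal sketch: read the per-node Lease objects that kubelets renew
// as their heartbeats. The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// One Lease per node lives in the kube-node-lease namespace.
	leases, err := clientset.CoordinationV1().Leases("kube-node-lease").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, l := range leases.Items {
		if l.Spec.RenewTime != nil {
			fmt.Printf("node %s last heartbeat: %s\n", l.Name, l.Spec.RenewTime.Time)
		}
	}
}
```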

Communication Paths

  • Node to Control Plane: All API usage from nodes (or Pods running on them) terminates at the API server. Nodes do not communicate with each other directly to coordinate; they communicate their status to the API server.
  • Control Plane to Node: There are two primary paths from the API server to the nodes:
    1. To the Kubelet: Used for fetching logs, attaching to running Pods, and port-forwarding (the log-streaming example after this list follows exactly this path).
    2. To Services/Pods: Used when the API server proxies a connection to a specific Pod or Service.
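The following client-go sketch exercises the first path: the log request goes to the API server, which connects to the kubelet on the target node and streams the result back. The namespace, pod name, and kubeconfig path are illustrative assumptions.

```go
// A minimal sketch of the control-plane-to-kubelet path: kubectl logs
// works the same way (client -> API server -> kubelet). The namespace,
// pod name, and kubeconfig path are assumptions.
package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The API server proxies this request to the kubelet on the pod's node.
	req := clientset.CoreV1().Pods("default").GetLogs("web", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	data, _ := io.ReadAll(stream)
	fmt.Print(string(data))
}
```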

The Declarative Loop

The interaction is fundamentally declarative. The control plane (via the scheduler) assigns a Pod to a Node. The kubelet on that node watches for this assignment. When it sees a Pod assigned to its node, it reads the PodSpec and instructs the container runtime to pull the image and start the container. If the container fails, the kubelet attempts to restart it locally. If the node fails, the control plane notices the missing heartbeats and reschedules the workload elsewhere.
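To close the loop, here is a minimal client-go sketch of the kubelet's side of this pattern: watching the API server for Pods whose spec.nodeName matches a given node. The node name and kubeconfig path are illustrative assumptions.

```go
// A minimal sketch of how a kubelet-style agent learns about assignments:
// watch the API server for Pods scheduled onto this node. The node name
// and kubeconfig path are assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch only Pods scheduled onto this node, as the kubelet does.
	watcher, err := clientset.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node-1",
	})
	if err != nil {
		panic(err)
	}
	defer watcher.Stop()

	for event := range watcher.ResultChan() {
		pod, ok := event.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		// The real kubelet would now read the PodSpec and drive the
		// container runtime; here we just report the assignment.
		fmt.Printf("%s: pod %s/%s assigned here\n", event.Type, pod.Namespace, pod.Name)
	}
}
```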