How does Kubernetes networking work at a high level (Pod-to-Pod, Pod-to-Service, external traffic)?
Hello. I am a Senior Kubernetes platform engineer. Networking in Kubernetes often feels like "magic" to newcomers, but it relies on a very strict set of design rules and modular components working in concert.
To understand Kubernetes networking, we must look at three distinct layers: The Pod Network (how containers talk), The Service Network (how we abstract ephemeral IPs), and External Traffic (how the world reaches the cluster).
Here is the high-level architecture of Kubernetes networking.
1. The Fundamental Model: The "Flat" Network
Unlike the old days of Docker where we mapped container ports to host ports (e.g., 8080:80), Kubernetes imposes a "flat" network model. This is the foundational rule you must understand: Every Pod gets its own unique IP address.
The Design Rules:
- All Pods can communicate with all other Pods without using Network Address Translation (NAT).
- Agents on a node (like the kubelet) can communicate with all Pods on that node.
- The IP address that a Pod sees for itself is the same IP address that others see for it.
How it works (Pod-to-Pod):
- Intra-Node (Same Node): When Pod A talks to Pod B on the same node, traffic flows through a virtual ethernet bridge (often `cni0`) or a similar local switching mechanism built from Linux network namespaces and veth pairs.
- Inter-Node (Across Nodes): When Pod A talks to Pod C on a different node, the traffic leaves the node. This is handled by a CNI (Container Network Interface) plugin. Depending on the plugin (e.g., Calico, Flannel, Cilium), this traffic is either routed directly (if the underlying network supports it) or encapsulated in an overlay network (such as VXLAN) to reach the destination node.
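You can see the flat model in practice by launching two Pods and curling one from the other by IP. A minimal sketch (the Pod names and images are illustrative, not from any particular cluster):

```yaml
# Two plain Pods; each receives its own cluster-wide IP from the CNI plugin.
apiVersion: v1
kind: Pod
metadata:
  name: pod-a                       # hypothetical name
spec:
  containers:
    - name: shell
      image: nicolaka/netshoot      # any image with curl works
      command: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
spec:
  containers:
    - name: web
      image: nginx
```

`kubectl get pod pod-b -o wide` shows pod-b's Pod IP, and `kubectl exec pod-a -- curl <that-ip>` reaches it directly, with no NAT and no port mapping, regardless of which nodes the two Pods landed on.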
2. Service Networking: Solving the "Churn" Problem
Pods are ephemeral; they are created and destroyed to match the desired state of the cluster. If a Pod dies and is replaced, the new Pod gets a new IP. This makes Pod IPs unreliable for stable communication.
The Solution: The Service. A Service is an abstraction that provides a stable virtual IP (the ClusterIP) and a stable DNS name for a logical set of Pods.
How it works (Pod-to-Service):
- DNS Resolution: A Pod queries the cluster DNS (CoreDNS) for `my-service`. DNS returns the Service's ClusterIP (a virtual IP).
- Traffic Interception (kube-proxy): The Pod sends traffic to that ClusterIP. The traffic never actually hits a network interface with that IP. Instead, a component called kube-proxy running on every node intercepts this traffic.
- DNAT: kube-proxy (commonly running in `iptables` or `IPVS` mode) uses packet filtering rules to redirect (Destination NAT) the packet from the virtual ClusterIP to the actual IP of one of the healthy backing Pods.
- EndpointSlices: The control plane updates EndpointSlice objects whenever Pods come and go. kube-proxy watches these slices to keep its forwarding rules up to date.
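The pieces above come together in a minimal ClusterIP Service. A sketch (the name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # DNS name: my-service.<namespace>.svc.cluster.local
spec:
  type: ClusterIP         # the default; a stable virtual IP inside the cluster
  selector:
    app: my-app           # the logical set of Pods this Service fronts
  ports:
    - port: 80            # the port clients dial on the ClusterIP
      targetPort: 8080    # the port the backing Pods actually listen on
```

The control plane builds EndpointSlices from the Pods matching `app: my-app`, and kube-proxy programs DNAT rules that rewrite traffic from `<ClusterIP>:80` to `<pod-ip>:8080`.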
3. External Traffic: Getting In and Out
The ClusterIP described above is only accessible inside the cluster. To handle traffic from the outside world, we use higher-level constructs.
Ingress (Traffic IN)
- NodePort: Opens a specific port (e.g., 30007) on every node in the cluster. Traffic sent to any node's IP at that port is forwarded to the Service.
- LoadBalancer: Asks the cloud provider (AWS, GCP, Azure) to provision a physical or virtual load balancer. This LB sends traffic to the NodePorts, which then route to the Pods.
- Ingress / Gateway API: An Ingress is not a Service type, but a router. It sits behind a LoadBalancer and routes HTTP/HTTPS traffic to internal Services based on hostnames (e.g., `api.example.com`) or paths (e.g., `/app`). This lets you share one LoadBalancer across many Services.
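As a sketch, here is an Ingress that fans a single load balancer out to two Services by hostname and path (the class, hostnames, and Service names are illustrative and assume an NGINX ingress controller is installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-routes
spec:
  ingressClassName: nginx          # assumed controller; yours may differ
  rules:
    - host: api.example.com        # hostname-based routing
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
    - host: www.example.com
      http:
        paths:
          - path: /app             # path-based routing
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```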
Egress (Traffic OUT)
- Masquerading (SNAT): When a Pod sends traffic to the public internet, the destination does not know how to route back to the Pod's private IP (e.g., `10.244.0.5`). To solve this, Kubernetes (via the `ip-masq-agent` or the CNI plugin) performs Source NAT (SNAT): it replaces the Pod's source IP with the node's physical IP address. The response comes back to the node, which reverses the NAT and hands the packet back to the Pod.
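Under the hood this is typically a single netfilter rule. A simplified sketch of the kind of rule the CNI or `ip-masq-agent` installs (the Pod CIDR `10.244.0.0/16` is illustrative):

```shell
# In the nat table's POSTROUTING chain: any packet leaving the node whose
# source is in the Pod CIDR but whose destination is outside it gets its
# source IP rewritten to the node's own IP.
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE
```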
Summary of Communication Flows
| Path | Primary Mechanism | Description |
|---|---|---|
| Pod → Pod | CNI Plugin | Direct IP connectivity. No NAT. Traffic flows via bridge or overlay. |
| Pod → Service | kube-proxy + DNS | Pod resolves the Service DNS name to a VIP. kube-proxy (iptables/IPVS) intercepts traffic to the VIP and forwards it to one of the backing Pod IPs. |
| External → Pod | LoadBalancer / Ingress | Cloud LB sends traffic to NodePort. Node routes to Pod. Ingress provides Layer 7 routing logic. |
| Pod → Internet | SNAT (Masquerade) | Traffic leaving the cluster is Masqueraded to appear as if it came from the Node's IP. |
A Note on Security
By default, the Kubernetes network is "open"—all Pods can talk to all other Pods. As a platform engineer, you should use NetworkPolicies to restrict this. NetworkPolicies act like a firewall, allowing you to define allow-lists for which Pods can communicate with one another.
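As a sketch, here is a NetworkPolicy allowing only Pods labelled `app: frontend` to reach Pods labelled `app: backend` on port 8080 (the labels and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend            # the Pods this policy protects
  policyTypes:
    - Ingress                 # once selected, all other ingress is denied
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only these Pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

One caveat: enforcement is the CNI plugin's job. A cluster whose plugin does not implement NetworkPolicy (plain Flannel, for example) will accept this object but silently ignore it.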