How does Kubernetes NetworkPolicy actually work — and why isn't mine doing anything?
By default, Kubernetes runs a flat, open network: every Pod gets its own IP address and can talk to every other Pod directly, regardless of node or namespace. That's great for getting started. In production, it's a security problem.
NetworkPolicy is how you lock it down.
What NetworkPolicy Actually Is
A NetworkPolicy is a declarative Kubernetes resource that controls traffic flow at the IP address or port level (OSI layer 3/4). You define exactly which Pods can talk to which other Pods, which namespaces, and which external IP ranges.
The isolation model has three important behaviours:
Default non-isolated: A Pod with no NetworkPolicy selecting it accepts all inbound and outbound traffic. Nothing is blocked until you create a policy that targets it.
Default deny on selection: The moment a NetworkPolicy selects a Pod for a specific direction — ingress or egress — that Pod becomes isolated for that direction. Only traffic explicitly permitted in the policy is allowed. Everything else is dropped silently.
Additive rules: Multiple policies applying to the same Pod do not conflict. The result is the union of all allowed traffic across all applicable policies. There is no deny rule priority or evaluation order — if any policy permits a connection, it goes through.
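As a sketch of the additive model (the names and labels here are illustrative assumptions): two separate policies can select the same backend Pods, one allowing the frontend and one allowing a metrics scraper. The effective result is the union of the two — both peers can connect, and neither policy overrides the other.

```yaml
# Policy 1: allow frontend -> backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend    # assumed name
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
---
# Policy 2: additionally allow a metrics scraper -> backend.
# Both policies select app=backend; the allowed traffic is their union.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-metrics     # assumed name
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: prometheus # assumed label for the scraper
```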
The Critical Gotcha: Kubernetes Does Not Enforce NetworkPolicy
This is the most important thing to understand about NetworkPolicy, and the source of a huge amount of confusion:
Kubernetes stores your NetworkPolicy. Your CNI plugin enforces it.
When you kubectl apply a NetworkPolicy, the API server validates and stores it as a record of intent. That's all Kubernetes does. The actual packet filtering happens in your CNI plugin's data plane.
If you're running standard Flannel, your NetworkPolicy objects are being stored but completely ignored. Flannel does not support NetworkPolicy enforcement.
CNI plugins that do enforce NetworkPolicy:
- Calico — most common in production
- Cilium — eBPF-based, known for performance
- Antrea — VMware's CNI
- Weave Net
- Kube-router
Always verify your CNI supports NetworkPolicy before writing policies and assuming they're active.
`kubectl describe networkpolicy` will show you what was parsed, but it won't tell you whether the CNI is actually enforcing it.
How to Configure a NetworkPolicy
A NetworkPolicy spec has four sections:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
  namespace: default
spec:
  podSelector:          # which pods this policy applies to
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:              # what traffic is allowed IN
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:               # what traffic is allowed OUT
  - to:
    - namespaceSelector:
        matchLabels:
          name: monitoring
    ports:
    - protocol: TCP
      port: 9090
```

When defining peers in `from` or `to` rules, you have three selector types:

- `podSelector` — specific Pods in the same namespace
- `namespaceSelector` — entire namespaces matched by label
- `ipBlock` — external IP CIDR ranges (e.g., `10.0.0.0/8`)
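A single rule can mix peer types. In this hypothetical fragment (the labels and CIDR are assumptions, not values from your cluster), traffic is admitted from Pods in a labelled namespace or from an external range — peers listed as separate items in the same `from` list are OR'd together:

```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        team: platform        # assumed namespace label
  - ipBlock:
      cidr: 203.0.113.0/24    # assumed external range (documentation CIDR)
  ports:
  - protocol: TCP
    port: 8080
```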
The Default Deny Pattern
The recommended starting point for any production namespace is a default deny-all policy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}    # selects ALL pods in the namespace
  policyTypes:
  - Ingress
  - Egress
```

This immediately isolates every Pod in the namespace. From here, you add explicit allow policies for the traffic you actually need. Because policies are additive, each new allow policy punches a specific hole without affecting the deny-all baseline.
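One hole almost every namespace needs after a default deny on egress is DNS — without it, Pods can no longer resolve Service names, which shows up as puzzling lookup failures. A sketch of the allow policy (the `k8s-app: kube-dns` label is the common default for CoreDNS, but verify it matches your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress    # assumed name
  namespace: production
spec:
  podSelector: {}           # all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}     # any namespace
      podSelector:
        matchLabels:
          k8s-app: kube-dns     # assumed label; common CoreDNS default
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```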
Troubleshooting NetworkPolicy
Network policies drop traffic silently at the kernel level. Misconfigurations show up as mysterious application timeouts, not clear error messages.
1. Verify Your CNI Actually Enforces NetworkPolicy
If traffic is flowing when it shouldn't be, first check whether your CNI supports enforcement. The quickest test is to apply a default deny-all to a test namespace and verify that a simple curl between Pods is actually blocked.
```bash
kubectl describe networkpolicy <name> -n <namespace>
```

This shows you exactly how the control plane parsed your selectors. A common YAML mistake is indentation: an incorrectly indented `namespaceSelector` and `podSelector` changes the logic from "Pods with label X in namespace Y" to "Pods with label X OR any Pod in namespace Y". The describe output will reveal this.
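The indentation trap looks like this side by side (labels are illustrative):

```yaml
# AND: one peer combining both selectors ->
# Pods labelled app=frontend that are in a namespace labelled name=prod
- from:
  - namespaceSelector:
      matchLabels:
        name: prod
    podSelector:          # same list item: combined with the selector above
      matchLabels:
        app: frontend

# OR: two separate peers -> any Pod in a name=prod namespace,
# PLUS any app=frontend Pod in the policy's own namespace
- from:
  - namespaceSelector:
      matchLabels:
        name: prod
  - podSelector:          # new list item ("-"): an independent peer
      matchLabels:
        app: frontend
```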
2. Test with Ephemeral Pods
Don't debug using your application. Launch a throwaway Pod to test raw network paths:
```bash
# Spin up a busybox pod with a specific label
kubectl run debug-pod --rm -ti \
  --labels="app=frontend" \
  --image=busybox \
  -- /bin/sh

# Inside the pod — test TCP connectivity
wget --spider --timeout=1 http://backend-service:8080
```

If it times out, the policy is blocking as expected. If it returns a response, the connection is permitted.
3. Never Use ping to Test NetworkPolicy
NetworkPolicy operates at Layer 4 — TCP, UDP, and SCTP. ICMP (the protocol ping uses) is entirely undefined in the Kubernetes NetworkPolicy specification.
Depending on your CNI plugin, a deny-all policy might block TCP while still allowing ICMP pings to succeed. This leads to exactly the wrong conclusion — you ping, it responds, you assume the policy isn't working, but actually your TCP traffic is being correctly dropped.
Always test with curl, wget, nc, or telnet.
4. hostNetwork Pods Are a Blind Spot
If a Pod runs with hostNetwork: true, NetworkPolicy behaviour is often undefined or ignored. Most CNI plugins cannot properly distinguish hostNetwork Pod traffic from the underlying host node's traffic. Policies targeting these Pods via podSelector frequently fail silently, and their traffic is treated as node-level IP communication rather than Pod traffic.