Kubernetes NetworkPolicy: Practical Examples for Ingress and Egress

How do Kubernetes NetworkPolicies actually isolate workloads?

Kubernetes NetworkPolicies provide an application-centric construct to govern how pods communicate with various network entities over OSI layer 3 or layer 4 (TCP, UDP, and SCTP). They allow cluster architects to implement a zero-trust network model, ensuring that only explicitly authorized traffic flows within the cluster and to the outside world.

How NetworkPolicy Works and CNI Enforcement

To understand NetworkPolicies, you must first understand the concept of "isolation". By default, all pods in a Kubernetes cluster are "non-isolated". This means they accept traffic from any source and can send traffic to any destination without restriction.

A pod becomes "isolated" for a specific direction (ingress or egress) the moment a NetworkPolicy selects that pod in its podSelector and includes that direction in its policyTypes list. Once a pod is isolated for ingress, the only allowed incoming connections are those explicitly permitted by the ingress list of the applicable policies. The same principle applies to egress.

Crucially, Kubernetes NetworkPolicies are strictly additive (deny-by-default, allow-by-exception). There are no explicit "deny" rules in the API. If multiple policies apply to a single pod, the effective permissions are the union of all allowed connections.
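
For example, two policies selecting the same pods simply merge. A hedged sketch (namespace and label names are illustrative):

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: shop
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: shop
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: monitoring

A role: db pod selected by both policies accepts traffic from either source; neither policy can subtract what the other grants.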

CNI Requirement: The Kubernetes API server merely stores the NetworkPolicy resources. It does not enforce them. To use network policies, your cluster must run a Container Network Interface (CNI) plugin that supports policy enforcement, such as Calico, Cilium, Antrea, or Weave Net. Creating a policy in a cluster without a supporting CNI will have absolutely no effect.


Practical Implementation: The Default Deny Pattern

In a multi-tenant or secure environment, starting with a strict network isolation baseline is a best practice. You should deploy a "default deny" policy in every namespace to ensure that any pod deployed without specific allow rules is entirely cut off from the network.

Example 1: Default Deny All (Ingress and Egress)

This policy targets all pods in the namespace and isolates them for both incoming and outgoing traffic.

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: secure-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Explanation:

  • podSelector: {}: An empty selector matches all pods in the namespace.
  • policyTypes: [Ingress, Egress]: Explicitly isolates the matched pods for both directions.
  • Because the ingress and egress arrays are omitted (or empty), no traffic is permitted.
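
A common, less restrictive variant isolates pods for ingress only, leaving outbound traffic (including DNS) untouched:

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: secure-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Because Egress is absent from policyTypes, the matched pods remain non-isolated for outbound connections.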

Example 2: Allowing Core DNS Traffic

If you apply the "Default Deny All" policy above, your pods will instantly lose the ability to resolve DNS names (and with it the ability to reach Kubernetes Services by name), which breaks almost all applications. You must explicitly allow egress traffic to your cluster's DNS server (typically CoreDNS running in the kube-system namespace).

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: secure-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Explanation: This policy matches all pods in the namespace and adds an explicit egress rule. It allows outbound UDP and TCP traffic on port 53 strictly to pods labeled k8s-app: kube-dns residing in the kube-system namespace. Notice the use of the immutable kubernetes.io/metadata.name label, which the Kubernetes control plane automatically applies to all namespaces, allowing you to reliably target namespaces by name.
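
Alongside DNS, teams frequently pair the default deny with a rule permitting pods to reach each other within their own namespace. A sketch (an empty podSelector under to matches every pod in the policy's own namespace):

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-egress
  namespace: secure-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}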


Ingress Rules: Controlling Inbound Traffic

Ingress rules define what entities can communicate with the targeted pods. You can filter traffic based on the source pod's labels, the source namespace's labels, or specific IP blocks.

Example 3: Allow Ingress from Specific Pods (Same Namespace)

If you have a backend database, you only want your frontend application to access it.

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 5432

Explanation: The policy applies to any pod labeled role: db. The ingress.from rule specifies that only pods in the same namespace possessing the role: frontend label can initiate a TCP connection to port 5432.
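
For the rule above to match, the client pods must actually carry the role: frontend label. A minimal illustrative pod spec (image is an assumption):

yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: default
  labels:
    role: frontend   # must match the podSelector in the ingress rule
spec:
  containers:
  - name: app
    image: nginx   # illustrative image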

Example 4: Allow Ingress from a Specific Namespace

Sometimes you need to expose an internal API to an entirely different tenant or team residing in a different namespace.

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-namespace
  namespace: backend-ns
spec:
  podSelector:
    matchLabels:
      app: internal-api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject

Explanation: This targets pods labeled app: internal-api in the backend-ns. The namespaceSelector allows inbound traffic from any pod that resides in any namespace labeled with project: myproject.
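
For the selector to match, the client namespace must carry the project: myproject label. An illustrative namespace manifest (the name is an assumption):

yaml
apiVersion: v1
kind: Namespace
metadata:
  name: frontend-ns   # illustrative name
  labels:
    project: myproject

You can also add the label to an existing namespace with kubectl label namespace.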

Example 5: The "AND" Condition (Namespace + Pod Selector)

A critical architectural requirement is often restricting access to a specific pod inside a specific namespace. This requires a logical AND condition.

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: strict-api-ingress
  namespace: backend-ns
spec:
  podSelector:
    matchLabels:
      app: secure-api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
      podSelector:
        matchLabels:
          role: client

Explanation: Notice how namespaceSelector and podSelector are nested under the exact same list item (denoted by the lack of a leading hyphen before podSelector). This creates a strict logical AND. Traffic is only allowed if the source pod has the label role: client AND resides in a namespace labeled user: alice.

Example 6: The "OR" Condition Trap

A common mistake engineers make is altering the YAML array structure, fundamentally changing the security posture from an AND to an OR.

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: loose-api-ingress
  namespace: backend-ns
spec:
  podSelector:
    matchLabels:
      app: secure-api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: alice
    - podSelector:
        matchLabels:
          role: client

Explanation: By placing a hyphen - before podSelector, it becomes a separate element in the from array. This policy evaluates as an OR condition. It allows traffic from any pod in the user: alice namespace, OR from any pod labeled role: client in the local backend-ns namespace. This heavily expands the attack surface compared to Example 5.


Egress Rules: Controlling Outbound Traffic

Egress rules secure your cluster against data exfiltration or malware contacting command-and-control servers. They dictate where your isolated pods are allowed to initiate connections.

Example 7: Restricting Egress via IP Blocks

When interacting with external databases, APIs, or legacy systems outside the Kubernetes cluster, you must use ipBlock, as Pod IPs are ephemeral and external systems lack Kubernetes labels.

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: external-db-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.1.0/24
        except:
        - 192.168.1.50/32

Explanation: The pods labeled app: web are permitted to route outbound traffic to the 192.168.1.0/24 subnet. However, the except array prevents them from contacting the specific IP 192.168.1.50.
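
An ipBlock rule can be combined with a ports list to narrow the rule further. A sketch reusing the same CIDR, restricted to an assumed PostgreSQL port:

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: external-db-egress-pg
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.1.0/24
        except:
        - 192.168.1.50/32
    ports:
    - protocol: TCP
      port: 5432   # illustrative database port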

Architectural Note: Cluster ingress/egress mechanisms (like Service NodePorts) often perform Source NAT (rewriting the source IP). It is undefined in the Kubernetes specification whether policy evaluation happens before or after this IP rewrite, and behavior varies heavily by CNI plugin. Be cautious when using ipBlock targeting cluster-internal Service IPs.

Example 8: Egress with Port Ranges

The endPort field, stable since Kubernetes v1.25, lets you target a range of ports rather than defining hundreds of individual port rules.

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multiport-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 32000
      endPort: 32768

Explanation: Pods labeled role: app can initiate TCP connections to the 10.0.0.0/24 CIDR block. Crucially, the traffic is only allowed if the destination port falls within the inclusive range of 32000 to 32768. For this to work, endPort must be equal to or greater than port, and port must be a numeric value rather than a named port. Your CNI plugin must also explicitly support the endPort field.


Testing and Troubleshooting NetworkPolicies

To verify that your policies are functioning correctly, you should intentionally try to violate them using interactive diagnostic pods.

  1. Launch a test pod: Use kubectl run to spin up an ephemeral container in the target namespace.
    bash
    kubectl run busybox --rm -ti --image=busybox -- /bin/sh
  2. Test the connection (Should Fail): Attempt to contact a secured service. If the default deny is working, the request should hang or time out.
    bash
    wget --spider --timeout=2 <service-ip-or-name>
  3. Launch an authorized test pod: Spin up another pod, this time attaching the labels authorized by your NetworkPolicy.
    bash
    kubectl run busybox --rm -ti --labels="role=client" --image=busybox -- /bin/sh
  4. Test the connection (Should Succeed): The same wget command should now succeed immediately.

Common Mistakes and Limitations to Avoid

As an architect, be aware of these common pitfalls and strict API limitations:

  • HostNetwork Bypasses: NetworkPolicy behavior for pods running with hostNetwork: true is largely undefined. Most CNI plugins completely ignore hostNetwork pods when evaluating podSelector, treating their traffic as standard Node-level traffic rather than pod traffic.
  • Protocols beyond TCP/UDP/SCTP: The NetworkPolicy API strictly governs Layer 4 protocols (TCP, UDP, and SCTP). If you define a "deny all" policy, it is only guaranteed to drop these three protocols. The behavior of other protocols, such as ICMP (ping) or ARP, is entirely undefined and depends strictly on your chosen CNI plugin. Some CNIs will drop pings under a deny rule; others will allow them.
  • Node Identity Targeting: You cannot use NetworkPolicies to target Kubernetes Nodes by their Kubernetes identities or labels (e.g., trying to block traffic to node-role.kubernetes.io/control-plane). You can only restrict node access using raw CIDR blocks via ipBlock.
  • Timing and Lifecycle Races: When a new NetworkPolicy is created, the CNI plugin needs time to translate it into underlying rules (such as eBPF maps or iptables chains). If a pod is scheduled before the plugin finishes processing the policy, the pod may briefly run unprotected. If you need a guarantee that enforcement is in place before the application starts, use an init container to block startup until the network path is fully established and policy enforcement has settled.
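
A minimal init-container sketch that gates application startup on working DNS egress (image and probe target are assumptions):

yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-network-gate
spec:
  initContainers:
  - name: wait-for-network
    image: busybox   # illustrative image
    # Loop until cluster DNS answers, i.e. the DNS egress rule is actually enforced.
    command: ['sh', '-c', 'until nslookup kubernetes.default.svc.cluster.local; do sleep 1; done']
  containers:
  - name: app
    image: nginx   # illustrative image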

Based on Kubernetes v1.35 (Timbernetes).