What is an Ingress, how does it differ from a Service, and how is external traffic routed into the cluster?
Let's break down the concept of Ingress, differentiate it from the standard Service resource, and explain the architectural flow of external traffic into your cluster.
1. What is an Ingress?
An Ingress is an API object that manages external access to the Services in a cluster, specifically for HTTP and HTTPS traffic.
While Kubernetes Services provide stable networking for Pods, an Ingress acts as a "smart router" or entry point that sits in front of multiple Services. It consolidates routing rules into a single resource, allowing you to expose multiple internal components behind a single external IP address or Load Balancer.
Key features provided by Ingress include (a sample manifest follows this list):
- Load Balancing: Distributing traffic across backend services.
- SSL/TLS Termination: Decrypting HTTPS traffic at the edge so backend services can speak plain HTTP.
- Name-based Virtual Hosting: Routing traffic to different services based on the hostname (e.g., `foo.example.com` vs. `bar.example.com`).
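To make this concrete, here is a minimal sketch of an Ingress manifest exercising those features. All names (`web.example.com`, `web-frontend`, `web-tls`) are illustrative assumptions, not values mandated by Kubernetes:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # illustrative name
spec:
  tls:
  - hosts:
    - web.example.com            # HTTPS is terminated here, at the edge
    secretName: web-tls          # assumed Secret holding the certificate and key
  rules:
  - host: web.example.com        # name-based virtual hosting rule
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend   # backend Service that receives plain HTTP
            port:
              number: 80
```

Note that the backend Service still speaks plain HTTP; the certificate lives in a Secret and TLS is terminated at the Ingress layer.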
2. Ingress vs. Service: The Engineering Distinction
It is crucial to understand that Ingress is not a type of Service; it is a separate configuration layer that sits above Services.
| Feature | Service (Layer 4) | Ingress (Layer 7) |
|---|---|---|
| Primary Goal | Defines a logical set of Pods and a policy to access them (internal or external). | Defines rules to route external HTTP/S traffic to Services. |
| Protocol | Protocol agnostic (TCP, UDP, SCTP). | Protocol aware (HTTP, HTTPS). |
| External Access | NodePort/LoadBalancer: Typically exposes one Service per IP/Port. Can be expensive (one cloud LB per service). | Ingress: Exposes multiple Services via a single IP/Load Balancer using routing rules (Host/Path). |
| Scope | Manages network transport to Pods. | Manages routing logic (URIs, Hostnames) to Services. |
Analogy: If a Service is a phone number for a specific department (Sales), Ingress is the company's main switchboard that listens to the request ("I need to talk to Sales") and routes the call to the correct number.
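To see the layering in practice, here is a sketch of the Service side that an Ingress rule might point at (the `web-frontend` name, label, and ports are assumptions carried over from the manifest above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend        # the Service the Ingress backend references
spec:
  type: ClusterIP            # internal-only; the Ingress provides the external entry point
  selector:
    app: web-frontend        # assumed Pod label
  ports:
  - port: 80                 # port the Ingress rule targets
    targetPort: 8080         # assumed containerPort on the Pods
```

The Service handles Layer 4 transport to the Pods; the Ingress handles Layer 7 routing to the Service. Neither replaces the other.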
3. How External Traffic is Routed
Routing does not happen magically just by creating an Ingress resource. The architecture relies on a specific component called an Ingress Controller.
A. The Ingress Controller
Unlike other controllers (like the Deployment controller) which are built into the kube-controller-manager, an Ingress Controller is not started automatically with a cluster. You must explicitly install one (e.g., NGINX, AWS Load Balancer Controller, HAProxy) for Ingress resources to have any effect.
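An Ingress resource is tied to a specific installed controller through an IngressClass. Here is a sketch assuming the community NGINX controller; the class name and `controller` string depend on which controller you install, and the controller's own installation manifests usually create the IngressClass for you:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx   # identifier claimed by the installed controller
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx            # tells that controller to act on this resource
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service        # illustrative backend Service
            port:
              number: 80
```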
B. The Traffic Flow
When external traffic hits the cluster, the flow typically follows this path:
- The Entry Point: The client (e.g., a web browser) sends a request to the External IP address managed by the Ingress Controller (often an external Cloud Load Balancer provisioned by that controller).
- Rule Evaluation: The Ingress Controller inspects the HTTP request headers and URI path. It compares them against the Rules defined in your Ingress resource (the `pathType` sketch after this list shows how path matching is expressed).
  - Host Matching: Is the request for `api.example.com` or `web.example.com`?
  - Path Matching: Is the request for `/store` or `/login`?
- Routing to Service: Once a match is found, the controller routes the traffic to the backend Service specified in the rule.
- Delivery to Pod: The Service then load-balances the traffic to one of the healthy Pods backing that service.
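Path matching during rule evaluation is governed by the `pathType` field of each rule. A sketch using the illustrative `/login` and `/store` paths from the list above (service names and ports are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-matching-demo
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /login
        pathType: Exact          # matches /login only, not /login/reset
        backend:
          service:
            name: auth-service   # assumed backend
            port:
              number: 80
      - path: /store
        pathType: Prefix         # matches /store, /store/cart, /store/checkout, ...
        backend:
          service:
            name: store-service  # assumed backend
            port:
              number: 80
```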
C. Routing Patterns
You can configure Ingress to handle traffic in different ways (both patterns are sketched after this list):
- Simple Fanout: One IP address routes to multiple Services based on the URI path (e.g., `/foo` goes to `service1`, `/bar` goes to `service2`).
- Name-Based Virtual Hosting: One IP address routes to multiple Services based on the Host header (e.g., `foo.bar.com` goes to `service1`, `bar.foo.com` goes to `service2`).
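Both patterns map directly onto the `rules` section of an Ingress. A sketch of each, reusing the example hostnames and service names above (`pathType` values and ports are assumptions):

```yaml
# Simple fanout: one host, paths /foo and /bar split across two Services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
---
# Name-based virtual hosting: two hosts, each routed to its own Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-based-vhost
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: bar.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
```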