Scenario: Why can't my Pod resolve Service names?
It is the most famous meme in engineering: "It's always DNS." Kubernetes does little to disprove it: when one microservice cannot talk to another, the breakdown is usually in name resolution, not packet transport.
The Symptom
Your application logs show: fail: lookup db-service.prod.svc.cluster.local: no such host.
1. The Mechanism: The Search Path
Kubernetes DNS is not magic; the kubelet simply injects an /etc/resolv.conf into every container that points at the cluster DNS Service (CoreDNS).
```text
search prod.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
```

Because of options ndots:5, any name with fewer than five dots is tried against each search domain before it is queried as an absolute name. A lookup for db-service is expanded to db-service.prod.svc.cluster.local first, which is exactly how short Service names resolve at all; a lookup for an external name such as example.com, however, fails once per search domain before the real query goes out, amplifying DNS traffic several times over.
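You can read the injected file straight out of a running container to confirm what the resolver actually sees; the Pod name below is a placeholder. As a side note, a trailing dot (db-service.prod.svc.cluster.local.) marks a name as absolute, so the search list is skipped entirely.

```bash
# Print the resolver config kubelet injected into the container
# ("my-app" is an illustrative Pod name)
kubectl exec my-app -- cat /etc/resolv.conf
```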
2. Troubleshooting Steps
Step A: Verify CoreDNS
The DNS server is just a Deployment running in kube-system. If its Pods are down, nothing in the cluster resolves.

```bash
kubectl get pods -n kube-system -l k8s-app=kube-dns
```
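If the Pods report Running but lookups still fail, the CoreDNS logs usually point at the culprit (upstream timeouts, loop detection, crashes). A quick way to pull them, assuming the standard k8s-app=kube-dns label:

```bash
# Tail recent log lines from all CoreDNS replicas
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
```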
Step B: The "nslookup" Test
Launch a throwaway debug Pod to test resolution manually:

```bash
kubectl run -it --rm debug --image=busybox:1.28 --restart=Never -- nslookup db-service
```
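It also helps to compare the short name against the fully qualified one from the same debug Pod: if only the short form fails, the problem is the search path (wrong namespace, typo) rather than CoreDNS itself.

```bash
kubectl run -it --rm debug --image=busybox:1.28 --restart=Never -- nslookup db-service.prod.svc.cluster.local
```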
Step C: Network Policies

Common Trap
A default-deny egress NetworkPolicy also blocks DNS (port 53, UDP and TCP) unless you allow it explicitly. If your Pod cannot reach CoreDNS, it cannot resolve anything.
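One way out is an explicit egress rule for DNS alongside the deny policy. A minimal sketch, assuming the prod namespace from the example above and a cluster recent enough to label namespaces with kubernetes.io/metadata.name (the policy name is illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress        # illustrative name
  namespace: prod
spec:
  podSelector: {}               # applies to every Pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF
```

Everything else stays denied; this rule only lets Pods in the namespace reach the CoreDNS Pods on port 53 over UDP and TCP.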