04 - Services¶
What this session is¶
About 45 minutes. Services - how pods reach each other and how the outside world reaches your pods. The piece that turns "3 pods running nginx" into "a stable endpoint other things can talk to."
The problem¶
Each pod gets its own IP, but pod IPs are ephemeral - a pod dies and is replaced; the new one has a different IP. You can't hardcode a pod IP anywhere.
A Service is a stable virtual IP + DNS name that fronts a set of pods. Clients talk to the service; the service load-balances across whichever pods currently match.
Your first Service¶
Assuming the nginx Deployment from page 03 is running:
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx        # match pods with label app=nginx
  ports:
    - port: 80        # the service port
      targetPort: 80  # the container port
  type: ClusterIP     # default - internal only
```
Apply it with `kubectl apply -f service.yaml`, then check with `kubectl get svc nginx`.
The service has a stable IP. It also gets a DNS name: nginx.default.svc.cluster.local (or just nginx within the same namespace).
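For reference, the check looks something like this (the ClusterIP value is illustrative and will differ per cluster):

```shell
kubectl get svc nginx
# NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
# nginx   ClusterIP   10.96.123.45   <none>        80/TCP    5s
```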
The four Service types¶
| Type | What it does |
|---|---|
| `ClusterIP` (default) | Internal-only. Reachable from within the cluster. |
| `NodePort` | Exposes the service on a port on every node (30000-32767). |
| `LoadBalancer` | In a cloud cluster, provisions an external cloud load balancer. |
| `ExternalName` | DNS alias to an external hostname. Rare. |
For pod-to-pod traffic: ClusterIP. For "I want a stable external port for testing": NodePort. For "I want a real public endpoint in the cloud": LoadBalancer. Local clusters often don't give you LoadBalancers automatically - use port-forward or NodePort.
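If you'd rather not write YAML for a quick test, `kubectl expose` can generate the Service from the Deployment (shown as a convenience; for anything you keep, write the manifest):

```shell
# ClusterIP (the default)
kubectl expose deployment nginx --port=80 --target-port=80
# or a NodePort instead
kubectl expose deployment nginx --port=80 --type=NodePort
```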
Test it: pod-to-pod¶
Run a debug container:
```shell
kubectl run debug --rm -it --image=alpine -- sh
# inside the container:
wget -qO- http://nginx
# should return nginx's welcome HTML
exit
```
The wget reached the nginx pods via the service's ClusterIP: DNS resolved `nginx` to the service's IP, and repeated requests get balanced across the three nginx replicas.
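You can also confirm the DNS step from inside the debug container - the resolved address is the service's ClusterIP, not a pod IP (address shown is illustrative):

```shell
nslookup nginx
# Name:    nginx.default.svc.cluster.local
# Address: 10.96.123.45
```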
NodePort: external access (sort of)¶
```yaml
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # optional - leave out for auto-assigned
```
Now reachable at <any-node-IP>:30080. On a local single-node cluster, that's localhost:30080.
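A quick sanity check on a local single-node cluster, assuming the fixed `nodePort: 30080` from the snippet above:

```shell
curl -s http://localhost:30080 | head -n 4
# expect the start of nginx's welcome page
```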
NodePorts are for tests and ad-hoc access. For production, use a LoadBalancer or Ingress (page 09).
LoadBalancer (cloud)¶
In AWS/GCP/Azure, this provisions a real load balancer with a public IP. On local clusters, this stays pending forever - you'd use minikube tunnel or similar to fake it. Stick with port-forward for local dev.
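For recognition, the only change from the ClusterIP manifest is the type - on a local cluster `kubectl get svc` would then show `EXTERNAL-IP: <pending>` indefinitely:

```yaml
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```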
How selectors find pods¶
The Service's selector matches pods by labels. Recall the Deployment's pod template from page 03: it stamps every pod with the label `app: nginx`. The Service's selector is that same `app: nginx`, so the service tracks all pods carrying it. As pods come and go, the service automatically updates its list of endpoints.
You can see the actual endpoints with `kubectl get endpoints nginx`.
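Sample output - the pod IPs are illustrative and will differ, but with three replicas you should see three entries:

```shell
kubectl get endpoints nginx
# NAME    ENDPOINTS                                   AGE
# nginx   10.244.0.5:80,10.244.0.6:80,10.244.0.7:80   2m
```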
Port terminology¶
Sources of confusion:
- `port` - the Service's port (what clients call).
- `targetPort` - the container's port (where traffic is forwarded).
- `nodePort` - for NodePort services, the port exposed on every node.
Often they're all the same (`port: 80`, `targetPort: 80`), but they don't have to be. Useful: `port: 80`, `targetPort: 8080` exposes a service on port 80 that forwards to the container's 8080.
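A minimal sketch of the mismatched case (the `app: myapp` label is hypothetical) - clients hit the service on 80, traffic lands on a container listening on 8080:

```yaml
spec:
  selector:
    app: myapp          # hypothetical app whose container listens on 8080
  ports:
    - port: 80          # clients call the service on 80
      targetPort: 8080  # traffic is forwarded to the container's 8080
```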
Service discovery via DNS¶
Inside the cluster, every Service has a DNS name:
```
<service-name>.<namespace>.svc.cluster.local
# or just <service-name> if you're in the same namespace
```
So a pod in the default namespace can reach the nginx service simply as nginx. A pod in another namespace would use nginx.default or nginx.default.svc.cluster.local.
This is how multi-service apps wire together: each app uses the service name of its dependency.
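As a sketch (names illustrative), a web app's Deployment might point at a `postgres` Service purely by its DNS name:

```yaml
env:
  - name: DATABASE_HOST
    value: postgres.default.svc.cluster.local  # the postgres Service's DNS name
```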
Headless service (briefly)¶
A service with clusterIP: None doesn't get a virtual IP - instead, DNS returns the IPs of all matching pods. Used for things like databases where clients need to talk to specific replicas. Mentioned for recognition.
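A minimal headless variant of the nginx Service, for recognition - the only difference is `clusterIP: None`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None   # headless: DNS returns the pod IPs directly
  selector:
    app: nginx
  ports:
    - port: 80
```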
Exercise¶
1. Apply the nginx Deployment from page 03, then the Service: `kubectl apply -f service.yaml`
2. Test pod-to-pod: `kubectl run debug --rm -it --image=alpine -- wget -qO- http://nginx`
3. Change to NodePort and access externally: edit `service.yaml` to `type: NodePort`, add `nodePort: 30080`, apply again, then open `localhost:30080`.
4. Watch endpoints update as pods change: run `kubectl get endpoints nginx -w` in one terminal while deleting a pod in another.
5. Cleanup: `kubectl delete svc nginx` and delete the nginx Deployment.
What you might wonder¶
"Is the Service load-balancing or round-robin?" It depends on the kube-proxy mode: iptables mode picks a backend roughly at random per connection, IPVS mode defaults to round-robin, and eBPF dataplanes vary. For most cases it's "good enough" round-robin-ish distribution across the endpoints.
"Can a Service select pods from multiple Deployments?"
Yes - anything matching the labels. Useful for blue-green: two Deployments both labeled app: myapp; the Service serves both during the switchover.
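A blue-green sketch (names hypothetical): both Deployments' pod templates carry the shared `app: myapp` label, and the Service selects only on that shared label, so it fronts both during the switchover:

```yaml
# pod template labels in Deployment "myapp-blue":
labels:
  app: myapp
  version: blue
# pod template labels in Deployment "myapp-green":
labels:
  app: myapp
  version: green
# the Service's selector matches only the shared label:
selector:
  app: myapp
```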
"What about session stickiness?"
`spec.sessionAffinity: ClientIP` makes the service stick a client to the same backend pod. Rarely needed; mentioned for recognition.
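For recognition, the relevant fields look like this (the timeout block is optional; 10800 seconds is the Kubernetes default):

```yaml
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # sticky window per client IP (3 hours, the default)
```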
"What's the difference between Service and Ingress?" Services are L4 (TCP/UDP). Ingresses are L7 (HTTP/HTTPS) and add host/path routing. Page 09.
Done¶
- Write a Service.
- Use it for pod-to-pod communication via DNS.
- Recognize the four Service types.
- Distinguish `port`, `targetPort`, `nodePort`.
- Inspect Endpoints.
Next: ConfigMaps and Secrets →