# 02 - Pods

## What this session is
About 45 minutes. Pods - Kubernetes' smallest deployable unit. You'll write your first YAML manifest, apply it, inspect it, and debug a broken one.
## What a pod is
A Pod is one or more containers that share:
- A network namespace (same IP, same ports - they can reach each other on localhost).
- Volumes.
- A lifecycle (created together, destroyed together).
99% of pods have one container. The multi-container case is for tightly-coupled helpers ("sidecar pattern" - e.g. a log shipper running alongside the main app).
Think of a pod as "a wrapper for one container + its co-located helpers." When we say "deploying nginx to Kubernetes," we mean "a pod with one nginx container."
You almost never run pods directly. You'll use Deployments (page 03) which manage pods for you. But understanding pods first is essential.
## Your first pod via YAML
```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    ports:
    - containerPort: 80
```
Apply:

```sh
kubectl apply -f pod.yaml
```

Output: `pod/nginx created`.

Inspect:

```sh
kubectl get pods
kubectl describe pod nginx
```

`describe` shows everything - image, IP, node, events. Read it.

Logs:

```sh
kubectl logs nginx
```

Reach it (port-forward):

```sh
kubectl port-forward pod/nginx 8080:80
# in another terminal: curl localhost:8080
```

Delete:

```sh
kubectl delete pod nginx
```
## Anatomy of the YAML
Every Kubernetes manifest has the same four top-level keys:
| Key | What it is |
|---|---|
| `apiVersion` | Which API version this resource uses (`v1` for core resources) |
| `kind` | Type of resource (Pod, Deployment, Service, ...) |
| `metadata` | Name, labels, annotations |
| `spec` | The actual configuration (resource-specific) |
The names are stable across resources. Once you've memorized them, every K8s YAML reads with the same structure.
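To see that the structure carries across, here is a minimal manifest for a different resource kind - a Service (hypothetical name `example-svc`, shown only to illustrate the skeleton) - with the same four top-level keys:

```yaml
apiVersion: v1            # Service is a core resource, so plain v1
kind: Service
metadata:
  name: example-svc       # hypothetical name, for illustration only
spec:
  selector:
    app: nginx            # matches the label on the pod above
  ports:
  - port: 80
```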
## More detailed pod spec
A more realistic pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    ports:
    - containerPort: 80
      name: http
    env:
    - name: HELLO
      value: "world"
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
  restartPolicy: Always
```
New fields:
- `env` - environment variables for the container.
- `resources` - CPU and memory budget.
    - `requests` - what the scheduler reserves (used to decide which node has room).
    - `limits` - hard ceiling (the container is throttled or killed if it exceeds them).
    - CPU is in millicores: `100m` = 0.1 CPU. Memory: `128Mi` = 128 mebibytes (use `Mi`/`Gi`, not the SI `M`/`G`).
- `livenessProbe` - Kubernetes periodically checks; if it fails enough times, the container is restarted.
- `restartPolicy` - `Always` (default), `OnFailure`, `Never`.
Always set resource requests and limits. Pods without them can starve other pods or get killed unpredictably.
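One unit gotcha worth pinning down: CPU accepts both fractional cores and millicores, and the two spellings mean the same thing - a fragment, not a complete manifest:

```yaml
resources:
  requests:
    cpu: "100m"      # 100 millicores = 0.1 cores; "0.1" means the same
    memory: "128Mi"  # mebibytes; "128M" (SI megabytes) is slightly less
```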
## Multi-container pod (sidecar pattern)
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: logs
      mountPath: /shared/logs
  - name: log-shipper
    image: my-log-shipper:1.0
    # this container scrapes /shared/logs and sends the logs elsewhere
    volumeMounts:
    - name: logs
      mountPath: /shared/logs
  volumes:
  - name: logs
    emptyDir: {}
```
Both containers share the volume `logs` (an `emptyDir` - wiped when the pod dies). The main app writes logs there; the sidecar reads and ships them elsewhere.
You'll rarely write this yourself - most needs are met by a single container. Recognize the pattern when you see it.
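With two containers in one pod, the kubectl commands from earlier need a `-c <container>` flag to pick which one. Assuming the pod above is running, the commands would look like:

```sh
kubectl logs app-with-sidecar -c log-shipper    # logs from the sidecar only
kubectl exec -it app-with-sidecar -c app -- sh  # shell into the main container
```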
## Pod lifecycle / status
`kubectl get pods` shows STATUS. Common values:
- Pending - pod is accepted but containers haven't started yet (image pulling, scheduling).
- Running - at least one container is alive.
- Succeeded - all containers exited with code 0. (Like a batch job.)
- Failed - at least one container exited with a nonzero code, and `restartPolicy` won't retry.
- CrashLoopBackOff - container keeps crashing; Kubernetes is backing off restarts.
- ImagePullBackOff - image can't be pulled (wrong name, no auth).
`kubectl describe` shows recent events - the timeline of what happened. It almost always tells you what's wrong.
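To watch a pod walk through these states live (Pending → Running, or into a BackOff), the watch flag streams status changes as they happen:

```sh
kubectl get pods -w    # -w (--watch) streams updates; Ctrl-C to stop
```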
## Debugging a broken pod
Real workflow when a pod won't start:
1. `kubectl get pods` - what state is it in?
2. `kubectl describe pod <name>` - read the Events at the bottom. Usually says exactly what's wrong.
3. `kubectl logs <name>` - what did the container print before crashing?
4. `kubectl logs <name> --previous` - logs from the previous (crashed) container, if it restarted.
5. `kubectl exec -it <name> -- sh` - shell in (if the container is at least briefly running).
Most issues are: wrong image name, missing env var, wrong port, can't reach a dependency, OOMKilled, no resource room on the node.
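For the OOMKilled case specifically, the termination reason is recorded in the pod's status and can be pulled out with a JSONPath query - a sketch, assuming a pod named `web` whose container has restarted:

```sh
# why did the previous container instance die? (e.g. OOMKilled, Error)
kubectl get pod web -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```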
## Pods are mortal
A pod dies if:

- Its node dies.
- It's evicted (resource pressure).
- You delete it.
- A controller (Deployment) replaces it with a new version.
When a pod dies, it's gone - a new pod gets a new name, a new IP. Anything you stored inside the pod is lost. That's why you use Deployments (which create replacement pods automatically) and volumes (for persistence).
Don't get attached to specific pods.
## Exercise
1. Write and apply the basic pod YAML above (`pod.yaml`). Apply, inspect, port-forward, curl it, delete.

2. Write a pod that uses an env var:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: envtest
    spec:
      containers:
      - name: app
        image: alpine
        command: ["sh", "-c", "echo Hello, $WHO! && sleep 60"]
        env:
        - name: WHO
          value: "Kubernetes"
    ```

    Apply, then `kubectl logs envtest`. Should print "Hello, Kubernetes!".

3. Debug a broken pod intentionally:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: broken
    spec:
      containers:
      - name: app
        image: nonexistent/image:404
    ```

    Apply. Run `kubectl get pods` (should show ImagePullBackOff or ErrImagePull). Run `kubectl describe pod broken`. Read the Events. Delete.

4. Resource limits:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: greedy
    spec:
      containers:
      - name: app
        image: alpine
        command: ["sh", "-c", "while true; do :; done"]
        resources:
          requests: { cpu: "100m", memory: "64Mi" }
          limits: { cpu: "200m", memory: "128Mi" }
    ```

    Apply. `kubectl top pod greedy` (may need metrics-server) - should show CPU usage capped near 200m. Delete.
## What you might wonder
"Why YAML?" Kubernetes' API is declarative - you describe desired state in JSON/YAML. YAML is more human-friendly for editing. JSON also works.
"How do I see the YAML of a running resource?"
Useful for "what did Kubernetes actually create?" - includes auto-generated fields (UID, status, etc.)."What's apiVersion: v1 vs apiVersion: apps/v1?"
Some resources live in different API groups. Core resources (Pod, Service, ConfigMap) are in `v1`. Apps (Deployment, StatefulSet) are in `apps/v1`. Networking (Ingress) is in `networking.k8s.io/v1`. The right apiVersion for each resource is in the docs (or shown by `kubectl explain Pod`).
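Side by side, the group shows up as a prefix on `apiVersion` - a fragment for illustration:

```yaml
apiVersion: v1                    # core group: no prefix
kind: Pod
---
apiVersion: apps/v1               # "apps" group
kind: Deployment
---
apiVersion: networking.k8s.io/v1  # full domain-style group name
kind: Ingress
```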
"What's kubectl explain?"
A built-in docs lookup. kubectl explain pod.spec.containers shows the schema. Useful when you forget a field name.
## Done
- Write a basic Pod manifest.
- Apply, inspect, log, exec, delete.
- Read `kubectl describe` for events.
- Distinguish Pod status values.
- Debug a broken pod.