
02 - Pods

What this session is

About 45 minutes. Pods - Kubernetes' smallest deployable unit. You'll write your first YAML manifest, apply it, inspect it, and debug a broken one.

What a pod is

A Pod is one or more containers that share:

  • A network namespace (same IP, same ports - they can reach each other on localhost).
  • Volumes.
  • A lifecycle (created together, destroyed together).

99% of pods have one container. The multi-container case is for tightly-coupled helpers ("sidecar pattern" - e.g. a log shipper running alongside the main app).

Think of a pod as "a wrapper for one container + its co-located helpers." When we say "deploying nginx to Kubernetes," we mean "a pod with one nginx container."

You almost never run pods directly. You'll use Deployments (page 03) which manage pods for you. But understanding pods first is essential.

Your first pod via YAML

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    ports:
    - containerPort: 80

Apply:

kubectl apply -f pod.yaml

Output: pod/nginx created.

Inspect:

kubectl get pods
kubectl describe pod nginx

describe shows everything - image, IP, node, events. Read it.
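
If you just want the node and pod IP without the full describe output, -o wide adds those columns:

kubectl get pods -o wide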

Logs:

kubectl logs nginx
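
Add -f to stream the log instead of printing it once:

kubectl logs -f nginx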

Reach it (port-forward):

kubectl port-forward pod/nginx 8080:80
# open http://localhost:8080 in your browser

Delete:

kubectl delete pod nginx
# or:
kubectl delete -f pod.yaml

Anatomy of the YAML

Every Kubernetes manifest has the same four top-level keys:

Key          What it is
apiVersion   Which API version this resource uses (v1 for core resources)
kind         Type of resource (Pod, Deployment, Service, ...)
metadata     Name, labels, annotations
spec         The actual configuration (resource-specific)

The names are stable across resources. Once you've memorized them, every K8s YAML reads with the same structure.
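
For example, a Service - another core resource - has exactly the same four keys; only what goes inside spec changes (a minimal sketch):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80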

More detailed pod spec

A more realistic pod:

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx:1.27
    ports:
    - containerPort: 80
      name: http
    env:
    - name: HELLO
      value: "world"
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
  restartPolicy: Always

New fields:

  • env - environment variables for the container.
  • resources - CPU and memory budget.
  • requests - what the scheduler reserves (used to decide which node has room).
  • limits - hard ceiling (container is throttled or killed if it exceeds).
  • CPU in millicores: 100m = 0.1 CPU. Memory: 128Mi = 128 mebibytes (use Mi, Gi, not the SI M, G).
  • livenessProbe - Kubernetes periodically checks; if it fails enough times, the container is restarted.
  • restartPolicy - Always (default), OnFailure, Never.

Always set resource requests and limits. Pods without them can starve other pods or get killed unpredictably.
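
To check what a running pod actually got, the resources block is part of the live object - one way to pull it out for the web pod above:

kubectl get pod web -o jsonpath='{.spec.containers[0].resources}'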

Multi-container pod (sidecar pattern)

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0
    ports:
    - containerPort: 8080
    # the app writes its logs into the shared volume
    volumeMounts:
    - name: logs
      mountPath: /shared/logs
  - name: log-shipper
    image: my-log-shipper:1.0
    # reads /shared/logs and ships the logs elsewhere
    volumeMounts:
    - name: logs
      mountPath: /shared/logs
  volumes:
  - name: logs
    emptyDir: {}

Both containers mount the logs volume (an emptyDir - wiped when the pod dies). The main app writes its logs there; the sidecar reads them and ships them elsewhere.

You'll rarely write this yourself - most needs are met by a single container. Recognize the pattern when you see it.
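
One practical consequence of a second container: kubectl logs and kubectl exec need -c to pick which container you mean:

kubectl logs app-with-sidecar -c log-shipper
kubectl exec -it app-with-sidecar -c app -- sh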

Pod lifecycle / status

kubectl get pods shows STATUS. Common values:

  • Pending - pod is accepted but containers haven't started yet (image pulling, scheduling).
  • Running - at least one container is alive.
  • Succeeded - all containers exited with code 0. (Like a batch job.)
  • Failed - all containers have terminated, at least one with a nonzero exit code, and restartPolicy won't retry.
  • CrashLoopBackOff - container keeps crashing; Kubernetes is backing off restarts.
  • ImagePullBackOff - image can't be pulled (wrong name, no auth).

describe shows recent events - the timeline of what happened. Almost always tells you what's wrong.
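
The same events can also be listed directly, filtered to one pod:

kubectl get events --field-selector involvedObject.name=nginx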

Debugging a broken pod

Real workflow when a pod won't start:

  1. kubectl get pods - what state is it in?
  2. kubectl describe pod <name> - read the Events at the bottom. Usually says exactly what's wrong.
  3. kubectl logs <name> - what did the container print before crashing?
  4. kubectl logs <name> --previous - logs from the previous (crashed) container, if it restarted.
  5. kubectl exec -it <name> -- sh - shell in (if the container is at least briefly running).

Most issues are: wrong image name, missing env var, wrong port, can't reach a dependency, OOMKilled, no resource room on the node.
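
For OOMKilled specifically, the kill reason ends up in the container status - one way to read it (assuming a single-container pod; <name> is a placeholder):

kubectl get pod <name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'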

Pods are mortal

A pod dies if:

  • Its node dies.
  • It's evicted (resource pressure).
  • You delete it.
  • A controller (Deployment) replaces it with a new version.

When a pod dies, it's gone - a new pod gets a new name, a new IP. Anything you stored inside the pod is lost. That's why you use Deployments (which create replacement pods automatically) and volumes (for persistence).
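
You can see this with the nginx pod from earlier - delete it, re-apply the same manifest, and the new pod usually has a different IP:

kubectl get pod nginx -o jsonpath='{.status.podIP}'
kubectl delete pod nginx
kubectl apply -f pod.yaml
kubectl get pod nginx -o jsonpath='{.status.podIP}'   # once it's Running again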

Don't get attached to specific pods.

Exercise

  1. Write and apply the basic pod YAML above (pod.yaml). Apply, inspect, port-forward, curl it, delete.

  2. Write a pod that uses an env var:

    apiVersion: v1
    kind: Pod
    metadata:
      name: envtest
    spec:
      containers:
      - name: app
        image: alpine
        command: ["sh", "-c", "echo Hello, $WHO! && sleep 60"]
        env:
        - name: WHO
          value: "Kubernetes"
    
    Apply, kubectl logs envtest. Should print "Hello, Kubernetes!".

  3. Debug a broken pod intentionally:

    apiVersion: v1
    kind: Pod
    metadata:
      name: broken
    spec:
      containers:
      - name: app
        image: nonexistent/image:404
    
    Apply. Run kubectl get pods (should show ImagePullBackOff or ErrImagePull). Run kubectl describe pod broken. Read the Events. Delete.

  4. Resource limits:

    apiVersion: v1
    kind: Pod
    metadata:
      name: greedy
    spec:
      containers:
      - name: app
        image: alpine
        command: ["sh", "-c", "while true; do :; done"]
        resources:
          requests: { cpu: "100m", memory: "64Mi" }
          limits:   { cpu: "200m", memory: "128Mi" }
    
    Apply. kubectl top pod greedy (may need metrics-server) - should show CPU usage capped near 200m. Delete.

What you might wonder

"Why YAML?" Kubernetes' API is declarative - you describe desired state in JSON/YAML. YAML is more human-friendly for editing. JSON also works.

"How do I see the YAML of a running resource?"

kubectl get pod nginx -o yaml

Useful for "what did Kubernetes actually create?" - includes auto-generated fields (UID, status, etc.).
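
The reverse also works - kubectl can generate a starting manifest without creating anything:

kubectl run nginx --image=nginx:1.27 --dry-run=client -o yaml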

"What's apiVersion: v1 vs apiVersion: apps/v1?" Some resources live in different API groups. Core resources (Pod, Service, ConfigMap) are in v1. Apps (Deployment, StatefulSet) are in apps/v1. Networking (Ingress) is in networking.k8s.io/v1. The right apiVersion for each resource is in the docs (or shown in kubectl explain Pod).

"What's kubectl explain?" A built-in docs lookup. kubectl explain pod.spec.containers shows the schema. Useful when you forget a field name.

Done

  • Write a basic Pod manifest.
  • Apply, inspect, log, exec, delete.
  • Read kubectl describe for events.
  • Distinguish Pod status values.
  • Debug a broken pod.

Next: Deployments →
