11 - kubectl Power Tools¶
What this session is¶
About 45 minutes. The kubectl commands you'll use to debug real clusters. Logs, exec, port-forward, top, events, and a few other essentials.
Logs¶
kubectl logs <pod> # all logs
kubectl logs -f <pod> # follow (like tail -f)
kubectl logs --tail 100 <pod> # last 100 lines
kubectl logs --since 10m <pod> # last 10 minutes
kubectl logs <pod> -c <container> # specific container (multi-container pod)
kubectl logs <pod> --previous # previous (crashed) container
Logs deployment-wide (across all matching pods):
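For example, assuming the Deployment's pods carry an `app=myapp` label (a placeholder for whatever your pods actually use):

```sh
kubectl logs deploy/<name>            # one pod picked from the Deployment
kubectl logs -l app=myapp --tail=50   # every pod matching the label
kubectl logs -l app=myapp -f --max-log-requests 10  # follow several at once
```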
For multiple pods at once, install stern (brew install stern):
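Typical stern invocations (`api` is a placeholder pod-name prefix):

```sh
stern api                       # follow all pods whose name matches "api"
stern -l app=myapp              # or select by label
stern api -n prod --since 10m   # namespace and time window
```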
stern is much nicer than kubectl's native multi-pod log handling. Indispensable.
Exec into a pod¶
kubectl exec -it <pod> -- sh # shell
kubectl exec <pod> -- ls /app # one-off
kubectl exec -it <pod> -c <container> -- bash
For Deployments (any matching pod):
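The `deploy/` prefix works here too; kubectl picks one running pod behind the Deployment:

```sh
kubectl exec -it deploy/<name> -- sh
kubectl exec deploy/<name> -- env   # one-off against whichever pod is chosen
```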
Use exec to: inspect environment vars (env), check filesystem state, run database client (psql, redis-cli), test connectivity (nc -zv host port).
Port-forward¶
kubectl port-forward pod/<name> 8080:80
kubectl port-forward svc/<name> 8080:80
kubectl port-forward deploy/<name> 8080:80
Tunnel from your laptop into the cluster. Use for debugging - don't expose production this way.
Forward to a Service: kubectl picks a healthy backing pod.
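A typical round trip (the service name and path here are placeholders):

```sh
kubectl port-forward svc/my-api 8080:80 &   # background the tunnel
curl localhost:8080/healthz                  # hit it from your laptop
kill %1                                      # tear the tunnel down
```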
Top¶
Requires metrics-server installed:
kubectl top nodes
kubectl top pods
kubectl top pods -A
kubectl top pods --sort-by=cpu
kubectl top pods --sort-by=memory
If you see `error: Metrics API not available`, install metrics-server - most local clusters need it added explicitly (on minikube: `minikube addons enable metrics-server`).
Events¶
The cluster's event log:
kubectl get events -A
kubectl get events --sort-by='.lastTimestamp'
kubectl get events --field-selector type=Warning
For a specific resource:
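For example, filtering on the object the event refers to (newer kubectl also ships a dedicated `kubectl events` subcommand):

```sh
kubectl get events --field-selector involvedObject.name=<pod-name>
kubectl events --for pod/<pod-name>   # newer kubectl versions
```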
When debugging "why is this pod stuck," describe's Events section is usually the answer.
Watch¶
-w updates as things change. Useful for "watch a rolling update happen."
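For example (`app=myapp` is a placeholder label):

```sh
kubectl get pods -w                       # stream status changes
kubectl get pods -w -l app=myapp          # scoped to a label
kubectl rollout status deployment/<name>  # block until a rollout finishes
```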
Describe¶
kubectl describe pod <name>
kubectl describe deployment <name>
kubectl describe service <name>
kubectl describe ingress <name>
Shows full configuration AND the relevant events. The most useful single command for debugging.
Edit¶
Powerful but discouraged for production - your changes aren't in version control. Use for quick experiments; for real changes, edit the YAML file and apply.
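The command itself - it opens the live object in your `$EDITOR` and applies on save:

```sh
kubectl edit deployment <name>
kubectl edit svc <name>
```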
Patch¶
For surgical updates without full edit:
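For example, bumping replicas without opening an editor - a strategic merge patch, plus the equivalent JSON patch form:

```sh
kubectl patch deployment <name> -p '{"spec":{"replicas":3}}'
kubectl patch deployment <name> --type json \
  -p '[{"op":"replace","path":"/spec/replicas","value":3}]'
```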
JSON or YAML format. Useful in scripts. Rarely needed interactively.
Diff¶
Shows what would change if you applied. Great for "did I update this correctly?"
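For example:

```sh
kubectl diff -f deployment.yaml   # exit code 1 means there are differences
```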
Get with output formatting¶
kubectl get pods -o wide # extra columns (node, IP)
kubectl get pods -o yaml # full YAML
kubectl get pods -o json # full JSON (pipe to jq)
kubectl get pods -o name # just names (great in shell loops)
kubectl get pods -o jsonpath='{.items[*].metadata.name}' # custom field
kubectl get pods -o custom-columns='NAME:.metadata.name,IP:.status.podIP'
jsonpath is finicky but powerful. The custom-columns form is more readable.
Useful third-party kubectl helpers¶
Worth installing:
- `kubectx`/`kubens` - switch contexts / namespaces fast.
- `stern` - tail logs across many pods.
- `k9s` - terminal UI for K8s. Like `top` but interactive and beautiful.
- `kubectl-tree` - show resource hierarchy (Deployment → ReplicaSet → Pods).
- `kubectl-neat` - strip generated fields from YAML for readability.
- `stretchr/k` - lots of aliases and helpers. Or just `alias k=kubectl` yourself.
k9s in particular is worth installing on day one. Launch `k9s`, then navigate with arrow keys + Enter. Logs are one keypress away; so is exec. Many operators do most of their day-2 cluster work inside k9s.
Real debugging workflow¶
A pod is failing. What I'd actually do:
1. `kubectl get pods` - status?
2. `kubectl describe pod <name>` - Events at the bottom usually say why.
3. `kubectl logs <name>` / `kubectl logs <name> --previous` - what did it print before failing?
4. `kubectl exec -it <name> -- sh` - shell in (if it's running long enough). Check env vars, file paths, network reachability.
5. `kubectl get events --sort-by='.lastTimestamp' --all-namespaces` - broader picture.
6. `stern <name>` - if it restarts repeatedly, follow all incarnations.
7. Check related resources: ConfigMap, Secret, Service, PVC. Often the pod's fine; a dependency is wrong.
Exercise¶
- Install `stern` and `k9s`: `brew install stern k9s` (or your distro's equivalent).
- Exec into a running pod: `kubectl exec -it <pod> -- sh`.
- Watch a rolling update: run `kubectl get pods -w`, then trigger one (e.g. `kubectl rollout restart deployment <name>`) in another terminal. See pods come and go.
- Use stern: `stern <pod-name-prefix>`.
- Launch k9s: `k9s`. Arrow keys, Enter to drill in, `l` for logs, `s` for shell, `:` for command, `q` to quit. Spend 10 minutes wandering.
- Resource sort: `kubectl top pods -A --sort-by=memory`. Which pods use the most memory?
What you might wonder¶
"How do I know which container in a multi-container pod my logs are from?"
kubectl picks a default container (and prints which one it chose). Specify one with `-c <name>`, or pass `--all-containers` for all of them.
"What's kubectl explain?"
Built-in docs: kubectl explain pod.spec.containers shows what fields exist. Useful when you forget a YAML field name.
"What's a kubeconfig?"
~/.kube/config - file kubectl reads to find clusters and credentials. Multiple clusters live here; kubectl config use-context <name> switches. kubectx makes this easier.
Done¶
- Read logs (with follow, previous, by selector).
- Exec into running containers.
- Port-forward for local access.
- Inspect with `get`, `describe`, `top`, `events`.
- Use third-party helpers (`stern`, `k9s`).
- Apply a real debugging workflow.
Next: Reading other people's manifests →