01 - Setup¶
What this session is¶
About 30 minutes. Get a local Kubernetes cluster running. Install kubectl. Run your first commands.
Step 1: Pick a local-cluster tool¶
Pick one. They all work; you can switch later.
Option A: Docker Desktop (easiest on macOS / Windows) Open Docker Desktop → Settings → Kubernetes → check "Enable Kubernetes" → Apply. Wait a few minutes; Docker Desktop downloads everything and starts a single-node cluster.
Option B: minikube
brew install minikube # macOS
sudo apt install minikube # if available; else download from minikube.sigs.k8s.io
minikube start
Option C: kind (Kubernetes IN Docker)
brew install kind # macOS
# or: curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64 && chmod +x ./kind && sudo mv ./kind /usr/local/bin/
kind create cluster
Option D: k3d (k3s in Docker)
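If you go with k3d, a minimal sketch (check k3d.io for the current install options — the script URL below is the one the k3d README documents):

```shell
brew install k3d   # macOS
# or the official install script:
# curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
k3d cluster create
```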
Whichever you choose, the rest of this path uses commands that work the same way on all of them.
Step 2: Install kubectl¶
kubectl is the CLI for talking to Kubernetes clusters. Docker Desktop's Kubernetes installs it for you; minikube and kind don't always.
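If you need to install it yourself, two common routes (the Linux curl URL is the one the official kubectl install docs use):

```shell
brew install kubectl   # macOS
# Linux: grab the latest stable release binary directly
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```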
Verify:
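A client-only check works even before any cluster exists:

```shell
kubectl version --client
```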
Should print a version like Client Version: v1.31.x.
Step 3: Verify the cluster¶
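Ask kubectl where the cluster's endpoints are:

```shell
kubectl cluster-info
```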
You should see something like:
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
If you get connection refused or similar, the cluster isn't running. Restart it: minikube start, kind delete cluster && kind create cluster, or restart Docker Desktop's Kubernetes.
Check the nodes:
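The node's name varies by tool (docker-desktop, minikube, or kind-control-plane):

```shell
kubectl get nodes
```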
You should see one node (your local "machine" in the cluster):
Cluster is up.
Step 4: Your first kubectl commands¶
kubectl get pods # no pods yet - empty
kubectl get pods -A # all namespaces - see system pods
kubectl get namespaces # list namespaces
kubectl version # client + server version
kubectl config current-context # which cluster you're talking to
-A is short for --all-namespaces. You'll see system pods running in kube-system (CoreDNS, kube-proxy, etc.) - these are Kubernetes' own internals.
Step 5: Run your first pod¶
The fastest way (we'll do this properly with YAML in page 02):
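kubectl run creates a single pod: the first argument names it, --image picks the container image.

```shell
kubectl run nginx --image=nginx
```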
After a few seconds:
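Check on it — AGE will differ, and STATUS may briefly read ContainerCreating while the image downloads:

```shell
kubectl get pods
# NAME    READY   STATUS    RESTARTS   AGE
# nginx   1/1     Running   0          15s
```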
You just deployed nginx to your cluster.
Step 6: Look at the pod's logs¶
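Logs are fetched by pod name; add -f to stream them live:

```shell
kubectl logs nginx
```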
Should show nginx's startup messages.
Step 7: Port-forward to access it¶
The pod is running in the cluster but not reachable from your laptop yet. Use port-forward to tunnel:
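Here 8080 is an arbitrary local port; 80 is the port nginx listens on inside the pod. The command blocks until you stop it:

```shell
kubectl port-forward pod/nginx 8080:80
```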
In another terminal:
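While the port-forward keeps running, curl the local end of the tunnel:

```shell
curl localhost:8080
```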
Should return the nginx welcome HTML.
Ctrl-C the port-forward when done. We'll do permanent exposure with Services in page 04.
Step 8: Clean up¶
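Delete the pod — local clusters are cheap, and we'll recreate it properly from YAML in page 02:

```shell
kubectl delete pod nginx
```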
Step 9: Set up shell completion (optional but valuable)¶
# bash
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc
# zsh
source <(kubectl completion zsh)
echo 'source <(kubectl completion zsh)' >> ~/.zshrc
Tab completion for resource names. Saves a lot of typing.
Also useful: alias k=kubectl:
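The alias, plus the extra line the kubectl docs give for making bash completion follow it (zsh completion works through aliases on its own):

```shell
alias k=kubectl
echo 'alias k=kubectl' >> ~/.bashrc   # or ~/.zshrc
complete -o default -F __start_kubectl k   # bash only: completion for the alias
```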
Many Kubernetes users have done this; you'll see k get pods in tutorials and at work.
What just happened, conceptually¶
You ran kubectl run - that told the Kubernetes API server "I want a pod running nginx." The cluster's scheduler picked a node (your only one); the kubelet on that node pulled the image, started the container, and the pod was marked Running.
You then asked the API server "show me the pod's logs" - it routed to the kubelet on the node, which fetched the logs from the container runtime.
You used port-forward to tunnel a local port through kubectl to the pod inside the cluster. Everything went through the API server.
Three components you've now interacted with:
- API server - the cluster's brain. Everything goes through it.
- Scheduler - decides which node a pod runs on.
- kubelet - agent on each node; runs the actual pods.
There's more (etcd, controllers, etc.); we'll meet them as needed.
What you might wonder¶
"Why is everything inside the cluster opaque to my laptop?" Pods get IPs only within the cluster's network. To reach them from outside you either port-forward (debugging), use a Service of type NodePort or LoadBalancer (page 04), or use an Ingress (page 09). Architectural separation.
"Can I run multiple clusters?"
Yes - kind create cluster --name another, kubectl config get-contexts to list, kubectl config use-context <name> to switch. Useful for testing different K8s versions or simulating multi-cluster setups.
"What's kubeconfig?"
~/.kube/config - the file kubectl reads to know which cluster to talk to and how to authenticate. Multiple clusters can coexist in one config; current-context is which one is active.
"What if I break the cluster?"
Local clusters are throwaway. kind delete cluster && kind create cluster recreates from scratch in 30 seconds. Don't be afraid to break things.
Done¶
- Local Kubernetes cluster running.
- kubectl installed and talking to the cluster.
- Ran a pod, viewed logs, port-forwarded.
- Recognized the API server / scheduler / kubelet model.