09 - Concurrency 101¶
What this session is¶
About an hour. You'll learn the two primitives that make Go famous: goroutines (cheap concurrent "things going at the same time") and channels (the way they talk to each other). This page is an introduction - there's much more to learn later - but by the end you'll be able to make programs that do several things at once.
A note before we start: concurrency is the hardest concept in programming. If this page is the most confusing one so far, that's because it's actually the hardest topic, not because the page is bad. Be patient with yourself.
The problem¶
Suppose you need to download three web pages. Each takes 2 seconds. If you do them one after another, total time is 6 seconds. If you start all three at once and wait for them all to finish, total time is ~2 seconds.
That second pattern - doing multiple things at the same time - is concurrency. Go makes it easy.
Goroutines: doing things "at the same time"¶
A goroutine is a function running independently of the rest of your program. To start one, write go in front of a function call:
```go
package main

import (
	"fmt"
	"time"
)

func say(msg string) {
	for i := 0; i < 3; i++ {
		fmt.Println(msg)
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	go say("hello")
	go say("world")
	time.Sleep(2 * time.Second)
	fmt.Println("done")
}
```
Type and run. You should see "hello" and "world" interleaved, then "done."
What's new:
- `go say("hello")` starts `say` running independently and immediately moves on to the next line. Two goroutines are now running.
- `time.Sleep(500 * time.Millisecond)` pauses for half a second.
- `time.Sleep(2 * time.Second)` in `main` is there so the program doesn't exit before the goroutines finish.
That last point is critical. When main returns, the whole program ends - including any goroutines still running. If you remove the time.Sleep(2 * time.Second) and run again, you'll see almost no output: main starts the goroutines, immediately ends, the program quits.
Sleeping is a bad way to "wait for things to finish." We need a real mechanism.
sync.WaitGroup: wait for N things¶
The simplest "wait for goroutines to finish" tool is sync.WaitGroup. Think of it as a counter that goroutines decrement when they're done; main waits until it hits zero.
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func work(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println("worker", id, "starting")
	time.Sleep(1 * time.Second)
	fmt.Println("worker", id, "done")
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go work(i, &wg)
	}
	wg.Wait()
	fmt.Println("all workers finished")
}
```
Run it. Output (order may vary):
```text
worker 1 starting
worker 3 starting
worker 2 starting
worker 1 done
worker 3 done
worker 2 done
all workers finished
```
New things:
- `var wg sync.WaitGroup` - creates a WaitGroup. Zero-valued, ready to use.
- `wg.Add(1)` - bumps the counter up before starting each goroutine.
- `defer wg.Done()` - schedules a `Done` call to happen when this function returns. `Done` decrements the counter. (`defer` is a Go feature: "do this thing right before this function ends, no matter how it ends." Very useful.)
- `wg.Wait()` - blocks until the counter hits zero.
- `go work(i, &wg)` - note `&wg`. We pass a pointer so all goroutines share the same WaitGroup (not copies of it).
Total run time: ~1 second, not 3. Three workers ran at the same time, each took 1 second, total was 1 second. That's the win.
Channels: goroutines talking to each other¶
Often a goroutine produces a value and another goroutine needs to receive it. The Go way to pass values between goroutines safely is a channel.
A channel is like a pipe. One goroutine puts values in one end; another takes them out the other end.
```go
package main

import "fmt"

func main() {
	ch := make(chan int)
	go func() {
		ch <- 42
	}()
	value := <-ch
	fmt.Println(value) // 42
}
```
New things:
- `make(chan int)` - creates a channel that carries `int` values. `make` is a built-in for creating channels, slices, and maps.
- `ch <- 42` - send the value `42` into the channel. (Arrow points into the channel.)
- `value := <-ch` - receive a value from the channel. (Arrow points out of the channel.)
- `go func() { ... }()` - start an inline anonymous function as a goroutine. The `()` at the end immediately calls it. Common pattern.
Important: the receive <-ch blocks (waits) until something is sent. The send ch <- 42 blocks until something is received. The two goroutines meet at the channel and exchange the value. This is called synchronization.
A more useful example: fetching things in parallel¶
```go
package main

import (
	"fmt"
	"time"
)

func fetch(url string, results chan<- string) {
	// Pretend we're making an HTTP call.
	time.Sleep(500 * time.Millisecond)
	results <- "result from " + url
}

func main() {
	urls := []string{"a.example", "b.example", "c.example"}
	results := make(chan string)
	for _, url := range urls {
		go fetch(url, results)
	}
	for range urls {
		fmt.Println(<-results)
	}
}
```
Run. Total time: ~500 ms (not 1500 ms). All three "fetches" happen at the same time.
A new piece of syntax: chan<- string. The arrow direction in the type says "this channel can only be sent to, not received from." It's a hint to the reader (and the compiler) about how the channel is used. The opposite is <-chan string (receive-only). Plain chan string allows both.
In the main function, we loop for range urls (no index, no value - just "do this len(urls) times") and receive a result each iteration. We don't know which URL's result comes out when, but we know we get exactly three results because we started three goroutines.
Channels can be closed¶
When you're done sending on a channel, you can close it: close(ch). Receivers can then loop until the channel is empty and closed:
This loop ends when the channel is closed (and drained). Useful when you have an unknown number of values coming in.
Important
Only the sender should close a channel, and only when no more values are coming. Closing a channel that you're not the sole sender of is a way to crash your program. For now, keep it simple: one goroutine sends, closes when done; one or more receive.
Common patterns and warnings¶
- Don't send on a closed channel. Panic.
- Don't close a channel from the receiver side. Confusing and error-prone.
- A nil channel blocks forever. If you declare `var ch chan int` without `make`, both send and receive hang forever.
These rules feel restrictive at first. They exist because the alternative is data races (multiple things touching the same memory simultaneously without coordination) which are the worst kind of bug - they appear randomly and are nearly impossible to reproduce.
The slogan¶
Go's tagline for concurrency:
Don't communicate by sharing memory; share memory by communicating.
In other languages, threads talk by reading and writing the same variables, protected by locks. In Go, the idiom is: each thing owns its own data, and passes copies via channels when other things need them. Less subtle, fewer bugs.
You'll still meet mutexes (sync.Mutex) in real Go code - sometimes a lock is the right tool. But for most tasks, channels are first.
Going deeper¶
Real-world concurrent Go is mostly the basics above plus five or six patterns you'll see in every serious codebase. Read this section to be ready when you encounter them.
context.Context for cancellation¶
If a goroutine is doing slow work (an HTTP request, a query), you need a way to say "stop, never mind." That's what context is for:
```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

func fetchSlow(ctx context.Context, url string) (string, error) {
	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	_, err := fetchSlow(ctx, "https://slow.example.com")
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
```
The HTTP client checks ctx periodically; if it's canceled (timeout fired, or someone called cancel()), the request aborts. The convention in Go: any long-running function takes ctx as the first argument. The standard library is built around this.
You'll write func DoStuff(ctx context.Context, ...) constantly in production code.
Buffered vs unbuffered channels¶
The channel we used (make(chan int)) is unbuffered: send blocks until someone is ready to receive. Useful for handoffs.
A buffered channel holds N values:
```go
ch := make(chan int, 5)
ch <- 1 // doesn't block
ch <- 2 // doesn't block
// ...three more sends fit before a send blocks
```
Three honest rules:

1. Default to unbuffered. Forces you to think about the handoff.
2. Use buffer = 1 for "I want to send and not wait for a receiver." Common for signaling.
3. Use buffer = N when you know N. A worker pool's job queue. A bounded retry buffer.
A buffered channel is not a queue you should fill up and ignore. If the buffer fills, the sender blocks just like an unbuffered channel.
The select statement¶
select lets a goroutine wait on multiple channel operations at once:
```go
select {
case msg := <-input:
	handle(msg)
case <-ctx.Done():
	return ctx.Err()
case <-time.After(5 * time.Second):
	return errors.New("timeout")
}
```
Whichever case is ready first wins. If none is ready, select blocks. With a default: case, select is non-blocking. This is the workhorse for any goroutine that has more than one thing to wait on - almost always combined with ctx.Done() for cancellation.
Worker pools (bounded concurrency)¶
"Do these 1000 things, but only 10 at a time":
```go
jobs := make(chan Job, len(work))
results := make(chan Result, len(work))

// Start 10 workers.
for i := 0; i < 10; i++ {
	go func() {
		for j := range jobs {
			results <- process(j)
		}
	}()
}

// Send work.
for _, w := range work {
	jobs <- w
}
close(jobs) // workers' range loops will exit when channel drains

// Collect.
for i := 0; i < len(work); i++ {
	r := <-results
	// ... use r ...
}
```
close(jobs) lets the workers know there's no more work; their for j := range jobs loops exit cleanly. Without that close, the workers would block forever after the last job.
The golang.org/x/sync/errgroup package wraps this pattern with proper error handling. Use it; don't roll your own.
The race detector¶
Concurrency bugs are the worst kind: rare, hard to reproduce, often invisible until production. Go ships with a race detector. Run your tests with it:
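```shell
go test -race ./...
```

(For a single file, `go run -race yourfile.go` works the same way.)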
If two goroutines access the same memory without coordination (and at least one is writing), the race detector reports it with full stack traces from both sides. Production servers should not run with -race (it's slow), but every test suite should.
The first time you turn it on in an existing codebase, brace yourself.
Goroutine leaks¶
A goroutine that blocks forever and never gets a chance to return is leaked. It holds memory and other resources. Common patterns:
```go
// Leak: caller forgot to read from the channel
go func() {
	ch <- expensiveCompute() // blocks forever if nobody reads
}()

// Leak: no cancellation; channel never closed
go func() {
	for msg := range input { // blocks forever if no one closes input
		handle(msg)
	}
}()
```
The fix is always the same: every long-running goroutine should be reachable by a ctx.Done() channel or a closed input channel that lets its loop exit. If you can't draw an "and how does this goroutine end" arrow when you write it, you're leaking.
A production trick: dump goroutines with runtime.NumGoroutine() and pprof.Lookup("goroutine").WriteTo(os.Stdout, 1) periodically. A leak shows up as a number that only ever grows.
Mutexes still exist, and that's fine¶
The "share memory by communicating" advice is right most of the time. But sometimes you genuinely need shared state - a counter, a cache, a connection pool. Use sync.Mutex (or sync.RWMutex for read-heavy work):
```go
type Counter struct {
	mu    sync.Mutex
	value int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value++
}
```
Two rules: hold the lock for as short a time as possible; never call out to user code while holding a lock (deadlock risk). The race detector catches forgotten locks the same way it catches channel-data races.
Exercise¶
In a new file parallel.go:
Write a program that:
- Has a slice of 10 numbers: `nums := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}`.
- For each number, in a separate goroutine, computes its square and sends the result on a channel.
- In `main`, receives all 10 squares from the channel and adds them up.
- Prints the total. (Expected: 385, the sum of 1² + 2² + ... + 10².)
Hints:
- Use make(chan int) for the results channel.
- Loop over nums starting goroutines, then a second loop (for range nums) receiving results.
- Add each received value to a running total.
Stretch: make a version using sync.WaitGroup and a regular slice instead of a channel (each goroutine writes to its own slot in a []int of length 10; main waits and sums). Compare which is easier to read.
What you might wonder¶
"Is a goroutine the same as a thread?" No, but close enough for now. A goroutine is much cheaper than an OS thread (you can have millions of goroutines without trouble; you can't have millions of threads). Under the hood, Go's runtime schedules many goroutines onto a small pool of threads. The full picture lives in the "Go Mastery" path; for now, treat a goroutine as "a thing that runs at the same time as other things."
"What happens if two goroutines write to the same variable without a channel or lock?"
A data race - undefined behavior, randomly-corrupt results, intermittent crashes. Go has a built-in detector: run your program with go run -race yourfile.go. It reports races at runtime. Run with -race whenever you write goroutines until you trust your code.
"When should I NOT use goroutines?" When the work is fast and sequential. Spinning up a goroutine has small overhead; for sub-microsecond tasks, you'll often be slower. Goroutines pay off when each unit of work takes more than ~10µs, or when units of work can genuinely happen at the same time (waiting on I/O, network, disk).
"What's select?"
A way to wait on multiple channels at once - receive from whichever one is ready first. Useful in real programs; see "The select statement" in the Going deeper section above.
Done¶
You can now:
- Start a goroutine with go funcCall().
- Wait for a known number of goroutines to finish with sync.WaitGroup.
- Create and use channels (make(chan T), ch <- v, <-ch).
- Range over a channel until it's closed.
- Understand the slogan "share memory by communicating."
- Recognize the major footguns: closed channels, nil channels, data races.
Concurrency is a big topic and we've only scratched the surface. The good news: you can write quite a lot of useful concurrent code with just these primitives.
Next page: writing tests for your code - so you know it works and you'll know when it stops working.
Next: Tests → 10-tests.md