Go Concurrency Patterns — Practical Guide (Mar 11, 2026)
Level: Intermediate
Date: March 11, 2026 (Relevant for Go 1.21 and later)
Prerequisites
Before diving into Go concurrency patterns, you should be comfortable with Go basics: variables, functions, and interfaces. Prior experience using goroutines and channels is highly recommended because this article builds on those concepts to demonstrate effective concurrency patterns introduced and refined up to Go 1.21. If you’re new to Go concurrency primitives, consider reviewing the official Go Tour section on Concurrency.
Hands-on Steps: Common Concurrency Patterns in Go
Go’s CSP-style concurrency model uses goroutines and channels as its primary building blocks. The way you compose these primitives leads to powerful, elegant concurrency patterns. We’ll explore several core patterns, illustrating practical use cases and idiomatic implementations.
The Pipeline Pattern
This pattern is useful for breaking a problem into discrete processing stages, each running concurrently and communicating via channels. Ideal for streaming data transformations.
package main

import "fmt"

// generate emits each input integer on its output channel, then closes it.
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square reads integers, squares them, and closes its output once the input drains.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	nums := generate(2, 3, 4)
	squares := square(nums)
	for sq := range squares {
		fmt.Println(sq)
	}
}
This pipeline generates integers, feeds them into a squaring stage, and prints results. Notice how each stage closes its output channel, signalling completion downstream.
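Because every stage takes and returns channels, stages compose freely: the output of one squaring stage can feed straight into another. A small sketch, restating the stage functions from above so the program is self-contained:

```go
package main

import "fmt"

// generate emits each input integer on its output channel, then closes it.
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// square reads integers, squares them, and closes its output once the input drains.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	// Chaining square twice yields fourth powers, in input order.
	for v := range square(square(generate(2, 3))) {
		fmt.Println(v) // 16, then 81
	}
}
```

Each stage in the chain still closes its own output, so completion propagates cleanly through however many stages you compose.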
The Worker Pool Pattern
This pattern controls concurrency when you want to limit simultaneous goroutines, balancing throughput with resource constraints.
package main

import "fmt"

// worker doubles each job it receives and sends the result downstream.
func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		fmt.Printf("worker %d processing job %d\n", id, j)
		results <- j * 2 // simulate work
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)

	// start 3 workers
	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}

	// send jobs
	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	// collect results
	for a := 1; a <= 5; a++ {
		fmt.Println(<-results)
	}
}
This pattern balances load: jobs are sent to a fixed number of workers. Use buffered channels to smooth throughput when needed.
The Select Statement for Multiplexing
The select statement enables waiting on multiple channel operations simultaneously, essential for timeout, cancellation, or multiplexing patterns.
// workerWithTimeout processes jobs until the channel closes or no job
// arrives within the timeout window.
func workerWithTimeout(jobs <-chan int, timeout time.Duration) {
	for {
		select {
		case job, ok := <-jobs:
			if !ok {
				return
			}
			fmt.Printf("Processing job %d\n", job)
		case <-time.After(timeout):
			fmt.Println("Timeout: No jobs received. Exiting.")
			return
		}
	}
}
This function waits on the jobs channel but exits if no job arrives within the timeout. This pattern is crucial when dealing with externally triggered events or when avoiding indefinite blocking.
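select also multiplexes several producers onto one consumer. A common sketch merges two channels, using the fact that a receive on a nil channel blocks forever to disable a case once its input closes (merge is an illustrative name, not from the original):

```go
package main

import "fmt"

// merge forwards values from both inputs onto one output channel,
// closing the output once both inputs are drained.
func merge(a, b <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for a != nil || b != nil {
			select {
			case v, ok := <-a:
				if !ok {
					a = nil // a is closed; nil disables this case
					continue
				}
				out <- v
			case v, ok := <-b:
				if !ok {
					b = nil // b is closed; nil disables this case
					continue
				}
				out <- v
			}
		}
	}()
	return out
}

func main() {
	a := make(chan int)
	b := make(chan int)
	go func() { a <- 1; a <- 2; close(a) }()
	go func() { b <- 10; close(b) }()

	sum := 0
	for v := range merge(a, b) {
		sum += v
	}
	fmt.Println(sum) // 13, regardless of arrival order
}
```

Setting an exhausted channel to nil is the idiomatic way to drop a case from a select loop without restructuring the loop itself.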
Common Pitfalls
- Leaking goroutines: Ensure goroutines exit properly by closing channels or using contexts to prevent leaks.
- Deadlocks: Avoid situations where goroutines wait indefinitely on channels without counterpart sends or receives.
- Race conditions: Though data communicated via channels is safe, shared variables accessed outside channels require explicit synchronisation (e.g. sync.Mutex or the atomic package).
- Unbuffered channels and blocked sends/receives: Be mindful that unbuffered channels block until the opposite side is ready. Use buffered channels if decoupling is needed.
Validation
To ensure correctness of your concurrent code:
- Run with the race detector: Execute your program with go run -race or go test -race to catch data races.
- Use context for cancellation: Prefer context.Context to control goroutine lifecycle in real-world applications.
- Write unit tests: For concurrent components, design tests that verify output correctness under concurrent load.
- Use profiling and tracing: The Go runtime provides pprof and tracing tools (e.g., go tool trace) to detect bottlenecks or leaks.
Checklist / TL;DR
- Use goroutines + channels for simple concurrency; preserve clarity.
- Pipeline pattern for sequential concurrent stages communicating over channels.
- Worker pools to control parallelism and resource use.
- Use select for multiplexing, timeouts, and cancellation.
- Always close channels where appropriate to signal completion.
- Beware of goroutine leaks; cancel or close channels properly.
- Run your code with -race and write tests that probe concurrency behaviours.
- Use context.Context to manage cancellations and timeouts cleanly in long-lived goroutines.
When to Choose One Pattern Over Another
Pipeline vs Worker Pool: Pipelines are great for transforming data streams step-by-step, where each stage’s output is the next stage’s input. Worker pools fit best when you have many independent tasks and want to limit concurrency (e.g., limited CPU, IO-heavy work).
Select multiplexing: Always use select when dealing with multiple channels or timeouts simultaneously to prevent blocking or to add responsiveness.
Context vs Channel Closing: For cancelling work, use context.Context as a standard approach, but channels are still ideal for signalling normal completion or sending results.