Go's concurrency model is one of its strongest features. Instead of exposing OS threads directly, Go provides goroutines (lightweight, cheap to create) and channels (typed pipes for communication between them). This guide teaches practical concurrent Go through realistic examples.
## Goroutines
A goroutine is a function running concurrently with other goroutines. Launch one with the `go` keyword:

```go
package main

import (
	"fmt"
	"time"
)

func sayHello(name string) {
	fmt.Printf("Hello, %s!\n", name)
}

func main() {
	// Launch goroutines
	go sayHello("Alice")
	go sayHello("Bob")
	go sayHello("Charlie")

	// Without this, main exits before the goroutines finish
	time.Sleep(100 * time.Millisecond)
	fmt.Println("Done")
}
```
**Important:** Goroutines are extremely cheap. You can run thousands (even millions) of them. Each starts with ~2KB of stack space that grows as needed.
## sync.WaitGroup
`time.Sleep` is never the right way to wait for goroutines. Use `sync.WaitGroup`:
```go
package main

import (
	"fmt"
	"sync"
)

func processItem(id int, wg *sync.WaitGroup) {
	defer wg.Done() // Signal completion when function returns
	fmt.Printf("Processing item %d\n", id)
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)               // Increment counter before launching
		go processItem(i, &wg)  // Pass pointer to wg
	}
	wg.Wait() // Block until counter reaches 0
	fmt.Println("All items processed")
}
```
## Channels
Channels are typed conduits for goroutine communication. They enforce the Go philosophy: "Don't communicate by sharing memory; share memory by communicating."
```go
package main

import "fmt"

func sum(nums []int, ch chan int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	ch <- total // Send result to channel
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6, 7, 8}
	ch := make(chan int)

	// Split work between two goroutines
	go sum(nums[:4], ch) // [1,2,3,4]
	go sum(nums[4:], ch) // [5,6,7,8]

	x, y := <-ch, <-ch // Receive two values
	fmt.Println(x + y) // 36
}
```
### Buffered Channels
Unbuffered channels block the sender until a receiver is ready. Buffered channels have a queue:

```go
// Unbuffered - a send blocks until a receiver is ready
unbuffered := make(chan int)

// Buffered - holds up to 5 values without blocking
buffered := make(chan int, 5)

buffered <- 1   // doesn't block (buffer not full)
buffered <- 2   // doesn't block
v := <-buffered // 1 (values come out in FIFO order)
```
### Closing Channels and Range
```go
package main

import "fmt"

func producer(ch chan<- int) { // send-only channel
	for i := 0; i < 5; i++ {
		ch <- i
	}
	close(ch) // Signal no more values
}

func main() {
	ch := make(chan int)
	go producer(ch)

	// Range over channel until it's closed
	for v := range ch {
		fmt.Println(v) // 0, 1, 2, 3, 4
	}

	// Alternative: check if channel is closed
	v, ok := <-ch
	if !ok {
		fmt.Println("Channel closed, value:", v) // v = zero value
	}
}
```
## Worker Pool Pattern
One of the most useful patterns in Go concurrency: a fixed number of goroutines processing a shared queue of work:
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type Job struct {
	ID   int
	Data string
}

type Result struct {
	JobID  int
	Output string
	Error  error
}

func worker(id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		// Simulate work
		time.Sleep(50 * time.Millisecond)
		result := Result{
			JobID:  job.ID,
			Output: fmt.Sprintf("Processed '%s' by worker %d", job.Data, id),
		}
		results <- result
	}
}

func main() {
	numWorkers := 3
	numJobs := 10

	jobs := make(chan Job, numJobs)
	results := make(chan Result, numJobs)

	var wg sync.WaitGroup

	// Start workers
	for i := 1; i <= numWorkers; i++ {
		wg.Add(1)
		go worker(i, jobs, results, &wg)
	}

	// Send jobs
	for i := 1; i <= numJobs; i++ {
		jobs <- Job{ID: i, Data: fmt.Sprintf("item-%d", i)}
	}
	close(jobs) // No more jobs

	// Wait for workers, then close results
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results
	for result := range results {
		fmt.Printf("Job %d: %s\n", result.JobID, result.Output)
	}
}
```
## Select Statement
`select` waits on multiple channel operations; whichever becomes ready first executes. If several cases are ready at once, one is chosen at random:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch1 := make(chan string)
	ch2 := make(chan string)

	go func() {
		time.Sleep(1 * time.Second)
		ch1 <- "one"
	}()
	go func() {
		time.Sleep(2 * time.Second)
		ch2 <- "two"
	}()

	for i := 0; i < 2; i++ {
		select {
		case msg := <-ch1:
			fmt.Println("Received from ch1:", msg)
		case msg := <-ch2:
			fmt.Println("Received from ch2:", msg)
		}
	}
}
```
### Select with Timeout
```go
func fetchWithTimeout(url string, timeout time.Duration) (string, error) {
	resultCh := make(chan string, 1) // buffered so the goroutine never leaks
	errCh := make(chan error, 1)

	go func() {
		// Simulate HTTP request
		time.Sleep(2 * time.Second)
		resultCh <- "response body"
	}()

	select {
	case result := <-resultCh:
		return result, nil
	case err := <-errCh:
		return "", err
	case <-time.After(timeout):
		return "", fmt.Errorf("request timed out after %v", timeout)
	}
}
```
### Select with Default (Non-Blocking)
```go
select {
case msg := <-ch:
	fmt.Println("Got message:", msg)
default:
	fmt.Println("No message available, continuing...")
}
```
## Context for Cancellation
`context.Context` is the standard way to signal cancellation to goroutines:
```go
package main

import (
	"context"
	"fmt"
	"time"
)

func doWork(ctx context.Context, id int) {
	for {
		select {
		case <-ctx.Done():
			fmt.Printf("Worker %d cancelled: %v\n", id, ctx.Err())
			return
		default:
			fmt.Printf("Worker %d doing work...\n", id)
			time.Sleep(500 * time.Millisecond)
		}
	}
}

func main() {
	// Cancel after 2 seconds
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel() // Always call cancel to release resources

	for i := 1; i <= 3; i++ {
		go doWork(ctx, i)
	}

	<-ctx.Done()
	time.Sleep(100 * time.Millisecond) // Let goroutines log cancellation
	fmt.Println("Main done")
}
```
## sync.Mutex for Shared State
When goroutines share memory (not the preferred way, but sometimes necessary):
```go
package main

import (
	"fmt"
	"sync"
)

type SafeCounter struct {
	mu    sync.Mutex
	count int
}

func (c *SafeCounter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.count++
}

func (c *SafeCounter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.count
}

func main() {
	counter := &SafeCounter{}
	var wg sync.WaitGroup

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter.Increment()
		}()
	}

	wg.Wait()
	fmt.Println("Final count:", counter.Value()) // Always 1000
}
```
## sync.RWMutex for Read-Heavy Workloads
```go
type Cache struct {
	mu   sync.RWMutex
	data map[string]string
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock() // Multiple readers allowed simultaneously
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

func (c *Cache) Set(key, value string) {
	c.mu.Lock() // Exclusive write lock
	defer c.mu.Unlock()
	c.data[key] = value
}
```
## sync.Once for One-Time Initialization
```go
var (
	instance *Database
	once     sync.Once
)

func GetDatabase() *Database {
	once.Do(func() {
		instance = &Database{
			conn: openConnection(),
		}
	})
	return instance
}
```
## Practical Example: Parallel HTTP Requests
```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"sync"
	"time"
)

type URLResult struct {
	URL        string
	StatusCode int
	Duration   time.Duration
	Error      error
}

func checkURL(ctx context.Context, url string) URLResult {
	start := time.Now()
	result := URLResult{URL: url}

	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		result.Error = err
		return result
	}

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		result.Error = err
		result.Duration = time.Since(start)
		return result
	}
	defer resp.Body.Close()

	result.StatusCode = resp.StatusCode
	result.Duration = time.Since(start)
	return result
}

func checkURLsConcurrently(urls []string) []URLResult {
	results := make([]URLResult, len(urls))
	var wg sync.WaitGroup
	ctx := context.Background()

	for i, url := range urls {
		wg.Add(1)
		go func(idx int, u string) {
			defer wg.Done()
			results[idx] = checkURL(ctx, u)
		}(i, url)
	}

	wg.Wait()
	return results
}

func main() {
	urls := []string{
		"https://golang.org",
		"https://github.com",
		"https://stackoverflow.com",
	}

	start := time.Now()
	results := checkURLsConcurrently(urls)

	for _, r := range results {
		if r.Error != nil {
			fmt.Printf("%-40s ERROR: %v\n", r.URL, r.Error)
		} else {
			fmt.Printf("%-40s %d (%v)\n", r.URL, r.StatusCode, r.Duration.Round(time.Millisecond))
		}
	}

	fmt.Printf("\nTotal time: %v (sequential would be ~%v)\n",
		time.Since(start).Round(time.Millisecond),
		sumDurations(results))
}

func sumDurations(results []URLResult) time.Duration {
	var total time.Duration
	for _, r := range results {
		total += r.Duration
	}
	return total.Round(time.Millisecond)
}
```
## Detecting Race Conditions
Go has a built-in race detector:
```sh
go run -race main.go
go test -race ./...
go build -race -o myapp .
```
Output when a race is detected:
```text
WARNING: DATA RACE
Write at 0x00c000018098 by goroutine 7:
  main.main.func1()
      /tmp/race.go:14 +0x38

Previous read at 0x00c000018098 by main goroutine:
  main.main()
      /tmp/race.go:17 +0x5c
```
Always run tests with `-race` in CI.
## Common Mistakes
**Goroutine leak**: a goroutine blocked forever on a channel no one reads:
```go
// BAD: goroutine leaks if caller returns early
func leak() {
	ch := make(chan int)
	go func() {
		result := doSlowWork()
		ch <- result // blocks forever if no receiver
	}()
}

// GOOD: buffered channel or context cancellation
func noLeak(ctx context.Context) {
	ch := make(chan int, 1) // buffered
	go func() {
		select {
		case ch <- doSlowWork():
		case <-ctx.Done():
		}
	}()
}
```
**Closing a channel twice** panics. Only the sender should close a channel, and only once.

**Sending on a closed channel** also panics.
**Loop variable capture** (fixed in Go 1.22+):
```go
// Before Go 1.22: all goroutines may print the same value
for i := 0; i < 5; i++ {
	go func() { fmt.Println(i) }() // BUG in older Go
}

// Fixed: explicit parameter
for i := 0; i < 5; i++ {
	go func(i int) { fmt.Println(i) }(i) // correct pre-1.22
}

// Go 1.22+: the loop variable is per-iteration, so both versions work
```
## Summary
| Tool | Use For |
|---|---|
| goroutine | Run functions concurrently |
| channel | Pass data between goroutines |
| `sync.WaitGroup` | Wait for goroutines to finish |
| `sync.Mutex` | Protect shared data |
| `sync.RWMutex` | Read-heavy shared data |
| `sync.Once` | One-time initialization |
| `select` | Wait on multiple channels |
| `context` | Cancellation and timeouts |
Go's concurrency model rewards the "share by communicating" approach. Start with goroutines plus a `WaitGroup` for parallel tasks, use channels for pipelines, and reach for a `Mutex` only when channels don't fit the problem. Always run `go test -race` to catch data races early.