Concurrency in Go: A Beginner's Guide

Shivam Jha

One of the most fascinating aspects of Go is how it treats concurrency as a first-class citizen. While many programming languages make concurrency feel like an afterthought or a painful maze of threads and callbacks, Go designed it into the core of the language. This makes it not only easier to write concurrent programs but also more intuitive to reason about them.

In this post, we’ll break down what concurrency really means, why it matters, and how Go implements it through goroutines, channels, and synchronization primitives. We’ll also look at common pitfalls such as race conditions and deadlocks, so you’ll know not just how to use concurrency, but also how to use it safely.

What is Concurrency (and how it differs from Parallelism)?

It’s common to confuse concurrency with parallelism. Concurrency is about structuring a program to deal with multiple tasks at once, while parallelism is about executing multiple tasks simultaneously.

  • Concurrency: Tasks overlap in progress. A single CPU may switch between tasks so quickly that it seems like they are running together.

  • Parallelism: Tasks literally run at the same time, but this requires multiple CPU cores.

Go’s concurrency model makes it easy to express concurrency. Whether your program ends up running things in parallel depends on the machine and Go’s runtime scheduler.

Why Concurrency Matters

Modern software often needs to handle many things at once:

  • A web server must serve thousands of requests without blocking.

  • A chat application must let you type messages while receiving new ones in real time.

  • A scraper might fetch hundreds of URLs concurrently to reduce total runtime.

Without concurrency, programs would handle these tasks one by one, wasting time waiting on I/O or other slow operations. Concurrency allows a program to remain responsive and efficient.

Goroutines: The Building Blocks

In Go, concurrency starts with goroutines. A goroutine is a lightweight thread managed by the Go runtime. Unlike operating system threads, goroutines are cheap — you can create thousands of them without exhausting memory.

Launching a goroutine is as simple as prefixing a function call with go:

func task() {
    fmt.Println("Running in a goroutine")
}

func main() {
    go task()   // starts concurrently
    fmt.Println("Main function continues")
    time.Sleep(100 * time.Millisecond) // give the goroutine time to run before main exits
}

When you call go task(), the function task runs concurrently with the main function. One caveat: the program exits as soon as main returns, so a goroutine that hasn’t been scheduled yet may never get to run — real programs therefore wait for their goroutines (we’ll see sync.WaitGroup shortly). The Go scheduler takes care of multiplexing goroutines onto OS threads.

Key points about goroutines:

  • They are much lighter than OS threads (a few KB of initial stack space vs. around a megabyte for a typical thread).

  • The runtime scheduler grows/shrinks their stack dynamically.

  • You can run millions of goroutines on a modern machine.

Channels: Communication Made Simple

Concurrency is powerful, but it creates a question: how do goroutines safely share data? Instead of encouraging shared memory with locks everywhere, Go introduces channels.

A channel is a typed conduit through which goroutines communicate. One goroutine sends data into the channel, another receives from it.

ch := make(chan int)
// sender
go func() {
    ch <- 42
}()
// receiver
val := <-ch
fmt.Println(val) // this will print 42

Channels provide two guarantees:

  1. Synchronization — On an unbuffered channel, a send blocks until another goroutine receives, and a receive blocks until another goroutine sends.

  2. Safety — Values are copied when sent, so goroutines don’t share the variable itself (though sending a pointer still shares the data it points to).

Channels can be buffered as well, allowing asynchronous sends up to a capacity:

ch := make(chan string, 2)
ch <- "first"
ch <- "second"
// neither send blocks: the buffer has room; a third send would block until a receive frees space

WaitGroups: Coordinating Multiple Goroutines

Often, you want to wait for multiple goroutines to finish before moving on. For example, spawning 10 workers to fetch URLs and then collecting their results. For this, Go provides sync.WaitGroup.

var wg sync.WaitGroup

for i := 1; i <= 5; i++ {
    wg.Add(1)
    go func(id int) {
        defer wg.Done()
        fmt.Println("Worker", id, "done")
    }(i)
}

wg.Wait() // waits for all workers

A WaitGroup is essentially a counter: you call Add before launching each goroutine, each goroutine calls Done when it finishes, and Wait blocks until the counter reaches zero. Note that Add is called from the launching goroutine, before the worker starts, so the counter can never hit zero prematurely.

Mutexes: Protecting Shared Data

Sometimes goroutines need to update shared data. Without synchronization, you risk race conditions, where multiple goroutines read/write the same variable simultaneously and cause unpredictable results.

A simple example of a race condition:

var count int

for i := 0; i < 1000; i++ {
    go func() {
        count++ // this is not safe!
    }()
}

The final value of count is unpredictable because count++ is not atomic — it is a read, an increment, and a write, and those steps from different goroutines can interleave.

The solution is a mutex (mutual exclusion lock):

var mu sync.Mutex
var count int

for i := 0; i < 1000; i++ {
    go func() {
        mu.Lock()
        count++
        mu.Unlock()
    }()
}

The mutex ensures only one goroutine can access the variable at a time.

Common Pitfalls

Concurrency is powerful but not free of challenges. Some common issues include:

  • Race Conditions: When multiple goroutines access shared data without synchronization.

  • Deadlocks: When two goroutines wait on each other indefinitely (e.g., both trying to receive from each other’s channels).

  • Starvation: When one goroutine monopolizes resources, preventing others from making progress.

  • Goroutine Leaks: When goroutines are created but never exit, leading to memory/resource leaks.

Go provides a built-in race detector via the -race flag (go run -race or go test -race) to catch data races during development and testing.

Concurrency in the Real World

To see the value of concurrency, imagine building a web scraper. Without concurrency, fetching 100 URLs sequentially may take minutes. With concurrency, you can spawn 100 goroutines, each fetching one URL, and process them together. The result: huge time savings and responsiveness.

Another example is a web server. In Go’s standard library, the HTTP server spawns a new goroutine for each incoming request. This means thousands of clients can be handled smoothly without blocking each other.

Conclusion

Concurrency in Go is not an afterthought — it is at the heart of the language. With goroutines and channels, Go makes concurrent programming accessible without sacrificing performance. By understanding goroutines, channels, synchronization primitives like WaitGroups and Mutexes, and being mindful of pitfalls like race conditions and deadlocks, you can build software that is responsive, efficient, and scalable.

If you enjoyed this blog, I hope it helped you demystify concurrency in Go and see how goroutines, channels, and synchronization tools come together to make programs efficient and safe. I tried to explain it in a way that’s approachable even for beginners, while still covering the technical depth needed to actually write concurrent programs.

If you’re a Go developer or just learning, I encourage you to experiment with goroutines and channels — try building small projects like a web scraper, a worker pool, or a mini chat server. Concurrency might feel tricky at first, but once you get it, it becomes one of the most rewarding and powerful tools in your toolkit.

And if you found this useful, share it with your friends, try it out in your own projects, or drop a comment letting me know what concepts you want me to break down next.
