Rediscovering Concurrency in Go


When I first encountered Go, the discussions around goroutines and channels were overwhelming. As a computer science graduate, I was familiar with threads, but I didn't anticipate the unique approach Go offers.
My experience with a talking doll project coded in C highlighted the complexity of managing multiple inputs and outputs with raw threads. It also made me appreciate what C lacks: Go's streamlined concurrency features, such as channels and lightweight goroutines.
This article aims to bridge the gap left by many Go tutorials by delving into the fundamentals of concurrency, comparing Go's approach with the more manual methods in C.
What is concurrency?
It helps to understand parallelism as well; the contrast makes both concepts easier to grasp.
Concurrency means structuring your program to handle multiple tasks at once, but not necessarily doing them at the same time. People like to use the kitchen analogy. Imagine you are cooking a dish. You need to do several things and switch between them: start boiling water, chop onions in the meantime, then get back to the pot. With only one CPU core, concurrency is an illusion of multitasking, because it basically is not multitasking! It’s about managing many things, not doing all of them at once.
Parallelism means doing multiple things at the same time. Back to the kitchen analogy: now you have a helper who chops onions while you boil the water. Both of you are working at the exact same moment. You need multiple CPU cores to do this. It’s about execution, not structure.
When implementing concurrency, we often don’t care how the computer executes things. The tasks could run on a single core or across multiple cores; the language helps us structure the program so it looks like everything runs simultaneously.
Concurrency in Your Language
Programming languages implement concurrency in different ways. Here we focus on C and Go.
I think Go’s concurrency is easy to understand. However, if we don’t see how it’s done in C, we will not appreciate the luxury we have in Go. When I worked with C threads, every mistake was painful. Race conditions were a first-class concern, deadlocks were a constant companion, you name it.
Go gives you safer tools for implementing concurrency, but the core problems don’t disappear. Do these sound familiar: nil pointer panics, resource contention, deadlocks?
We still need to grasp the fundamentals, and I think C is the best place to start.
How to relearn concurrency (it may include some unlearning)
Learn the basics of concurrency in Go
Let’s stop pretending you have unlimited time. If you need to get code running—now—Go’s concurrency model is the shortest path between problem and solution. Here’s the bare minimum to get started:
package main
import (
"fmt"
"time"
)
func processTask(id int) int {
fmt.Printf("Task %d starting\n", id)
time.Sleep(1 * time.Second) // Simulate long running process
fmt.Printf("Task %d done\n", id)
return id * id
}
func main() {
for i := 1; i <= 3; i++ {
go processTask(i)
}
time.Sleep(2 * time.Second) // Wait for tasks to finish (not production-safe, but fine for demo)
fmt.Println("All tasks launched.")
}
See it for yourself; no explanation needed at this point.
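The time.Sleep at the end of main is only there to keep the demo alive. As a sketch of the more robust approach (this variant is my addition, not part of the original example), you can wait on the goroutines explicitly with sync.WaitGroup:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func processTask(id int) int {
	fmt.Printf("Task %d starting\n", id)
	time.Sleep(1 * time.Second) // Simulate long running process
	fmt.Printf("Task %d done\n", id)
	return id * id
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done() // Signal completion even if the task panics
			processTask(id)
		}(i)
	}
	wg.Wait() // Block until every goroutine has called Done
	fmt.Println("All tasks finished.")
}
```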
Recognize several concurrency patterns.
Let’s list the popular concurrency patterns in Go: the patterns every junior should know, no excuses. These patterns lead to correct, maintainable concurrent code, which will save your life in the long run.
pattern 1: Fan out / fan in
When to use: You have a bunch of work and want to split it among workers, then collect the results.
You are processing multiple things at once. Try processing the same items with a plain single loop instead. Do you spot the difference? Is it faster?
package main
import (
"fmt"
"time"
)
func processTask(id int) int {
fmt.Printf("Task %d starting\n", id)
time.Sleep(1 * time.Second) // Simulate long running process
fmt.Printf("Task %d done\n", id)
return id * id
}
func worker(jobs <-chan int, results chan<- int) {
for job := range jobs {
results <- processTask(job)
}
}
func main() {
numJobs := 10
numWorkers := 3
jobs := make(chan int, numJobs)
results := make(chan int, numJobs)
// Send jobs
for j := 1; j <= numJobs; j++ {
fmt.Printf("Sending job %d\n", j)
jobs <- j
}
close(jobs) // Close the jobs channel to signal no more jobs will be sent
// Fan-out: launch multiple workers
for w := 1; w <= numWorkers; w++ {
go worker(jobs, results)
}
// Fan-in: collect results
for a := 1; a <= numJobs; a++ {
res := <-results
fmt.Printf("Result received: %d\n", res)
}
}
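The version above works because main knows exactly how many results to expect. If you want the collector to simply range over results, the channel must be closed once every worker has exited. Here is a sketch of that variant (the WaitGroup plumbing is my addition, not part of the original example):

```go
package main

import (
	"fmt"
	"sync"
)

func processTask(id int) int {
	return id * id // Squared, as in the example above (the sleep is omitted for brevity)
}

func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		results <- processTask(job)
	}
}

func main() {
	numJobs, numWorkers := 10, 3
	jobs := make(chan int, numJobs)
	results := make(chan int, numJobs)

	// Fan-out: launch workers, tracked by a WaitGroup
	var wg sync.WaitGroup
	for w := 1; w <= numWorkers; w++ {
		wg.Add(1)
		go worker(jobs, results, &wg)
	}

	// Send jobs, then signal that no more are coming
	for j := 1; j <= numJobs; j++ {
		jobs <- j
	}
	close(jobs)

	// Close results once every worker is done, so the
	// fan-in loop below can simply range over the channel
	go func() {
		wg.Wait()
		close(results)
	}()

	// Fan-in: collect until results is closed
	sum := 0
	for res := range results {
		sum += res
	}
	fmt.Println("Sum of squares:", sum) // 1+4+...+100 = 385
}
```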
pattern 2: Pipeline
When to use: Data needs to be processed in stages (think: factory line).
You can actually spawn multiple workers per stage, identical or different, as long as they all feed into the same downstream stage. I have never needed this in my own use cases, however.
package main
import (
"fmt"
"time"
)
func processTask(id int) int {
fmt.Printf("Task %d starting\n", id)
time.Sleep(1 * time.Second) // Simulate long running process
fmt.Printf("Task %d done\n", id)
return id * id
}
func processTaskAdvanced(id int) int {
fmt.Printf("Advanced Task %d starting\n", id)
time.Sleep(1 * time.Second) // Simulate longer process
fmt.Printf("Advanced Task %d done\n", id)
return id * id * 2
}
func workerStage1(jobs <-chan int, results chan<- int) {
for job := range jobs {
results <- processTask(job)
}
close(results)
fmt.Println("Stage 1 processing complete")
}
func workerStage2(jobs <-chan int, results chan<- int) {
for job := range jobs {
results <- processTaskAdvanced(job)
}
close(results)
fmt.Println("Stage 2 processing complete")
}
func main() {
jobs := make(chan int, 10)
// Send jobs
for i := 1; i <= 10; i++ {
jobs <- i
}
close(jobs) // Close the jobs channel to signal no more jobs will be sent
// Single worker for stage 1
stage1 := make(chan int)
go workerStage1(jobs, stage1)
// Single worker for stage 2
stage2 := make(chan int)
go workerStage2(stage1, stage2)
// Collect results from stage 2
for result := range stage2 {
fmt.Println("Result:", result)
}
}
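With a single worker per stage, the worker itself can close its output channel, as above. With several workers per stage, no single worker knows when to close it, so a WaitGroup is needed. Here is a sketch of that idea (the stage helper and its name are mine, assumed for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

func square(n int) int { return n * n }
func double(n int) int { return n * 2 }

// stage fans fn out over several workers reading from in,
// and closes its output channel once all of them are done.
func stage(workers int, fn func(int) int, in <-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range in {
				out <- fn(n)
			}
		}()
	}
	go func() {
		wg.Wait() // All workers finished: safe to close
		close(out)
	}()
	return out
}

func main() {
	jobs := make(chan int)
	go func() {
		for i := 1; i <= 10; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	// Two workers on stage 1, three on stage 2
	stage1 := stage(2, square, jobs)
	stage2 := stage(3, double, stage1)

	sum := 0
	for result := range stage2 {
		sum += result
	}
	fmt.Println("Sum:", sum) // 2*(1+4+...+100) = 770
}
```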
pattern 3: Worker Pool
When to use: You need to control concurrency—don’t spawn a million goroutines and pray.
When one job hits a random slowdown, the other workers can keep processing the rest.
package main
import (
"fmt"
"time"
)
func processTask(id int) int {
fmt.Printf("Task %d starting\n", id)
time.Sleep(1 * time.Second) // Simulate long running process
fmt.Printf("Task %d done\n", id)
return id * id
}
func worker(id int, jobs <-chan int, results chan<- int) {
for j := range jobs {
fmt.Printf("worker %d started job %d\n", id, j)
results <- processTask(j)
fmt.Printf("worker %d finished job %d\n", id, j)
}
}
func main() {
jobs := make(chan int, 100)
results := make(chan int, 100)
// Start 3 workers
for w := 1; w <= 3; w++ {
go worker(w, jobs, results)
}
// Send 5 jobs
for j := 1; j <= 5; j++ {
jobs <- j
}
close(jobs)
// Collect results
for a := 1; a <= 5; a++ {
res := <-results
fmt.Println("result:", res)
}
}
pattern 4: Select Statement
When to use: You need to multiplex—wait for multiple things at once.
Try running the program. Observe that it receives from both channels without one blocking the other.
package main
import (
"fmt"
"time"
)
func main() {
ch1 := make(chan string)
ch2 := make(chan string)
// Goroutine 1
go func() {
time.Sleep(1 * time.Second)
ch1 <- "message from ch1"
}()
// Goroutine 2
go func() {
time.Sleep(2 * time.Second)
ch2 <- "message from ch2"
}()
// Using select to handle multiple channels
for i := 0; i < 2; i++ { // 2 since we expect two messages from both channels
select {
case msg1 := <-ch1:
fmt.Println("Received:", msg1)
case msg2 := <-ch2:
fmt.Println("Received:", msg2)
}
}
}
pattern 5: WaitGroup
When to use: You need to wait for a bunch of goroutines to finish before moving on.
Observe that the output order may vary due to goroutine scheduling.
package main
import (
"fmt"
"sync"
"time"
)
func main() {
var wg sync.WaitGroup
for i := 0; i < 5; i++ {
wg.Add(1)
go func(n int) {
defer wg.Done()
time.Sleep(time.Second) // Simulate work
fmt.Println("Hello from goroutine", n)
}(i)
}
wg.Wait() // Block until all goroutines call Done()
}
pattern 6: Mutex
When to use: You need to protect shared state—because goroutines WILL stomp on each other if you don’t.
Observe the output of the program to see how the counter is incremented by different goroutines.
package main
import (
"fmt"
"sync"
)
func main() {
var wg sync.WaitGroup
var mu sync.Mutex
counter := 0
for i := 0; i < 10; i++ {
wg.Add(1)
go func(n int) {
defer wg.Done()
// Simulate some work
mu.Lock()
counter++
fmt.Printf("Counter at goroutine %d: %d\n", n, counter)
mu.Unlock()
}(i)
}
wg.Wait()
}
// Output:
// % go run main.go
// Counter at goroutine 9: 1
// Counter at goroutine 4: 2
// Counter at goroutine 0: 3
// Counter at goroutine 1: 4
// Counter at goroutine 2: 5
// Counter at goroutine 3: 6
// Counter at goroutine 7: 7
// Counter at goroutine 8: 8
// Counter at goroutine 5: 9
// Counter at goroutine 6: 10
Understand what goroutine and channel is
Before you copy-paste another “concurrent Go example,” stop and ask yourself these two questions:
question 1: How many tasks do you actually need?
If you just want to run something concurrently in the background, use a goroutine.
go doSomething()
That’s it. No threads, no memory management, no boilerplate.
#include <pthread.h>
void* doSomething(void* arg) { /* ... */ return NULL; }
pthread_t tid;
pthread_create(&tid, NULL, doSomething, NULL);
pthread_join(tid, NULL);
In C, you have to create a thread, pass a function pointer, manage the result, then join. Every single time. If you forget to join, you leak resources. If you pass the wrong pointer, you get undefined behavior. Go’s goroutine is a single keyword.
question 2: How do you pass data between tasks?
If your goroutines need to talk to each other, you need a channel.
ch := make(chan int)
go func() {
ch <- 42 // send data
}()
fmt.Println(<-ch) // receive data
Direct, safe, and built-in. No locks, no structs, no memory headaches.
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
typedef struct {
int value;
pthread_mutex_t lock;
} Data;
void* worker(void* arg) {
Data* d = (Data*)arg;
pthread_mutex_lock(&d->lock);
d->value = 42;
pthread_mutex_unlock(&d->lock);
return NULL;
}
int main() {
Data d;
d.value = 0;
pthread_mutex_init(&d.lock, NULL);
pthread_t tid;
pthread_create(&tid, NULL, worker, &d);
pthread_join(tid, NULL);
pthread_mutex_lock(&d.lock);
printf("%d\n", d.value);
pthread_mutex_unlock(&d.lock);
pthread_mutex_destroy(&d.lock);
return 0;
}
That’s a lot of setup just to pass one integer. Miss a lock and your value is garbage. Miss a destroy call and you leak memory. Channels exist so you don’t have to write this anymore.
Why does this matter?
In Go, you focus on the problem that matters to you. It lets you sidestep much of the incidental complexity of concurrency, making it possible to ship fast.
Remember, the problem is still there
You think that just because you’re using Go, all your concurrency problems vanish? Wrong. Here’s a classic mistake: leaking goroutines and memory.
package main
import (
"fmt"
"time"
)
func leakyWorker(done chan bool) {
for {
select {
case <-done:
fmt.Println("Worker stopped")
return
default:
// Simulate work, but never actually receive "done"
time.Sleep(100 * time.Millisecond)
}
}
}
func main() {
done := make(chan bool)
// Start 5 workers, but never signal them to stop
for i := 0; i < 5; i++ {
go leakyWorker(done)
}
fmt.Println("Main finished, but workers are still running and eating memory...")
time.Sleep(5 * time.Second)
}
What’s wrong here?
The goroutines are started but never stopped. The done channel is never closed or signaled, so the workers loop forever, eating RAM and CPU.
Imagine doing this inside an HTTP handler or a message consumer. You will kill your server eventually.
Go is safe, but it’s not idiot-proof. It’s tiring to keep seeing this issue in production.
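One way to plug the leak, sketched below (in a real service you would more likely reach for context.Context), is to actually close the done channel. Closing broadcasts to every receiver at once:

```go
package main

import (
	"fmt"
	"time"
)

func worker(done <-chan struct{}) {
	for {
		select {
		case <-done: // Receiving on a closed channel returns immediately
			fmt.Println("Worker stopped")
			return
		default:
			time.Sleep(100 * time.Millisecond) // Simulate work
		}
	}
}

func main() {
	done := make(chan struct{})
	for i := 0; i < 5; i++ {
		go worker(done)
	}

	time.Sleep(1 * time.Second) // Let them work for a bit
	close(done)                 // One close signals every worker to stop
	time.Sleep(200 * time.Millisecond)
	fmt.Println("All workers told to stop")
}
```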
Let’s normalize saying “it depends”
If you are looking for a silver bullet, you are wasting your time. How many times have I witnessed concurrency overused without a meaningful goal, just making the code harder to read and maintain!
“It depends” is not a lazy answer. It’s the only honest answer in real life.
Don’t just follow a blog post that uses channels everywhere. It’s just a blog post, after all.
Analyze the context: what problem are you trying to solve? Ask your favorite AI to dig into the problem, and start comparing multiple solutions.
Once you see different solutions, validate both the approach and the implementation. Does it satisfy the requirements? Is it safe to run in production?
Conclusion
In this era, people easily forget the fundamentals. AI spits out code, and people go straight to “tab, tab, tab, and push”. When you don’t understand a thing, you are just a monkey pushing buttons.
Real engineers revisit the fundamentals. Want to design a solid architecture? Start learning system design. Want to write solid concurrent Go code? Look underneath it at how concurrency really works in C. If you have never fought with threads, locks, and memory leaks in C, you will not appreciate the power Go gives you.
Stop being a code monkey. Before you hit accept, read carefully. Test it, break it, fix it. You are a senior reviewing your junior AI robot’s work. If you don’t catch the bugs, nobody will.
Respect the fundamentals. Use the tools, don’t let them use you. If you are really a senior, act like it.
Written by Satria H R Harsono
Helping junior software engineers navigate their careers by sharing lessons from my journey: avoiding pitfalls, learning from mistakes, and building a strong foundation for success in tech.