Go Concurrency: Goroutines, Channels, and more


Do not communicate by sharing memory; instead, share memory by communicating.
~ Rob Pike
Introduction
If you've ever wondered how Go achieves concurrency so efficiently, you're in for a treat. In this guide, we'll explore Goroutines, Channels, Done Channels, Select Statements, Buffered vs. Unbuffered Channels, Pipelines, and the for-select loop. By the end, you'll have a solid grasp of these concepts, and we'll also break down a worker pool example to give you a hands-on understanding of Go's concurrency model.
Goroutines
A Goroutine is a lightweight thread managed by the Go runtime. You can think of it as a function running independently in the background, without blocking the main program.
```go
package main

import (
	"fmt"
	"time"
)

func sayHello() {
	fmt.Println("Hello from Goroutine")
}

func main() {
	go sayHello()
	time.Sleep(time.Second)
	fmt.Println("Main function exit")
}
```
A Goroutine starts with the `go` keyword before a function call. The main function doesn't wait for Goroutines to finish unless we explicitly make it wait (here, crudely, with `time.Sleep`).
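Sleeping for a fixed time is fragile; in practice you'd usually wait with `sync.WaitGroup` from the standard library. A minimal sketch (the `greetAll` helper is illustrative, not from the original code):

```go
package main

import (
	"fmt"
	"sync"
)

// greetAll launches n Goroutines and blocks until all of them finish,
// returning how many actually ran.
func greetAll(n int) int {
	var wg sync.WaitGroup
	var mu sync.Mutex
	count := 0
	for i := 1; i <= n; i++ {
		wg.Add(1) // register one Goroutine before starting it
		go func(id int) {
			defer wg.Done() // signal completion when this Goroutine returns
			fmt.Printf("Hello from Goroutine %d\n", id)
			mu.Lock()
			count++
			mu.Unlock()
		}(i)
	}
	wg.Wait() // block until every registered Goroutine has called Done
	return count
}

func main() {
	fmt.Println(greetAll(3), "Goroutines finished")
}
```

`Add` must be called before the Goroutine starts, otherwise `Wait` could return before the Goroutine is even registered.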
Channels
Goroutines are great, but how do they communicate? Channels help Goroutines exchange data safely.
```go
package main

import "fmt"

func main() {
	ch := make(chan string)
	go func() {
		ch <- "Hello from channel"
	}()
	fmt.Println(<-ch)
}
```
Here, a Goroutine sends data on a channel, and the main function receives it. Unbuffered channels are blocking: a send waits until a receiver is ready, and a receive waits until a value is sent.
Done Channel
Sometimes, we need to signal that a Goroutine has completed its task. Done channels make this easy.
```go
package main

import "fmt"

func worker(done chan bool) {
	fmt.Println("Work done!")
	done <- true
}

func main() {
	done := make(chan bool)
	go worker(done)
	<-done
}
```
The main function waits until it receives a signal on the `done` channel, ensuring the Goroutine has finished execution.
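A common refinement, not shown in the code above: when the signal carries no data, Go code often uses `chan struct{}` and closes the channel instead of sending a value, since `close` acts as a broadcast that any number of receivers can observe. A minimal sketch:

```go
package main

import "fmt"

// worker closes done when it finishes. Closing is a broadcast: every
// receiver blocked on done is released at once.
func worker(done chan<- struct{}) {
	fmt.Println("Work done!")
	close(done)
}

func main() {
	done := make(chan struct{})
	go worker(done)
	<-done // a receive on a closed channel returns immediately
}
```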
Select Statement
The select statement lets us wait on multiple channel operations at once and executes whichever case becomes ready first.
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch1 := make(chan string)
	ch2 := make(chan string)
	go func() {
		time.Sleep(2 * time.Second)
		ch1 <- "Channel 1"
	}()
	go func() {
		time.Sleep(1 * time.Second)
		ch2 <- "Channel 2"
	}()
	select {
	case msg := <-ch1:
		fmt.Println(msg)
	case msg := <-ch2:
		fmt.Println(msg)
	}
}
```
Select executes the case whose channel delivers a value first; in this example that's ch2, whose Goroutine sleeps for only one second.
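A select may also include a `default` case, which runs when no channel is ready; this turns a blocking receive into a non-blocking poll. A small sketch (the `tryReceive` helper is illustrative):

```go
package main

import "fmt"

// tryReceive polls ch without blocking: it returns the value and true
// if one is ready, or "" and false otherwise.
func tryReceive(ch chan string) (string, bool) {
	select {
	case msg := <-ch:
		return msg, true
	default: // chosen immediately when no other case is ready
		return "", false
	}
}

func main() {
	ch := make(chan string, 1)
	if _, ok := tryReceive(ch); !ok {
		fmt.Println("nothing ready yet")
	}
	ch <- "hello"
	if msg, ok := tryReceive(ch); ok {
		fmt.Println("received:", msg)
	}
}
```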
Buffered vs. Unbuffered Channels
Buffered channels store data even if no Goroutine is ready to receive it.
```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	fmt.Println(<-ch)
	fmt.Println(<-ch)
}
```
With a buffer of size 2, two values can be stored without an immediate receiver. Unbuffered channels, in contrast, require simultaneous send and receive.
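To see the contrast directly: with an unbuffered channel, the send cannot complete until a receiver is running concurrently. A sketch (the buffered `len`/`cap` lines are an added illustration, not from the original example):

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered: a send blocks until someone receives
	go func() {
		ch <- 1 // would deadlock if main never received
	}()
	fmt.Println(<-ch) // this receive unblocks the sender

	// A buffered channel reports how many values it holds and its capacity.
	buf := make(chan int, 2)
	buf <- 1
	fmt.Println(len(buf), cap(buf)) // 1 2
}
```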
Pipelines
Pipelines process data step by step using multiple Goroutines.
```go
package main

import "fmt"

func generator(nums ...int) <-chan int {
	ch := make(chan int)
	go func() {
		for _, n := range nums {
			ch <- n
		}
		close(ch)
	}()
	return ch
}

func square(input <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range input {
			out <- n * n
		}
		close(out)
	}()
	return out
}

func main() {
	nums := generator(1, 2, 3, 4, 5)
	squares := square(nums)
	for n := range squares {
		fmt.Println(n)
	}
}
```
Each function passes processed data to the next stage, creating a pipeline.
For-Select Loop
This pattern allows continuous reading from multiple channels.
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan string)
	go func() {
		for i := 0; i < 3; i++ {
			ch <- "Hello"
			time.Sleep(time.Second)
		}
		// stop sending; the next select iteration will hit the timeout
	}()
	for {
		select {
		case msg := <-ch:
			fmt.Println(msg)
		case <-time.After(3 * time.Second):
			// time.After starts a fresh 3-second timer on every loop
			// iteration, so this fires only after 3 quiet seconds
			fmt.Println("Timeout!")
			return
		}
	}
}
```
This loop continuously listens on `ch`; once the sender stops, no message arrives within 3 seconds and the timeout case ends the program. Note that the sender must eventually stop: if it sent forever, a fresh `time.After` timer would be created each iteration and the timeout would never fire.
Understanding the Worker Pool Code
You can view the full code in the repo (don't forget to star it): https://github.com/Satyxm/letsGO/tree/main/workerpool
Now, let’s walk through the worker pool example.
In the main function, we create a job queue (`jobs`) and a result queue (`results`). We start three worker Goroutines that continuously pick up jobs and process them. Once all jobs are dispatched, we close the `jobs` channel.

Workers pull jobs from the `jobs` channel, simulate processing with a sleep, and send results to the `results` channel. A `sync.WaitGroup` ensures all workers complete their tasks before `results` is closed.

Finally, we read from `results` and print the output.
Conclusion
Learning these concepts will help you write efficient and scalable concurrent programs in Go.
If this guide helped, stay connected:
GitHub: https://github.com/satyxm
LinkedIn: https://www.linkedin.com/in/satyams-in/
Till then, Take Care and Happy Coding folks!