Concurrency Control Patterns: Lessons from Working with RwLocks & Mutexes


Imagine your application as a bustling highway filled with cars—each one a thread zooming along with its own agenda. Now picture what happens when all these cars need to access the same gas station (your shared data). Without proper traffic signals, you've got yourself a demolition derby waiting to happen. That's concurrent programming without synchronization—chaotic, unpredictable, and destined for crashes.
Enter RwLocks and Mutexes: the traffic control systems of our multi-threaded world. Think of a Mutex as a single-lane bridge where only one car can cross at a time. It doesn't matter if the car just wants to glance at the scenery (read) or repaint the bridge (write)—one vehicle, one crossing, no exceptions.
An RwLock, by contrast, is like a smart highway with dynamic lane allocation. Cars just wanting to check out the view (readers) can cruise alongside each other in multiple lanes. But when a maintenance vehicle (writer) needs to repaint the lines, all lanes temporarily shut down for exclusive access.
In our car's computer system, the dashboard display (reading speed, fuel levels, temperature) can be accessed by multiple systems simultaneously—perfect for an RwLock. But when the engine control unit needs to adjust timing parameters? That's a write operation that demands exclusive access—Mutex territory.
This article will navigate the twists and turns of these concurrency patterns—sometimes hitting the brakes for technical depth, other times accelerating through practical examples with the windows down. Fasten your seatbelts; synchronization has never been this much of a joy ride.
🪦 The Day My Thread Pool Turned into a Thread Cemetery
My journey with concurrency primitives began like many horror stories—with a seemingly simple task. I needed to build a monitoring service registry that would collect metrics from various system components and occasionally apply configuration updates to that registry. Sounds straightforward, right? Multiple readers, occasional writers. I use Rust heavily at work and was just starting to explore the uncharted waters known as concurrency.
Little did I know I was about to create the thread equivalent of a traffic jam where everyone's honking but nobody's moving.
💥 My First Collision: Mutex Madness
Here is a representation of my initial (flawed) approach in Rust, which led to a beautiful, perfect deadlock:
use std::sync::{Arc, Mutex};
use std::thread;

struct DashboardData {
    speed: f64,
    fuel_level: f64,
    engine_temp: f64,
}

fn main() {
    let dashboard = Arc::new(Mutex::new(DashboardData {
        speed: 0.0,
        fuel_level: 100.0,
        engine_temp: 85.0,
    }));

    // Create our main control thread
    let dashboard_clone = Arc::clone(&dashboard);
    let control_thread = thread::spawn(move || {
        // Get write access to update the dashboard
        let mut data = dashboard_clone.lock().unwrap();
        println!("Control thread acquired lock, updating values...");
        data.speed += 10.0;
        data.fuel_level -= 5.0;

        // Here's where things went wrong - I tried to spawn a reader
        // thread WHILE still holding the write lock
        let dashboard_for_reader = Arc::clone(&dashboard_clone);
        let reader = thread::spawn(move || {
            println!("Reader thread trying to access dashboard...");
            // This will block forever because the control thread
            // still holds the lock!
            let view = dashboard_for_reader.lock().unwrap();
            println!("Current speed: {}", view.speed); // Never reaches here
        });

        // Simulating some additional processing
        thread::sleep(std::time::Duration::from_secs(2));
        reader.join().unwrap(); // This will hang forever

        // Only drop the lock AFTER we've already asked the reader to
        // try reading (deadlock guaranteed!)
        println!("Control thread finishing its work");
        drop(data); // Finally releasing the lock, but it's too late
    });

    control_thread.join().unwrap(); // This will also hang forever
    println!("Program completed successfully!"); // Narrator: It did not.
}
What happened here was the concurrency equivalent of a perfect storm. The control thread acquired the lock and then, while still holding it, spawned a reader thread that immediately tried to acquire the same lock. But since the control thread wouldn't release the lock until after the reader was supposed to complete... neither thread could progress.
In car terms, this was like a maintenance vehicle parking across all highway lanes and then radioing for an inspector to come check something on the road. The inspector arrives but can't get onto the highway because it's blocked, while the maintenance crew refuses to move until the inspection is complete. A classic circular wait.
My terminal just sat there, cursor blinking mockingly, as my application entered the concurrency twilight zone. No errors, no crashes, just... eternal waiting. This is the special kind of bug that doesn't tell you something is wrong - it just quietly stops everything. (It drove me near insane!)
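To be fair to the Mutex, the deadlock was entirely self-inflicted. Here's a minimal sketch (using a simplified dashboard that's just a speed value) of how the same design behaves once the guard is dropped before anyone waits on the reader:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Simplified dashboard: just a speed value
    let dashboard = Arc::new(Mutex::new(0.0_f64));

    let control_handle = {
        let dashboard = Arc::clone(&dashboard);
        thread::spawn(move || {
            {
                // Hold the lock only for the actual update...
                let mut speed = dashboard.lock().unwrap();
                *speed += 10.0;
            } // ...and release it here, before anyone else needs it

            // Only now spawn the reader and wait for it
            let reader_dashboard = Arc::clone(&dashboard);
            let reader = thread::spawn(move || {
                let speed = reader_dashboard.lock().unwrap();
                println!("Current speed: {}", *speed);
            });
            reader.join().unwrap(); // No deadlock: the lock is free
        })
    };

    control_handle.join().unwrap();
    println!("Program completed successfully!"); // This time it actually does.
}

But hand-scoping guards everywhere wasn't the real answer for a read-heavy workload, which is where the next realization came in.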
🔒 The RwLock Revelation
After some hair-pulling debugging sessions, some reflection on my life's choices, and a deep dive into the documentation, I had my "aha" moment. What I needed wasn't a Mutex but an RwLock!
use std::sync::{Arc, RwLock};
use std::thread;

struct DashboardData {
    speed: f64,
    fuel_level: f64,
    engine_temp: f64,
}

fn main() {
    let dashboard = Arc::new(RwLock::new(DashboardData {
        speed: 0.0,
        fuel_level: 100.0,
        engine_temp: 85.0,
    }));

    // Create threads for readers
    let mut handles = vec![];
    for i in 0..10 {
        let dashboard_clone = Arc::clone(&dashboard);
        let handle = thread::spawn(move || {
            // Reader thread just wants to display current values
            loop {
                {
                    let data = dashboard_clone.read().unwrap(); // <- READ lock
                    println!("Display {}: Speed: {}, Fuel: {}, Temp: {}",
                        i, data.speed, data.fuel_level, data.engine_temp);
                } // Read lock released here, before we sleep
                thread::sleep(std::time::Duration::from_millis(100));
            }
        });
        handles.push(handle);
    }

    // Create a writer thread for engine control
    let dashboard_clone = Arc::clone(&dashboard);
    let writer = thread::spawn(move || {
        // Writer thread needs to update values
        loop {
            {
                let mut data = dashboard_clone.write().unwrap(); // <- WRITE lock
                data.speed += 1.0;
                data.fuel_level -= 0.1;
                data.engine_temp += 0.2;
                println!("Engine control updated values!");
                // Lock is released immediately after this block
            }
            // Do processing outside the lock
            thread::sleep(std::time::Duration::from_millis(500));
        }
    });
    handles.push(writer);

    // Join all threads (still looping forever, but at least efficiently!)
    for handle in handles {
        handle.join().unwrap();
    }
}
The difference? Reader threads could now access the data concurrently without blocking each other. Only when the writer needed to update values did it temporarily stop all reads to make its changes. I also carefully scoped the lock guards so the "processing time" (the sleeps) happened outside the locked sections, minimizing how long any lock, especially the exclusive write lock, was held. In other words? Get your shit done fast and hand the lock off.
Back to our highway analogy: Now multiple cars could view the scenery simultaneously in parallel lanes, only briefly pausing when the maintenance vehicle needed to do its work 😄.
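The "hand the lock off quickly" habit generalizes beyond sleeps in a loop. One pattern I keep reaching for is sketched below; it assumes the data is small and cheap to copy (hence the Copy derive), which holds for a plain struct of floats like this one: grab a snapshot under the read lock, let the guard drop immediately, and do the slow formatting or I/O afterwards.

use std::sync::{Arc, RwLock};

#[derive(Clone, Copy)]
struct DashboardData {
    speed: f64,
    fuel_level: f64,
    engine_temp: f64,
}

fn log_metrics(dashboard: &Arc<RwLock<DashboardData>>) {
    // Copy a snapshot while holding the read lock; the temporary guard
    // is dropped at the end of this statement
    let snapshot = *dashboard.read().unwrap();

    // Slow work (formatting, I/O, network calls) happens with no lock held,
    // so writers are never stuck waiting on us
    println!(
        "Speed: {}, Fuel: {}, Temp: {}",
        snapshot.speed, snapshot.fuel_level, snapshot.engine_temp
    );
}

fn main() {
    let dashboard = Arc::new(RwLock::new(DashboardData {
        speed: 0.0,
        fuel_level: 100.0,
        engine_temp: 85.0,
    }));
    log_metrics(&dashboard);
}

Writers never end up waiting on a reader that's busy printing.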
🗣️ The Great Concurrency Showdown: How Other Languages Handle It
While Rust made me truly understand these concepts through its ownership model, other languages approach the same problems with their own styles:
☕ Java's Synchronized Carnival
In Java, you might see something like:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Using a ReadWriteLock
ReadWriteLock rwLock = new ReentrantReadWriteLock();
Lock readLock = rwLock.readLock();
Lock writeLock = rwLock.writeLock();

// Reader thread
readLock.lock();
try {
    // Read dashboard data
} finally {
    readLock.unlock(); // Must explicitly unlock
}

// Writer thread
writeLock.lock();
try {
    // Update dashboard data
} finally {
    writeLock.unlock(); // Must explicitly unlock
}
Java developers will recognize the ubiquitous try-finally block—because forgetting to unlock is like abandoning your car in the middle of the bridge. Java also has the simpler synchronized keyword for mutex-like behavior, but it doesn't distinguish between readers and writers.
🎪 Python's Context Manager Magic
Python's approach feels almost conversational:
import threading
from types import SimpleNamespace

# A stand-in dashboard object so this snippet runs on its own
dashboard = SimpleNamespace(speed=0.0, fuel_level=100.0, engine_temp=85.0)

# Using RLock (a reentrant mutex) for simplicity
dashboard_lock = threading.RLock()

# Using a context manager for automatic release
with dashboard_lock:
    # Access dashboard data
    print(f"Speed: {dashboard.speed}")

# For RwLock functionality, you'd typically use a third-party library
# like `readerwriterlock`
Python's with statement ensures locks get released even if exceptions occur. It's like having a valet who guarantees your car will exit the bridge regardless of what happens inside.
🐻 Go's Channel Philosophy
Go takes a different approach, preferring channels over locks when possible:
// Using a sync.RWMutex
var dashboardLock sync.RWMutex
// Reader
dashboardLock.RLock()
// Read dashboard data
dashboardLock.RUnlock()
// Writer
dashboardLock.Lock()
// Update dashboard data
dashboardLock.Unlock()
// But Go would encourage:
// "Don't communicate by sharing memory; share memory by communicating."
Go's philosophy shifts the paradigm—instead of multiple cars trying to access the same gas station, each car gets deliveries through dedicated pipelines (channels).
It was fun researching the various approaches for this post; even with experience in all the aforementioned languages, there's still a whole lot to learn 😜.
🚵 Lessons From The Road
Through this journey across the concurrency landscape, I've learned some fundamental lessons:
Know Your Traffic Pattern: Is your data mostly read with occasional writes? RwLock. Balanced or mostly writes? Mutex might be simpler and perform better.
Release Locks Quickly: Don't stop for a picnic while holding the bridge hostage. Do your processing outside the locked sections.
Lock at the Right Granularity: Lock only what you need. Having separate locks for the speedometer and fuel gauge lets cars check their speed without waiting for someone else checking their fuel.
Beware of Deadlocks: If Thread A holds Lock 1 and waits for Lock 2, while Thread B holds Lock 2 and waits for Lock 1, they'll both wait forever—like two cars at an intersection each waiting for the other to go first. (Where I come from, this is just a normal Tuesday morning 😂)
Consider Alternatives: Sometimes, lock-free data structures or message-passing architectures can eliminate the need for explicit locks altogether.
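On that last point: Go's "share memory by communicating" advice translates directly to Rust via std::sync::mpsc. The sketch below is illustrative only (the Update enum and the delta values are made up): sensor threads send messages, and a single owner applies them, so no Mutex or RwLock is needed at all.

use std::sync::mpsc;
use std::thread;

// A made-up message type for dashboard updates
enum Update {
    Speed(f64),
    Fuel(f64),
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // A sensor thread sends updates instead of locking shared state
    let sensor = {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(Update::Speed(10.0)).unwrap();
            tx.send(Update::Fuel(-0.5)).unwrap();
        })
    };
    drop(tx); // drop the original sender so the channel closes when senders finish

    // Exactly one thread owns the state, so no lock is needed
    let mut speed = 0.0;
    let mut fuel_level = 100.0;
    for msg in rx {
        match msg {
            Update::Speed(delta) => speed += delta,
            Update::Fuel(delta) => fuel_level += delta,
        }
    }

    sensor.join().unwrap();
    println!("Speed: {}, Fuel: {}", speed, fuel_level);
}

Whether that beats an RwLock depends on your traffic pattern, which brings us right back to lesson one.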
🛣️ Conclusion: The Joy of Smooth Traffic
Once I replaced my Mutex with an RwLock and properly scoped my lock usage, my monitoring system hummed along like a well-designed highway system. Reader threads zipped through concurrently while writer threads made their updates with minimal disruption.
The difference between mutexes and RwLocks might seem subtle at first glance, but in practice, it's the difference between a traffic jam and a choreographed dance of vehicles—especially as your system scales up with more and more threads trying to access shared resources.
So next time you're building concurrent systems, remember: not all locks are created equal, and choosing the right one can make your code both safer and significantly more efficient. Happy driving on the concurrency highway!