Atomic Habits vs. The Mutex Mafia

Nurul Hasan

In the world of multithreaded programming, there’s one golden rule: Don’t let threads fight over shared data.
But how do we keep things peaceful?
Enter: std::atomic and std::mutex.

In this article, we’ll break down both of these synchronization tools, compare them with real-life analogies, and show simple C++ code examples with outputs so you can see the difference clearly.


The Problem: Race Conditions

A race condition happens when two or more threads access shared data at the same time and at least one of them modifies it without synchronization — causing unpredictable and incorrect behavior.


Analogy:

Imagine two chefs in a kitchen sharing a single recipe card to prepare a dish.

  • Chef A reads the card and writes down: "Add 2 eggs".

  • At the same time, Chef B reads the same card and writes: "Add 1 egg".

But since they were both reading and writing at the same time, the final recipe might end up saying:

  • "Add 21 eggs",

  • or just "Add egg",

  • or completely unreadable.
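
In code, the same mix-up looks like this minimal sketch (the counter value and loop count are made up for illustration): two threads bump a plain int with no synchronization, so increments can be lost and the result varies from run to run.

    #include <iostream>
    #include <thread>

    int counter = 0;  // shared data with no synchronization

    void increment() {
        for (int i = 0; i < 100000; ++i) {
            ++counter;  // read-modify-write race: two threads can overwrite each other
        }
    }

    int main() {
        std::thread t1(increment);
        std::thread t2(increment);
        t1.join();
        t2.join();
        // Frequently prints less than 200000, and a different number each run.
        std::cout << "Final counter value: " << counter << std::endl;
        return 0;
    }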


Meet the Contenders: std::atomic vs std::mutex

std::atomic:

A lightweight tool for simple tasks like counting or setting flags. It's fast and doesn’t use locks, but only works well for small, single-variable operations.

std::mutex:

A stronger tool for protecting more complex code or multiple variables. It uses locks to make sure only one thread can access something at a time. Slower, but more flexible.
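
For a taste of the "setting flags" use case, here is a rough sketch (the flag name, worker function, and timing are invented for illustration): a worker thread loops until the main thread flips an atomic boolean.

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    std::atomic<bool> stopRequested(false);  // a simple flag shared between threads

    void worker() {
        while (!stopRequested.load()) {  // atomic read, no lock needed
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
        std::cout << "Worker saw the stop flag\n";
    }

    int main() {
        std::thread t(worker);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        stopRequested.store(true);  // atomic write; the worker will notice and exit
        t.join();
        return 0;
    }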


Keywords:

  1. Deadlock

    A deadlock happens when two or more threads are waiting for each other to release a resource, but none of them ever do — so they all get stuck forever.

    Analogy:

    Imagine two people trying to pass each other in a narrow hallway:

    • Person A says, "You move first."

    • Person B says, "No, YOU move first."
      They both wait... forever.

    That’s a deadlock. Everyone’s holding something the other needs, but no one is willing to give it up first.

    Example in C++ (Deadlock Situation):

     #include <iostream>
     #include <mutex>
     #include <thread>
    
     std::mutex mtx1, mtx2;
    
     void threadA() {
         std::lock_guard<std::mutex> lock1(mtx1);
         std::this_thread::sleep_for(std::chrono::milliseconds(100));
         std::lock_guard<std::mutex> lock2(mtx2);  // waits for mtx2
         std::cout << "Thread A acquired both locks\n";
     }
    
     void threadB() {
         std::lock_guard<std::mutex> lock2(mtx2);
         std::this_thread::sleep_for(std::chrono::milliseconds(100));
         std::lock_guard<std::mutex> lock1(mtx1);  // waits for mtx1
         std::cout << "Thread B acquired both locks\n";
     }
    
     int main() {
         std::thread t1(threadA);
         std::thread t2(threadB);
         t1.join();
         t2.join();
         return 0;
     }
    

    | Term | Meaning |
    | --- | --- |
    | mutex | A lock tool that prevents multiple threads from using the same data at once |
    | lock | When a thread takes control of the mutex to enter a critical section |
    | unlock | When the thread releases the mutex, allowing others to enter |
    | lock_guard | A smart, automatic way to lock and unlock in C++ |

    Breakdown of above code:

    std::lock_guard<std::mutex> lock1(mtx1);

    What it does:

    • This creates a lock guard named lock1, which locks the mutex mtx1 immediately.

    • std::lock_guard is a RAII (Resource Acquisition Is Initialization) object — it locks the mutex when it's created and automatically unlocks it when it goes out of scope (e.g., when the function or block ends).

Analogy:

Thread walks up to a room with mutex mtx1 on the door. It takes the key (lock1) and locks the door behind itself so no one else can enter.

std::lock_guard<std::mutex> lock2(mtx2);

What it does:

  • After waking up from sleep, the thread tries to lock another mutex, mtx2.

  • If mtx2 is already locked by another thread, this line will block (wait) until that thread unlocks it.

  • If mtx2 is free, it gets locked immediately.

Analogy:

Now the thread wants to enter another room protected by mutex mtx2.
If someone is already in there, it waits at the door until the key becomes available.

Let’s say you have:

  • Thread 1 doing:

      std::lock_guard<std::mutex> lock1(mtx1);
      std::this_thread::sleep_for(std::chrono::milliseconds(100));
      std::lock_guard<std::mutex> lock2(mtx2);
    
  • Thread 2 doing:

      std::lock_guard<std::mutex> lock1(mtx2);
      std::this_thread::sleep_for(std::chrono::milliseconds(100));
      std::lock_guard<std::mutex> lock2(mtx1);
    

Now:

  • Thread 1 locks mtx1, sleeps, and waits for mtx2

  • Thread 2 locks mtx2, sleeps, and waits for mtx1

Both threads are waiting for each other to release a lock — this is a deadlock 😵
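
One common way out, sketched below under the assumption that C++17 is available: std::scoped_lock acquires both mutexes in a single call using a deadlock-avoidance algorithm, so neither thread can end up holding one lock while waiting for the other.

    #include <mutex>

    std::mutex mtx1, mtx2;

    void threadA() {
        std::scoped_lock lock(mtx1, mtx2);  // locks both together, no deadlock possible
        // ... work with the data guarded by mtx1 and mtx2 ...
    }

    void threadB() {
        std::scoped_lock lock(mtx2, mtx1);  // argument order doesn't matter for safety
        // ... work with the data guarded by mtx1 and mtx2 ...
    }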

  2. Lock

    In programming, a lock (provided by a mutex) is a tool used to protect a shared resource, such as a variable, from being accessed by multiple threads at the same time.

  3. mutex = MUTual EXclusion

    A mutex (short for "mutual exclusion") ensures that only one thread at a time can access or modify a shared resource.

    Analogy :

    Imagine:

    • There's one bathroom and multiple people (threads) who want to use it.

    • There's a key hanging on the wall (the mutex).

    • Only the person who holds the key can enter the bathroom.

So:

  • When Thread A "locks" the mutex, it takes the key and enters the bathroom.

  • If Thread B comes by and sees the key is gone, it must wait (it’s "blocked").

  • When Thread A finishes, it returns the key — now Thread B can use the bathroom.

This is mutex locking:

  • lock() = take the key

  • unlock() = return the key
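
In code, taking and returning the key looks roughly like this bare-bones sketch (the mutex and function names are invented; in practice, prefer std::lock_guard so the key is always returned):

    #include <mutex>

    std::mutex bathroomKey;  // the single key on the wall

    void useBathroom() {
        bathroomKey.lock();    // take the key (blocks if another thread has it)
        // ... only one thread can be in here at a time ...
        bathroomKey.unlock();  // return the key so the next thread can enter
    }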

  4. Lock-Free

    Definition:

    Lock-free programming means that at least one thread is guaranteed to make progress, even under contention, without traditional locks like std::mutex.

    It avoids the overhead and risks of locks by using atomic operations.

    Analogy:

    Imagine a group of people writing names on a whiteboard.

    • With a mutex, only one person can enter the room to write (others wait outside).

    • With atomic operations, they can all write safely at the same time, as long as each person sticks to one column.


    std::atomic: Light, Fast, and Lock-Free

    Analogy:

    Imagine a digital scoreboard — every button press instantly adds a point without blocking anyone else.
    That's std::atomic: fast, non-blocking, and efficient for basic tasks.

    Use it when:

    • You need to protect a single, simple piece of data (like a counter, a flag, or a pointer).

    • You want maximum performance and minimum complexity.

Example:

    #include <iostream>
    #include <atomic>
    #include <thread>

    std::atomic<int> counter(0);

    void increment() {
        for (int i = 0; i < 1000; ++i) {
            counter++; // atomic operation
        }
    }

    int main() {
        std::thread t1(increment);
        std::thread t2(increment);
        t1.join();
        t2.join();

        std::cout << "Final counter value: " << counter << std::endl;
        return 0;
    }

Expected Output:

    Final counter value: 2000

Each thread adds 1000 to the counter. Since std::atomic guarantees atomicity, no data loss occurs.

No locks, no waiting, no deadlocks. This is lock-free, and each thread can move independently.
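
Lock-free code often goes beyond plain ++ and uses a compare-and-swap retry loop. Here is a small sketch (the variable and function names are illustrative) that doubles a shared value without taking any lock: a thread retries until its update lands, so at least one thread always makes progress.

    #include <atomic>

    std::atomic<int> value(1);

    void doubleValue() {
        int expected = value.load();
        // If another thread changed `value` between our read and our write,
        // compare_exchange_weak fails, refreshes `expected`, and we try again.
        while (!value.compare_exchange_weak(expected, expected * 2)) {
            // retry with the refreshed `expected`
        }
    }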


Why Deadlocks Don’t Happen with std::atomic in C++

1. No Mutexes Involved

  • std::atomic operations don’t use std::mutex.

  • No thread is waiting for a lock → no deadlock.

2. Lock-Free & Non-blocking

  • Atomic operations are usually implemented using CPU instructions (e.g., LOCK XADD).

  • Operations like ++, load(), and store() complete as single, indivisible steps, without blocking.

3. Thread-Safe by Design

  • std::atomic<T> ensures atomicity of reads/writes.

  • No need for external synchronization.

4. No Lock Order Issues

  • Deadlocks usually need multiple mutexes with inconsistent locking order.

  • Atomics don’t use locks → no ordering to worry about.

⚠️ BUT… Atomics Are Not a Magic Solution for Everything

While they prevent deadlocks, atomics have limitations:

| Issue | Explanation |
| --- | --- |
| Complex logic | If you need to perform multiple operations atomically (e.g., check and update two different variables together), atomics alone are not enough — you'd need a mutex or a lock-free data structure. |
| Harder to reason about | Atomic code can be trickier to write and debug, especially with relaxed memory models. |
| False sense of safety | You can still have race conditions if multiple atomics are used together without proper coordination. |
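
As a sketch of that last row (the names balance and overdrawn are invented for illustration): even when every variable is atomic, a check followed by an update is two separate steps, and another thread can slip in between them.

    #include <atomic>

    std::atomic<int> balance(100);
    std::atomic<bool> overdrawn(false);

    void withdraw(int amount) {
        if (balance.load() >= amount) {  // step 1: check (atomic on its own)
            balance -= amount;           // step 2: update (atomic on its own)
        } else {
            overdrawn = true;
        }
        // The check-then-update pair is NOT atomic as a whole: two threads can
        // both pass the check before either subtracts, pushing balance negative
        // even though each individual operation was atomic.
    }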

std::mutex: Mutual Exclusion

What is it?

A mutex allows only one thread to access a shared resource at a time.

“Hey, only one person in the bathroom at a time.”

Code Example (with Mutex)

#include <iostream>
#include <thread>
#include <mutex>

int counter = 0;
std::mutex mtx;

void increment() {
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<std::mutex> lock(mtx);  // acquire the lock
        counter++;
        // lock is automatically released here
    }
}

int main() {
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << "Final counter value: " << counter << std::endl;
}

Output:

Final counter value: 2000

✅ Safe. No race condition.


What is “Locking” a Mutex?

When you lock a mutex, you're saying:

“No one else can touch this thing until I’m done.”

In code:

std::lock_guard<std::mutex> lock(mtx);
  • Locks mtx when lock is created

  • Automatically unlocks when lock goes out of scope — smartly managed by lock_guard
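
That automatic unlock matters most when a function can leave early. A small sketch (the function is hypothetical) of why the RAII style is safer than calling lock() and unlock() by hand:

    #include <mutex>
    #include <stdexcept>

    std::mutex mtx;
    int items = 0;

    void addItem(bool valid) {
        std::lock_guard<std::mutex> lock(mtx);     // locked here
        if (!valid) {
            throw std::runtime_error("bad item");  // lock_guard still unlocks mtx
        }
        ++items;
    }  // unlocked here on the normal path

With manual mtx.lock() and mtx.unlock(), the throw above would skip the unlock and leave every other thread blocked forever.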


The Concept of Multiple Mutexes

It means you're creating more than one mutex to protect different resources, or sometimes accidentally to protect the same resource in a broken way.


When Is Using Multiple Mutexes Correct?

Correct Use Case: Protecting Different Resources

Let’s say you have two shared variables: balance and log.

You should use one mutex for each, because they are independent.

std::mutex balance_mutex;
std::mutex log_mutex;

int balance = 0;
std::vector<std::string> log;

void deposit() {
    {
        std::lock_guard<std::mutex> lock(balance_mutex);
        balance += 100;
    }
    {
        std::lock_guard<std::mutex> lock(log_mutex);
        log.push_back("Deposited 100");
    }
}

This is safe and correct — two different mutexes are guarding two different things.


When Is Using Multiple Mutexes Dangerous?

Protecting the Same Resource with Different Mutexes

Let’s say you accidentally do this:

std::mutex m1;
std::mutex m2;

int sharedData = 0;

void threadA() {
    std::lock_guard<std::mutex> lock(m1);
    sharedData++;
}

void threadB() {
    std::lock_guard<std::mutex> lock(m2);
    sharedData++;
}

⚠️ This is dangerous and wrong — both threads are updating sharedData, but they are not synchronized, because each thread is locking a different mutex!

It’s like having two different keys for the same bathroom — anyone can enter at any time. 💥

🧠 What Happens in That Case?

  • No synchronization actually happens.

  • This results in race conditions, even though you “used a mutex”.

  • This is a common mistake in multithreaded code.
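
The fix is to agree on one mutex for that one resource. A sketch of the corrected version (dataMutex is an illustrative name):

    #include <mutex>

    std::mutex dataMutex;  // the single mutex everyone uses for sharedData
    int sharedData = 0;

    void threadA() {
        std::lock_guard<std::mutex> lock(dataMutex);
        sharedData++;
    }

    void threadB() {
        std::lock_guard<std::mutex> lock(dataMutex);  // same mutex, so the threads take turns
        sharedData++;
    }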

Multiple Mutexes Can Cause Deadlock Too

Even when you're correctly locking two different resources, if two threads lock them in opposite orders, it can cause a deadlock.

❗ Bad Order Example:

#include <chrono>
#include <mutex>
#include <thread>

std::mutex a, b;

void thread1() {
    std::lock_guard<std::mutex> lock1(a);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::lock_guard<std::mutex> lock2(b);  // waits for b
}

void thread2() {
    std::lock_guard<std::mutex> lock1(b);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::lock_guard<std::mutex> lock2(a);  // waits for a — deadlock
}

Both threads hold one lock and are waiting for the other — deadlock! 😵
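
A simple sketch of one fix: have every thread lock the mutexes in the same agreed order (here a before b), so no thread can hold the second mutex while waiting for the first. (std::scoped_lock, shown earlier, is another option.)

    #include <mutex>

    std::mutex a, b;

    void thread1() {
        std::lock_guard<std::mutex> lock1(a);  // everyone locks a first...
        std::lock_guard<std::mutex> lock2(b);  // ...then b
    }

    void thread2() {
        std::lock_guard<std::mutex> lock1(a);  // same order, so no circular wait
        std::lock_guard<std::mutex> lock2(b);
    }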

Summary

| Scenario | Safe? | Why? |
| --- | --- | --- |
| One mutex for one resource | ✅ | Prevents multiple threads from accessing the same resource at once |
| Two mutexes for two resources | ✅ | Keeps unrelated data safe separately |
| Two mutexes for the same resource | ❌ | Threads are not truly synchronized |

Which Variable/Operation Is Managed by Which Mutex?

You don't explicitly bind a variable to a mutex in code — there's no syntax in C++ that says:

“This variable is protected by this mutex.”

Instead, it's a design decision and a discipline you must follow as the programmer.

Here's the Key Principle

A mutex doesn't protect a variable automatically — it protects a critical section, i.e., a block of code where a shared resource (like a variable) is accessed.

So:

  • If you access a shared variable, you must ensure it's always done under the same mutex lock.

  • You and other developers have to agree: “Variable X is guarded by mutex M.”

  • This is not enforced by the compiler — it's a convention and discipline.

Example

#include <iostream>
#include <mutex>
#include <thread>

int sharedCounter = 0;
std::mutex counterMutex;

void incrementCounter() {
    for (int i = 0; i < 10000; ++i) {
        std::lock_guard<std::mutex> lock(counterMutex);  // Lock the mutex
        ++sharedCounter;                                 // Critical section
    }
}

int main() {
    std::thread t1(incrementCounter);
    std::thread t2(incrementCounter);
    t1.join();
    t2.join();

    std::cout << "Final Counter: " << sharedCounter << std::endl;
    return 0;
}

What Happens Here?

  • sharedCounter is a shared variable.

  • counterMutex is the mutex we decided will protect sharedCounter.

  • We don’t tell the compiler about this association.

  • But in our minds and code, we ensure all accesses to sharedCounter happen only inside a lock on counterMutex.

If you or someone else accesses sharedCounter outside the lock — it's a bug.
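
For example, adding this hypothetical function to the program above would be exactly such a bug: the read happens outside the lock, so it races with incrementCounter() running on other threads.

    void printCounterUnsafe() {
        // BUG: sharedCounter is read without locking counterMutex,
        // so this is a data race with the ++sharedCounter in incrementCounter().
        std::cout << "Counter so far: " << sharedCounter << std::endl;
    }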

❗ Important Notes

  • A mutex doesn't know or care what variable it's "protecting." That's your responsibility.

  • Using the same mutex for unrelated variables can cause unnecessary blocking (bad performance).

  • Using multiple mutexes without a consistent locking order can cause deadlocks (e.g., the bad-order example above).


Final Summary

| Feature | std::atomic | std::mutex |
| --- | --- | --- |
| Thread-safe | ✅ | ✅ |
| Lock-free | ✅ | ❌ |
| Deadlock-proof | ✅ | ❌ (possible if misused) |
| Complexity | Simple (for single vars) | Required for multi-var logic |
| Performance | High | Medium |

On the way to building my own Redis, every insight counts — and I hope this article added one more to your journey. Thank you for reading. If you enjoyed it, consider subscribing to get notified when new, deeper dives drop.

Thank you for now — and see you in the next one. ❤️
