Understanding Semaphore in Rust: Controlling Concurrent Access

In my previous posts, we explored Box, Rc, Arc, and Mutex for managing ownership and thread-safe shared state. Today, we'll complete the picture with Semaphore (here, tokio::sync::Semaphore) - the tool for controlling how many operations can run concurrently.
The Problem: Unlimited Concurrency
Imagine you're building a web server that processes file uploads. Without any limits, 1000 concurrent requests could spawn 1000 file processing tasks simultaneously, overwhelming your system:
// This could crash your server with too many concurrent operations
for request in incoming_requests {
    tokio::spawn(async move {
        process_large_file(request).await; // 1000 of these running at once!
    });
}
This is where Semaphore becomes essential.
What is a Semaphore?
A Semaphore is like a bouncer at a club who controls how many people can enter. It starts with N "permits" and:
When a thread wants to do work, it must acquire a permit
If permits are available, the thread gets one and proceeds
If no permits are available, the thread waits in line
When a thread finishes, it releases its permit for the next waiting thread
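In code, that lifecycle looks like this - a minimal sketch using try_acquire, the non-blocking variant, so the permit counting is easy to see:
fn main() {
    use tokio::sync::Semaphore;

    let sem = Semaphore::new(2); // the bouncer starts with 2 permits

    let guest1 = sem.try_acquire().unwrap(); // first guest gets a permit
    let _guest2 = sem.try_acquire().unwrap(); // second guest takes the last one
    assert!(sem.try_acquire().is_err()); // third guest is turned away

    drop(guest1); // a guest leaves, releasing their permit
    assert!(sem.try_acquire().is_ok()); // now the next guest can enter
}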
Basic Semaphore Usage
Let's see how this works with a practical example:
use std::{sync::{Arc, Mutex}, time::Duration};
use tokio::sync::Semaphore;

async fn semaphore_demo() {
    // Initialize semaphore - thread safe, so we wrap it with Arc
    let semaphore = Arc::new(Semaphore::new(2)); // Only 2 permits available

    // Initialize shared state that will be mutated across threads
    let shared_counter = Arc::new(Mutex::new(0));

    // Create a vector to store the JoinHandles of the spawned threads
    let mut handles = vec![];

    for i in 1..=4 {
        // Clone the semaphore so each thread gets its own reference
        let sem = Arc::clone(&semaphore);
        // Clone the shared counter so each thread can access it
        let counter = Arc::clone(&shared_counter);

        let handle = tokio::spawn(async move {
            println!("Thread {} waiting for permit...", i);

            // Acquire permit - only 2 threads can hold one at a time
            let _permit = sem.acquire().await.unwrap();
            println!("Thread {} got permit! Available permits: {}",
                     i, sem.available_permits());

            // Do some work while holding the permit and update shared counter
            {
                let mut count = counter.lock().unwrap();
                *count += 1;
                println!("Thread {} updated counter to: {}", i, *count);
            }

            // Simulate work for 500ms while holding the permit
            tokio::time::sleep(Duration::from_millis(500)).await;

            println!("Thread {} releasing permit", i);
            // Permit automatically released when _permit drops
        });
        handles.push(handle);
    }

    // Wait for all threads to complete
    for handle in handles {
        handle.await.unwrap();
    }
    println!("All threads completed!");
    println!("Final counter value: {}", *shared_counter.lock().unwrap());
}
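To run the demo you need a Tokio runtime; a minimal entry point (assuming the tokio crate with its full feature set enabled) looks like this:
#[tokio::main]
async fn main() {
    semaphore_demo().await;
}
With only 2 permits for 4 tasks, you'll see two of them proceed immediately while the other two wait roughly 500ms for a permit to free up.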
Why Arc<Semaphore> Instead of Just Semaphore?
This is a crucial point that many developers miss. Let's understand the difference:
Semaphore::new(2)
Type: Semaphore
Ownership: Single owner only
Sharing: Cannot be moved into more than one spawned thread
Problem: Each thread would need its own semaphore, defeating the purpose
Arc::new(Semaphore::new(2))
Type: Arc<Semaphore>
Ownership: Multiple owners allowed
Sharing: Clones can be moved into any number of threads
Solution: All threads share the same semaphore and its permit pool
Without Arc, this code wouldn't compile:
// This won't work
let sem = Semaphore::new(2);

tokio::spawn(async move {
    sem.acquire().await; // sem moved here
});

tokio::spawn(async move {
    sem.acquire().await; // ERROR: sem already moved!
});
With Arc, each thread gets its own reference to the same semaphore:
// This works perfectly
let sem = Arc::new(Semaphore::new(2));

let sem1 = Arc::clone(&sem);
tokio::spawn(async move {
    let _permit = sem1.acquire().await.unwrap(); // Works!
});

let sem2 = Arc::clone(&sem);
tokio::spawn(async move {
    let _permit = sem2.acquire().await.unwrap(); // Works!
});
Semaphore vs Mutex: Different Tools for Different Jobs
Understanding when to use each is crucial:
Mutex provides exclusive access - only ONE thread can access the protected resource at a time. Think of it as a single-occupancy bathroom.
Semaphore provides controlled concurrent access - UP TO N threads can work simultaneously. Think of it as a parking lot with N spaces.
// Mutex: Only 1 thread can modify the counter at a time
let counter = Arc::new(Mutex::new(0));
// Semaphore: Up to 3 threads can process files simultaneously
let file_processor = Arc::new(Semaphore::new(3));
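To see the two working together, here's a minimal sketch (the five-task count and the file-processing step are illustrative): up to three tasks get past the semaphore at once, but only one at a time holds the mutex while updating the counter.
use std::sync::{Arc, Mutex};
use tokio::sync::Semaphore;

async fn process_all() {
    let counter = Arc::new(Mutex::new(0)); // exclusive access: one at a time
    let slots = Arc::new(Semaphore::new(3)); // concurrent access: up to 3 at a time

    let mut handles = vec![];
    for i in 1..=5 {
        let counter = Arc::clone(&counter);
        let slots = Arc::clone(&slots);
        handles.push(tokio::spawn(async move {
            // At most 3 tasks are past this line at any moment
            let _slot = slots.acquire().await.unwrap();
            // ... process a file here ...
            // Only 1 task at a time executes inside the lock
            *counter.lock().unwrap() += 1;
            println!("Task {} done", i);
        }));
    }
    for handle in handles {
        handle.await.unwrap();
    }
}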
The Complete Concurrency Picture
Now we have the full toolkit for Rust concurrency:
// Single ownership, heap allocation
let data = Box::new(42);
// Multiple owners, immutable sharing (single-threaded)
let data = Rc::new(42);
// Multiple owners, mutable sharing (single-threaded)
let data = Rc::new(RefCell::new(42));
// Multiple owners, immutable sharing (multi-threaded)
let data = Arc::new(42);
// Multiple owners, mutable sharing (multi-threaded)
let data = Arc::new(Mutex::new(42));
// Controlled concurrent access (rate limiting)
let limiter = Arc::new(Semaphore::new(10));
Conclusion
Semaphore completes Rust's concurrency toolkit by providing controlled access to resources. Combined with Arc, it enables you to build systems that can handle high load without overwhelming your hardware.
Building a high-throughput Axum server using these concurrency primitives is the natural next step for applying these concepts in production systems.
All code examples are available at github.com/Ashwin-3cS/box-arc-rc-mutex-semaphore