Handle Race Condition with Laravel Cache Lock
Recently I stumbled upon a race condition issue in one of my projects. I am using Laravel, and there were two approaches I could think of at the time: an atomic lock at the database level, or a Laravel cache lock.
The Problem
The function with my race condition issue looked similar to the code below:
DB::beginTransaction();

// Read the current highest order number and derive the next one
$latest_number = get_latest_number_from_db();
$order_number = $latest_number + 1;
$order_data['order_number'] = $order_number;

insert_to_order_table($order_data);
insert_to_order_detail_table($order_detail_data);

DB::commit();
The problem with the above code is that when two separate processes run at the same time, both read the same latest number from the DB. Each then increments it, resulting in the same order number.
I had already made my order_number field unique in my database. But because my code wraps the inserts of order_data and order_detail_data inside a DB transaction, it does not throw an error right away even though the insert is invalid because the order number is not unique.
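For reference, the unique constraint can be declared in a migration like this. This is just a sketch, since the actual migration isn't shown here, and the orders table name is assumed from the Order model:

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            // Duplicate order numbers are rejected at the database level
            $table->unique('order_number');
        });
    }
};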
The unexpected behaviour is on the COMMIT of the transaction. When the commits happen, the first process to commit succeeds. But the second process fails to commit the order data, yet succeeds in committing the order detail data, with its relation id pointing to the first order that already succeeded. So the order detail of the second process strays into the first order, which is weird. I still don't know why it behaves like this, so I only fixed the race condition.
Finding The Solutions
My first thought to fix this was using an atomic lock at the database level, so in the code below I add the lockForUpdate() method. This makes the query run as SELECT ... FOR UPDATE, which gives behaviour similar to the SERIALIZABLE isolation level, the highest isolation level in MySQL. It provides strict transaction isolation by ensuring that concurrent transactions do not interfere with each other, preventing dirty reads, non-repeatable reads, and phantom reads by locking the affected rows until the transaction completes.
$latest_number = get_latest_number_from_db();

public function get_latest_number_from_db()
{
    // FOR UPDATE locks the selected row until the transaction ends
    return Order::lockForUpdate()->orderBy('order_number', 'DESC')->first()->order_number;
}
So what happens when using this level of locking? When two processes call get_latest_number_from_db() at the same time, the first process reads normally, but the second process has to wait to read the latest order number until the first one is done. When the first process commits its transaction, it releases the lock on the orders table, and the second process can then read the latest order number and continue processing.
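If you are curious what lockForUpdate() actually sends to MySQL, you can inspect the generated query with toSql(). A small sketch (the orders table name is assumed from the Order model):

$sql = Order::lockForUpdate()
    ->orderBy('order_number', 'DESC')
    ->limit(1)
    ->toSql();

// "select * from `orders` order by `order_number` desc limit 1 for update"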
This works well to fix the race condition issue. But I had a concern: like I said, this kind of locking prevents other processes from reading the locked data. My application is mostly used to read data, so I think the drawback is that my frontend application would have to wait just to show data whenever an insert is in progress.
So I thought a safer approach might be to use a Laravel cache lock.
use Illuminate\Contracts\Cache\LockTimeoutException;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

/**
 * The maximum time (in seconds) a process can hold the lock; if the time
 * is exceeded, the lock is released automatically
 */
$lock_time = 10;

/**
 * The maximum time (in seconds) a process will wait to acquire the lock
 * from another process; if exceeded, block() throws a LockTimeoutException
 */
$block_time = 10;

$lock = Cache::lock('new_order', $lock_time);

try {
    // Acquire the cache lock, waiting up to $block_time seconds
    $lock->block($block_time);

    DB::beginTransaction();

    $latest_number = get_latest_number_from_db();
    $order_number = $latest_number + 1;
    $order_data['order_number'] = $order_number;

    insert_to_order_table($order_data);
    insert_to_order_detail_table($order_detail_data);

    DB::commit();
} catch (LockTimeoutException $e) {
    throw new \Exception('lock timeout exception');
} finally {
    // Safe even if the lock was never acquired: release() only removes
    // the lock when this process owns it
    $lock->release();
}
With my code wrapped inside a cache lock, if two processes run at the same time, the first process acquires the lock and continues with the data insertion. When the second process wants to acquire the lock, it waits until the first process finishes and releases it. But if the first process takes too long and the second process's waiting time exceeds the block time, a LockTimeoutException is thrown.
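One thing to note: atomic locks require a cache driver that supports them (memcached, redis, dynamodb, database, file, or array). If your default cache store is not one of these, you can create the lock on a specific store. A sketch, assuming a configured redis store:

$lock = Cache::store('redis')->lock('new_order', $lock_time);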
$lock = Cache::lock('new_order', 10);

// get() is non-blocking: it returns true if the lock was acquired,
// false if another process currently holds it
if ($lock->get()) {
    try {
        DB::beginTransaction();

        $latest_number = get_latest_number_from_db();
        $order_number = $latest_number + 1;
        $order_data['order_number'] = $order_number;

        insert_to_order_table($order_data);
        insert_to_order_detail_table($order_detail_data);

        DB::commit();
    } finally {
        // Release the lock even if the insert fails
        $lock->release();
    }
}
Another way to use the Laravel cache lock is the get() function. While block() is a blocking method, so the other process waits until the first one releases the lock, get() is a non-blocking method: it returns false immediately if another process holds the lock and true if no process has acquired it. So with the get() method above, the program simply skips the insert when a race condition occurs.
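Laravel also lets you pass a closure to get(): the lock is acquired, the closure runs, and the lock is released automatically afterwards. A sketch of the same insert logic in that style (using DB::transaction(), which also rolls back automatically on an exception):

Cache::lock('new_order', 10)->get(function () use ($order_data, $order_detail_data) {
    // Lock acquired; it is released automatically after this closure returns
    DB::transaction(function () use ($order_data, $order_detail_data) {
        $latest_number = get_latest_number_from_db();
        $order_data['order_number'] = $latest_number + 1;

        insert_to_order_table($order_data);
        insert_to_order_detail_table($order_detail_data);
    });
});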
Compared to the atomic lock at the database level, the cache lock only affects this exact process that has the race condition issue and does not affect other processes.
With these solutions, the race condition issue should be fixed now. Thanks!