Crafting Thread-Safe Functions in Java - Part 1
In a previous article, we explored the concepts of critical sections and race conditions in multithreading; you can find it here.
In this article, I will demonstrate these concepts through a simple function. Let's examine a classic counter, represented by the following class:
package com.example;

public class Counter {
    int num;
    int counter;

    public Counter(int num) {
        this.num = num;
        this.counter = this.num;
    }

    public void incrementCounter(String thread) {
        System.out.println("Thread " + thread + " reads current counter value: " + counter);
        counter++;
        System.out.println("Thread " + thread + " updated current counter value to: " + counter);
    }
}
When the main thread calls the method, as in the following Main class, we get this printed in the terminal:
package com.example;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter(5);
        counter.incrementCounter("Main");
        counter.incrementCounter("Main");
    }
}
Thread Main reads current counter value: 5
Thread Main updated current counter value to: 6
Thread Main reads current counter value: 6
Thread Main updated current counter value to: 7
There are no surprises here: the single thread increments the counter twice, as expected.
Now, let’s allocate this task to two threads in the following way:
package com.example;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter(5);
        Thread t1 = new Thread(() -> counter.incrementCounter("Thread 1"));
        Thread t2 = new Thread(() -> counter.incrementCounter("Thread 2"));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
The following is observed on the terminal:
Thread Thread 2 reads current counter value: 5
Thread Thread 1 reads current counter value: 5
Thread Thread 2 updated current counter value to: 6
Thread Thread 1 updated current counter value to: 7
This is the textbook example of shared data being exposed to multiple threads that rush to read and write it without any synchronisation between their operations, resulting in data inconsistency and unexpected behaviour. If you run the example repeatedly, you are likely to get different results each time: with this implementation we have no control over which thread gets scheduled on the CPU, or for how long. As you can see, there is no guarantee that a thread can both read and write the value before its time on the CPU is up.
Let’s break down what happened in this scenario:
Thread 2 reads the current value: 5.
Before it can write, Thread 1 also reads the current value, which is still 5.
Thread 1 is then interrupted by Thread 2, which increments the counter to 6.
Thread 1, although it is only meant to add 1, appears to jump the counter from 5 to 7. That is because, by the time it performs its write, the state has already been updated by Thread 2, resulting in unexpected behaviour.
As you can see, the race condition did not affect the final count, but that is only by chance. We should always aim to maintain data consistency and atomicity of operations when using multithreading.
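The "only by chance" point is easy to make concrete with a quick stress test. The sketch below is my own addition, not from the article: the class name LostUpdateDemo and the iteration count are illustrative. With many unsynchronised increments, some updates are usually lost, so the final count typically falls short of the expected total.

```java
// Illustrative stress test (not part of the original article):
// two threads each perform many unsynchronised increments.
public class LostUpdateDemo {
    static int counter = 0;

    static void increment() {
        counter++; // read-modify-write: three steps, not atomic
    }

    public static void main(String[] args) throws InterruptedException {
        final int perThread = 100_000;
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but lost updates usually leave it lower.
        System.out.println("Final count: " + counter
                + " (expected " + 2 * perThread + ")");
    }
}
```

Unlike the two-increment example above, which happened to land on the right total, this version makes the lost updates visible on most runs.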
What’s the solution to this problem? We should make the incrementCounter() method synchronized:
public synchronized void incrementCounter(String thread) {
    System.out.println("Thread " + thread + " reads current counter value: " + counter);
    counter++;
    System.out.println("Thread " + thread + " updated current counter value to: " + counter);
}
Running the example now results in the following:
Thread Thread 1 reads current counter value: 5
Thread Thread 1 updated current counter value to: 6
Thread Thread 2 reads current counter value: 6
Thread Thread 2 updated current counter value to: 7
What does the synchronized keyword do? It allows only one thread at a time to execute the synchronized code; any other thread that tries to enter is blocked. When a thread enters this method, it acquires the lock, which in this case is the intrinsic lock of the Counter instance. While a thread holds this lock, no other thread can enter a synchronized method on the same instance until the lock is released. This, as you can observe, results in predictable execution.
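To make the locking explicit, here is a sketch of the equivalent form: a synchronized instance method behaves the same as a method whose body is wrapped in a synchronized (this) block. The class name SyncCounter and the value() helper are my own additions for illustration.

```java
// Sketch: explicit lock form equivalent to "public synchronized void ...".
public class SyncCounter {
    private int counter;

    public SyncCounter(int initial) {
        this.counter = initial;
    }

    public void incrementCounter(String thread) {
        // Locks the same monitor a synchronized instance method would: "this".
        synchronized (this) {
            System.out.println("Thread " + thread + " reads current counter value: " + counter);
            counter++;
            System.out.println("Thread " + thread + " updated current counter value to: " + counter);
        }
    }

    // Hypothetical helper so callers can read the result under the same lock.
    public int value() {
        synchronized (this) {
            return counter;
        }
    }
}
```

The explicit block form is also what you would reach for when only part of a method needs to hold the lock.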
In conclusion, managing race conditions and critical sections in Java multithreading is essential for data consistency and predictable behaviour. Synchronisation mechanisms such as the synchronized keyword control thread access to shared resources, preventing unexpected outcomes and ensuring reliable multithreaded applications.
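As a side note beyond what this article covers: for a simple counter, the JDK also provides java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet() performs the read-modify-write as a single atomic operation without an explicit lock. A minimal sketch (the class name AtomicCounterDemo is my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: lock-free alternative for a plain counter.
public class AtomicCounterDemo {
    static int run() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(5);
        Thread t1 = new Thread(counter::incrementAndGet);
        Thread t2 = new Thread(counter::incrementAndGet);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter.get(); // always 7: both increments are atomic
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Final value: " + run());
    }
}
```

This does not replace synchronized in general, since it only protects the single variable, but it is the idiomatic choice when a counter is all you need.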