Java Multithreading: Master Concurrent Programming
CPU
The CPU, often called the brain of the computer, is responsible for executing a set of instructions from programs. It performs basic arithmetic, logical, control and input/output operations as specified in the instructions.
Example:- Intel Core i7 or AMD Ryzen 7.
Core
A core is an individual processing unit within a CPU. Modern CPUs can have multiple cores, allowing them to perform multiple tasks simultaneously.
Example:- One core might be running background tasks while another handles browser-related activity. This is a simplified picture; in reality, the OS scheduler distributes the many processes on our system across the available cores.
Program
A program is a set of instructions written in a programming language that tells the computer how to perform a specific task.
Example :- Microsoft Word is a program that allows users to create and edit documents.
Process
A process is a program that is being executed or a running program. When a program runs, the operating system creates a process to manage its execution.
When we open Microsoft Word, it becomes a process in the operating system. A single program can have multiple processes. For example, a web browser (a single program) might have different processes to manage tabs, handle browser extensions, and manage media like playing video or audio.
Thread
A thread is the smallest unit of execution within a process. A process can have multiple threads, which share the same resources but can run independently.
A web browser like Google Chrome might use multiple threads for different tabs, with each tab running as a separate thread.
Multitasking
Multitasking allows an operating system to run multiple processes simultaneously. On single-core CPUs, this is done through time-sharing, rapidly switching between tasks. On multi-core CPUs, true parallel execution occurs, with tasks distributed across cores. The OS scheduler balances the load, ensuring efficient and responsive system performance.
Example: We are browsing the internet while listening to music and downloading a file on a PC.
Multithreading
Multithreading refers to the ability of a CPU to execute multiple threads of a single process concurrently. Multithreading can be implemented at the application level by developers.
A web browser can use multithreading by having separate threads for rendering the page, running JavaScript, and managing user inputs. This makes the browser more responsive and efficient.
Multithreading enhances the efficiency of multitasking by breaking down individual tasks into smaller sub-tasks that are executed by threads. These threads can run concurrently, making better use of the CPU’s capabilities.
Multithreading allows us to perform multitasking at a more fine-grained level.
In a single-core system:
Both threads and processes are managed by the OS scheduler through time slicing and context switching to create the illusion of simultaneous execution. This is how the OS manages to run multiple applications at the same time.
In a multi-core system:
Both threads and processes can run in true parallel on different cores, with the OS scheduler distributing tasks across the cores to optimise performance.
Time Slicing:- The OS scheduler allocates each process or thread a time slice (a specific amount of time for which a thread or process can use the CPU to perform tasks). This ensures that each process or thread gets a fair share of CPU time. It prevents any single process or thread from monopolizing the CPU, improving responsiveness and enabling concurrent execution.
Context Switching:- Context switching is the process of saving the state of a currently running process or thread and loading the state of the next one to be executed. When a process or thread’s time slice expires, the OS scheduler performs a context switch to move the CPU’s focus to another process or thread. This allows multiple processes and threads to share the CPU, giving the appearance of simultaneous execution on a single-core CPU or improving parallelism on multi-core CPUs.
Multitasking can be achieved through multithreading where each task is divided into threads that are managed concurrently.
While multitasking typically refers to the running of multiple applications, multithreading is more granular, dealing with multiple threads within the same application or process.
Multithreading in Java
Java provides robust support for multithreading, allowing developers to create applications that can perform multiple tasks simultaneously, improving performance and responsiveness.
In Java, multithreading is the concurrent execution of two or more threads to maximize the utilization of the CPU. Java’s multithreading capabilities are part of the java.lang package, making it easy to implement concurrent execution.
In a single-core environment, Java’s multithreading is managed by the JVM and the OS, which switch between threads to give the illusion of concurrency. The threads share the single core, and time-slicing is used to manage thread execution.
In a multi-core environment, Java’s multithreading can take full advantage of the available cores. The OS can distribute threads across multiple cores, allowing true parallel execution of threads.
A thread is a lightweight process, the smallest unit of processing. Java supports multithreading through its java.lang.Thread class and the java.lang.Runnable interface.
A running program refers to code that is currently being executed or is in a ready state, waiting for input. For example, a Spring Boot application, once it has started and initialized, remains in a ready state waiting for requests.
When a Java program starts, one thread begins running immediately, which is called the main thread. This thread is responsible for executing the main method of a program.
public class Test {
public static void main(String[] args) {
System.out.println("Hello world !");
}
}
To create a new thread in Java, you can either extend the Thread class or implement the Runnable interface.
Method 1: Extend the Thread class
A class is created that extends Thread.
Override the run method to include the code that will be executed by the thread.
start() - Method to initiate the execution of a thread.
public class Test {
public static void main(String[] args) {
World world = new World();
world.start();
for (; ; ) {
System.out.println("Hello");
}
}
}
public class World extends Thread {
@Override
public void run() {
for (; ; ) {
System.out.println("World");
}
}
}
Method 2: Implement Runnable interface
A class is created that implements Runnable.
Override the run method to include the code that will be executed by the thread.
A Thread object is created by passing an instance of the created class to its constructor.
start() - start method is called on the Thread object to initiate the new thread.
public class Test {
public static void main(String[] args) {
World world = new World();
Thread thread = new Thread(world);
thread.start();
for (; ; ) {
System.out.println("Hello");
}
}
}
public class World implements Runnable {
@Override
public void run() {
for (; ; ) {
System.out.println("World");
}
}
}
Benefits of Multithreading:
Better CPU Utilization:
- Multiple threads use all available CPU cores, maximizing processing power and ensuring no core is underutilized. In a single-threaded program, only one core is used, leaving others idle. With multithreading, multiple threads can run on different cores, improving CPU utilization.
Faster Execution:
- Tasks are split among threads and run at the same time, reducing the total execution time, such as when processing large datasets.
Responsive Applications:
- GUI apps stay interactive by handling background tasks, like file uploads or downloads, in separate threads.
Non-blocking I/O Operations:
- Threads can perform asynchronous tasks, such as network requests, without halting the main program.
Process: It's simply a program that is in an executing or running state.
Thread:-
Inter-thread communication is faster, less expensive, and more efficient because threads share the same memory address of the process they belong to.
A thread is a lightweight process and exists as a single unit of execution.
Context switching takes less time because threads share the same memory space and are lightweight. Since threads have to store fewer things to maintain the previous state, context switching is faster.
Threads belong to a process and share the memory space (address space) of that process.
Threads may require synchronization because they share the same memory. Different threads can access the same variables and objects, which can lead to issues like deadlocks and race conditions.
Since a thread is lightweight, it takes less time to create.
Process:-
Inter-process communication is slower, more expensive, and complex because each process has its own memory space or address.
A process is heavyweight and can have multiple threads of execution.
Context switching takes more time because each process has its own memory space and is heavier. Since processes have more information to store to maintain their state during switching, context switching is slower.
Each process has its own memory address or space (memory space is the memory allocated to a process to store its data, such as program instructions in the code segment and dynamically allocated objects in heap memory).
Processes do not require synchronization because they are isolated from each other, meaning they have different memory spaces.
Since a process is heavyweight, it takes more time to create compared to a thread.
Class Lock vs Object Lock:-
Class Lock: Every class has a unique lock, often called an intrinsic lock or class-level lock. These locks are used to make static blocks or methods (static data) thread-safe. The class lock is acquired when a static method is declared static synchronized, or when code explicitly synchronizes on the class object. It is generally used to prevent multiple threads from executing such a piece of code at the same time, regardless of how many instances of the class exist.
public class ClassLevelLockExample {
public void classLevelLockMethod() {
synchronized (ClassLevelLockExample.class) {
// only one thread across all instances can execute this block at a time
}
}
}
Object Lock: Each object has a unique lock, often called an intrinsic or object-level lock. This lock is acquired by using the synchronized keyword on an instance method or block and is used to protect non-static data. It ensures that only one thread at a time can execute the synchronized code on a given instance of the class; other threads using the same object must wait.
public class ObjectLevelLockExample {
public void objectLevelLockMethod() {
synchronized (this) {
// only one thread per object instance can execute this block at a time
}
}
}
User Vs Daemon Thread:-
User Thread (Non-Daemon Thread): User threads have their own life cycle, and their life is independent of other threads. The JVM (Java Virtual Machine) waits for user threads to finish their tasks before it shuts down. Once all user threads have completed their tasks, the JVM shuts down, terminating any remaining daemon threads along with them.
The JVM shuts down or stops executing code when the main thread finishes its execution, meaning the main thread and all other user threads have completed their tasks.
In the case of Spring Boot, the application JVM doesn't shut down until we stop the application because the user threads created by the Tomcat server are still running. These threads keep the server active to accept requests.
The JVM waits for user threads to finish their tasks before shutting down.
These threads are usually created by the user to run tasks at the same time.
They are used for critical tasks or core work of an application.
These threads are considered high-priority tasks, so they need to run in the foreground.
Daemon Thread: Daemon threads act as service providers, offering support to user threads. The Thread class provides two main methods for working with daemon threads: setDaemon(boolean) and isDaemon(). Daemon threads typically run in the background and support user threads.
The JVM does not wait for daemon threads to finish their tasks before terminating; as soon as all user threads finish their execution, any remaining daemon threads are terminated immediately.
These threads are normally created by JVM.
They are not used for any critical tasks but to do some supporting tasks.
These threads are considered low-priority and are used for supporting background tasks such as garbage collection, i.e., releasing memory held by unused objects.
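A minimal sketch of these two methods (the class name and sleep interval are illustrative): setDaemon(true) must be called before start(), and isDaemon() reports whether a thread is a daemon.
public class DaemonCheck {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            while (true) {
                System.out.println("background work...");
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        });
        System.out.println("Before setDaemon: " + worker.isDaemon()); // false by default
        worker.setDaemon(true); // must be called before start()
        System.out.println("After setDaemon: " + worker.isDaemon());  // true
        worker.start();
        // main (a user thread) finishes almost immediately, so the JVM exits
        // and the daemon thread is stopped along with it.
        System.out.println("Main done");
    }
}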
Thread Lifecycle
The lifecycle of a thread in Java consists of several states, which a thread goes through during its execution.
New: A thread is in this state when it is created but not yet started.
Runnable: After the start method is called, the thread becomes runnable. It’s ready to run or execute code and is waiting for CPU time.
Running: The thread is in this state when it is executing.
Blocked/Waiting: A thread is in this state when it is waiting for a resource or for another thread to perform an action.
Terminated: A thread is in this state when it has finished executing.
public class MyThread extends Thread{
@Override
public void run() {
System.out.println("RUNNING"); // RUNNING
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
System.out.println(e);
}
}
public static void main(String[] args) throws InterruptedException {
MyThread t1 = new MyThread();
System.out.println(t1.getState()); // NEW
t1.start();
System.out.println(t1.getState()); // RUNNABLE
Thread.sleep(100);
System.out.println(t1.getState()); // TIMED_WAITING
t1.join();
System.out.println(t1.getState()); // TERMINATED
}
}
Runnable vs Thread: Extending vs Implementing:
One major difference between Thread and Runnable comes from Java's lack of support for multiple inheritance. A class that extends Thread cannot extend any other class, which limits its flexibility. On the other hand, a class can implement multiple interfaces, so it can implement Runnable and extend another class if needed. Therefore, if you need to extend a class and also want multithreading, use Runnable.
Extend the Thread class if you need to override its methods or if the task requires direct control over the thread itself. However, remember that this limits your ability to inherit from other classes.
The Executor framework (e.g., ExecutorService) offers a higher-level way to manage threads and better resource management, and it accepts tasks defined through the Runnable interface.
Runnable allows us to use lambda expressions since it is a functional interface. This helps in writing clean and concise code.
Implementing multithreading using Runnable is preferred over using the Thread class for the reasons mentioned above.
Thread methods
start( ): Initiate the execution of the thread. The Java Virtual Machine (JVM) then calls the run() method of the thread.
run( ): Includes the code that will be executed by the thread. When the thread is started, the run() method is invoked.
sleep(long millis): Temporarily pauses the execution of the currently executing thread for the specified number of milliseconds. Unlike wait(), it does not release any lock the thread holds, and it does not need to be called from a synchronized context.
join( ): When one thread calls the join() method of another thread, it pauses the execution of the current thread until the thread being joined has completed its execution.
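A minimal sketch of join() (the class name and sleep time are illustrative): the main thread starts a worker and then blocks on join() until the worker's run() method completes.
public class JoinExample extends Thread {
    @Override
    public void run() {
        try {
            Thread.sleep(1000); // simulate some work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("Worker finished");
    }
    public static void main(String[] args) throws InterruptedException {
        JoinExample worker = new JoinExample();
        worker.start();
        worker.join(); // main waits here until the worker completes
        System.out.println("Main continues only after the worker is done");
    }
}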
setPriority(int newPriority): Sets the priority of the thread. The priority is a value between Thread.MIN_PRIORITY (1) and Thread.MAX_PRIORITY (10). Threads with higher priority are usually preferred by the thread scheduler, meaning they might get more CPU time than lower-priority threads. Setting a higher priority doesn't guarantee that a thread will run before others; it simply suggests to the scheduler that it should be favored.
public class MyThread extends Thread {
public MyThread(String name) {
super(name);
}
@Override
public void run() {
System.out.println("Thread is Running...");
for (int i = 1; i <= 5; i++) {
for (int j = 0; j < 5; j++) {
System.out.println(Thread.currentThread().getName() + " - Priority: " + Thread.currentThread().getPriority() + " - count: " + i);
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
public static void main(String[] args) throws InterruptedException {
MyThread l = new MyThread("Low Priority Thread");
MyThread m = new MyThread("Medium Priority Thread");
MyThread n = new MyThread("High Priority Thread");
l.setPriority(Thread.MIN_PRIORITY);
m.setPriority(Thread.NORM_PRIORITY);
n.setPriority(Thread.MAX_PRIORITY);
l.start();
m.start();
n.start();
}
}
- interrupt(): Interrupts a thread. If the target thread is in a sleeping or waiting state (for example inside sleep(), wait(), or join()), the call wakes it up and an InterruptedException is thrown in that thread. If the thread is not sleeping or waiting, only its interruption flag is set, and the thread continues executing normally until it checks that flag.
Syntax:- public void interrupt()
A thread can only be interrupted by another thread. When a target thread is interrupted, an internal interruption flag is set on it, indicating that it has been interrupted. If the thread is in a call to wait(), sleep(), or join(), it will throw an InterruptedException.
If the target thread is actively running (for example, performing computations or processing data), it will continue working until:
It checks its interruption status using isInterrupted() or Thread.interrupted().
It encounters a blocking operation like Thread.sleep(), wait(), or join().
After catching the InterruptedException or checking the interruption status, the thread should handle the interruption by:
Deciding to stop its current task.
Performing any necessary cleanup actions.
Ultimately choosing whether to continue processing or exit.
public class InterruptExample extends Thread {
public void run() {
try {
System.out.println("Thread is working...");
Thread.sleep(5000); // Simulating a long task
System.out.println("Thread woke up!");
} catch (InterruptedException e) {
System.out.println("Thread was interrupted during sleep.");
}
System.out.println("Thread is exiting.");
}
public static void main(String[] args) throws InterruptedException {
InterruptExample thread = new InterruptExample();
thread.start();
// Let the thread run for a bit
Thread.sleep(2000);
// Interrupt the thread while it's sleeping
thread.interrupt();
// Optionally wait for the thread to finish
thread.join();
System.out.println("Main thread is exiting.");
}
}
- yield(): Thread.yield() is a static method that suggests the current thread temporarily pause its execution to let other threads with the same or higher priority run. It's important to understand that yield() is only a suggestion to the thread scheduler, and the actual behavior depends on the JVM and operating system.
public class MyThread extends Thread {
@Override
public void run() {
for (int i = 0; i < 5; i++) {
System.out.println(Thread.currentThread().getName() + " is running...");
Thread.yield();
}
}
public static void main(String[] args) {
MyThread t1 = new MyThread();
MyThread t2 = new MyThread();
t1.start();
t2.start();
}
}
- setDaemon(boolean): Marks the thread as either a daemon thread or a user thread. The JVM exits once no user threads are running, even if some daemon threads are still active, and when it exits all daemon threads are stopped. Daemon threads are background threads that support user threads, so there's no point in daemon threads running without user threads.
public class MyThread extends Thread {
@Override
public void run() {
while (true) {
System.out.println("Hello world! ");
}
}
public static void main(String[] args) {
MyThread myThread = new MyThread();
myThread.setDaemon(true); // myThread is daemon thread
MyThread t1 = new MyThread();
t1.start(); // t1 is user thread
myThread.start();
System.out.println("Main Done");
}
}
Synchronization:
Synchronization is a mechanism that ensures only one thread can access a specific resource (like a variable, object, or method) at a time. This is important in multithreading because multiple threads might try to read or change shared resources at the same time, which can lead to unpredictable results.
Let's look at an example where two threads are incrementing the same counter, or in other words, sharing the same resource.
class Counter {
private int count = 0; // shared resource
public void increment() {
count++;
}
public int getCount() {
return count;
}
}
public class MyThread extends Thread {
private Counter counter;
public MyThread(Counter counter) {
this.counter = counter;
}
@Override
public void run() {
for (int i = 0; i < 1000; i++) {
counter.increment();
}
}
public static void main(String[] args) {
Counter counter = new Counter();
MyThread t1 = new MyThread(counter);
MyThread t2 = new MyThread(counter);
t1.start();
t2.start();
try {
t1.join();
t2.join();
}catch (Exception e){
}
System.out.println(counter.getCount()); // Expected: 2000, Actual will be random <= 2000
}
}
The output of the code is not 2000 because the increment method in the Counter class is not synchronized. This causes a race condition when both threads try to increment the count variable at the same time.
A race condition happens when two or more threads access shared data and try to change it at the same time which leads to unexpected results, such as our count not reaching 2000.
Without synchronization, one thread might read the value of count before the other thread finishes writing its incremented value. This can result in both threads reading the same value, incrementing it, and writing it back, which means one of the increments is lost.
For example, if the current value is 100, both threads might read 100 at the same time because they are running concurrently. Each increments its copy to 101 and writes it back, so the final value is 101 instead of 102 and one increment is lost.
We can fix this by using the synchronized keyword.
class Counter {
private int count = 0; // shared resource
public synchronized void increment() {
count++;
}
public int getCount() {
return count;
}
}
By synchronizing the increment method, you ensure that only one thread can run this method at a time, preventing the race condition. With this change, the output will consistently be 2000.
Here, we synchronized a specific method, but we can also apply a lock to a specific piece of code using a synchronized block. This allows us to synchronize only the part of the code shared by threads, not the entire method, which can improve our code's performance.
class Counter {
private int count = 0; // shared resource
public void increment() {
synchronized (this) { // synchronized block
count++;
}
}
public int getCount() {
return count;
}
}
A critical section is a part of the code that accesses shared resources and must not be run by more than one thread at the same time, as this can cause unexpected results. Protecting critical sections is crucial to prevent race conditions.
Mutual exclusion is a principle that ensures when one thread is running in a critical section, no other thread can enter that section. This is achieved through synchronization. Synchronization is based on the principle of mutual exclusion.
In Java, the synchronized keyword ensures mutual exclusion by locking the object (or class, for static synchronized methods) when a thread enters a synchronized method or block. Other threads trying to enter the same synchronized context must wait until the lock is released. The mutual exclusion principle relies on this locking mechanism.
Thread safety is the property of a program that multiple threads can access and work on a shared piece of code or resource without causing data inconsistency or corruption. We use the synchronized keyword and explicit locking mechanisms (such as ReentrantLock) to ensure thread safety.
Deadlock:
A deadlock occurs in concurrent programming when two or more threads are blocked forever, each waiting for the other to release a resource. This usually happens when threads hold locks on resources and request additional locks held by other threads. For instance, Thread A holds Lock 1 and waits for Lock 2, while Thread B holds Lock 2 and waits for Lock 1. Since neither thread can move its execution forward, they remain in a deadlock state. Deadlocks can greatly affect system performance and are difficult to debug and fix in multi-threaded applications.
class Pen {
public synchronized void writeWithPenAndPaper(Paper paper) {
System.out.println(Thread.currentThread().getName() + " is using pen " + this + " and trying to use paper " + paper);
paper.finishWriting();
}
public synchronized void finishWriting() {
System.out.println(Thread.currentThread().getName() + " finished using pen " + this);
}
}
class Paper {
public synchronized void writeWithPaperAndPen(Pen pen) {
System.out.println(Thread.currentThread().getName() + " is using paper " + this + " and trying to use pen " + pen);
pen.finishWriting();
}
public synchronized void finishWriting() {
System.out.println(Thread.currentThread().getName() + " finished using paper " + this);
}
}
class Task1 implements Runnable {
private Pen pen;
private Paper paper;
public Task1(Pen pen, Paper paper) {
this.pen = pen;
this.paper = paper;
}
@Override
public void run() {
pen.writeWithPenAndPaper(paper); // thread1 locks pen and tries to lock paper
}
}
class Task2 implements Runnable {
private Pen pen;
private Paper paper;
public Task2(Pen pen, Paper paper) {
this.pen = pen;
this.paper = paper;
}
@Override
public void run() {
synchronized (pen){
paper.writeWithPaperAndPen(pen); // thread2 locks paper and tries to lock pen
}
}
}
public class DeadlockExample {
public static void main(String[] args) {
Pen pen = new Pen();
Paper paper = new Paper();
Thread thread1 = new Thread(new Task1(pen, paper), "Thread-1");
Thread thread2 = new Thread(new Task2(pen, paper), "Thread-2");
thread1.start();
thread2.start();
}
}
Inter-Thread communication
Inter-Thread Communication (Cooperation) is a mechanism that allows threads to exchange information or coordinate their execution. It enables threads to work together to solve a common problem or to share resources.
Inter-thread communication only works between synchronized threads, i.e., threads that coordinate through a locking mechanism on a shared object.
It is a mechanism in which a thread releases the lock and enters a paused or waiting state, while another thread acquires the lock and continues to execute.
It is implemented using the following methods of the Object class:
wait() :- When a thread calls wait() (a non-static method) on an object, it releases that object's lock and goes into a waiting state until another thread invokes notify() or notifyAll() on the same object, or a specified amount of time has elapsed.
notify() :- Wakes up a single thread (rather than all threads) that is waiting on the object's lock; the lock itself becomes available once the notifying thread exits its synchronized block.
notifyAll() :- Wakes up all threads that are waiting on the object's lock; again, the lock becomes available when the notifying thread leaves its synchronized block.
To call the wait(), notify(), or notifyAll() methods on any object, the thread must be within a synchronized area or executing thread-safe code with intrinsic locking. These methods are designed to work only with intrinsic locking, which involves the synchronized keyword.
Example - Producer Consumer Problem:
//shared resource shared by two different threads.
class SharedResource {
private int data;
private boolean hasData;
//The producer produces data only when hasData is false; when hasData is true it waits until the consumer consumes the data, and after producing it calls notify() to wake the waiting consumer thread.
public synchronized void produce(int value) {
while (hasData) {
try {
wait();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
data = value;
hasData = true;
System.out.println("Produced: " + value);
notify();
}
//The consumer consumes data only when hasData is true; when hasData is false it waits until the producer produces data, and after consuming it calls notify() to wake the waiting producer thread.
public synchronized int consume() {
while (!hasData){
try{
wait();
}catch (InterruptedException e){
Thread.currentThread().interrupt();
}
}
hasData = false;
System.out.println("Consumed: " + data);
notify();
return data;
}
}
//producer which produces data
class Producer implements Runnable {
private SharedResource resource;
public Producer(SharedResource resource) {
this.resource = resource;
}
@Override
public void run() {
for (int i = 0; i < 10; i++) {
resource.produce(i);
}
}
}
//consumer which consumes the data
class Consumer implements Runnable {
private SharedResource resource;
public Consumer(SharedResource resource) {
this.resource = resource;
}
@Override
public void run() {
for (int i = 0; i < 10; i++) {
int value = resource.consume();
}
}
}
public class ThreadCommunication {
public static void main(String[] args) {
SharedResource resource = new SharedResource();
Thread producerThread = new Thread(new Producer(resource));
Thread consumerThread = new Thread(new Consumer(resource));
producerThread.start();
consumerThread.start();
}
}
The Producer and Consumer threads acquire a lock on the SharedResource and communicate with each other using wait() and notify() to produce and consume data. If there are multiple consumers, we can use notifyAll() to inform all of them to consume data once the Producer thread releases the lock.
The wait(), notify(), and notifyAll() methods are in the Object class instead of the Thread class because every object in Java has a lock. Since locks are based on objects and classes, and these methods are called on the shared resources (objects or classes), not on threads, they are placed in the Object class instead of the Thread class.
Runnable with Lambda Expression:
public class LambdaRunnableExample {
public static void main(String[] args) {
// Creating a Runnable using a lambda expression -> lambda expression provides implementation of run method of runnable interface.
Runnable runnable = () -> {
for (int i = 0; i < 5; i++) {
System.out.println("Running in thread: " + Thread.currentThread().getName() + " - Count: " + i);
try {
Thread.sleep(500); // Sleep for 500 milliseconds
} catch (InterruptedException e) {
Thread.currentThread().interrupt(); // Restore the interrupted status
}
}
};
// Creating threads
Thread thread1 = new Thread(runnable);
Thread thread2 = new Thread(runnable);
// Starting threads
thread1.start();
thread2.start();
// Wait for threads to finish
try {
thread1.join();
thread2.join();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
System.out.println("Finished execution!");
}
}
Thread Pool:
A thread pool is a collection of pre-initialized threads that are ready to execute multiple tasks concurrently. Instead of creating a new thread for each task, tasks are assigned to available threads in the pool, which helps improve performance and manage resources efficiently. This way, threads can be reused, reducing the overhead associated with thread creation and destruction. Once a thread completes its task, it returns to the pool.
Benefits of Thread Pool:
Better Resource Management: Thread pools lower the overhead of creating and destroying threads. By reusing a set number of threads, they optimize resource use and reduce the performance costs linked to frequent thread lifecycle management.
Improved Response Time: By reusing threads and minimizing the overhead of thread creation, thread pools can significantly improve the response time for executing tasks. This leads to faster completion of tasks, particularly in applications with high concurrency demands.
Control Over Thread Count: Thread pools provide control over the number of concurrent threads used in an application, which helps in tracking thread usage and improves performance through better thread management.
Executors Framework:
The Executors framework helps implement thread pools in Java.
The Executors framework was introduced in Java 5 as part of the java.util.concurrent package to simplify the development of concurrent applications by abstracting away many of the complexities involved in creating and managing threads.
It helps with:
Avoiding Manual Thread Management: The Executor framework abstracts the complexities of thread lifecycle management. Developers can focus on submitting tasks without needing to manually create, start, or stop threads, making the code cleaner and easier to maintain.
Better Resource Management.
Scalability: The framework allows for scalable application design. You can easily adjust the number of threads in the pool based on workload demands, providing flexibility to handle varying levels of concurrency without significant code changes.
Thread Reuse: The Executor framework enables thread reuse through its thread pools. Once a thread completes a task, it is returned to the pool and can be assigned to new tasks, minimizing the overhead of thread creation and improving performance.
Task Scheduling and Management: The framework supports not only executing tasks but also scheduling them for future execution. This includes features for fixed-rate and delayed execution, making it easy to manage recurring tasks (see the scheduling sketch after this list).
Error Handling: The Executor framework provides built-in mechanisms for handling exceptions.
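As a sketch of the scheduling capability mentioned in the list above (the delays and task bodies here are illustrative, not from the original article), a ScheduledExecutorService from java.util.concurrent can run tasks after a delay or at a fixed rate:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulingSketch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // run once after a 2-second delay
        scheduler.schedule(() -> System.out.println("Delayed task"), 2, TimeUnit.SECONDS);
        // run repeatedly: first after 1 second, then every 3 seconds
        scheduler.scheduleAtFixedRate(() -> System.out.println("Recurring task"), 1, 3, TimeUnit.SECONDS);
        Thread.sleep(10000);   // let a few runs happen for the demo
        scheduler.shutdown();  // stop accepting new tasks and wind down
    }
}
The fixed-size thread pool example below shows the basic submit/shutdown flow with Executors.newFixedThreadPool.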
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class ExecutorFrameWork {
public static void main(String[] args) {
long startTime = System.currentTimeMillis();
//we can define no. of threads our program will use.
ExecutorService executor = Executors.newFixedThreadPool(3);
for (int i = 1; i < 10; i++) {
//variables used inside a lambda expression need to be final or effectively final because the lambda captures them; if the variable could change (like the loop variable i), it could lead to inconsistent results.
int finalI = i;
//The submit method is used to submit a task for execution (the task we want a pool thread to execute). When you submit a task, it is placed in the executor's task queue and picked up by one of the available threads in the pool.
executor.submit(() -> {
long result = factorial(finalI);
System.out.println(result);
});
}
//The shutdown method is used to stop the executor service. It initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted. Once all tasks have completed, the executor will release its resources, including terminating any threads that were part of its thread pool.
executor.shutdown();
try {
//The awaitTermination(long timeout, TimeUnit unit) method is used in the Executor framework to block the calling thread until the executor has completed its shutdown process.
executor.awaitTermination(1, TimeUnit.SECONDS);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
System.out.println("Total time " + (System.currentTimeMillis() - startTime));
}
private static long factorial(int n) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
long result = 1;
for (int i = 1; i <= n; i++) {
result *= i;
}
return result;
}
}
Runnable vs Callable Interface
Runnable was introduced in JDK 1.0, while Callable was introduced in JDK 1.5.
Both Runnable and Callable are functional interfaces. Runnable's run() method has a void return type and cannot declare checked (compile-time) exceptions with the throws keyword, so inside run() we must handle them with try-catch. Callable's call() method can return any valid type and is allowed to declare checked exceptions with the throws keyword.
We can pass a Runnable instance to the Thread class constructor to create a thread, but a Callable instance cannot be passed to the Thread constructor. Instead, we use the ExecutorService, which accepts Callable tasks and also lets us create a fixed-size thread pool for multithreading.
Runnable belongs to the java.lang package, and Callable belongs to the java.util.concurrent package.
A Runnable cannot be passed to the invokeAll() method, but a Callable can.
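A short illustrative sketch (the class name and task body are assumptions, not from the article) of using Callable with ExecutorService: call() returns a value, may declare checked exceptions, and the result is retrieved through a Future.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableSketch {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        // call() has a return type and is allowed to declare checked exceptions
        Callable<Integer> task = () -> {
            Thread.sleep(500); // InterruptedException can simply be declared, not caught
            return 21 + 21;
        };
        Future<Integer> future = executor.submit(task);
        System.out.println("Result: " + future.get()); // blocks until the result is ready
        executor.shutdown();
    }
}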
Volatile Keyword :-
Threads maintain their own local caches of shared variables for performance optimization. If one thread modifies a variable, it will first get updated in its cache, and then the changes are reflected in the main memory. After this, other threads' caches will be updated. Other threads might not see the updated value immediately because they read from their local copy.
When a variable is marked as volatile, every read and write operation on that variable is done directly from the main memory, not from the thread's local cache. This ensures that all threads always see the most recent value of a volatile variable.
The volatile keyword guarantees visibility, but it does not by itself make operations on a variable thread-safe, unlike the atomic classes (e.g., AtomicInteger) discussed later.
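Before the singleton example below, here is a minimal visibility sketch (the class and field names are illustrative): without volatile on the running flag, the worker thread could keep reading a stale cached value and never stop.
public class VolatileFlag {
    // volatile ensures the worker sees the update made by the main thread
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; each check reads `running` from main memory
            }
            System.out.println("Worker stopped");
        });
        worker.start();
        Thread.sleep(1000);
        running = false; // the write is visible to the worker, which then exits
        worker.join();
    }
}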
Example :-
The volatile keyword is used in the creation of a singleton class that can be accessed by multiple threads.
public class Singleton {
//used volatile for ensuring visibility
private static volatile Singleton instance = null;
private Singleton() {
System.out.println("Singleton instance created.");
}
// Public method to provide access to the Singleton instance
public static Singleton getInstance() {
if (instance == null) {
synchronized (Singleton.class) { // class level lock to make sure only single thread can access this critical section at a time
if (instance == null) { //double checking
instance = new Singleton();
}
}
}
return instance;
}
}
In a multithreaded environment, two threads might try to create an instance of the Singleton class simultaneously. This could lead to two different instances being created if a normal variable is used, as threads may read a stale value from their local cache. Using volatile together with the synchronized double check resolves this issue.
Atomic Variables:
Before atomic variables, we had to use the synchronized keyword to access a variable in a thread-safe way. Atomic classes are specialized classes designed for thread-safe operations on single variables. They include classes such as AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference, among others.
Atomic Classes provide a cleaner, more efficient way to achieve thread-safe access to single variables without the complexity and overhead associated with the synchronized keyword.
Example:-
import java.util.concurrent.atomic.AtomicInteger;
public class AtomicCounter {
private AtomicInteger counter = new AtomicInteger(0);
public void increment() {
counter.incrementAndGet();
}
public int getCounter() {
return counter.get();
}
}
// Usage in multiple threads
public class Main {
public static void main(String[] args) throws InterruptedException {
AtomicCounter atomicCounter = new AtomicCounter();
// Creating multiple threads to increment the counter
Thread thread1 = new Thread(() -> {
for (int i = 0; i < 1000; i++) {
atomicCounter.increment();
}
});
Thread thread2 = new Thread(() -> {
for (int i = 0; i < 1000; i++) {
atomicCounter.increment();
}
});
thread1.start();
thread2.start();
thread1.join();
thread2.join();
System.out.println("Final Counter Value: " + atomicCounter.getCounter());
}
}
Locks:-
Locks are mechanisms that allow us to control access to shared resources by multiple threads, ensuring that only one thread can access a resource at a time.
Intrinsic Lock: These locks are automatically present in every object and class in Java. You don't see them, but they exist. When you use the synchronized keyword, you're using these automatic locks: a thread acquires the lock on the object or class when it enters a synchronized method or block.
Explicit Lock: These are more advanced locks that you can control yourself, created using the Lock interface implementations from the java.util.concurrent.locks package. You explicitly decide when to lock and unlock, giving you more control over how and when threads can access a particular piece of code.
The synchronized keyword in Java provides basic thread-safety but has limitations:
It lacks a try-lock mechanism, which can cause threads to block indefinitely. If a thread is blocked for any reason and the lock is not released, this can result in a deadlock, as no other thread will be able to acquire the lock.
Explicit locks (Lock interface) offer more fine-grained control, try-lock capabilities to prevent blocking, and provide powerful tools for complex concurrency situations.
You can implement different locking strategies, such as read/write locks and reentrant locks, which are not possible with intrinsic locks. This approach allows for better optimization based on the specific needs of the application.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class BankAccount {
private int balance = 100;
private final Lock lock = new ReentrantLock();
public void withdraw(int amount) {
System.out.println(Thread.currentThread().getName() + " attempting to withdraw " + amount);
try {
if (lock.tryLock(1000, TimeUnit.MILLISECONDS)) {
try {
if (balance >= amount) {
System.out.println(Thread.currentThread().getName() + " proceeding with withdrawal");
Thread.sleep(3000); // Simulate time taken to process the withdrawal
balance -= amount;
System.out.println(Thread.currentThread().getName() + " completed withdrawal. Remaining balance: " + balance);
} else {
System.out.println(Thread.currentThread().getName() + " insufficient balance");
}
} catch (Exception e) {
Thread.currentThread().interrupt();
} finally {
lock.unlock(); // always release the lock once it has been acquired, even when the balance check fails
}
} else {
System.out.println(Thread.currentThread().getName() + " could not acquire the lock, will try later");
}
} catch (Exception e) {
Thread.currentThread().interrupt();
}
}
}
public class Main {
public static void main(String[] args) {
BankAccount sbi = new BankAccount();
Runnable task = new Runnable() {
@Override
public void run() {
sbi.withdraw(50);
}
};
Thread t1 = new Thread(task, "Thread 1");
Thread t2 = new Thread(task, "Thread 2");
t1.start();
t2.start();
}
}
Non-Blocking Attempt:
When you call tryLock(), it attempts to acquire the lock immediately. If the lock is available (i.e., no other thread holds it), the calling thread acquires the lock and returns true.
If the lock is already held by another thread, tryLock() returns false immediately, allowing the calling thread to continue executing other tasks without waiting.
Avoiding Deadlocks:
- Since tryLock() does not block, it helps in avoiding deadlocks in situations where threads might end up waiting on each other for locks.
Optional Timeout:
ReentrantLock also provides an overloaded version of tryLock(long timeout, TimeUnit unit) that allows you to specify a timeout period. This way, if the lock is not available, the thread will wait for a specified duration before giving up.
If the lock becomes available within that time, the thread acquires the lock and returns true. If not, it returns false.
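A small sketch of the no-argument tryLock() described above (the class name and timings are illustrative): the thread that finds the lock busy gives up immediately instead of blocking.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {
    private static final Lock lock = new ReentrantLock();

    private static void doWork() {
        if (lock.tryLock()) { // returns immediately: true if acquired, false otherwise
            try {
                System.out.println(Thread.currentThread().getName() + " got the lock");
                Thread.sleep(1000); // hold the lock for a while
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println(Thread.currentThread().getName() + " skipped the work, lock busy");
        }
    }

    public static void main(String[] args) {
        new Thread(TryLockSketch::doWork, "T1").start();
        new Thread(TryLockSketch::doWork, "T2").start();
    }
}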
Reentrant Lock:
A Reentrant Lock in Java is a type of lock (a class implementing the Lock interface) that allows a thread to acquire the same lock multiple times without causing a deadlock. If a thread already holds the lock, it can re-enter the lock without being blocked.
The ReentrantLock class from the java.util.concurrent.locks package provides this functionality, offering more flexibility than the synchronized keyword, including a try-lock mechanism, timed locking, and multiple condition variables for advanced thread coordination.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class ReentrantExample {
private final Lock lock = new ReentrantLock();
public void outerMethod() {
lock.lock();
try {
System.out.println("Outer method");
innerMethod();
} finally {
lock.unlock();
}
}
public void innerMethod() {
lock.lock();
try {
System.out.println("Inner method");
} finally {
lock.unlock();
}
}
public static void main(String[] args) {
ReentrantExample example = new ReentrantExample();
example.outerMethod();
}
}
In the example above, ReentrantLock allows the same thread to acquire the lock again in innerMethod without blocking, and every lock() call is matched by an unlock() call. Note that intrinsic locks used by the synchronized keyword are also reentrant; the advantage of ReentrantLock is that this reentrancy is explicit and comes with extra features such as tryLock, timed locking, and fairness.
Methods of ReentrantLock
- lock()
Acquires the lock if it is not already held by another thread.
If the lock is held by another thread, the current thread will wait until it can acquire the lock.
- tryLock()
Tries to acquire the lock without waiting. Returns true if the lock is acquired, false otherwise.
This is non-blocking, so the thread will not wait if the lock is unavailable.
- tryLock(long timeout, TimeUnit unit)
Attempts to acquire the lock with a timeout. If the lock isn't available, the thread waits for the specified time before giving up. This is useful when you want to try getting the lock without waiting forever. It lets the thread continue with other tasks if the lock isn't available in time. This approach helps avoid deadlocks and prevents a thread from being stuck waiting for a lock.
Returns true if the lock is acquired within the timeout, false otherwise.
- unlock()
Releases the lock held by the current thread.
Should be called in a finally block to make sure the lock is always released, even if an exception happens.
Read Write Lock
A Read-Write Lock is a concurrency control mechanism that lets multiple threads read shared data at the same time while allowing only one thread to write at a time.
This type of lock, provided by the ReentrantReadWriteLock class in Java, boosts performance in situations with many read operations and few writes.
Multiple readers can get the read lock without blocking each other. However, when a thread needs to write, it must get the write lock, ensuring exclusive access. This prevents data inconsistency and improves read efficiency compared to traditional locks. Write access is blocked during reading to maintain data consistency, and vice versa. This is all managed by the ReentrantReadWriteLock class.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class ReadWriteCounter {
private int count = 0;
private final ReadWriteLock lock = new ReentrantReadWriteLock();
private final Lock readLock = lock.readLock();
private final Lock writeLock = lock.writeLock();
public void increment() {
writeLock.lock();
try {
count++;
Thread.sleep(50);
} catch (InterruptedException e) {
throw new RuntimeException(e);
} finally {
writeLock.unlock();
}
}
public int getCount() {
readLock.lock();
try {
return count;
} finally {
readLock.unlock();
}
}
public static void main(String[] args) throws InterruptedException {
ReadWriteCounter counter = new ReadWriteCounter();
Runnable readTask = new Runnable() {
@Override
public void run() {
for (int i = 0; i < 10; i++) {
System.out.println(Thread.currentThread().getName() + " read: " + counter.getCount());
}
}
};
Runnable writeTask = new Runnable() {
@Override
public void run() {
for (int i = 0; i < 10; i++) {
counter.increment();
System.out.println(Thread.currentThread().getName() + " incremented");
}
}
};
Thread writerThread = new Thread(writeTask);
Thread readerThread1 = new Thread(readTask);
Thread readerThread2 = new Thread(readTask);
writerThread.start();
readerThread1.start();
readerThread2.start();
writerThread.join();
readerThread1.join();
readerThread2.join();
System.out.println("Final count: " + counter.getCount());
}
}
Fairness of Locks
Fairness in the context of locks refers to the order in which threads get a lock. A fair lock makes sure that threads acquire the lock in the order they requested it, preventing any thread from being starved. With a fair lock, if multiple threads are waiting, the thread that has been waiting the longest gets the lock next.
However, fairness can reduce throughput because of the overhead needed to maintain the order. Non-fair locks, on the other hand, let threads "cut in line," which might improve performance but can also cause some threads to wait indefinitely, known as starvation, if others frequently acquire the lock.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class FairnessLockExample {
private final Lock lock = new ReentrantLock(true);
public void accessResource() {
lock.lock();
try {
System.out.println(Thread.currentThread().getName() + " acquired the lock.");
Thread.sleep(1000);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
System.out.println(Thread.currentThread().getName() + " released the lock.");
lock.unlock();
}
}
public static void main(String[] args) {
FairnessLockExample example = new FairnessLockExample();
Runnable task = new Runnable() {
@Override
public void run() {
example.accessResource();
}
};
Thread thread1 = new Thread(task, "Thread 1");
Thread thread2 = new Thread(task, "Thread 2");
Thread thread3 = new Thread(task, "Thread 3");
thread1.start();
thread2.start();
thread3.start();
}
}
When to Use Non-Fair Locks:-
High Throughput Requirements:- Fair locks lower throughput because the order in which threads acquire the lock must be maintained, which means managing a queue of waiting threads and doing extra context switching. With a non-fair lock, the lock can simply be acquired by whichever thread is ready next, which is faster.
Short-Lived Critical Sections:- Here the benefits of fair locks are minimal because threads typically do not have to wait long to acquire the lock. Fairness is less of a concern, and since fair locks can negatively impact throughput, it is better to avoid them in this case.
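As a quick illustration (the class name is illustrative), fairness is chosen through the ReentrantLock constructor, and isFair() reports which mode a lock uses:
import java.util.concurrent.locks.ReentrantLock;

public class LockFairnessChoice {
    // fair: waiting threads acquire the lock in request order (lower throughput)
    private final ReentrantLock fairLock = new ReentrantLock(true);
    // non-fair (default): a ready thread may "cut in line" (higher throughput)
    private final ReentrantLock nonFairLock = new ReentrantLock();

    public static void main(String[] args) {
        LockFairnessChoice c = new LockFairnessChoice();
        System.out.println("fairLock fair? " + c.fairLock.isFair());       // true
        System.out.println("nonFairLock fair? " + c.nonFairLock.isFair()); // false
    }
}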
CountDown Latch
We use a countdown latch when we want a thread, whether it's the main thread or a thread that spawns other threads, to wait until a certain number of threads have finished their execution.
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class Test {
public static void main(String[] args) throws InterruptedException {
int n = 3;
ExecutorService executorService = Executors.newFixedThreadPool(n);
// we define the latch with the number of threads it must wait for.
CountDownLatch latch = new CountDownLatch(n);
executorService.submit(new DependentService(latch));
executorService.submit(new DependentService(latch));
executorService.submit(new DependentService(latch));
//main thread will wait until all three threads finish their execution.
latch.await();
System.out.println("Main");
executorService.shutdown();
}
}
class DependentService implements Callable<String> {
private final CountDownLatch latch;
public DependentService(CountDownLatch latch) {
this.latch = latch;
}
@Override
public String call() throws Exception {
try {
System.out.println(Thread.currentThread().getName() + " service started.");
Thread.sleep(2000);
} finally {
// when a thread finishes its execution, the latch count is decreased by 1.
latch.countDown();
}
return "ok";
}
}
Output: -
pool-1-thread-3 service started.
pool-1-thread-2 service started.
pool-1-thread-1 service started.
Main
Cyclic Barrier
We use a cyclic barrier when we want our threads to wait for each other to reach a specific point in their execution, called the barrier point, before proceeding. This is useful in situations where multiple threads need to perform a task in stages, and all threads must finish their current stage before starting the next one.
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
public class Main {
public static void main(String[] args) {
int numberOfSubsystems = 4;
// The first parameter specifies the number of threads that must wait at the barrier point,
// and the second parameter is an action that is executed once all threads reach the barrier point.
CyclicBarrier barrier = new CyclicBarrier(numberOfSubsystems, new Runnable() {
@Override
public void run() {
System.out.println("All subsystems are up and running. System startup complete.");
}
});
// threads that will wait at the cyclic barrier
Thread webServerThread = new Thread(new Subsystem("Web Server", 2000, barrier));
Thread databaseThread = new Thread(new Subsystem("Database", 4000, barrier));
Thread cacheThread = new Thread(new Subsystem("Cache", 3000, barrier));
Thread messagingServiceThread = new Thread(new Subsystem("Messaging Service", 3500, barrier));
webServerThread.start();
databaseThread.start();
cacheThread.start();
messagingServiceThread.start();
}
}
class Subsystem implements Runnable {
private String name;
private int initializationTime;
private CyclicBarrier barrier;
public Subsystem(String name, int initializationTime, CyclicBarrier barrier) {
this.name = name;
this.initializationTime = initializationTime;
this.barrier = barrier;
}
@Override
public void run() {
try {
System.out.println(name + " initialization started.");
Thread.sleep(initializationTime); // Simulate time taken to initialize
System.out.println(name + " initialization complete.");
// await() marks the barrier point.
// when all the threads specified for the cyclic barrier reach this point, execution proceeds further.
barrier.await();
} catch (InterruptedException | BrokenBarrierException e) {
e.printStackTrace();
}
}
}