Quick Study Points on Operating System - Process Management and Scheduling
Process management and scheduling are fundamental operating system concepts: the OS creates, tracks, and coordinates processes, and decides which process gets the CPU next so that system resources are used efficiently.
Key Points:
Process Management:
a. Definition and Components:
Process: A program in execution; an instance of a program.
Process Control Block (PCB): A data structure that stores information about a process, such as the process ID, program counter, registers, and resource usage.
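As a rough illustration, a PCB can be pictured as a plain struct. The sketch below is a simplified, hypothetical layout (the field names and sizes are assumptions chosen for clarity, not any real kernel's definition):

```c
/* Simplified, hypothetical Process Control Block (not a real kernel's layout). */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

typedef struct pcb {
    int32_t      pid;             /* process ID                          */
    proc_state_t state;           /* current process state               */
    uint64_t     program_counter; /* address of the next instruction     */
    uint64_t     registers[16];   /* saved general-purpose registers     */
    uint64_t     cpu_time_used;   /* accounting: CPU time consumed       */
    int32_t      open_files[16];  /* resource usage: open file handles   */
    struct pcb  *next;            /* link for the ready/blocked queues   */
} pcb_t;
```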
b. Process States:
New: The process is being created.
Ready: The process is waiting to be assigned to a processor.
Running: The process is currently being executed.
Blocked: The process is waiting for an event or resource.
Terminated: The process has finished its execution (a minimal lifecycle sketch follows this list).
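Here is a minimal POSIX sketch of that lifecycle, assuming a Unix-like system: the child process is created, becomes ready and runs, blocks briefly on a timer, and then terminates, while the parent blocks in waitpid() until the termination is reported.

```c
/* Minimal POSIX sketch of the process lifecycle. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                     /* create a new process (new)      */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                         /* child: ready -> running         */
        sleep(1);                           /* blocked while waiting on a timer */
        printf("child %d terminating\n", (int)getpid());
        _exit(0);                           /* terminated                      */
    }
    int status;
    waitpid(pid, &status, 0);               /* parent blocks until child exits */
    printf("parent observed child %d exit\n", (int)pid);
    return 0;
}
```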
c. Process Synchronization:
Ensuring coordination and communication among processes to avoid conflicts and data inconsistency.
Techniques include semaphores, mutexes, and monitors.
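As a small illustration of one of these techniques, the sketch below uses a POSIX mutex to make a shared counter update atomic. It uses threads for brevity; the same idea applies to cooperating processes (for example, with POSIX semaphores). Without the lock, the two read-modify-write sequences could interleave and increments would be lost.

```c
/* Minimal mutual-exclusion sketch with POSIX threads.
 * Build with: cc sync_demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* enter the critical section  */
        counter++;                      /* protected shared update     */
        pthread_mutex_unlock(&lock);    /* leave the critical section  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```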
Process Scheduling:
a. Purpose:
Allocating system resources (CPU time) among processes efficiently and fairly: minimizing response time, maximizing throughput, and ensuring no process is starved of the CPU.
b. Scheduling Criteria:
CPU Utilization: Maximizing CPU usage to keep the system busy.
Throughput: Maximizing the number of processes completed per unit of time.
Turnaround Time: Minimizing the total time from a process's submission to its completion.
Waiting Time: Minimizing the time a process spends in the ready queue (both metrics are computed in the small sketch after this list).
Response Time: Minimizing the time taken to respond to a user's request.
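For concreteness: turnaround time is completion time minus arrival time, and waiting time is turnaround time minus CPU burst time. The sketch below simply applies those two formulas to made-up numbers (the arrival, burst, and completion values are purely illustrative):

```c
/* Tiny sketch computing turnaround and waiting time for made-up processes.
 * turnaround = completion - arrival;  waiting = turnaround - burst. */
#include <stdio.h>

struct proc { int arrival, burst, completion; };

int main(void) {
    /* hypothetical values, as if produced by some schedule */
    struct proc p[3] = { {0, 5, 5}, {1, 3, 8}, {2, 8, 16} };
    double total_tat = 0, total_wait = 0;
    for (int i = 0; i < 3; i++) {
        int tat  = p[i].completion - p[i].arrival;
        int wait = tat - p[i].burst;
        total_tat  += tat;
        total_wait += wait;
        printf("P%d: turnaround=%d waiting=%d\n", i + 1, tat, wait);
    }
    printf("avg turnaround=%.2f avg waiting=%.2f\n", total_tat / 3, total_wait / 3);
    return 0;
}
```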
c. Scheduling Algorithms:
First-Come, First-Served (FCFS): Processes are executed in the order they arrive.
Shortest Job Next (SJN): The process with the shortest burst time is executed next.
Round Robin (RR): Each process runs for at most a fixed time quantum before the CPU is passed to the next process in the ready queue, which keeps execution fair (see the sketch after this list).
Priority Scheduling: Processes are assigned priority levels, and the highest priority process is executed next.
Multilevel Queue Scheduling: Processes are assigned to separate queues (for example, by priority or process type), and each queue can use its own scheduling algorithm.
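As a concrete example of one of these algorithms, here is a minimal Round Robin sketch. It assumes all processes arrive at time 0 and uses a hypothetical quantum of 2 time units; a real scheduler maintains a proper ready queue and handles arrivals and preemption, so treat this only as an illustration of how the quantum bounds each turn on the CPU.

```c
/* Minimal Round Robin sketch: all processes arrive at t=0, quantum = 2.
 * Each pass gives every unfinished process at most one quantum of CPU. */
#include <stdio.h>

#define NPROC   3
#define QUANTUM 2

int main(void) {
    int burst[NPROC]      = {5, 3, 8};  /* hypothetical CPU bursts */
    int remaining[NPROC]  = {5, 3, 8};
    int completion[NPROC] = {0};
    int time = 0, done = 0;

    while (done < NPROC) {
        for (int i = 0; i < NPROC; i++) {
            if (remaining[i] == 0) continue;            /* already finished    */
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;                              /* run for one slice   */
            remaining[i] -= slice;
            if (remaining[i] == 0) {                    /* process terminates  */
                completion[i] = time;
                done++;
            }
        }
    }
    for (int i = 0; i < NPROC; i++)
        printf("P%d: burst=%d completion=%d turnaround=%d\n",
               i + 1, burst[i], completion[i], completion[i]);
    return 0;
}
```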
d. Scheduling Policies:
Pre-emptive Scheduling: Processes can be interrupted and rescheduled before completing their execution.
Non-Pre-emptive Scheduling: Once a process is given the CPU, it runs until it completes or voluntarily releases the CPU (for example, by blocking on I/O).
e. Scheduling in Multiprocessor Systems:
Load Balancing: Distributing processes evenly across multiple processors so that system resources are used efficiently (a small sketch follows this list).
Symmetric Multiprocessing (SMP): Each processor performs scheduling independently.
Asymmetric Multiprocessing (AMP): A master processor handles the task of process scheduling for the other processors.
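One simple load-balancing policy is to place each incoming process on the currently least-loaded processor. The toy sketch below assumes "load" is just the number of processes queued on each CPU; real systems also weigh factors such as processor affinity and the cost of migrating a process.

```c
/* Toy load-balancing sketch: assign each incoming process to the CPU with
 * the fewest queued processes ("least loaded"). Load here is just a count. */
#include <stdio.h>

#define NCPU 4

static int pick_least_loaded(const int load[NCPU]) {
    int best = 0;
    for (int i = 1; i < NCPU; i++)
        if (load[i] < load[best]) best = i;
    return best;
}

int main(void) {
    int load[NCPU] = {0};
    for (int pid = 1; pid <= 10; pid++) {   /* 10 hypothetical processes */
        int cpu = pick_least_loaded(load);
        load[cpu]++;
        printf("process %d -> CPU %d\n", pid, cpu);
    }
    for (int i = 0; i < NCPU; i++)
        printf("CPU %d load: %d\n", i, load[i]);
    return 0;
}
```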
Written by Code Fortress
I am an IT student passionate about exploring the dynamic world of technology.