CPU Scheduling in Operating Systems (OS) with Examples
What is CPU Scheduling?
- CPU scheduling is a crucial aspect of modern operating systems.
- CPU scheduling involves the management of processes in a system and determines which process gets access to the central processing unit (CPU) at any given time.
Prerequisites: Processes in Operating Systems
What are Processes and Threads in an OS?
- Processes are independent program units that run concurrently in a computer system.
- Processes can contain multiple execution units called threads that share the same memory and resources but execute different parts of the code.
- CPU scheduling primarily deals with processes and their threads.
What is Context Switching?
- A context switch is the process of saving the state of one process or thread and loading the saved state of another.
- This is necessary when a running process is interrupted and must yield the CPU to another process.
- Context switching introduces overhead and must be minimized for efficient CPU scheduling.
Let's understand this with an example:
- Imagine you're using a computer with a web browser and an email application open.
- Web Browsing: You're reading a webpage, and the CPU is dedicated to the web browser.
- Email Interruption: You receive a new email, interrupting the web browser's execution.
- Context Switch: The operating system saves the web browser's state and loads the email application.
- Email Response: You read and respond to the email, with the CPU now dedicated to the email application.
- Returning to Web Browsing: You go back to the web browser, triggering another context switch.
- Smooth Transition: The web browser's state is loaded, and you continue reading the webpage seamlessly.
- Context switching is the behind-the-scenes mechanism that allows your computer to transition smoothly between different tasks, as the sketch below illustrates.
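To make the idea concrete, here is a minimal Python sketch of what a context switch conceptually does: the state of the running process (its program counter and registers) is saved, and the saved state of the next process is loaded. The CPUState, Process, and context_switch names are illustrative only; a real kernel does this in low-level, architecture-specific code.

```python
from dataclasses import dataclass, field

@dataclass
class CPUState:
    """Illustrative snapshot of the state the OS saves on a context switch."""
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

@dataclass
class Process:
    name: str
    saved_state: CPUState = field(default_factory=CPUState)

def context_switch(current: Process, next_proc: Process, cpu: CPUState) -> CPUState:
    """Save the running process's CPU state, then load the next process's state."""
    current.saved_state = CPUState(cpu.program_counter, dict(cpu.registers))
    restored = next_proc.saved_state
    return CPUState(restored.program_counter, dict(restored.registers))

# The browser is running; an email arrives, so the OS switches to the mail client.
browser = Process("web_browser")
email = Process("email_client", CPUState(program_counter=120))
cpu = CPUState(program_counter=500, registers={"eax": 42})

cpu = context_switch(browser, email, cpu)   # browser state saved, email state loaded
print(cpu.program_counter)                  # 120 -> the email client resumes where it left off
```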
Types of CPU Scheduling
There are two main types of CPU scheduling:
- Preemptive
- Non-Preemptive
Preemptive Scheduling
- In preemptive scheduling, the operating system has the capability to interrupt a currently running process and temporarily suspend its execution to start or resume another process.
- The decision to switch between processes can occur at any time, typically based on factors like priority or a predefined time quantum.
Key Characteristics:
- Allows for dynamic prioritization based on system conditions.
- Frequent context switches between processes.
- Responsive to changes in process states.
Non-Preemptive Scheduling
- In non-preemptive scheduling, a running process cannot be interrupted until it completes its execution or enters a waiting state voluntarily.
- The operating system makes scheduling decisions only when a process terminates or moves to the waiting state.
Key Characteristics:
- Simpler to implement and manage.
- A running process keeps the CPU for its entire burst, so execution runs are generally longer and uninterrupted.
- Less frequent context switches compared to preemptive scheduling.
Preemptive vs Non-Preemptive
Context Switching:
- Preemptive: Involves frequent context switches, as processes can be interrupted at any time.
- Non-Preemptive: Context switches occur only when a process voluntarily gives up the CPU or completes its execution.
Responsiveness:
- Preemptive: Highly responsive to changes in system conditions or priorities.
- Non-Preemptive: Less responsive during the execution of a process, as decisions are made only at certain points.
Complexity:
- Preemptive: More complex due to the need for real-time decision-making during process execution.
- Non-Preemptive: Simpler, as scheduling decisions are made less frequently.
Examples:
- Preemptive: Round Robin with time slicing, Priority Scheduling with preemption.
- Non-Preemptive: FCFS (First-Come, First-Served), SJF (Shortest Job First), Priority Scheduling without preemption.
CPU Scheduling Criteria
- CPU Utilization
- Throughput
- Turnaround Time
- Waiting Time
- Response Time
CPU Utilization
- One of the primary objectives of CPU scheduling is to maximize CPU utilization.
- This means keeping the CPU busy as much as possible to ensure that it is not idle.
- Higher CPU utilization indicates efficient resource utilization.
Throughput
- Throughput refers to the number of processes completed in a unit of time.
- Efficient scheduling should aim to maximize throughput by completing more processes in less time.
Turnaround Time
- Turnaround time is the total time from when a process enters the system until it finishes execution, i.e., its completion time minus its arrival time.
- Lower turnaround times are desirable as they indicate faster process completion.
Waiting Time
- Waiting time is the cumulative amount of time a process spends in the ready queue waiting for the CPU, i.e., its turnaround time minus its CPU burst time.
- Reducing waiting time enhances overall system performance.
Response Time
- Response time is the time it takes for a system to respond to an external event or a user request.
- Quick response times lead to a more interactive and responsive system.
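These criteria reduce to simple formulas. The sketch below uses hypothetical timings for a single process to show how turnaround, waiting, and response time are typically derived from its arrival, first-run, burst, and completion times.

```python
# Hypothetical process timings (all in milliseconds).
arrival_time    = 0    # process enters the ready queue
first_run_time  = 4    # process first gets the CPU
burst_time      = 6    # total CPU time the process needs
completion_time = 14   # process finishes

turnaround_time = completion_time - arrival_time      # 14
waiting_time    = turnaround_time - burst_time        # 8
response_time   = first_run_time - arrival_time       # 4

print(turnaround_time, waiting_time, response_time)   # 14 8 4
```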
What is the Need for CPU Scheduling Algorithm?
- In a computer system, multiple processes compete for the CPU's attention. A CPU scheduling algorithm is needed to:
- Maximize CPU Utilization: Without scheduling, a powerful CPU might be underutilized if processes are not efficiently managed.
- Optimize Throughput: By efficiently managing the order of process execution, the system can complete more tasks in less time.
- Minimize Waiting Time: Without scheduling, processes might wait longer in the queue, leading to delays. Scheduling minimizes waiting times.
- Quick Response Time: In interactive systems, scheduling ensures quick responses to user inputs, making the system more responsive.
CPU Scheduling Algorithms
First-Come, First-Served (FCFS)
- In the First-Come, First-Served (FCFS) scheduling algorithm, processes are executed in the order they arrive in the ready queue.
- It's like waiting in a queue, and the first process to enter the queue is the first to be served.
- However, FCFS doesn't consider the execution time of processes.
Example:
- Think of a queue at a ticket counter.
- The person who arrives first gets the ticket first, regardless of how long the ticketing process will take.
- This can lead to situations where a very long task sits ahead of shorter ones, delaying all of them (often called the convoy effect), as the sketch below shows.
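To make the ticket-counter analogy concrete, here is a small Python sketch of FCFS: jobs are served strictly in arrival order, and a long job that arrives first delays everything behind it. The fcfs function and the sample job list are illustrative, not part of any real scheduler.

```python
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), assumed sorted by arrival."""
    time, schedule = 0, []
    for name, arrival, burst in processes:
        start = max(time, arrival)          # CPU may sit idle until the process arrives
        finish = start + burst
        schedule.append((name, start, finish, finish - arrival))  # last field: turnaround
        time = finish
    return schedule

# One long job arriving first delays everyone behind it (the "ticket counter" effect).
jobs = [("P1", 0, 10), ("P2", 1, 2), ("P3", 2, 2)]
for name, start, finish, turnaround in fcfs(jobs):
    print(f"{name}: start={start} finish={finish} turnaround={turnaround}")
```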
Shortest Job First (SJF)
- The Shortest Job Next (SJN) or Shortest Job First (SJF) scheduling algorithm prioritizes the process with the shortest execution time.
- This minimizes waiting times and is based on the assumption that shorter tasks should be completed first to improve overall efficiency.
Example:
- Imagine a fast-food restaurant.
- They prioritize preparing and serving smaller, quicker-to-cook orders (shortest jobs) ahead of larger, time-consuming orders (longest jobs) to reduce waiting times for customers.
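The following is a minimal, non-preemptive SJF sketch in Python: among the jobs that have already arrived, the one with the shortest burst runs next. The sjf function and the example timings are hypothetical.

```python
def sjf(processes):
    """Non-preemptive SJF: of the jobs that have arrived, run the shortest burst next.
    processes: list of (name, arrival_time, burst_time)."""
    remaining = sorted(processes, key=lambda p: p[1])  # sort by arrival time
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # nothing has arrived yet: jump to next arrival
            time = remaining[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job first
        remaining.remove((name, arrival, burst))
        time += burst
        order.append((name, time - arrival))                   # name, turnaround time
    return order

print(sjf([("P1", 0, 7), ("P2", 1, 5), ("P3", 2, 2)]))
# [('P1', 7), ('P3', 7), ('P2', 13)] -> P3's short job is served before the longer P2,
# even though P2 arrived earlier.
```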
Round Robin
- In Round Robin scheduling, processes are assigned a fixed time slice (quantum) in a circular manner.
- Each process gets a turn to execute for a fixed time, and then the CPU switches to the next process in the queue.
- This ensures that all processes get a fair share of the CPU.
Example:
- Think of a round-robin discussion in a group meeting.
- Each person gets a fixed time to speak, and when their turn is up, the discussion moves to the next person.
- This way, everyone has an equal opportunity to express their ideas.
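Here is a simplified Round Robin simulation in Python, assuming all processes arrive at time 0 and ignoring context-switch overhead. The round_robin function and the sample bursts are illustrative.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time), all assumed to arrive at time 0."""
    queue = deque(processes)
    time, completions = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)       # run for at most one time slice
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # not done: back to the end of the queue
        else:
            completions.append((name, time))       # done: record completion time
    return completions

print(round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=2))
# [('P2', 9), ('P1', 12), ('P3', 16)] -> each process gets a 2-unit turn in rotation
```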
Priority Scheduling
- Priority Scheduling assigns a priority to each process, and the scheduler selects the highest-priority process for execution.
- This allows important or time-sensitive tasks to be handled before lower-priority tasks.
Example:
- In a hospital emergency room, patients are prioritized based on the severity of their condition.
- Those with life-threatening issues (high priority) are treated before patients with less urgent needs (lower priority).
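Below is a minimal sketch of non-preemptive priority scheduling, using a min-heap keyed on priority (lower number = higher priority, a common but not universal convention). The priority_schedule function and the job list are hypothetical.

```python
import heapq

def priority_schedule(processes):
    """Non-preemptive priority scheduling for jobs all ready at time 0.
    processes: list of (priority, name, burst_time); lower number = higher priority."""
    heap = list(processes)
    heapq.heapify(heap)                     # min-heap ordered by priority
    time, order = 0, []
    while heap:
        priority, name, burst = heapq.heappop(heap)   # highest-priority job next
        time += burst
        order.append((name, priority, time))          # name, priority, completion time
    return order

# The critical job runs first even though it was listed last.
jobs = [(3, "logging", 4), (2, "report", 6), (1, "critical-alert", 2)]
print(priority_schedule(jobs))
# [('critical-alert', 1, 2), ('report', 2, 8), ('logging', 3, 12)]
```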
Multilevel Queue
- In the Multilevel Queue scheduling algorithm, processes are divided into multiple queues, each with different priorities.
- Each queue can have its own scheduling algorithm.
- This approach is used to manage processes with varying importance or characteristics effectively.
Example:
- Consider a customer support center with multiple teams.
- High-priority support requests (e.g., system outages) are handled by the "Urgent" team, while less critical issues (e.g., user questions) go to the "General" team.
- Each team may use different strategies to handle their workload.
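The sketch below models a two-level queue in Python: an "urgent" queue that is always served first (FCFS here) and a "general" queue served with Round Robin once the urgent queue is empty. The queue names, the per-queue policies, and the quantum are illustrative assumptions; real multilevel queue schedulers vary.

```python
from collections import deque

def multilevel_queue(urgent, general, quantum=2):
    """Two fixed-priority queues: the 'urgent' queue is fully drained before the
    'general' queue runs. Each queue uses its own policy: FCFS for urgent work,
    Round Robin for general work (an illustrative choice)."""
    time, log = 0, []
    for name, burst in urgent:              # FCFS for the urgent queue
        time += burst
        log.append((name, "urgent", time))
    rr = deque(general)                     # Round Robin for the general queue
    while rr:
        name, remaining = rr.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            rr.append((name, remaining - run))
        else:
            log.append((name, "general", time))
    return log

print(multilevel_queue(urgent=[("outage-fix", 3)],
                       general=[("user-question", 4), ("doc-update", 2)]))
# [('outage-fix', 'urgent', 3), ('doc-update', 'general', 7), ('user-question', 'general', 9)]
```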
Conclusion
CPU scheduling is a critical component of operating systems, aiming to optimize various criteria like CPU utilization, throughput, turnaround time, waiting time, and response time.