Processes in Operating Systems (OS) with Examples

  • A process is a fundamental concept in operating systems.
  • When a program is loaded into memory and starts running, it becomes a process.
  • Each process has its own memory space, resources, and execution state.
  • Processes are independent and isolated from one another, which prevents one process from interfering with another.

Components of a Process

Processes in a computer system consist of different components that handle various aspects of program execution.
Understanding these components—stack, heap, text, and data—helps us grasp how a process operates.

Stack

  • The stack is like a temporary scratchpad used by a process.
  • It stores function calls, local variables, and execution context.
  • When a function is called, a new stack frame is added, and when the function ends, its frame is removed.
  • Think of the stack as a pile of plates. Each time you add a plate (function call), it goes on top.
  • When you finish with a plate (function ends), you take it off the top.
  • This last-in, first-out (LIFO) structure mirrors how the stack operates in a process.
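
To make this concrete, here is a minimal C sketch (the function and variable names are made up for illustration): each call to square() pushes a new stack frame holding its parameter and local variable, and that frame is popped when the call returns.

    #include <stdio.h>

    /* Each call to square() pushes a stack frame holding its parameter
       and local variable; the frame is popped when the function returns. */
    int square(int x) {
        int result = x * x;        /* local variable lives in this frame */
        return result;
    }

    int main(void) {
        int a = 4;                      /* local variable in main's frame */
        printf("%d\n", square(a));      /* new frame pushed, then popped  */
        return 0;
    }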

Heap

  • The heap is a dynamic storage area where a process can allocate memory during runtime.
  • It's like an expandable workspace where a process can request and release memory as needed.
  • Consider the heap as a desk with papers. A process can ask for more papers (memory) when it needs them and return them when done.
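
As a rough illustration, the C snippet below (the allocation size is arbitrary) requests memory from the heap at runtime with malloc() and hands it back with free():

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Request a block of heap memory at runtime ("ask for more papers"). */
        int *numbers = malloc(10 * sizeof *numbers);
        if (numbers == NULL)
            return 1;                 /* allocation can fail */

        for (int i = 0; i < 10; i++)
            numbers[i] = i * i;
        printf("%d\n", numbers[9]);

        free(numbers);                /* return the memory when done */
        return 0;
    }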

Text

The text section contains the actual program instructions or code of the process. It's a read-only area where the executable code is stored.

Data

  • The data section holds global and static variables used by the program. It includes initialized and uninitialized data.
  • Because global and static variables live here, different parts of the program can read and update them to share state throughout the process's execution.
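
A small C sketch of where such variables typically end up (the names are illustrative only):

    #include <stdio.h>

    int counter = 100;        /* initialized global: data section */
    static int total;         /* uninitialized static: zero-filled data (BSS) */

    /* The compiled instructions of bump() and main() live in the
       read-only text section. */
    void bump(void) {
        counter++;            /* any function can update the shared global */
        total += counter;
    }

    int main(void) {
        bump();
        printf("counter=%d total=%d\n", counter, total);
        return 0;
    }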

Process vs Program

  • A program is a set of instructions written in a programming language.
  • When a program is loaded into memory and starts running, it becomes a process.
  • Each process has its own memory space, resources, and execution state.

Process Life Cycle

Processes go through several states during their execution:
  • New: The process is being created.
  • Ready: The process is ready to execute but is waiting for CPU time.
  • Running: The process is actively executing on the CPU.
  • Blocked: The process is waiting for an event, such as I/O completion.
  • Terminated: The process has finished its execution.

Schedulers in OS

  • In operating systems, schedulers are responsible for determining the order in which processes are executed by the CPU.
  • The three main types of schedulers are:

Long-Term Scheduler (Job Scheduler)

  • The long-term scheduler selects processes from the job pool (a pool of processes waiting to be brought into memory) and loads them into the main memory for execution.
  • Its primary goal is to control the degree of multiprogramming, balancing the number of processes in the ready queue.
  • Example: Consider a system where new processes arrive from user submissions; the long-term scheduler decides which of them to bring into memory based on factors such as priority and current system load.

Short-Term Scheduler (CPU Scheduler)

  • The short-term scheduler selects processes from the ready queue and allocates the CPU to them for a short time.
  • It is responsible for determining the order in which ready processes are executed, providing a fair and efficient utilization of the CPU.
  • Example: In a system with multiple processes in the ready queue, the short-term scheduler decides which process to run next based on priority, time quantum (for time-sharing systems), or other scheduling algorithms.

Medium-Term Scheduler

  • The medium-term scheduler decides when to swap processes in and out of the main memory.
  • It is responsible for managing processes in the "waiting" or "blocked" state, moving them out of the main memory to secondary storage when necessary and bringing them back when their execution can continue.

What is Process Scheduling?

  • Process scheduling is a critical component of operating systems that determines which process to execute next on the CPU.
  • The goal is to efficiently utilize the CPU while providing fair and responsive execution to all processes.

Scheduling Algorithms

  • First-Come, First-Served (FCFS)
  • Shortest Job First (SJF)
  • Round Robin
  • Priority Scheduling
  • Multilevel Queue

1. First-Come, First-Served (FCFS)

The processes are executed in the order they arrive, without considering their execution time.
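
For instance, with three hypothetical processes P1, P2, and P3 arriving in that order with burst times of 5 ms, 3 ms, and 1 ms, they start at 0, 5, and 8 ms respectively, so the average waiting time is (0 + 5 + 8) / 3 ≈ 4.3 ms. Notice that the short 1 ms job ends up waiting behind the long one.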

2. Shortest Job First (SJF)

Prioritizes the process with the shortest execution time, reducing waiting times.
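
With the same hypothetical workload, SJF runs P3 (1 ms) first, then P2, then P1, so the waiting times become 0, 1, and 4 ms and the average drops to (0 + 1 + 4) / 3 ≈ 1.7 ms.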

3. Round Robin

Processes are assigned a fixed time slice (quantum) and are executed in a circular manner.
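
Continuing the same hypothetical workload (burst times of 5, 3, and 1 ms) with a 2 ms quantum, this small C sketch shows how the CPU rotates among the ready processes:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical burst times (ms) and a 2 ms time quantum. */
        int remaining[] = {5, 3, 1};
        const char *name[] = {"P1", "P2", "P3"};
        int n = 3, quantum = 2, finished = 0, t = 0;

        while (finished < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;                          /* already done */
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%d..%d: %s runs\n", t, t + slice, name[i]);
                t += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0)
                    finished++;
            }
        }
        return 0;
    }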

4. Priority Scheduling

Each process is assigned a priority, and the scheduler selects the highest-priority process for execution.

5. Multilevel Queue

Processes are divided into multiple queues with different priorities.

Examples

  • In a real-time system, processes related to critical tasks like engine control are assigned higher priority to ensure they get immediate CPU time, while less critical tasks like logging run at lower priority to avoid disrupting critical operations.

Process Creation

Processes are created in various ways, including:
  • Forking
  • Executing
  • Spawning

Forking

  • In Unix-like systems, a process can create a new process using the fork() system call.
  • The new process is a copy of the parent process with a different process ID (PID).
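
A minimal sketch of fork() on a Unix-like system: after the call, both parent and child continue from the same point and are distinguished only by the return value.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();            /* create a copy of this process */

        if (pid < 0) {
            perror("fork");            /* fork can fail */
            return 1;
        } else if (pid == 0) {
            /* Child: fork() returned 0. */
            printf("child  PID=%d parent=%d\n", getpid(), getppid());
        } else {
            /* Parent: fork() returned the child's PID. */
            printf("parent PID=%d child=%d\n", getpid(), pid);
            wait(NULL);                /* wait for the child to finish */
        }
        return 0;
    }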

Executing

After forking, a process can replace its program using the exec() system call to run a different program.
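
Building on the previous sketch, the child below replaces its program image with the ls command via execvp(); the command chosen is only an example.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* Child: replace this process image with the "ls" program. */
            char *argv[] = {"ls", "-l", NULL};
            execvp("ls", argv);
            perror("execvp");          /* reached only if exec fails */
            return 1;
        }
        wait(NULL);                    /* parent waits for the new program */
        return 0;
    }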

Spawning

Some operating systems offer APIs to create processes explicitly, such as the Windows API's CreateProcess().
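
For comparison, a minimal Windows sketch using CreateProcessA (the target program, notepad.exe, is chosen arbitrarily):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        STARTUPINFOA si;
        PROCESS_INFORMATION pi;
        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(si);
        ZeroMemory(&pi, sizeof(pi));

        char cmd[] = "notepad.exe";    /* command-line buffer must be writable */
        if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
            printf("CreateProcess failed (%lu)\n", GetLastError());
            return 1;
        }

        WaitForSingleObject(pi.hProcess, INFINITE);  /* wait for it to exit */
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
        return 0;
    }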

Example 1:

  • Consider a word processing application.
  • When you open the application, it creates a process to manage the user interface.
  • If you open multiple documents simultaneously, each document is handled by a separate process, ensuring that a crash in one document does not affect others.

Example 2:

  • Web browsers like Google Chrome run multiple processes, one for each tab or extension.
  • This isolation ensures that a misbehaving website or extension does not crash the entire browser.

Process Control Block (PCB)

  • A Process Control Block (PCB) is a data structure that contains all the information about a process, including its state, program counter, registers, and more.
  • It is used by the operating system to manage and schedule processes.
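
The exact layout is kernel-specific, but a simplified C sketch of the kind of fields a PCB might hold could look like this (the field names are illustrative and not taken from any particular OS):

    /* Illustrative only: real kernels (e.g. Linux's task_struct) hold many more fields. */
    typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } proc_state;

    struct pcb {
        int           pid;              /* process identifier */
        proc_state    state;            /* current life-cycle state */
        unsigned long program_counter;  /* next instruction to execute */
        unsigned long registers[16];    /* saved CPU registers */
        void         *page_table;       /* memory-management information */
        int           open_files[32];   /* file descriptors / handles */
        int           priority;         /* scheduling information */
        struct pcb   *next;             /* link in a scheduler queue */
    };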

Inter-Process Communication (IPC)

  • Processes often need to communicate with each other.
  • IPC mechanisms like pipes, sockets, and shared memory allow processes to exchange data and synchronize their activities.
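
As one concrete example of IPC, the sketch below uses a Unix pipe so a parent process can send a message (the text is arbitrary) to its child:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1)            /* fd[0] = read end, fd[1] = write end */
            return 1;

        if (fork() == 0) {
            /* Child: read the message sent by the parent. */
            char buf[64] = {0};
            close(fd[1]);
            read(fd[0], buf, sizeof(buf) - 1);
            printf("child received: %s\n", buf);
            close(fd[0]);
            return 0;
        }

        /* Parent: write a message into the pipe, then wait for the child. */
        const char *msg = "hello from parent";
        close(fd[0]);
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }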

Process Termination

When a process completes its execution or is terminated due to an error, the operating system releases the resources associated with it, closes its files, and updates accounting information.

Examples:
  • In a multitasking environment, when a process is waiting for user input, it enters the "Blocked" state, allowing other processes to utilize the CPU. Once the input is available, it transitions back to the "Ready" state.
  • In a server application, multiple processes handle incoming client requests. These processes communicate through sockets, allowing them to share data and coordinate their responses.

When Does Context Switching Happen?

  • Context switching occurs in operating systems when the CPU switches from executing one process to another.
  • This switch happens in various situations, such as when a running process voluntarily gives up the CPU (e.g., due to a system call or waiting for I/O), or when the operating system's scheduler preempts the currently running process to give CPU time to another process.

Attributes or Characteristics of a Process

Program Counter (PC)

  • The program counter is like a "pointer" that tells the CPU which instruction in a program to execute next.
  • In the context of processes, each process has its own program counter.
  • This means that different processes can be running different parts of a program simultaneously without interfering with each other.
Example:
  • Imagine two people reading different books. Each person has their own bookmark (program counter).
  • They can read their books independently, and their bookmarks help them keep track of where they left off.

Registers

  • Registers are small, high-speed storage areas inside the CPU. Each process has its own set of registers.
  • These registers store important data and execution context, such as the values of variables, function call information, and other critical details.
  • By having separate registers, processes can work on their data without mixing it up with other processes.
Example:
  • Think of registers as personal notebooks. In a classroom, each student has their own notebook (registers) where they write down their notes and ideas.
  • This way, they can take notes without sharing a common notebook with others.

Memory Space

  • Processes have their own dedicated memory space.
  • This means that the data and variables used by one process are stored separately from those used by other processes.
  • This separation ensures that one process cannot accidentally or maliciously access or modify the memory of another process, providing data isolation.
Example:
  • Picture a shared kitchen with different cabinets.
  • Each person (process) has their own cabinet (memory space) to store their food.
  • This way, nobody can take food from someone else's cabinet, ensuring that each person's food is safe and isolated.

File Descriptors

  • Processes have separate file descriptors, which are like handles that allow them to access files.
  • These file descriptors ensure that one process's interaction with files does not interfere with another process's access to the same files.
  • It's like having a dedicated key to your own room in a building.
Example:
  • In an office building, each employee has a unique key (file descriptor) to their office (file).
  • They can access their office without needing someone else's key. This way, privacy and security are maintained.
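
In POSIX terms, a file descriptor is simply a small integer returned by open(); the file name below is hypothetical:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* Each process has its own descriptor table; the integer 'fd'
           is only meaningful inside this process. */
        int fd = open("notes.txt", O_RDONLY);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        char buf[128];
        ssize_t n = read(fd, buf, sizeof(buf));
        printf("read %zd bytes via descriptor %d\n", n, fd);

        close(fd);                     /* release the descriptor */
        return 0;
    }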

Status Information

  • Each process maintains information about its current state, such as whether it's running, waiting, or ready to execute.
  • This status information helps the operating system manage and schedule processes effectively.

Conclusion

Processes, with their distinct components and management, are essential for efficient program execution and resource optimization in operating systems.