Process Scheduling
Process
When we execute a program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into four
sections: stack, heap, text and data.
The following image shows a simplified layout of a process inside main memory −
Program
When we compare a program with a process, we can conclude that a process is a dynamic
instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm.
A collection of computer programs, libraries and related data is referred to as software.
In general, a process can have one of the following five states at a time.
1 Start
This is the initial state when a process is first started or created.
2 Ready
Ready processes are waiting to have the processor allocated to them by the operating
system so that they can run.
A process may come into this state after the Start state, or while running, if it is interrupted
by the scheduler so that the CPU can be assigned to some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.
4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting
for user input, or waiting for a file to become available.
5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
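As a purely illustrative sketch (the enum and function names below are assumptions for this example, not any real OS's API), the five states and a typical life cycle can be written in C as:

    #include <stdio.h>

    /* Illustrative process states; real kernels use their own names and flags. */
    enum proc_state { STATE_START, STATE_READY, STATE_RUNNING,
                      STATE_WAITING, STATE_TERMINATED };

    static const char *state_name(enum proc_state s) {
        switch (s) {
        case STATE_START:      return "Start";
        case STATE_READY:      return "Ready";
        case STATE_RUNNING:    return "Running";
        case STATE_WAITING:    return "Waiting";
        case STATE_TERMINATED: return "Terminated";
        }
        return "Unknown";
    }

    int main(void) {
        /* One typical life cycle: created, scheduled, blocks on I/O, rescheduled, exits. */
        enum proc_state path[] = { STATE_START, STATE_READY, STATE_RUNNING, STATE_WAITING,
                                   STATE_READY, STATE_RUNNING, STATE_TERMINATED };
        int n = sizeof path / sizeof path[0];
        for (int i = 0; i < n; i++)
            printf("%s%s", state_name(path[i]), i < n - 1 ? " -> " : "\n");
        return 0;
    }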
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every
process.
1 Process State
The current state of the process, i.e., whether it is ready, running, waiting, etc.
2 Process privileges
These are required to allow or disallow access to system resources.
3 Process ID
A unique identification for each process in the operating system.
4 Pointer
A pointer to the parent process.
5 Program Counter
The Program Counter is a pointer to the address of the next instruction to be executed for
this process.
6 CPU registers
The various CPU registers whose contents must be saved for the process when it is in the
running state, so that it can later resume execution correctly.
7 CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the
process.
8 Memory management information
This includes information about the page table, memory limits, and segment table, depending
on the memory management scheme used by the operating system.
9 Accounting information
This includes the amount of CPU time used for process execution, time limits, execution ID,
etc.
10 I/O status information
This includes a list of I/O devices allocated to the process.
The structure of the PCB is entirely dependent on the operating system and may contain
different information in different operating systems.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
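As a rough sketch only (these field names are assumptions chosen to mirror the list above; every real OS defines its own layout, e.g. Linux's task_struct), a PCB might be modelled in C as:

    #include <stdint.h>

    /* Illustrative PCB; fields mirror the list above, not any real kernel structure. */
    enum proc_state { STATE_START, STATE_READY, STATE_RUNNING,
                      STATE_WAITING, STATE_TERMINATED };

    struct pcb {
        int             pid;               /* 3: process ID                           */
        enum proc_state state;             /* 1: current process state                */
        int             privileges;        /* 2: allowed access to system resources   */
        struct pcb     *parent;            /* 4: pointer to the parent process        */
        uintptr_t       program_counter;   /* 5: address of the next instruction      */
        uintptr_t       registers[16];     /* 6: saved CPU register contents          */
        int             priority;          /* 7: CPU scheduling information           */
        uintptr_t       page_table_base;   /* 8: memory management information        */
        uint64_t        cpu_time_used;     /* 9: accounting information               */
        int             open_files[16];    /* 10: I/O status information              */
        struct pcb     *next;              /* link used by the scheduling queues      */
    };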
Process Scheduling
Process scheduling is the act of determining which process in the ready state should be moved
to the running state.
The main aim of the process scheduling system is to keep the CPU busy all the time and
to deliver minimum response time for all programs.
The scheduler must apply appropriate rules for swapping processes in and out of the CPU.
Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into executable memory
at a time, and the loaded processes share the CPU using time multiplexing.
Time-division multiplexing (TDM) is a method of transmitting and receiving
independent signals over a common signal path by means of synchronized switches at
each end of the transmission line, so that each signal appears on the line for only a fraction
of the time, in an alternating pattern.
Pre-emptive Scheduling:
When the operating system decides to favour another process, pre-empting the
currently executing process.
Under pre-emptive scheduling, a process switches from the running state to the ready state,
or from the waiting state to the ready state.
In pre-emptive scheduling, a running process can be interrupted and another process can be
scheduled in its place at any time.
Non-preemptive Scheduling:
This is when the currently executing process gives up the CPU voluntarily.
Under non-preemptive scheduling, a process leaves the CPU only when it terminates or
switches from the running state to the waiting state.
In non-preemptive scheduling, no other process can be scheduled until the running process
releases the CPU.
Scheduling Queues
The OS maintains a separate queue for each of the process states and PCBs of all processes in
the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current queue and moved to
its new state queue.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
Device queues − Processes waiting for a device to become available are placed in this
queue. There are unique device queues available for each I/O device.
A new process is initially put in the Ready queue. It waits in the ready queue until it is selected
for execution (or dispatched).
Once the process is assigned to the CPU and is executing, one of the following several events can
occur:
The process could issue an I/O request, and then be placed in the I/O queue.
The process could create a new subprocess and wait for its termination.
The process could be removed by force from the CPU, as a result of an interrupt, and be
put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state, and
is then put back in the ready queue.
A process continues this cycle until it terminates, at which time it is removed from all queues and
has its PCB and resources deallocated.
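A minimal sketch of this queueing mechanism (the struct and function names are illustrative assumptions, not a real kernel API): PCBs are linked into a FIFO ready queue, and the dispatcher repeatedly dequeues the next process.

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal PCB holding only what this sketch needs. */
    struct pcb {
        int pid;
        struct pcb *next;
    };

    /* A FIFO queue of PCBs, e.g. the ready queue. */
    struct queue {
        struct pcb *head, *tail;
    };

    static void enqueue(struct queue *q, struct pcb *p) {
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    static struct pcb *dequeue(struct queue *q) {
        struct pcb *p = q->head;
        if (p) {
            q->head = p->next;
            if (!q->head) q->tail = NULL;
        }
        return p;                                      /* NULL if the queue is empty */
    }

    int main(void) {
        struct queue ready = { NULL, NULL };
        for (int pid = 1; pid <= 3; pid++) {           /* new processes enter the ready queue */
            struct pcb *p = malloc(sizeof *p);
            p->pid = pid;
            enqueue(&ready, p);
        }
        struct pcb *p;
        while ((p = dequeue(&ready)) != NULL) {        /* dispatcher picks the next process */
            printf("dispatching pid %d\n", p->pid);
            free(p);
        }
        return 0;
    }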
Types of Schedulers
The medium-term scheduler removes processes from memory (and from active contention for the
CPU), and thus reduces the degree of multiprogramming.
Later, the process can be reintroduced into memory and its execution continued where it
left off. This scheme is called swapping.
The process is swapped out, and later swapped in, by the medium-term scheduler.
Swapping may be necessary to improve the process mix, or because a change in memory
requirements has overcommitted the available memory, requiring memory to be freed up.
Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU in
Process Control block so that a process execution can be resumed from the same point at a later
time.
When the scheduler switches the CPU from one process to another, the state of the currently
running process is saved into its PCB. After this, the state of the process to run next is loaded
from its own PCB and used to set the program counter, registers, etc. At that point, the second
process can start executing.
When the process is switched, the following information is stored for later use.
Program Counter
Scheduling information
Base and limit register value
Currently used registers
Changed State
I/O State information
Accounting information
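A minimal sketch of the idea (the helper functions here are stubs standing in for architecture-specific assembly, and the field names are assumptions): the running process's state is copied into its PCB, then the next process's state is loaded from its PCB.

    #include <stdio.h>

    /* Illustrative only: real context switching is architecture-specific kernel code. */
    struct cpu_context {
        unsigned long program_counter;
        unsigned long registers[16];
    };

    struct pcb {
        int pid;
        struct cpu_context context;        /* the saved state lives in the PCB */
    };

    static void save_cpu_state(struct cpu_context *ctx) {
        /* In a real kernel this is assembly that copies the hardware registers into ctx. */
        (void)ctx;
    }

    static void load_cpu_state(const struct cpu_context *ctx) {
        /* In a real kernel this is assembly that restores the hardware registers from ctx. */
        (void)ctx;
    }

    static void context_switch(struct pcb *current, struct pcb *next) {
        save_cpu_state(&current->context); /* 1. store the running process's state in its PCB */
        load_cpu_state(&next->context);    /* 2. load the next process's state from its PCB   */
        /* Execution now continues in 'next' from the point where it previously stopped. */
    }

    int main(void) {
        struct pcb a = { .pid = 1 }, b = { .pid = 2 };
        context_switch(&a, &b);
        printf("switched from pid %d to pid %d\n", a.pid, b.pid);
        return 0;
    }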
Operations on Process
There are two major operations Process Creation and Process Termination.
Process Creation
Through appropriate system calls, such as fork or spawn, processes may create other processes.
The process which creates other process, is termed the parent of the other process, while the
created sub-process is termed its child.
A child process may receive some amount of shared resources with its parent depending on
system implementation.
To prevent runaway children from consuming all of a certain system resource, child processes
may or may not be limited to a subset of the resources originally allocated to the parent.
There are two options for the parent process after creating the child:
1. The parent waits for the child process to terminate before proceeding (e.g. via a wait
system call), as in the C sketch below.
2. The parent continues to execute concurrently with the child.
There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process
2. The child process has a program loaded into it
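A minimal POSIX C sketch (assuming a Unix-like system) of process creation: fork gives the child a duplicate of the parent's address space, the child exits with a status, and the parent chooses option 1 and waits for it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                    /* create a child process                */

        if (pid < 0) {                         /* fork failed                           */
            perror("fork");
            return EXIT_FAILURE;
        } else if (pid == 0) {                 /* child: a duplicate of the parent      */
            printf("child  pid=%d parent=%d\n", getpid(), getppid());
            exit(42);                          /* status passed back to the parent      */
        } else {                               /* parent: wait for the child (option 1) */
            int status;
            waitpid(pid, &status, 0);
            if (WIFEXITED(status))
                printf("parent pid=%d reaped child %d, exit status %d\n",
                       getpid(), pid, WEXITSTATUS(status));
        }
        return 0;
    }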
Process Termination
By making the exit() system call, typically returning an int, processes may request their own
termination.
This int is passed along to the parent if it is doing a wait(), and is typically zero on successful
completion and some non-zero code in the event of any problem.
Processes may also be terminated by the system for a variety of reasons, including:
1. The system cannot deliver the resources the process needs.
2. The process receives a KILL command or another unhandled interrupt or signal.
3. The parent kills the child because its task is no longer needed, or the parent itself exits
and the system does not allow the child to continue without a parent.
When a process ends, all of its system resources are freed up, open files flushed and closed, etc.
The process termination status and execution times are returned to the parent if the parent is
waiting for the child to terminate, or eventually returned to init if the process already became an
orphan.
Processes which have finished executing but whose parent has not yet waited for them are
termed zombies; they cannot be fully removed until their termination status is collected. If the
parent exits without waiting, they are inherited by init, which waits on them so they can finally
be removed.
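A small POSIX C sketch (assuming a Unix-like system) illustrating a zombie: the child exits while the parent delays its wait, so the child lingers as a zombie until it is reaped.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            exit(0);                     /* child terminates immediately              */
        } else if (pid > 0) {
            printf("child %d is now a zombie; check with: ps -o pid,stat,comm\n", pid);
            sleep(10);                   /* parent not waiting: child stays a zombie  */
            waitpid(pid, NULL, 0);       /* reaping the child removes the zombie      */
            printf("child %d reaped\n", pid);
        } else {
            perror("fork");
            return 1;
        }
        return 0;
    }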
Scheduling Criteria
There are many different criteria to consider when judging the "best" scheduling algorithm;
they include:
CPU Utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept busy most
of the time (ideally 100% of the time).
Considering a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily
loaded).
Throughput
It is the total number of processes completed per unit time, or in other words, the total amount
of work done in a unit of time.
This may range from 10/second to 1/hour depending on the specific processes.
Response Time
Amount of time it takes from when a request was submitted until the first response is
produced.
Remember, it is the time until the first response, not the completion of process execution (the
final response).
For proper optimization, CPU utilization and throughput should be maximized, while the other
factors (turnaround time, waiting time and response time) should be minimized.
Burst Time
The amount of CPU time a process requires to complete its execution.
Arrival Time
The time at which a process arrives in the ready queue.
Completion Time
The time at which a process finishes its execution.
Turnaround Time
The interval from the time of submission of the process (arrival time) to the time of completion
of the process (completion time).
Turnaround Time = Completion Time - Arrival Time
Waiting Time
The sum of the periods a process spends waiting in the ready queue to acquire control of the
CPU.
Waiting Time = Turn Around Time - Burst Time
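For example, a process that arrives at time 1, needs a CPU burst of 3 and completes at time 7
(process 2 in Example 1 below) has Turnaround Time = 7 - 1 = 6 and Waiting Time = 6 - 3 = 3.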
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to
get into the CPU.
Non-preemptive algorithms are designed so that once a process enters the running state, it
cannot be preempted until it completes its allotted time, whereas preemptive scheduling is
based on priority: a scheduler may preempt a low-priority running process at any time when a
high-priority process enters the ready state.
First Come First Serve (FCFS) Scheduling
Advantages
Better for long processes
Simple method (i.e., minimum overhead on processor)
No starvation
Disadvantages
Inappropriate for interactive systems
Large fluctuations in average turnaround time are possible.
The convoy effect occurs: even a very small process must wait its turn to use the CPU, and a
short process stuck behind a long process results in lower CPU utilization.
Throughput is not emphasized.
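For instance, in Example 2 below the long process (burst 24) runs first, so the two short
processes wait 24 and 27 time units, giving an average waiting time of (0 + 24 + 27) / 3 = 17;
if the two short processes ran first, the average waiting time would drop to (0 + 3 + 6) / 3 = 3.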
Example 1 (FCFS)
Process   AT   BT   CT   TAT (= CT - AT)   WT (= TAT - BT)
1          0    4    4    4                 0
2          1    3    7    6                 3
3          2    1    8    6                 5
4          3    2   10    7                 5
5          4    5   15   11                 6
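A small C sketch (illustrative; it assumes the processes are already listed in arrival order) that reproduces the completion, turnaround and waiting times in the table above under FCFS:

    #include <stdio.h>

    int main(void) {
        /* Example 1: arrival and burst times, already in arrival order. */
        int at[] = {0, 1, 2, 3, 4};
        int bt[] = {4, 3, 1, 2, 5};
        int n = 5, time = 0;

        printf("P   AT  BT  CT  TAT  WT\n");
        for (int i = 0; i < n; i++) {
            if (time < at[i]) time = at[i];   /* CPU idles until the process arrives */
            time += bt[i];                    /* FCFS: run the process to completion */
            int ct  = time;
            int tat = ct - at[i];             /* turnaround time = CT - AT */
            int wt  = tat - bt[i];            /* waiting time   = TAT - BT */
            printf("%-3d %-3d %-3d %-3d %-4d %d\n", i + 1, at[i], bt[i], ct, tat, wt);
        }
        return 0;
    }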
Example 2 (FCFS, three processes arriving at time 0 with burst times 24, 3 and 3)
Gantt chart: | P1 | P2 | P3 |
Time marks:  0    24   27   30
Average Turn Around Time = (Total Turn Around Time) / (Total number of processes)
= (24 + 27 + 30) / 3 = 81 / 3 = 27 ms
Example 3
PROCESS ARRIVAL TIME BURST TIME
P1 0 80
P2 0 20
P3 0 10
P4 0 20
P5 0 50
Gantt chart: | P1 | P2 | P3 | P4 | P5 |
Time marks:  0    80   100  110  130  180
Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
= (80 + 100 + 110 + 130 + 180) / 5 = 600 / 5 = 120 ms
Preemptive Examples
Consider the following table of arrival time and burst time for three processes P0, P1 and P2.
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at
arrival or completion of processes. What is the average waiting time for the three processes?
Priority Scheduling
Quantum=5
Multilevel Queue Scheduling