
Operating System - Process Scheduling

Process

A process is basically a program in execution.

The execution of a process must progress in a sequential fashion.

When we execute a program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data.

The following table describes a simplified layout of a process inside main memory −

1. Stack
   The process stack contains temporary data such as method/function parameters, return addresses and local variables.

2. Heap
   This is memory that is dynamically allocated to the process during its run time.

3. Text
   This is the compiled program code of the process, i.e. the executable instructions loaded when the program is launched.

4. Data
   This section contains the global and static variables.
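
As a quick illustration of these sections, the short C program below (a sketch added here, not part of the original notes) prints the addresses of a global variable (data), a local variable (stack), a malloc'd buffer (heap) and the main function itself (text).

/* Sketch: where a process's variables live once the program becomes a
 * process. The section names follow the table above. */
#include <stdio.h>
#include <stdlib.h>

int global_counter = 42;          /* data section: global/static variables */

int main(void) {                  /* the code itself lives in the text section */
    int local = 7;                /* stack: parameters, return addresses,   */
                                  /* local variables                        */
    int *buffer = malloc(100 * sizeof *buffer);   /* heap: run-time allocation */

    printf("data:  %p\n", (void *)&global_counter);
    printf("stack: %p\n", (void *)&local);
    printf("heap:  %p\n", (void *)buffer);
    printf("text:  %p\n", (void *)main);

    free(buffer);
    return 0;
}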

Program

A program is a piece of code which may be a single line or millions of lines.

A computer program is usually written by a computer programmer in a programming language.

A computer program is a collection of instructions that performs a specific task when executed by a computer.

When we compare a program with a process, we can conclude that a process is a dynamic
instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm.

A collection of computer programs, libraries and related data is referred to as software.

Process Life Cycle

When a process executes, it passes through different states.

In general, a process can have one of the following five states at a time.

1. Start
   This is the initial state when a process is first started/created.

2. Ready
   The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU can be assigned to some other process.

3. Running
   Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.

4. Waiting
   The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.

5. Terminated or Exit
   Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state where it waits to be removed from main memory.

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every
process.

The PCB is identified by an integer process ID (PID).

A PCB keeps all the information needed to keep track of a process:

1. Process state
   The current state of the process, i.e. whether it is ready, running, waiting, etc.

2. Process privileges
   These are required to allow/disallow access to system resources.

3. Process ID
   Unique identification for each process in the operating system.

4. Pointer
   A pointer to the parent process.

5. Program counter
   The program counter is a pointer to the address of the next instruction to be executed for this process.

6. CPU registers
   The various CPU registers whose contents must be saved for the process when it leaves the running state, so that it can resume execution later.

7. CPU scheduling information
   Process priority and other scheduling information required to schedule the process.

8. Memory management information
   This includes information such as the page table, memory limits and segment table, depending on the memory-management scheme used by the operating system.

9. Accounting information
   This includes the amount of CPU time used for process execution, time limits, execution ID, etc.

10. I/O status information
    This includes the list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
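
As an illustration only, a PCB can be pictured as a C structure holding the ten kinds of information listed above. The field names below are hypothetical; a real PCB (for example Linux's task_struct) contains many more fields.

/* A simplified, hypothetical PCB as a C structure. Field names are
 * illustrative and follow the table above, not any particular OS. */
#include <sys/types.h>

enum proc_state { STATE_START, STATE_READY, STATE_RUNNING,
                  STATE_WAITING, STATE_TERMINATED };

struct pcb {
    enum proc_state state;           /* 1. process state                   */
    unsigned int    privileges;      /* 2. process privileges              */
    pid_t           pid;             /* 3. process ID                      */
    struct pcb     *parent;          /* 4. pointer to the parent process   */
    void           *program_counter; /* 5. next instruction to execute     */
    unsigned long   registers[16];   /* 6. saved CPU registers             */
    int             priority;        /* 7. CPU-scheduling information      */
    void           *page_table;      /* 8. memory-management information   */
    unsigned long   cpu_time_used;   /* 9. accounting information          */
    int             open_devices[8]; /* 10. I/O status information         */
};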

Process Scheduling

 Process scheduling is the act of determining which process in the ready state should be moved to the running state.
 The main aim of the process scheduling system is to keep the CPU busy all the time and to deliver minimum response time for all programs.
 The scheduler must apply appropriate rules for swapping processes in and out of the CPU.
 Process scheduling is an essential part of multiprogramming operating systems.
 Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
 Time-division multiplexing (TDM) is a method of transmitting and receiving independent signals over a common signal path by means of synchronized switches at each end of the transmission line, so that each signal appears on the line for only a fraction of the time, in an alternating pattern.

Scheduling falls into one of two general categories:

 Pre-emptive Scheduling:

 The operating system decides to favour another process, pre-empting the currently executing process.
 In this scheduling, a process switches from the running state to the ready state, or from the waiting state to the ready state.
 In pre-emptive scheduling, a running process can be interrupted and moved back to the ready state so that another process can be scheduled.

 Non Pre-emptive Scheduling

 The currently executing process gives up the CPU voluntarily.
 In this scheduling, a process leaves the running state only when it terminates or switches from running to waiting.
 In non-preemptive scheduling, another process cannot be scheduled until the running process gives up the CPU on its own.

Scheduling Queues

The OS maintains all PCBs in Process Scheduling Queues.

The OS maintains a separate queue for each of the process states and PCBs of all processes in
the same execution state are placed in the same queue.

When the state of a process is changed, its PCB is unlinked from its current queue and moved to
its new state queue.

The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues − Processes waiting for a device to become available are placed in this
queue. There are unique device queues available for each I/O device.

A new process is initially put in the Ready queue. It waits in the ready queue until it is selected
for execution (or dispatched).

Once the process is assigned to the CPU and is executing, one of the following several events can
occur:

 The process could issue an I/O request, and then be placed in the I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed by force from the CPU, as a result of an interrupt, and be
put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state, and
is then put back in the ready queue.

A process continues this cycle until it terminates, at which time it is removed from all queues and
has its PCB and resources deallocated.
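
A minimal sketch of how such queues might be implemented is shown below: PCBs are linked into simple FIFO lists, and changing a process's state means moving its PCB from one list to another. The struct and function names are illustrative only, not taken from any particular operating system.

/* Sketch: PCBs in the same state linked into the same FIFO queue. */
#include <stdio.h>

struct pcb { int pid; struct pcb *next; };

struct queue { struct pcb *head, *tail; };

static void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void) {
    struct queue ready = { NULL, NULL }, device = { NULL, NULL };
    struct pcb a = { 1, NULL }, b = { 2, NULL };

    enqueue(&ready, &a);                   /* new processes enter the ready queue */
    enqueue(&ready, &b);

    struct pcb *running = dequeue(&ready); /* dispatched to the CPU               */
    enqueue(&device, running);             /* it issues an I/O request            */

    printf("waiting on device: PID %d\n", device.head->pid);
    printf("next in ready queue: PID %d\n", ready.head->pid);
    return 0;
}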
Types of Schedulers

There are three types of schedulers

1. Long Term Scheduler


2. Short Term Scheduler
3. Medium Term Scheduler

Long Term / job Scheduler

 It determines which programs are admitted to the system for processing.


 It loads processes into memory for CPU scheduling.
 The long-term scheduler runs less frequently.
 The long-term scheduler decides which programs are admitted to the job queue.
 From the job queue, it selects processes and loads them into memory for execution.
 Job Scheduler maintains a good degree of Multiprogramming. An optimal degree of
Multiprogramming means the average rate of process creation is equal to the average
departure rate of processes from the execution memory.
 When a process changes the state from new to ready, then there is use of long-term
scheduler.

Short Term / CPU Scheduler

 It runs very frequently.


 The main aim of this scheduler is to enhance CPU performance and increase process
execution rate.
 It performs the transition of a process from the ready state to the running state.
 CPU scheduler selects a process among the processes that are ready to execute and
allocates CPU to one of them.
 Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next.
 Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

This scheduler removes the processes from memory (and from active contention for the CPU),
and thus reduces the degree of multiprogramming.

Later, the process can be reintroduced into memory and its execution continued where it left off. This scheme is called swapping.

The process is swapped out, and is later swapped in, by the medium term scheduler.

Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up.

Comparison among Schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between the other two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides less control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects those processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

Context Switch

A context switch is the mechanism used to store and restore the state or context of a CPU in a Process Control Block, so that process execution can be resumed from the same point at a later time.

A context switcher enables multiple processes to share a single CPU.

Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block.

After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, etc. At that point, the second process can start executing.

When the process is switched, the following information is stored for later use.

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
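
The user-space sketch below illustrates the same save-and-restore idea using the POSIX <ucontext.h> routines; it is only a model of the mechanism, since a real context switch is performed by the kernel and saves the information listed above into the PCB.

/* Sketch: saving one execution context and switching to another with the
 * POSIX ucontext API. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void) {
    printf("task: running with its own saved context\n");
    /* Returning here resumes main_ctx because of uc_link below. */
}

int main(void) {
    char stack[16384];                 /* stack for the second context      */

    getcontext(&task_ctx);             /* capture a starting context        */
    task_ctx.uc_stack.ss_sp = stack;
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link = &main_ctx;      /* where to resume when task ends    */
    makecontext(&task_ctx, task, 0);

    printf("main: saving my context and switching to task\n");
    swapcontext(&main_ctx, &task_ctx); /* save main's state, load task's    */
    printf("main: context restored, execution resumed here\n");
    return 0;
}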

Operations on Process

There are two major operations: process creation and process termination.

Process Creation

Through appropriate system calls, such as fork or spawn, processes may create other processes.

The process which creates other process, is termed the parent of the other process, while the
created sub-process is termed its child.

Each process is given an integer identifier, termed as process identifier, or PID.

The parent PID (PPID) is also stored for each process.

A child process may share some resources with its parent, depending on system implementation.

To prevent runaway children from consuming all of a certain system resource, child processes
may or may not be limited to a subset of the resources originally allocated to the parent.

There are two options for the parent process after creating the child:

 Wait for the child process to terminate before proceeding.


 Run concurrently with the child, continuing to process without waiting. It is also possible
for the parent to run for a while, and then wait for the child later, which might occur in a
sort of a parallel processing operation.

There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process
2. The child process has a program loaded into it
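
A minimal sketch of process creation on a Unix-like system is shown below, using the fork and wait system calls mentioned above; the commented-out execl call indicates how a new program could instead be loaded into the child (possibility 2).

/* Sketch of process creation with fork(): the child starts as a duplicate
 * of the parent (possibility 1); it could instead call exec() to have a
 * new program loaded into it (possibility 2). */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* create a child process */

    if (pid < 0) {
        perror("fork failed");
        return 1;
    } else if (pid == 0) {
        /* Child: a duplicate of the parent's address space. */
        printf("child:  PID=%d  PPID=%d\n", getpid(), getppid());
        /* Possibility 2 would be: execl("/bin/ls", "ls", (char *)NULL); */
    } else {
        /* Parent: option (a) is to wait for the child to terminate. */
        wait(NULL);
        printf("parent: PID=%d created child %d\n", getpid(), pid);
    }
    return 0;
}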

Process Termination

By making the exit() system call, typically returning an int, processes may request their own termination.

This int is passed along to the parent if it is doing a wait(), and is typically zero on successful
completion and some non-zero code in the event of any problem.

Processes may also be terminated by the system for a variety of reasons, including :

 The inability of the system to deliver the necessary system resources.


 In response to a KILL command or other unhandled process interrupts.
 A parent may kill its children if the task assigned to them is no longer needed i.e. if the
need of having a child terminates.
 If the parent exits, the system may or may not allow the child to continue without a parent

When a process ends, all of its system resources are freed up, open files flushed and closed, etc.

The process termination status and execution times are returned to the parent if the parent is
waiting for the child to terminate, or eventually returned to init if the process already became an
orphan.

Processes which have finished executing but whose parent has not yet collected their termination status are termed zombies. If the parent then exits, these are inherited by init as orphans and reaped (killed off).
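
The sketch below shows the termination side on a Unix-like system: the child requests termination with exit, and the parent collects the integer status with wait, which also prevents the child from lingering as a zombie.

/* Sketch of process termination: the child calls exit(), the parent
 * collects the status with wait(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        exit(0);                       /* child: 0 signals successful completion */
    } else if (pid > 0) {
        int status;
        wait(&status);                 /* reap the child, avoiding a zombie      */
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
    }
    return 0;
}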

Scheduling Criteria

There are many different criteria to consider when looking for the "best" scheduling algorithm. They include:

CPU Utilization

To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept working most of the time (ideally 100% of the time).

Considering a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily
loaded.)
Throughput

It is the total number of processes completed per unit time or, put another way, the total amount of work done in a unit of time.

This may range from 10/second to 1/hour depending on the specific processes.

Response Time

Amount of time it takes from when a request was submitted until the first response is
produced.

Remember, it is the time till the first response and not the completion of process execution(final
response).

For proper optimization, CPU utilization and throughput should be maximized while the time-based factors are minimized.

Burst Time

Time required by a process for CPU execution.

Arrival Time

Time at which the process arrives in the ready queue.

Completion Time

Time at which process completes its execution.

Turnaround Time

It is the amount of time taken to execute a particular process.

It is the interval from the time of submission (arrival time) of the process to the time of completion of the process (completion time).

Turnaround time = Completion time – Arrival time

Turnaround time = Waiting time + Burst time

Waiting Time

The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
Waiting Time = Turn Around Time - Burst Time

Load Average

It is the average number of processes residing in the ready queue waiting for their turn to
get into the CPU.

Process scheduling algorithms


 First-Come, First-Served (FCFS) Scheduling
 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive.

Non-preemptive algorithms are designed so that once a process enters the running state, it
cannot be preempted until it completes its allotted time, whereas the preemptive scheduling is
based on priority where a scheduler may preempt a low priority running process anytime when a
high priority process enters into a ready state.

First Come First Serve (FCFS)

 Jobs are executed on first come, first serve basis.


 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Advantages
 Better for long processes
 Simple method (i.e., minimum overhead on processor)
 No starvation

Disadvantages
 Inappropriate for interactive systems
 Large fluctuations in average turnaround time are possible.
 Convoy effect occurs. Even a very small process must wait for its turn to use the CPU; a short process stuck behind a long process results in lower CPU utilization.
 Throughput is not emphasized.
Process  Arrival    Burst      Completion  Turnaround time  Waiting time
no       time (AT)  time (BT)  time (CT)   (TAT) = CT - AT  = TAT - BT
1        0          4          4           4                0
2        1          3          7           6                3
3        2          1          8           6                5
4        3          2          10          7                5
5        4          5          15          11               6

Gantt chart for the above Process


P1 P2 P3 P4 P5
0 4 7 8 10 15

Average Waiting Time = (0+3+5+5+6)/5 = 19/5 = 3.8
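
The small C sketch below recomputes the table above for FCFS and reproduces the average waiting time of 3.8; the process data is copied from the table, nothing else is assumed.

/* Sketch: FCFS completion, turnaround and waiting times for the table above. */
#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 2, 3, 4};          /* arrival times */
    int bt[] = {4, 3, 1, 2, 5};          /* burst times   */
    int n = 5, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {        /* processes already sorted by arrival */
        if (time < at[i]) time = at[i];  /* CPU idles until the process arrives */
        time += bt[i];                   /* completion time                     */
        int tat = time - at[i];          /* turnaround = completion - arrival   */
        int wt  = tat - bt[i];           /* waiting = turnaround - burst        */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, time, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("Average TAT=%.1f  Average WT=%.1f\n", total_tat / n, total_wt / n);
    return 0;
}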


Example 1

PROCESS  ARRIVAL TIME  BURST TIME
P1       0             24
P2       0             3
P3       0             3
a. Draw the Gantt chart

P1 P2 P3
0 24 27 30

PROCESS  WAIT TIME  TURNAROUND TIME
P1       0          24
P2       24         27
P3       27         30

Total Wait Time


0 + 24 + 27 = 51 ms

Average Waiting Time = (Total Wait Time) / (Total number of processes)


51/3 = 17 ms

Total Turn Around Time


24 + 27 + 30 = 81 ms

b. Find the Average Turn Around time

Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
81 / 3 = 27 ms

Example 3
PROCESS  ARRIVAL TIME  BURST TIME
P1       0             80
P2       0             20
P3       0             10
P4       0             20
P5       0             50

Gantt chart

P1 P2 P3 P4 P5
0 80 100 110 130 180

PROCESS  WAIT TIME  TURNAROUND TIME
P1       0          80
P2       80         100
P3       100        110
P4       110        130
P5       130        180

Total Wait Time


0 + 80 + 100 + 110 + 130 = 420 ms

a. Find the Average Waiting Time

Average Waiting Time = (Total Wait Time) / (Total number of processes)


420/5 = 84 ms

Total Turn Around Time


80 + 100 + 110 + 130 + 180 = 600 ms

b. Find the Average Turn Around time

Average Turn Around time = (Total Turn Around Time) / (Total number of processes)
600/5 = 120 ms

c. Find the Throughput

5 jobs / 180 ms ≈ 0.028 jobs per ms

Shortest-Job-Next (SJN) Scheduling

 This is also known as shortest job first, or SJF


 It can be implemented as either a non-preemptive or a pre-emptive (shortest remaining time) scheduling algorithm; both variants are shown below.
 It is the best approach to minimize waiting time.
 It is easy to implement in batch systems where the required CPU time is known in advance.
 It is impossible to implement in interactive systems where the required CPU time is not known.
 The processor should know in advance how much time the process will take.

Non preemptive example

 Process is executed to completion


 It is based on available processes in the ready state
 Least burst time is processed first

Process  Arrival    Burst      Completion  Turnaround time  Waiting time
no       time (AT)  time (BT)  time (CT)   (TAT) = CT - AT  = TAT - BT
1        0          7          7           7                0
2        1          5          15          14               9
3        2          1          8           6                5
4        3          2          10          7                5
5        4          8          23          19               11

Gantt chart for the above process

P1    P3    P4    P2    P5
0     7     8     10    15    23

Average Turnaround time = (7 + 14 + 6 + 7 + 19) / 5 = 53 / 5 = 10.6

Average Waiting time = (0 + 9 + 5 + 5 + 11) / 5 = 30 / 5 = 6
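
The C sketch below simulates non-preemptive SJN for the table above and reproduces these averages; the process data is taken directly from that table.

/* Sketch of non-preemptive SJN: at each decision point, the arrived,
 * unfinished process with the smallest burst time runs to completion. */
#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 2, 3, 4};
    int bt[] = {7, 5, 1, 2, 8};
    int n = 5, done[5] = {0}, completed = 0, time = 0;
    double total_tat = 0, total_wt = 0;

    while (completed < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)              /* shortest arrived, unfinished job */
            if (!done[i] && at[i] <= time &&
                (pick == -1 || bt[i] < bt[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }    /* CPU idle: nothing has arrived    */

        time += bt[pick];                        /* run to completion (non-preemptive) */
        int tat = time - at[pick];
        int wt  = tat - bt[pick];
        printf("P%d: CT=%d TAT=%d WT=%d\n", pick + 1, time, tat, wt);
        total_tat += tat;
        total_wt  += wt;
        done[pick] = 1;
        completed++;
    }
    printf("Average TAT=%.1f  Average WT=%.1f\n", total_tat / n, total_wt / n);
    return 0;
}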

Preemptive example

Consider the following table of arrival time and burst time for three processes P0, P1 and P2.

Process Arrival time Burst Time


P0 0 ms 9 ms
P1 1 ms 4 ms
P2 2 ms 9 ms

The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at
arrival or completion of processes. What is the average waiting time for the three processes?

Priority Scheduling

 Priority scheduling can be either non-preemptive or pre-emptive; both variants are shown below.
 It is one of the most common scheduling algorithms in batch systems.
 Each process is assigned a priority.
 The process with the highest priority is executed first, and so on.
 Processes with the same priority are executed on a first come, first served basis (FCFS manner).
 Priority can be decided based on memory requirements, time requirements or any other resource requirement.
Examples

Non Preemptive priority scheduling example


Process  Priority   Arrival    Burst      Completion  Turnaround time  Waiting time
no                  time (AT)  time (BT)  time (CT)   (TAT) = CT - AT  = TAT - BT
1        2 (low)    0          4          4           4                0
2        4          1          2          25          24               22
3        6          2          3          23          21               18
4        10         3          5          9           6                1
5        8          4          1          20          16               15
6        12 (high)  5          4          13          8                4
7        9          6          6          19          13               7

Gantt chart for the above Process


P1 P4 P6 P7 P5 P3 P2
0 4 9 13 19 20 23 25

Average Turnaround time = (4 + 24 + 21 + 6 + 16 + 8 + 13) / 7 = 92 / 7 ≈ 13.14

Average Waiting time = (0 + 22 + 18 + 1 + 15 + 4 + 7) / 7 = 67 / 7 ≈ 9.57


Preemptive priority scheduling example

Process  Priority  Arrival    Burst      New burst  Completion  Turnaround time  Waiting time
no                 time (AT)  time (BT)  time (BT)  time (CT)   (TAT) = CT - AT  = TAT - BT
1        2         0          4          3          25          25               21
2        4         1          2          1          22          21               19
3        6         2          3          2          21          19               16
4        10        3          5          3          12          9                4
5        8         4          1          1          19          15               14
6        12        5          4          0          9           4                0
7        9         6          6          6          18          12               6

Gantt chart for the above Process


P1   P2   P3   P4   P6   P4   P7   P5   P3   P2   P1
0    1    2    3    5    9    12   18   19   21   22   25

Smallest Remaining Time First (SRTF)
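
SRTF is the pre-emptive form of SJF: whenever a process arrives, the scheduler compares its burst time with the remaining time of the running process and always runs the job with the smallest remaining time. The C sketch below simulates SRTF for the P0/P1/P2 example given earlier and can be used to check the answer to that question (it prints an average waiting time of 5 ms).

/* Sketch of SRTF: every millisecond, run the arrived process with the
 * smallest remaining burst time. Data is from the P0/P1/P2 example above. */
#include <stdio.h>

int main(void) {
    int at[]  = {0, 1, 2};               /* arrival times (ms) */
    int bt[]  = {9, 4, 9};               /* burst times (ms)   */
    int rem[] = {9, 4, 9};
    int n = 3, completed = 0, time = 0;
    double total_wt = 0;

    while (completed < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)       /* smallest remaining time among arrived */
            if (at[i] <= time && rem[i] > 0 &&
                (pick == -1 || rem[i] < rem[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }

        rem[pick]--;                      /* run the chosen process for 1 ms */
        time++;
        if (rem[pick] == 0) {
            int wt = time - at[pick] - bt[pick];   /* waiting = TAT - burst */
            printf("P%d: CT=%d WT=%d\n", pick, time, wt);
            completed++;
            total_wt += wt;
        }
    }
    printf("Average waiting time = %.1f ms\n", total_wt / n);
    return 0;
}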


Round Robin(RR) Scheduling

 A fixed time is allotted to each process, called quantum, for execution.


 Once a process has executed for the given time period, it is preempted and another process executes for its time period.
 Context switching is used to save the states of preempted processes.

Quantum=5
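
The process set used for the Quantum = 5 example is not included in these notes, so the sketch below uses a small hypothetical set of processes (all arriving at time 0) purely to show the round-robin mechanism with a quantum of 5.

/* Sketch of Round Robin with a time quantum of 5: each process runs for at
 * most one quantum, then is pre-empted and the next ready process runs.
 * The burst times here are hypothetical, for illustration only. */
#include <stdio.h>

#define QUANTUM 5

int main(void) {
    int bt[]  = {12, 5, 8};              /* hypothetical burst times, all AT = 0 */
    int rem[] = {12, 5, 8};
    int n = 3, completed = 0, time = 0;
    double total_wt = 0;

    while (completed < n) {
        for (int i = 0; i < n; i++) {    /* cycle through the ready processes    */
            if (rem[i] == 0) continue;
            int slice = rem[i] < QUANTUM ? rem[i] : QUANTUM;
            time += slice;               /* run for one quantum (or less)        */
            rem[i] -= slice;             /* then a context switch happens        */
            if (rem[i] == 0) {
                int wt = time - bt[i];   /* waiting = TAT - burst (AT = 0 here)  */
                printf("P%d: CT=%d WT=%d\n", i + 1, time, wt);
                total_wt += wt;
                completed++;
            }
        }
    }
    printf("Average waiting time = %.1f\n", total_wt / n);
    return 0;
}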
Multilevel Queue Scheduling

 Multiple queues are maintained for processes.


 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
