OS Unit-3
Ayushi Gupta
(Assistant Professor)
Department of Computer Science and Engineering
Mangalmay Institute of Engineering and Technology, Greater Noida
Content :
CPU Scheduling: Scheduling Concepts, Performance Criteria, Process States,
Process Transition Diagram, Schedulers, Process Control Block (PCB),
Process address space, Process identification information, Threads and their
management, Scheduling Algorithms, Multiprocessor Scheduling.
Deadlock: System model, Deadlock characterization, Prevention, Avoidance
and detection, Recovery from deadlock.
Scheduling Concepts :
In a multiprogramming environment, several processes compete for the CPU at any
given time. Scheduling refers to the strategy and the methods used by the
operating system to decide which of the processes waiting in the queue for CPU
time will be allocated the CPU next.
1. CPU burst and I/O burst
2. Context Switching
3. Scheduling queues
4. Types of Schedulers
5. Scheduling Schemes
6. Dispatcher
1.) CPU burst and I/O burst :
During execution a process alternates between two phases:
CPU burst and I/O burst. When a process has been allocated the CPU and
other resources and is executing, it is in its CPU burst phase. When a
process waits for some I/O operation or some other task to complete, it is
in its I/O burst phase.
These two phases occur in alternating order: a process begins its execution
with a CPU burst followed by an I/O burst, then another CPU burst followed
by another I/O burst, and this continues until the final CPU burst, in which
the process requests the operating system to terminate its execution; this
last CPU burst is not followed by an I/O burst.
The figure below shows this alternating order of CPU bursts and I/O bursts.
2.) Context Switching :
A process during its execution goes into the wait state for some I/O
operation or some other task to complete (its I/O burst phase). At that
time the CPU is taken away from that process and allocated to some other
process, so that the CPU is never idle. This switching of the CPU from one
process to another is called context switching. For example, suppose
process P1 is currently being executed by the CPU. After some time P1 goes
into the wait state for an I/O operation, so the CPU is allocated to
process P2. The OS saves P1's PCB information (process ID, register set,
program counter, process state, etc.), then loads the saved context of
process P2 and begins executing P2. Later, when P1 finishes its I/O
operation and the processor is again allocated to it, P1 resumes from where
it left off by retrieving the saved contents of its PCB. This is how
context switching works.
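The save/restore cycle described above can be sketched as follows. This is a minimal illustration, not a real kernel API: the PCB fields, the `cpu` dictionary and the function names are all assumptions for the sketch.

```python
# Minimal sketch of a context switch: the OS saves the running process's
# hardware context into its PCB, then restores another process's saved
# context onto the CPU. All names here are illustrative.

class PCB:
    def __init__(self, pid):
        self.pid = pid
        self.program_counter = 0
        self.registers = {}
        self.state = "ready"

cpu = {"pc": 0, "registers": {}}   # stand-in for the real CPU state

def context_switch(old, new):
    # Save the hardware context of the outgoing process into its PCB.
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["registers"])
    old.state = "waiting"
    # Load the incoming process's saved context onto the CPU.
    cpu["pc"] = new.program_counter
    cpu["registers"] = dict(new.registers)
    new.state = "running"

p1, p2 = PCB(1), PCB(2)
cpu["pc"], cpu["registers"] = 100, {"ax": 7}   # P1 currently running
context_switch(p1, p2)                         # P1 blocks for I/O, P2 runs
```

After the switch, P1's PCB holds exactly the CPU state it had when preempted, so a later `context_switch(p2, p1)` would resume P1 from where it left off.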
3.) Scheduling Queues :
Job Queue: Each new process goes into the job queue. Processes in the
job queue reside on mass storage and await the allocation of main
memory.
Ready Queue: The set of all processes that are in main memory and waiting
for CPU allocation is kept in the ready queue. The ready queue is generally
maintained as a linked list: the queue header (start) points to the first
process's PCB, the tail (end) points to the last process's PCB, and each
PCB contains a pointer field that points to the next PCB in the list.
Waiting (Device) Queues: The set of processes waiting for the allocation of
certain I/O devices is kept in a waiting (device) queue. A device queue is
also maintained as a linked list, in a similar way to the ready queue.
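The linked-list organisation of these queues can be sketched as below. The class and field names are illustrative, not taken from any real operating system.

```python
# Sketch of a ready queue kept as a singly linked list of PCBs, with a
# head (start) pointer and a tail (end) pointer as described above.

class PCB:
    def __init__(self, pid):
        self.pid = pid
        self.next = None          # pointer to the next PCB in the queue

class ReadyQueue:
    def __init__(self):
        self.head = None          # points to the first process's PCB
        self.tail = None          # points to the last process's PCB

    def enqueue(self, pcb):
        if self.tail is None:     # queue was empty
            self.head = self.tail = pcb
        else:
            self.tail.next = pcb  # link after the current last PCB
            self.tail = pcb

    def dequeue(self):
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            if self.head is None: # queue became empty
                self.tail = None
            pcb.next = None
        return pcb

q = ReadyQueue()
for pid in (1, 2, 3):
    q.enqueue(PCB(pid))
first = q.dequeue()               # process 1 leaves the queue first
```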
4.) Types of Schedulers :
The scheduler is a part of the operating system. There are basically three
types of schedulers: long term schedulers, medium term schedulers and short
term schedulers.
Long term schedulers :
Long term schedulers are also known as job schedulers. In a computer
system there are several processes waiting for execution; all of them are
stored on a secondary storage device in the form of a job queue for later
processing. The long term scheduler decides which of the processes waiting
in the job queue will be transferred (loaded) from secondary memory to main
memory for execution. In main memory these processes are maintained in a
ready queue.
The processes selected by the job scheduler should be a good mix of I/O
bound and CPU bound processes. If there are mostly I/O bound processes,
the CPU will be idle most of the time; if there are mostly CPU bound
processes, the CPU will be heavily burdened. A good selection yields
efficient multiprogramming.
Short term schedulers :
Short term schedulers are also known as CPU schedulers. The short term
scheduler decides which process from among the several processes waiting in
the ready queue will be allocated the CPU next. The short term scheduler
runs far more frequently than the long term scheduler, because a process
executes for a short while and then waits for some I/O operation, so the
CPU scheduler must repeatedly pick a process from the ready queue whenever
the running one blocks. The CPU scheduler should make its selection
decision quickly, so that CPU time is not wasted and the CPU stays busy
executing one process or another.
Medium term schedulers :
After executing for a short while, a process goes into a wait for some I/O
operation; while waiting, the process is suspended. Since main memory has a
limited storage area, suspended processes are transferred back to secondary
memory to make space for other processes to come from secondary memory into
the main memory ready queue. After some time these suspended processes can
be reloaded (brought back) into main memory and continue execution from the
point where they left off. Transferring suspended processes to secondary
memory is called swapping out, and bringing them back into main memory is
called swapping in. Swapping out and swapping in are carried out by the
medium term scheduler, which is also known as the swapper.
5.) Scheduling Schemes (or Scheduling Methods/Models) :
The CPU scheduling scheme is divided into two types: Preemptive and Non-
preemptive
Non-Preemptive Scheduling: In the non-preemptive scheme, once the CPU is
allocated to a process it is taken away only under two conditions:
i. The process completes its CPU burst and goes into an I/O wait, i.e.
moves from the running state to the waiting state.
ii. The process completes its execution and terminates.
Preemptive Scheduling: In the preemptive scheme a higher priority process
is favoured over a lower priority one. If some process is using the CPU and
meanwhile a new process with higher priority than the currently running one
arrives in the ready queue, the CPU is taken from the currently running
process and allocated to the higher priority process. The operating system
moves the lower priority process to the ready queue (from running state to
ready state) without the process requesting it. The main disadvantage of
the preemptive scheme is that a process can be interrupted at an arbitrary
point: it may be in the middle of modifying some data when it is preempted,
so when another process then reads that data, the data is not in a
consistent state (not completely modified).
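The preemptive rule above can be sketched as a small decision function. This is a toy illustration under one stated assumption: a lower priority number means higher priority; the function and dictionary names are made up for the sketch.

```python
# Toy sketch of preemptive priority scheduling: a newly arrived process
# takes the CPU only if its priority is higher (lower number) than that
# of the currently running process.

def on_arrival(running, arrived, ready_queue):
    if arrived["priority"] < running["priority"]:
        # Preemptive case: running process moves back to the ready queue
        # (running state -> ready state) without requesting it.
        ready_queue.append(running)
        return arrived                 # arrived process gets the CPU
    ready_queue.append(arrived)        # otherwise arrived just waits
    return running

ready = []
running = {"pid": 1, "priority": 5}
running = on_arrival(running, {"pid": 2, "priority": 1}, ready)
# Process 2 (priority 1) preempts process 1 (priority 5).
```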
6.) Dispatcher :
The dispatcher is one of the important entities used in CPU scheduling.
When the short term scheduler decides which process from among the several
processes waiting in the ready queue will be allocated the CPU next, it is
the dispatcher that actually assigns the CPU to the selected process. The
dispatcher switches execution from one process to another (a context
switch), sets up the user registers, memory mapping, etc. that a process
needs to actually run, and transfers CPU control to that process. On
dispatch the process status changes from ready to running. The dispatcher
sits between the ready queue and the short term scheduler. The term
dispatch latency describes the amount of time the system takes to stop one
process and let another begin execution.
Performance Criteria (Scheduling Criteria) :
Performance criteria (scheduling criteria) are the basis for measuring the
efficiency of a scheduling algorithm. The parameters taken into
consideration are:
CPU Utilisation: A measure of how busy the CPU is. The CPU should not be
idle and should be kept as busy as possible, so the goal is to maximise CPU
utilisation.
Throughput: A measure of how many processes complete execution in a given
unit of time. The higher the throughput, the better the performance of the
system.
Turnaround time: A measure of the time taken to complete a particular
process, i.e. how long a program takes from the time it is submitted to the
system until it completes its execution.
Waiting Time: A measure of the total time a process spends waiting for the
CPU, i.e. the amount of time spent waiting in the ready queue for a chance
to execute.
Response Time: A measure of the time between the submission of a request
and the first response to it. Usually the goal is to minimise response
time.
Process States:
In computer terms, a program is a set of instructions written in a
particular sequence to perform a specific task. Programs are executable
files generally stored in secondary memory, and to execute a program it is
first loaded into primary memory. A program that has been loaded into
primary memory for execution is a process. Over its entire life cycle, from
the beginning of execution to the end, a process changes its state from
time to time. The process state tells the operating system what activity
the process is doing at a particular instant of time, and this awareness
helps the operating system manage processes efficiently. A process can be
in the following states:
i. New: A process is in the new state at the instant it is created.
ii. Ready: A process is in the ready state when it is ready to run but the
CPU has not yet been allocated, so the process waits in the ready queue for
its turn to get the CPU.
iii. Running: A process is in the running state when it is executing its
instructions using the CPU and the other necessary resources allocated to
it by the operating system.
iv. Waiting: A process is in the waiting state when it is waiting for the
completion of some task or operation (such as an I/O operation); once the
operation completes, the process becomes eligible for the CPU again.
v. Terminated: A process is in the terminated state when it has finished
its execution, either normally or because the operating system has aborted
it (due to errors). The process releases (returns) all the resources it was
holding.
Process Control Block (PCB) :
The Process Control Block is also known as the Process Descriptor. The PCB
is a data structure maintained by the operating system for each process. A
PCB contains the information necessary to manage and control a process:
what the process is currently doing (waiting, executing, etc.), which
resources are allocated to it, which access rights it has, its priority,
its scheduling criteria, and so on. Whenever a new process is created, an
associated PCB is maintained by the operating system, and when the process
terminates (ends), the associated PCB is removed from the system. The
various pieces of information contained in the PCB are often referred to as
process attributes:
Process State: Tells the current state of the process: new, ready, waiting,
running, and so on.
Pointer: Points to another data structure. It can point to the PCB of
another process, which may be the parent process or a child process (if one
exists). (The parent process is the one that created the given process; a
child process is one created by the given process.)
Process Identification Number (PID): A unique identification number given
to each process by the operating system in order to uniquely identify it
and differentiate one process from another.
Process Hardware Context: Contains information about the various general
purpose and special registers used by the process, such as the Accumulator,
the Program Counter (which holds the address of the next instruction to be
executed by the process), the Stack Pointer, the PSW (Program Status Word)
register, index registers, etc. Information regarding interrupts is also
stored here.
Process Software Context: Contains:
* Memory management information, such as the values of the base register
and limit register, the page table, the segment table, etc.
* Input/output information, such as the I/O devices allocated to the
process, the number of open files, access rights on various files, open
sockets, etc.
* Accounting information, such as the amount of CPU time used, time
limits, etc.
* Scheduling information, etc.
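The attributes above can be condensed into one data structure per process. The sketch below uses a Python dataclass with illustrative field names; a real PCB is a kernel-internal structure with many more fields.

```python
# A condensed PCB grouping the attribute families listed above:
# identification, state, hardware context, software context, accounting
# and scheduling information. Field names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                          # unique process identifier
    state: str = "new"                # new/ready/running/waiting/terminated
    program_counter: int = 0          # hardware context: next instruction
    registers: dict = field(default_factory=dict)
    base_register: int = 0            # software context: memory management
    limit_register: int = 0
    open_files: list = field(default_factory=list)
    cpu_time_used: float = 0.0        # accounting information
    priority: int = 0                 # scheduling information

pcb = ProcessControlBlock(pid=42, priority=3)
```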
Process Address Space :
When several concurrent user processes are executing in a computer system,
it is the responsibility of the operating system to protect both user
processes and kernel processes from unauthorized access or unintentional
modification by other processes. The total system address space is divided
into two parts: user space and kernel space.
In user address space, each user process is given its own separate address
space, which contains the instructions, data and other information used by
that process. Each process executes instructions and accesses data of its
own address space only. To communicate with other processes and the kernel,
or to use common shared data, processes use the protocols and communication
mechanisms supported by the operating system (IPC, etc.). The OS also
ensures that no process crosses its address space boundary: each process
must work within its limited address space range.
The kernel address space contains the various kernel processes and their
associated data.
The separate address space allocated to each process has the general
structure shown in the figure.
Threads and their management:
1. Introduction
2. Single Threaded Process and Multithreaded Process
3. Types of Threads
4. Multithreading Models
Introduction :
A thread is an independent execution unit within a process. Each thread in
a system has a thread ID, a program counter, its own register set and a
stack. A process can have multiple threads, but a thread has only one
container process to which it belongs. Threads share the code section, data
section, file descriptors and other system resources with the other threads
of the same process, and can thus execute in parallel with them. One
important point is that a thread is not a process by itself and cannot run
on its own: thread execution has to be initiated by the process in which it
is created. Each process in a computer system has a unique address space;
all the threads created by a particular process share the address space of
that process and execute simultaneously within it. Like processes, threads
also have states, and the different threads within a process can be in
different states at any given instant of time. A system that supports this
thread model is known as a multithreaded system. A thread is also called a
lightweight process.
The various advantages of threads are:
i. Since threads execute in parallel with other threads, parallel computing
can be achieved using threads. Various tasks of a process can be subdivided
among threads, each thread doing its task independently and simultaneously
with the others, so overall process execution becomes faster and more
efficient.
ii. Each thread is an independent execution unit, so if one thread is
blocked in some I/O or other task, the other threads are not affected and
continue their execution.
iii. Since threads share the address space and other system resources, a
thread system is much more economical.
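The sharing described above can be demonstrated with Python's standard `threading` module: several threads of one process update the same variable, guarded by a lock so the concurrent updates are not lost. The worker function and counts are made up for the example.

```python
# Threads of one process share its data: four threads increment the same
# counter. The lock serialises the updates so none are lost.

import threading

counter = 0                     # data shared by all threads of the process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # mutual exclusion on the shared data
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for all threads to finish
# Every thread saw and updated the same variable: counter is now 4000.
```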
Scheduling Algorithms :
Some CPU scheduling Terminologies are given below :
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and
arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time(W.T): Time Difference between turn around time and
burst time.
Waiting Time = Turn Around Time – Burst Time
Response Time(R.T.): Response time is the time spent between the ready
state and getting the CPU for the first time.
Response Time = Time it Started Executing – Arrival Time
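The formulas above can be checked on one worked example. The timing values below are made up for illustration: a process arrives at t=2, first gets the CPU at t=5, completes at t=12, and needs 6 units of CPU time.

```python
# Applying the scheduling formulas to one example process.

arrival_time = 2        # enters the ready queue
start_time = 5          # first time it gets the CPU
completion_time = 12    # finishes execution
burst_time = 6          # CPU time it actually needed

turn_around_time = completion_time - arrival_time   # 12 - 2 = 10
waiting_time = turn_around_time - burst_time        # 10 - 6 = 4
response_time = start_time - arrival_time           # 5 - 2 = 3
```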
Types of CPU Scheduling Algorithms (Non-Preemptive and Preemptive) :
There are mainly six types of process scheduling algorithms:
1. First Come First Serve (FCFS)
4. Priority Scheduling
6. Multiprocessor Scheduling
1.) First Come First Serve(FCFS) :
I. Jobs are executed on first come, first serve basis.
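A minimal FCFS sketch follows: processes run to completion strictly in arrival order (non-preemptive), and the turnaround and waiting times are computed with the formulas given earlier. The process names and times are made up for illustration.

```python
# First Come First Serve: run each process to completion in arrival
# order, tracking completion, turnaround and waiting time.

def fcfs(processes):
    # processes: list of (pid, arrival_time, burst_time), sorted by arrival
    clock, results = 0, {}
    for pid, arrival, burst in processes:
        clock = max(clock, arrival)     # CPU may sit idle until arrival
        clock += burst                  # non-preemptive: runs to the end
        turnaround = clock - arrival    # completion time - arrival time
        results[pid] = {"completion": clock,
                        "turnaround": turnaround,
                        "waiting": turnaround - burst}
    return results

r = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 4)])
# P1 finishes at 5, P2 at 8, P3 at 12; later arrivals wait longer.
```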
Deadlock :
For example: consider a system having two resources R1 and R2, and two
processes P1 and P2. P1 is holding R1 and also wants R2 to finish its
execution; likewise P2 is holding R2 and also wants R1 to finish its
execution. Neither request can ever be granted, so both processes enter
deadlock and wait indefinitely.
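The P1/P2 situation above forms a cycle in a wait-for graph: P1 waits for P2 (which holds R2) and P2 waits for P1 (which holds R1). One common way to detect such a deadlock is a cycle check over that graph; the sketch below uses a simple depth-first search, with all names chosen for illustration.

```python
# Deadlock detection sketch: a cycle in the wait-for graph (process ->
# processes it is waiting on) means the processes on the cycle are
# deadlocked.

def has_cycle(wait_for):
    visiting, done = set(), set()

    def dfs(p):
        visiting.add(p)
        for q in wait_for.get(p, ()):
            if q in visiting:             # back edge: a cycle exists
                return True
            if q not in done and dfs(q):
                return True
        visiting.discard(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wait_for if p not in done)

deadlocked = has_cycle({"P1": ["P2"], "P2": ["P1"]})   # the example above
no_deadlock = has_cycle({"P1": ["P2"], "P2": []})      # P2 is not waiting
```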
To understand deadlock, a system can be modelled as a set of limited
resources (CPU, memory, I/O devices, etc.) that are to be allocated to a
number of requesting processes according to their requirements. The
resources in a system belong to specific categories, and all the resources
within a category are equivalent, so any resource belonging to a category
can be allocated to service a request for that category. If there is any
variation between the resources within a category, that category needs to
be further divided into separate categories.
A process employs a resource in the following order:
• Request the resource: The process sends a request to acquire a resource.
If the requested resource is in use by another process and thus not
available, the request cannot be granted immediately and the requesting
process must wait until the resource becomes available. Calls like open(),
malloc(), new(), request(), etc. are used to acquire a resource.
• Use the resource: Once allocated the resource, the process makes use of
it, for example performing execution using the CPU or performing a write
operation on a file.
• Release the resource: The process relinquishes the acquired resource so
that it becomes available for allocation to other requesting processes.
Calls like close(), free(), delete(), release(), etc. are used to release
an acquired resource.
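The request, use, release order can be sketched with a mutual-exclusion lock standing in for a single-instance resource: if the lock is already held, `acquire()` blocks the caller until a `release()` makes the resource available. The function names and thread names below are illustrative.

```python
# Request -> use -> release, with a lock as the single-instance resource.

import threading

resource = threading.Lock()   # the single shared resource
log = []                      # records which process used the resource

def use_resource(name):
    resource.acquire()        # request: block until the resource is free
    try:
        log.append(name)      # use: work with the resource exclusively
    finally:
        resource.release()    # release: make it available to others

t1 = threading.Thread(target=use_resource, args=("P1",))
t2 = threading.Thread(target=use_resource, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()          # both processes eventually get the resource
```

Because each holder always releases the lock, both threads complete; deadlock arises only when processes hold some resources while waiting for others, as in the P1/P2 example above.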