Os Unit 3
UNIT-3
Scheduling
CPU scheduling is the process that allows one process to use the CPU while the
execution of another process is on hold because some resource (CPU time,
I/O, etc.) is unavailable. Its aim is to maximize CPU utilization: executing
time is given to another process while one process is waiting or idle, so that
CPU time is not wasted and the CPU does not sit idle.
In a single-processor system, only one process can run at a time; the others
must wait until the CPU is free. The objective of multiprogramming is to have
some process running at all times, to maximize CPU utilization. Whenever one
process has to wait, another process can take over use of the CPU.
CPU-I/O Burst Cycle:
Process execution consists of a cycle of CPU execution and I/O wait; processes
alternate between these two kinds of bursts. Process execution begins with a
CPU burst, followed by an I/O burst, which is followed by another CPU burst,
then another I/O burst, and so on. The final CPU burst ends with a system
request to terminate execution.
CPU scheduler:
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to execute, so that CPU time is utilized. This
selection is carried out by the short-term scheduler, also known as the CPU
scheduler. The scheduler selects one process from among the processes in
memory that are ready to execute and allocates the CPU to it. All the
processes in the ready queue are lined up waiting for a chance to run on the
CPU. The records in the queue are generally the process control blocks (PCBs)
of the processes.
CPU scheduling decisions may take place under the following circumstances:
I. When a process switches from the running state to the waiting state, for
example as the result of an I/O request or an invocation of wait() for the
termination of a child process.
II. When a process switches from the running state to the ready state, for
example when an interrupt occurs.
III. When a process switches from the waiting state to the ready state, for
example at the completion of I/O.
IV. When a process terminates.
Dispatcher:
The dispatcher is the module involved in CPU scheduling that gives control of
the CPU to the process selected by the short-term scheduler. This function
involves switching the context, switching to user mode, and jumping to the
proper location in the user program to restart that program. As it is invoked
during every process switch, it should be as fast as possible.
The time it takes for the dispatcher to stop one process and start another is
known as dispatch latency.
Scheduling Criteria:
Many criteria have been suggested for comparing CPU scheduling algorithms.
Which characteristics are used for comparison can make a difference in which
algorithm is judged to be best. The criteria include the following:
1. CPU Utilization:
It refers to keeping the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system it should range
from about 40 percent (lightly loaded system) to 90 percent (heavily loaded
system).
2. Throughput:
It is the measure of work done: the number of processes that are completed
per unit time. If the CPU is busy executing processes, then work is being
done. For long processes the rate may be one process per hour; for short
transactions it may be ten processes per second.
3. Turnaround Time:
The interval from the time of submission of a process to the time of its
completion is the turnaround time. It measures how long it takes to execute a
process. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Mathematically, Turnaround Time (TAT) = Completion Time (CT) – Arrival Time
(AT)
4. Waiting Time:
It is the sum of the periods spent waiting in the ready queue. A CPU
scheduling algorithm affects only the amount of time that a process spends
waiting in the ready queue.
Mathematically, Waiting Time = Turnaround Time – Burst Time
5. Response Time:
It is the measure of the time from the submission of a request until the
first response is produced. It is the time the process takes to start
responding, not the time it takes to output the complete response.
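The two formulas above can be applied mechanically once the completion times are known from a Gantt chart. The sketch below uses assumed example values (an FCFS run of three hypothetical processes), not figures from this handout:

```python
# Hypothetical example: compute turnaround time (TAT) and waiting time (WT)
# for three processes whose arrival (AT), burst (BT) and completion (CT)
# times are already known from a Gantt chart.  All values are assumed.
processes = [
    # (name, arrival, burst, completion)
    ("P1", 0, 5, 5),
    ("P2", 1, 3, 8),
    ("P3", 2, 2, 10),
]

for name, at, bt, ct in processes:
    tat = ct - at   # Turnaround Time = Completion Time - Arrival Time
    wt = tat - bt   # Waiting Time = Turnaround Time - Burst Time
    print(f"{name}: TAT={tat}, WT={wt}")
```

For P1 this gives TAT = 5 − 0 = 5 and WT = 5 − 5 = 0, i.e. it ran immediately on arrival.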
Scheduling Algorithm:
CPU scheduling deals with the problem of deciding which of the process in the
ready queue is to be allocated to the CPU. Some of the CPU scheduling algorithm
are:
1. First Come First Serve Scheduling (FCFS): (Non-preemptive)
In this scheme, the process that requests the CPU first is allocated the CPU
first. The implementation of this scheme is managed with a FIFO queue. When a
process enters the ready queue, its PCB is linked onto the tail of the queue,
and when the CPU is free it is allocated to the process at the head of the
queue. The running process is then removed from the queue.
Its disadvantage is that the average waiting time is often quite long. For
example, suppose that processes arrive in the order P1, P2, P3 and are served
in FCFS order.
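As a sketch, FCFS can be simulated directly with a FIFO queue. The burst values below (P1 = 24, P2 = 3, P3 = 3, all arriving at time 0) are assumed for illustration only, since the original example table is not reproduced here; they show how a long first process inflates everyone's waiting time:

```python
# Minimal FCFS simulation (a sketch; process values are assumed).
# Processes are served strictly in arrival order from a FIFO queue.
from collections import deque

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time)."""
    queue = deque(sorted(processes, key=lambda p: p[1]))  # FIFO by arrival
    time = 0
    schedule = []
    while queue:
        name, arrival, burst = queue.popleft()
        time = max(time, arrival)      # CPU may sit idle until the arrival
        waiting = time - arrival       # time spent in the ready queue
        schedule.append((name, time, time + burst, waiting))
        time += burst
    return schedule

# Assumed example: P1, P2, P3 arrive at time 0 with bursts 24, 3, 3
for name, start, end, wait in fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]):
    print(f"{name}: runs {start}-{end}, waited {wait}")
```

Here the average waiting time is (0 + 24 + 27) / 3 = 17; serving the short jobs first would have reduced it sharply, which is exactly the weakness described above.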
2. Shortest Job First Scheduling (SJF):
This algorithm associates with each process the length of its next CPU burst.
When the CPU is available, it is assigned to the process that has the
smallest next CPU burst. If the next CPU bursts of two processes are the
same, FCFS scheduling is used to break the tie. Scheduling depends on the
length of the next CPU burst of a process rather than on its total length.
Non-preemptive Scheme: Example:
Process Burst Time
P1 7
P2 3
P3 4
Preemptive Scheme:
The choice arises when a new process arrives at the ready queue while a
previous process is still executing and the CPU burst of the newly arrived
process is less than the remaining burst of the executing process. This
scheme preempts the currently executing process and is sometimes called
shortest-remaining-time-first (SRTF) scheduling.
SJF scheduling is provably optimal, as it gives the minimum average waiting
time. Although it is optimal, the real difficulty with this scheme is knowing
the length of the next CPU request.
Process: At first, the process with arrival time 0 executes; after it
finishes, the process with the smallest burst is executed. In this case P1 is
executed first, as it has arrival time 0, and after that P3 is executed, as
it has a smaller burst than P2 and P4. If there is a tie in burst time, the
process with the earlier arrival time is executed first. In this case P2 and
P4 both have a burst of 4, but P2 arrived earlier, so P2 is executed first.
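The walkthrough above is consistent with the process set P1 (arrival 0, burst 7), P2 (2, 4), P3 (4, 1), P4 (5, 4); these exact values are an inference, since the original table is not reproduced here. Non-preemptive SJF can then be sketched as:

```python
# Non-preemptive SJF sketch.  The process set is inferred from the
# walkthrough above; each tuple is (name, arrival, burst).
def sjf(processes):
    remaining = sorted(processes, key=lambda p: p[1])  # by arrival time
    time = 0
    order = []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                    # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # pick the smallest burst; ties broken by earlier arrival (FCFS)
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        remaining.remove((name, arrival, burst))
        order.append((name, time, time + burst))
        time += burst
    return order

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

The resulting order is P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16), matching the walkthrough: P3's burst of 1 wins after P1 finishes, and the P2/P4 tie at burst 4 goes to the earlier arrival.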
Process:
First, the process with arrival time 0 (or the smallest arrival time) is
executed, and it runs until the next arrival. The elapsed time is then
subtracted from the executing process's burst (in this case P1's remaining
burst = 7 – 2 = 5). P1's remaining burst is compared with P2's burst (here
5 (P1) > 4 (P2)). The smallest burst is 4, so P2 is executed up to the next
arrival, and P2's remaining burst is now 4 – 2 = 2. When P3 arrives, its
burst is compared with those of P1 and P2. In this case P3 has a smaller
burst than P1 and P2, so it is executed up to the next arrival. Now P1, P2,
P3 and P4 have all arrived; the comparison is made and the process with the
smallest remaining burst is executed.
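The same inferred process set can be run through a preemptive (shortest-remaining-time-first) sketch that advances one time unit at a time, re-checking the remaining bursts at every tick:

```python
# Preemptive SJF (SRTF) sketch, one time unit per step.  Process values are
# inferred from the walkthrough: (name, arrival, burst).
def srtf(processes):
    remaining = {name: burst for name, arrival, burst in processes}
    arrived = {name: arrival for name, arrival, burst in processes}
    time, timeline = 0, []
    while any(r > 0 for r in remaining.values()):
        ready = [n for n in remaining
                 if arrived[n] <= time and remaining[n] > 0]
        if not ready:                    # nothing has arrived yet
            time += 1
            continue
        # run the ready process with the smallest remaining burst for 1 unit
        current = min(ready, key=lambda n: (remaining[n], arrived[n]))
        remaining[current] -= 1
        timeline.append(current)
        time += 1
    return timeline

print(srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

The timeline reproduces the walkthrough: P1 runs 0–2, is preempted by P2 (2–4), P3 runs 4–5, P2 finishes 5–7, P4 runs 7–11, and P1's remaining 5 units run 11–16.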
Calculation is shown in table below:
NOTE: Refer to your class notes for further examples of round robin
scheduling.
• Time-slice:
Here each queue gets a certain portion of CPU time, which it can
then schedule among its various processes. For example, the
foreground queue can be given 80 percent of the CPU time for RR
scheduling among its processes, while the background queue
receives 20 percent of the CPU time to give to its processes.
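Round-robin scheduling, as used within such a foreground queue (and referenced in the note above), can be sketched as follows; the quantum of 4 and the burst values are assumed for illustration:

```python
# Round-robin sketch with an assumed time quantum of 4 (values hypothetical).
from collections import deque

def round_robin(processes, quantum=4):
    """processes: list of (name, burst); all assumed to arrive at time 0."""
    queue = deque(processes)
    time, timeline = 0, []
    while queue:
        name, burst = queue.popleft()
        run = min(quantum, burst)        # run for one quantum, or less
        timeline.append((name, time, time + run))
        time += run
        if burst > run:                  # unfinished: back of the queue
            queue.append((name, burst - run))
    return timeline

# Assumed example: P1 = 24, P2 = 3, P3 = 3, quantum = 4
print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)]))
```

Each process gets at most one quantum before the CPU moves on, so the short jobs P2 and P3 finish by time 10 even though P1 arrived first.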
Here, the priority of P1 is higher than that of P2 because P1's period is
less than P2's period. First, P1 executes; it has to finish its burst of 25
within its period (interval) of 50. After every interval of 50, P1 executes
again, and so on. So P1 executes and finishes its burst of 25 before the
period of 50. Now, P2 starts at 25 (where P1 finished) and executes until the
period of 50. From 25 to 50, P2 has completed only 25 of its burst (50 – 25),
i.e. 15 of its burst is still remaining. P2 is stopped at 50 because P1
arrives again at 50, and P1 has higher priority than P2. So P2 is preempted
and P1 is executed. P1 has to complete its burst of 25 within the period of
100; it completes at 75. Now the remaining 15 of P2's burst is executed. The
process continues in this way.
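The walkthrough suggests the task set P1 (burst 25, period 50) and P2 (burst 40, period 100); these values are inferred, as the original table is not reproduced here. Rate-monotonic scheduling (fixed priority, shorter period wins) can then be sketched one time unit at a time:

```python
# Rate-monotonic sketch: fixed priorities, where a shorter period means a
# higher priority.  Task values are inferred from the walkthrough above.
def rate_monotonic(tasks, horizon):
    """tasks: list of (name, burst, period)."""
    remaining = {name: 0 for name, burst, period in tasks}
    timeline = []
    for t in range(horizon):
        for name, burst, period in tasks:
            if t % period == 0:          # a new job is released each period
                remaining[name] = burst
        ready = [(period, name) for name, burst, period in tasks
                 if remaining[name] > 0]
        if ready:
            _, current = min(ready)      # smallest period = highest priority
            remaining[current] -= 1
            timeline.append(current)
        else:
            timeline.append(None)        # CPU idle
    return timeline

tl = rate_monotonic([("P1", 25, 50), ("P2", 40, 100)], 100)
```

The simulation reproduces the walkthrough: P1 runs 0–25, P2 runs 25–50, P1's new job preempts P2 and runs 50–75, and P2's remaining 15 units run 75–90, with the CPU idle from 90 to 100.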
Solution:
Here, the process with the smaller deadline has the higher priority, i.e.
P2 > P1 > P3. P2 has to complete its 2 units of burst every interval of 5,
within its deadline of 4. Similarly, P1 has to complete its 3 units of burst
every interval of 20, within its deadline of 7, and P3 has to complete its
2 units of burst every interval of 10, within its deadline of 9.
First, P2 executes and completes its burst of 2 units in less than its
deadline, thus meeting the deadline. After this, P1 executes, as it has a
smaller deadline than P3. So P1 completes its 3 units of burst within its
deadline. Now, P2 arrives again at period 5, so it completes its 2 units of
burst within its deadline, i.e. 9. Then P3 executes and completes its burst
of 2 units within its deadline (9), and so on.
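Using the task values stated above (P2: burst 2, period 5, deadline 4; P1: burst 3, period 20, deadline 7; P3: burst 2, period 10, deadline 9), earliest-deadline-first can be sketched by always running the released job with the earliest absolute deadline; the tie-breaking by name is an assumption:

```python
# EDF sketch: at every time unit, run the released job with the earliest
# absolute deadline.  Each task is (name, burst, period, relative_deadline),
# taken from the solution above; ties are broken by name (an assumption).
def edf(tasks, horizon):
    jobs = {}                            # name -> [remaining, abs_deadline]
    timeline = []
    for t in range(horizon):
        for name, burst, period, deadline in tasks:
            if t % period == 0:          # release a new job of this task
                jobs[name] = [burst, t + deadline]
        ready = [(d, name) for name, (rem, d) in jobs.items() if rem > 0]
        if ready:
            _, current = min(ready)      # earliest absolute deadline first
            jobs[current][0] -= 1
            timeline.append(current)
        else:
            timeline.append(None)        # CPU idle
    return timeline

tasks = [("P1", 3, 20, 7), ("P2", 2, 5, 4), ("P3", 2, 10, 9)]
tl = edf(tasks, 10)
```

Over the first 10 units this gives P2 (0–2, deadline 4), P1 (2–5, deadline 7), P2's second job (5–7, deadline 9), then P3 (7–9, deadline 9), with the CPU idle at time 9, matching the walkthrough.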
NOTE:
o Examples of non-preemptive and preemptive SJF, Priority Scheduling,
Round Robin, Rate Monotonic and Earliest Deadline First scheduling were
done in class, so refer to your class notes.
o Saving the process context and the process scheduler topic are covered in
lesson 2, so refer to the notes of lesson 2.