Module 3 & 4


MODULE 3: PROCESSES

Process
- unit of work in most systems.
- a program in execution; process execution must progress in sequential fashion.
- has a set of associated resources.
- Ex: Compiler, Word processor, Email program, Sending output to a printer

Process includes:
- program counter (specifies the next instruction to be executed)
- stack (contains temporary data, e.g. method parameters)
- data section (contains global variables)
- text section (program code)

PCB
- Process Control Block
- Representation of a process in the operating system.
- Information associated with each process:
  1. Process state
  2. Program counter
  3. CPU registers
  4. CPU scheduling information
  5. Memory-management information
  6. Accounting information
  7. I/O status information

Program counter - contains the memory address of the next instruction to be executed for this process.
CPU registers - includes accumulators, index registers, stack pointers, general purpose registers.
CPU scheduling information - includes process priority, pointers to scheduling queues, scheduling parameters.
Memory-management information - includes value of the base and limit registers, page tables, segment tables.
Accounting information - includes amount of CPU and real time used, time limits, account numbers, job or process number, etc.
I/O status information - includes list of I/O devices allocated to this process, list of open files, etc.
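As a rough illustration of how the PCB information above fits together, here is a minimal C sketch; every field name and type is hypothetical and not taken from any particular operating system.

```c
/* Hypothetical PCB layout -- names and types are illustrative only. */
enum proc_state { STATE_NEW, STATE_READY, STATE_RUNNING, STATE_WAITING, STATE_TERMINATED };

struct pcb {
    int             pid;               /* process number (accounting info)   */
    enum proc_state state;             /* 1. process state                   */
    unsigned long   program_counter;   /* 2. address of next instruction     */
    unsigned long   registers[16];     /* 3. saved CPU registers             */
    int             priority;          /* 4. CPU scheduling information      */
    void           *page_table;        /* 5. memory-management information   */
    unsigned long   cpu_time_used;     /* 6. accounting information          */
    int             open_files[16];    /* 7. I/O status information          */
    struct pcb     *next;              /* link used by the scheduling queues */
};
```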
PROCESS STATES
As a process is executed, it changes state.
- new: The process is being created.
- ready: The process is waiting to be assigned to a processor.
- running: Instructions are being executed.
- waiting: The process is waiting for some event to occur.
- terminated: The process has finished execution.

NEW - On creation, the process enters the new state.
4 principal events for process creation:
1. System initialization
2. Execution of a process-creation system call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job
Job queue - set of all processes in the system. As a process is created (enters the system), it is put on the job queue.
• If there is not enough room in main memory for all processes in the job queue, job scheduling is done.
• Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.

READY - The process enters the ready state (when loaded in memory) and is waiting to be assigned to a processor.
Ready queue - As the process is loaded in memory, it is kept in the ready queue. The ready queue contains the set of all processes residing in main memory, ready and waiting to execute.
Short-term scheduler (or CPU scheduler) – selects which process should be executed (dispatched) next and allocates the CPU.
Dispatcher – a module that gives control of the CPU to the process selected by the short-term scheduler.

RUNNING - The process is executed in the CPU and enters the running state.
Events that may occur while a process is being executed:
1. Process issues an I/O request and is then placed in an I/O queue.
2. The time slice allotted to the process on the CPU has expired.
3. Process creates a new subprocess (forks a child) and waits for its termination.
4. Process is forcibly removed from the CPU because of an interrupt and is then placed back in the ready queue.
Queuing Diagram of Process Scheduling - Process switches from the waiting state to the ready state.

TERMINATED - Process terminates due to:
1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary)

Context Switch - The system decides that the process has run long enough, and it is time to let another process have some CPU time.
• When the CPU switches to another process, the system must save the state of the old process (CPU registers, process state, and memory-management information) and load the saved state for the new process.
• Context-switch time is overhead; the system does no useful work while switching.
• Context-switch time is highly dependent on hardware support.
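The save/restore step described above can be sketched roughly as follows; save_registers and load_registers are hypothetical helpers (real kernels do this in architecture-specific assembly), and struct pcb is the illustrative one sketched earlier.

```c
/* Hypothetical sketch of a context switch between two processes. */
void context_switch(struct pcb *prev, struct pcb *next)
{
    save_registers(prev->registers);   /* save the state of the old process                    */
    prev->state = STATE_READY;         /* or STATE_WAITING, depending on why it left the CPU   */
    load_registers(next->registers);   /* load the saved state of the new process              */
    next->state = STATE_RUNNING;
}
```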
INPUT QUEUE - collection of processes on the disk that are waiting to be brought into memory to run the program.
A program must be brought into memory and placed within a process for it to be run.
The CPU scheduler selects which process in the ready queue will be executed (in the CPU).
As the process is executed, it accesses instructions and data from memory. Eventually, when the process terminates, its memory space is declared available.

Process Scheduling
Process Scheduling Queues
Job queue – set of all processes in the system.
Ready queue – set of all processes residing in main memory, ready and waiting to execute.
Device queues – set of processes waiting for an I/O device.

Schedulers
Long-term scheduler (or job scheduler)
– selects which processes should be brought into the ready queue.
– is invoked very infrequently (seconds, minutes), so it may be slow.
– controls the degree of multiprogramming.
Short-term scheduler (or CPU scheduler)
– selects which process should be executed next and allocates the CPU.
– is invoked very frequently (milliseconds), so it must be fast.
Medium-term scheduler
– an intermediate-level queue used in systems with virtual memory or timesharing.
– removes processes from memory to be reintroduced later (swapping).

I/O – CPU Bound Processes
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O than computations; many short CPU bursts.
– CPU-bound process – spends more time doing computations; few very long CPU bursts.
• The long-term scheduler should select a good mix of I/O-bound and CPU-bound processes.

Summary: Scheduler Activities
• Swapping – at some later time, the process can be reintroduced and continued where it left off
• Context Switch – switching the CPU from one process to another
• Dispatcher – a module that gives control of the CPU to the process selected by the short-term scheduler

Suspended Processes
▪ The processor is faster than I/O, so all processes could be waiting for I/O
▪ Swap these processes to disk to free up more memory
▪ The waiting or blocked state becomes the suspended state when swapped to disk
▪ Two new states
- Blocked, suspend
- Ready, suspend

Additional States
• Ready – process in main memory and available for execution
• Blocked/Waiting – process in main memory and awaiting an event
• Blocked/Suspend – process in secondary memory and awaiting an event
• Ready/Suspend – process in secondary memory and available for execution

Operations on Processes
Process Creation
• Parent process creates children processes, which, in turn, create other processes, forming a tree of processes.
• Resource sharing
- Parent and children share all resources.
- Children share a subset of the parent's resources.
• Execution
- Parent and children execute concurrently.
- Parent waits until children terminate.
• Address space
- Child is a duplicate of the parent.
- Child has a program loaded into it (treated as a new process).
• UNIX examples
- fork system call creates a new process.
- exec system call is used after a fork to replace the process' memory space with a new program.
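A minimal, self-contained C example of the UNIX calls named above (fork, exec, wait); the program run by the child (ls -l) is just an arbitrary choice for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                      /* create a child process */
    if (pid < 0) {
        perror("fork failed");
        return 1;
    } else if (pid == 0) {
        /* child: replace its memory space with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("exec failed");               /* reached only if exec fails */
        exit(1);
    } else {
        wait(NULL);                          /* parent waits until the child terminates */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```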
Process Termination
• Process executes its last statement and asks the operating system to delete it (exit).
- Output data from child to parent (via wait).
- Process' resources are deallocated by the operating system.
• Parent may terminate execution of children processes (abort) when:
- Child has exceeded allocated resources.
- Task assigned to child is no longer required.
- Parent is exiting.
• Operating system does not allow a child to continue if its parent terminates.
• Cascading termination.

Definition of terms
Address space – list of memory locations from some minimum (usually 0) to some maximum, which the process can read and write. It contains the executable program, the program's data, and its stack.
System call – interface between the operating system and a process. It is a function called by an application to invoke a kernel service.
Interrupt – hardware mechanism that enables a device to notify the CPU.
MODULE 4: CPU SCHEDULING

Basic Concepts
As a process is loaded in memory, it is kept in the ready queue.
The ready queue contains the set of all processes residing in main memory, ready and waiting to execute.
Short-term scheduler (or CPU scheduler) – selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.
Dispatcher – a module that gives control of the CPU to the process selected by the short-term scheduler.

CPU Scheduling
– It is the basis of multiprogrammed operating systems.
– It deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

CPU–I/O Burst Cycle
- Process execution consists of a cycle of CPU execution and I/O wait.

Types of Processes
• I/O-Bound Process - spends more time doing I/O than computations; many short CPU bursts.
• CPU-Bound Process - spends more time doing computations; few very long CPU bursts.

CPU Scheduler
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
• Scheduling under 1 and 4 is non-preemptive (there is no choice).
• In 2 and 3, scheduling is preemptive (there is a choice).

Scheduling Schemes
• Non-preemptive
– Once a process is in the running state, it will continue until it terminates or blocks itself for I/O.
• Preemptive
– The currently running process may be interrupted and moved to the ready state by the operating system.
– Allows for better service since no single process can monopolize the processor for very long.

The Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that program
• Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria
Scheduling Performance Criteria
1. CPU Utilization
- percentage or fraction of the time the CPU is doing something.
- Objective: Maximize CPU utilization
2. Throughput
- number of processes that are completed per time unit.
- Objective: Maximize throughput
3. Turnaround Time
- time interval from the submission of a process to its completion.
- Turnaround Time = Finish Time – Arrival Time
- Objective: Minimize turnaround time
4. Waiting Time
- time spent by the job waiting in the ready queue.
- Waiting Time = Turnaround Time – Burst Time
- Objective: Minimize waiting time
5. Response Time
- the amount of time it takes to start the response.
- Objective: Minimize response time
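As a quick illustration with made-up numbers: suppose a process arrives at time 0, first gets the CPU at time 2, needs a total of 5 ms of CPU time, and completes at time 12. Then turnaround time = 12 - 0 = 12 ms, waiting time = 12 - 5 = 7 ms, and response time = 2 - 0 = 2 ms.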
Scheduling Algorithm Goals
- Maximize CPU utilization and throughput; minimize turnaround time, waiting time, and response time.

Scheduling Algorithms
• First-Come, First-Served Scheduling
• Shortest-Job-First/Shortest-Job-Next
– Non-preemptive Shortest-Job-First
– Preemptive SJF/Shortest-Remaining-Time-First
• Priority Scheduling
– Non-preemptive Priority
– Preemptive Priority
• Highest Response-Ratio Next Scheduling
• Round-Robin Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling

First-Come, First-Served (FCFS)
- The process that requested the CPU first is allocated the CPU first (by arrival time/order loaded in memory).
- Managed using a FIFO queue.
- Average waiting time is generally not minimal.
- A short process may have to wait a very long time before it can execute.
- Favors CPU-bound processes; I/O-bound processes must wait until CPU-bound processes complete.
- Suffers from the convoy effect, where short processes wait for the long process to get off the CPU.
- FCFS is non-preemptive; that is, a process which gets the CPU keeps it until it releases the CPU (by terminating or requesting I/O).

Shortest-Job-First (SJF)
- Should be termed Shortest-Next-CPU-Burst, because scheduling is done using the next CPU burst rather than the total length of the process.
- Makes use of the length of the next CPU burst.
- Short processes jump ahead of longer processes.
- Advantage: SJF is optimal – it gives the minimum average waiting time for a given set of processes.
- Disadvantage: Longer processes may suffer from starvation.
Two schemes:
– non-preemptive
• When the CPU is available, it is assigned to the job with the smallest next CPU burst.
• Once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
• In case of a tie, FCFS is employed.
– preemptive
• Preempt the currently executing process if its remaining burst time is greater than the burst time of the new process.
• This scheme is known as Shortest-Remaining-Time-First (SRTF).
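A small worked example with hypothetical burst times shows both the FCFS drawback and why SJF is optimal. Suppose P1, P2, P3 arrive at time 0 with CPU bursts of 24, 3, and 3 ms. Under FCFS in arrival order, the waiting times are 0, 24, and 27 ms, so the average waiting time is (0 + 24 + 27) / 3 = 17 ms. Under SJF the order becomes P2, P3, P1, giving waiting times 0, 3, and 6 ms and an average of (0 + 3 + 6) / 3 = 3 ms.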
Priority Scheduling
• A priority number (integer) is associated with each process.
• The CPU is allocated to the process with the highest priority.
• Rule: smallest integer has highest priority.
• Problem: Starvation – low-priority processes may never execute.
• Solution: Aging – as time progresses, increase the priority of the process.
• SJF is a priority scheduling where the priority is the predicted next CPU burst.
• Non-Preemptive Priority (NPP)
– CPU is allocated to the job with the highest priority.
– If priorities are equal, use FCFS.
• Preemptive Priority (PP)
– The executing job is preempted if its priority is lower than the priority of the new job.

Round Robin (RR)
• RR; also called time slicing.
• It is designed especially for time-sharing systems.
• It is like FCFS, but preemption is added to switch between processes.
– Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
– Preemption is done using a clock interrupt generated at periodic intervals (every quantum).
– If the interrupt occurs, the running process is preempted and added at the end of the ready queue.
• No job is allocated the CPU for more than one time quantum at a time.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once.
• No process waits more than (n-1)q time units.
• For example:
– If there are 5 processes
– Time quantum = 20 milliseconds
– Each process will get up to 20 milliseconds every 100 milliseconds (20 * 5)
– No process waits more than 80 milliseconds ((5-1) * 20).
• Performance depends heavily on the size of the time quantum:
– q large: RR behaves like FIFO (FCFS).
– q small: many context switches; q must be large with respect to the context-switch time, otherwise the overhead is too high.
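Using the same hypothetical bursts (P1 = 24 ms, P2 = 3 ms, P3 = 3 ms, all arriving at time 0) with a quantum of 4 ms: P1 runs from 0-4, P2 from 4-7 (it finishes within its quantum), P3 from 7-10, and P1 then runs alone until it completes at 30. The waiting times are 6, 4, and 7 ms, for an average of 17 / 3 ≈ 5.67 ms, which is better than FCFS for the short jobs at the cost of extra context switches.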
- Turnaround time varies with the time quantum.
- The average turnaround time of a set of processes does not necessarily improve as the time-quantum size increases.
- In general, the average turnaround time can be improved if most processes finish their next CPU burst within a single time quantum.

Highest-Response-Ratio-Next (HRRN)
• HRRN is non-preemptive.
• The process with the highest response ratio is executed next.
• The response ratio is computed per process as:
  Response ratio = (time spent waiting + expected service time) / expected service time
  where the expected service time is the next CPU burst.
• If response ratios are equal, FCFS is used.
• The minimum value of the response ratio is 1, which happens when the process has just entered the ready queue.
• Shorter processes are favored, but aging of longer processes increases their ratio.

Multilevel Queue
• The ready queue is partitioned into separate queues:
– foreground (interactive)
– background (batch)
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues.
– Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g., 80% to foreground in RR, 20% to background in FCFS.

Multi-Level Queue (MLQ)
• The ready queue is partitioned into separate queues.
• Jobs are permanently assigned to one queue (based on some property such as process type, memory size, process priority, etc.):
– foreground (interactive)
– background (batch)
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
• Scheduling between queues is often a preemptive priority scheduling.
Scheduling between the queues
1. Fixed-priority scheduling (i.e., serve all from foreground, then from background).
- Each queue has absolute priority over lower-priority queues.
- Preemptive scheduling between queues.
- Possibility of starvation.
2. Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes:
– 80% to foreground in RR
– 20% to background in FCFS
Multilevel Feedback Queue (MLFQ)
• A process can move between the various queues; aging can be implemented this way.
• A multilevel-feedback-queue scheduler is defined by the following parameters:
– number of queues
– scheduling algorithm for each queue
– method used to determine when to upgrade a process to a higher-priority queue
– method used to determine when to demote a process to a lower-priority queue
– method used to determine which queue a process will enter when that process needs service

• Three queues:
– Q0 – time quantum 8 milliseconds
– Q1 – time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
– At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still is not complete, it is preempted and moved to queue Q2.
– Preemptive scheduling between queues: only when higher-priority queues are empty will lower-priority queues be processed.
– If an RR process is preempted before its allotted quantum has expired, the preempted process remains at the head of its queue and its quantum allotment is reinitialized. (Note: this varies from case to case.)
• Penalizes processes that execute for long periods of time.
• Problem: starvation of long processes if I/O-bound and interactive processes keep arriving.
• Solution: aging (a process waiting too long in a low-priority queue can be moved to a higher-priority queue).
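The demotion rule in the three-queue example can be sketched as follows; the level field, quantum table, and queue helpers are hypothetical names introduced only for illustration.

```c
/* Hypothetical sketch of MLFQ demotion on quantum expiry:
   Q0 (8 ms) -> Q1 (16 ms) -> Q2 (FCFS). */
#define NUM_LEVELS 3
static const int quantum_ms[NUM_LEVELS] = { 8, 16, 0 };   /* 0 = FCFS, no quantum */

void on_quantum_expired(struct pcb *p, struct queue levels[NUM_LEVELS])
{
    if (p->level < NUM_LEVELS - 1)
        p->level++;                        /* demote to the next lower-priority queue */
    enqueue(&levels[p->level], p);         /* put it back on the appropriate ready queue */
}
```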
FROM BOOK

PROCESSES
- is the unit of work in a modern time-sharing system.
- a program in execution
- more than the program code, which is sometimes known as the text section.
- INCLUDES:
  Program Counter – next instruction to be executed.
  Process Stack - contains temporary data.
  Data Section - contains global variables.
  Heap - memory that is dynamically allocated during process run time.

Process State
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
New – The process is being created.
Running – Instructions are being executed.
Waiting - The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
Ready - The process is waiting to be assigned to a processor.
Terminated – The process has finished execution.

PCB - Each process is represented in the operating system by a process control block (PCB), also called a task control block.

Process scheduler - selects an available process (possibly from a set of several available processes) for program execution on the CPU.

Scheduling Queues
As processes enter the system, they are put into a job queue, which consists of all processes in the system.
The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
A queue is generally stored as a linked list.

Schedulers
A process migrates among the various scheduling queues throughout its lifetime. The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried out by the appropriate scheduler.

CPU SCHEDULING
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.
In a single-processor system, only one process can run at a time.
The objective of multiprogramming is to have some process always running, to maximize CPU utilization.
Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating-system design.

CPU–I/O Burst Cycle
The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.

CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler, or CPU scheduler.

Preemptive Scheduling - Preemptive scheduling can result in race conditions when data are shared among several processes.
- It affects the design of the operating-system kernel.

Non-preemptive Scheduling - Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.

Dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program

Scheduling Criteria
- CPU utilization
- Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.
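For example (made-up numbers): if 10 processes complete in 5 seconds, the throughput is 10 / 5 = 2 processes per second.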
