Operating System: Aims and Objectives
and ubiquity of DOS in the world of the PC-compatible platform, DOS was often considered to
be the native operating system of the PC compatible platform.
There are alternative versions of DOS, such as FreeDOS and OpenDOS. FreeDOS development began in 1994, in response to Microsoft Windows 95, which, unlike Windows 3.11, no longer relied on a separately sold DOS.
DOS Working Environment
This subsection will give you a general understanding of the command prompt, directories, working with files, file naming conventions, viewing, editing, executing, stopping execution, printing, backing up files, and rebooting.
Command Prompt
If we take a look at the computer screen, we are likely to see a blank screen with the exception of
a few lines, at least one of which begins with a capital letter followed by a colon and a backslash
and ends with a greater-than symbol (>):
C:\>
Any line in DOS that begins like this is a command prompt. This prompt is the main way users know where they are in DOS. Here is how:
The C: tells the user that he/she is working within the filespace (disk storage) on the hard drive given the designation C. C is usually reserved for the internal hard disk of a PC.
The backslash (\) represents a level in the hierarchy of the file structure. There is always at
least one because it represents the root directory, the very first level of your hard disk.
Folder icons represent a directory, and the
document icons represent actual files in those
directories.
Directory
If you need more help in orienting yourself, it sometimes helps to take a look at the files and
directories available where you are by using the DIR command.
C:\>dir
This will give you a listing of all the files and directories contained in the current directory in
addition to some information about the directory itself. You will see the word volume in this
information. Volume is simply another word for a disk that the computer has access to. Your hard
disk is a volume, your floppy disk is a volume, a server disk (hard disk served over a network) is
a volume. Now you know fancy words for all the parts of the
format DOS uses to represent a file.
Volume: C:
Pathname: \DEMO\DOS&WIN\SAMPLES\
Filename: SAMPLE
Here are some helpful extensions of the DIR command:
C:\>dir | more
(will display the directory one screen at a time with a -- More -- prompt; use Ctrl-C to escape)
C:\>dir /w
(wide: will display the directory in columns across the screen)
C:\>dir /a
(all: will display the directory including hidden files and directories)
Working with the files
Understanding how to manage your files on your disk is not the same as being able to use them.
In DOS (and most operating systems), there are only two kinds of files, binary files and text files
(ASCII). Text files are basic files that can be read or viewed very easily. Binary files, on the
other hand, are not easily viewed. As a matter of fact, most binary files are not to be viewed but
to be executed. When you try to view these binary files (such as with a text editor), your screen is
filled with garbage and you may even hear beeps.
While there are only two kinds of files, it is often difficult to know which kind a particular file is, for files can have any extension. Fortunately, there is a small set of extensions with standard meanings, such as .txt, .bat, and .dat for text files and .exe and .com for binary executables.
File naming conventions
Careful file naming can save time. Always choose names which provide a clue to the file's
contents. If you are working with a series of related files, use a number somewhere in the name
to indicate which version you have created. This applies only to the filename parameter; most of
the file extension parameters you will be using are predetermined or reserved by DOS for certain
types of file.
For example, data1.dat, employee.dat
Editing
You can view any text file using a text editor.
For example, to open a file named employee.txt in the work directory of the C drive:
C:\work> edit employee.txt
Executing
Binary files ending in .exe are usually "executed" by typing the filename as if it were a
command. The following command would execute the WordPerfect application which appears
on the disk directory as WP.EXE:
C:\APPS\WP51>wp
Binary files ending in .com often contain one or more commands for execution either through the
command prompt or through some program.
Stop Execution
If you wish to stop the computer in the midst of executing the current command, you may use the
key sequence Ctrl-Break. Ctrl-Break does not always work with non-DOS commands. Some
software packages block its action in certain situations, but it is worth trying before you re-boot.
Rebooting
In some cases, when all attempts to recover from a barrage of error messages fail, as a last resort you can reboot the computer. To do this, you press the Control, Alt, and Delete keys all at once (CTRL+ALT+DELETE). If you re-boot, you may lose some of your work and any data active in RAM which has not yet been saved to disk.
INTRODUCTION TO PROCESS
Aims and Objectives
In this lesson we will learn about the introduction of process, various states of the process, and
process transitions. The objectives of this lesson are to make the student aware of the basic
concepts process and its behaviors in an operating system.
INTRODUCTION TO PROCESS
A process is defined as a program in execution and is the unit of work in a modern time-sharing
system. Such a system consists of a collection of processes: Operating-system processes
executing system code and user processes executing user code. All these processes can
potentially execute concurrently, with the CPU (or CPUs) multiplexed among them. By
switching the CPU between processes, the operating system can make the computer more
productive.
A process is more than the program code; it also includes the program counter, the process stack, and the contents of the processor's registers. The purpose of the process stack is to store temporary data, such as subroutine parameters, return addresses, and temporary variables. All this information is stored in a Process Control Block (PCB). The process control block is a record containing many pieces of information associated with a process, including the process state, program counter, CPU registers, CPU scheduling information, memory management information, memory limits, accounting information, I/O status information, and the list of open files.
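The PCB fields listed above can be pictured as a simple record. The following is a minimal, hypothetical sketch in Python; the field names are illustrative and not taken from any real operating system.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Hypothetical Process Control Block holding the fields named in the text."""
    pid: int
    state: str = "new"             # process state
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    memory_limits: tuple = (0, 0)  # base and limit of the address space
    open_files: list = field(default_factory=list)  # list of open files
    accounting: dict = field(default_factory=dict)  # CPU time used, etc.

pcb = PCB(pid=1)
print(pcb.state)  # a newly created process starts in the "new" state
```

On a context switch, the OS would save the running process's registers and program counter into its PCB and restore them from the PCB of the process being dispatched.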
PROCESS STATES
As a process executes, it changes state; in general, the state of a process is determined by the current activity of the process. Each process may be in one of the following states:
New-------------: The process is being created.
Running--------: The process is being executed.
Waiting---------: The process is waiting for some event to occur.
Ready-----------: The process is waiting to be assigned to a processor.
Terminated-----: The process has finished execution.
The important thing is that only one process can be running on any processor at any time, but many processes may be in the ready and waiting states. The ready processes are loaded into a ready queue. A queue is a type of data structure; it is used here to store processes. The operating system creates a process and prepares it to be executed, then moves the process into the ready queue. When it is time to select a process to run, the operating system selects one of the jobs from the ready queue and moves it from the ready state to the running state. When the execution of a process has completed, the operating system terminates that process from the running state. Sometimes the operating system terminates a process for other reasons as well, including time limit exceeded, memory unavailable, access violation, protection error, I/O failure, data misuse, and so on.
When the time slot of the processor expires, or if the processor receives an interrupt signal, the operating system shifts the running process to the ready state. For example, let process P1 be executing on the processor, and in the meantime let process P2 generate an interrupt signal to the processor. The processor compares the priorities of P1 and P2. If P1's priority is higher than P2's, the processor continues with P1; otherwise the processor switches to P2 and P1 is moved to the ready state.
A process is put into the waiting state if the process needs an event to occur or an I/O task to be done. For example, if a process in the running state needs an I/O device, the process is moved to the blocked (or waiting) state.
A process in the blocked (waiting) state is moved to the ready state when the event for which it
has been waiting occurs.
The OS maintains a ready list and a blocked list to store references to processes not running. The
following figure shows the process state diagram
[Figure: process state diagram showing the states New, Ready, Running, Waiting, and Terminated]
The new and terminated states are worth a bit more explanation. The former refers to a process that has just been defined (e.g. because a user issued a command at a terminal), and for which the OS has not yet performed the necessary housekeeping chores. The latter refers to a process whose task is not running anymore, but whose context is still being saved (e.g. because a user may want to inspect it using a debugger program).
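The legal transitions among these states can be sketched as a small lookup table. The state and event names follow the text; everything else is illustrative.

```python
# A minimal sketch of the five-state model: a table of legal transitions
# keyed by (current state, event). States and events follow the text.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",       # quantum expired or interrupt
    ("running", "event wait"): "waiting",  # e.g. an I/O request
    ("waiting", "event occurred"): "ready",
    ("running", "exit"): "terminated",
}

def next_state(state, event):
    """Return the new state, or raise if the transition is illegal."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

# A process that is admitted, dispatched, blocks on I/O, and then has
# its event occur ends up back in the ready state.
s = "new"
for ev in ("admit", "dispatch", "event wait", "event occurred"):
    s = next_state(s, ev)
print(s)  # → ready
```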
A simple way to implement this process handling model in a multiprogramming OS would be to
maintain a queue (i.e. a first-in-first-out linear data structure) of processes, put at the end the
queue the current process when it must be paused, and run the first process in the queue.
However, it is easy to realize that this simple two-state model does not work. Given that the number of processes an OS can manage is limited by the available resources, and that I/O events occur on a much larger time scale than CPU events, it may well be the case that the first process in the queue must still wait for an I/O event before being able to restart; even worse, it may happen that most of the processes in the queue must wait for I/O. In this condition the scheduler would just waste its time shifting the queue in search of a runnable process.
A solution is to split the not-running process class according to two possible conditions: processes blocked waiting for an I/O event to occur, and processes paused but nonetheless ready to run when given a chance. A process would then be put from the running into the blocked state by an event-wait transition, would go from running to ready due to a timeout transition, and from blocked to ready due to an event-occurred transition. This model would work fine if the OS had a very large amount of main memory available and none of the processes hogged too much of it, since in that case there would always be a fair number of ready processes. However, because of the costs involved, this scenario is hardly possible, and again the likely result is a list of blocked processes all waiting for I/O.
PROCESS STATE TRANSITION
The various process states are displayed in a state diagram, with arrows indicating the possible transitions between states. Processes go through various process states, which determine how the process is handled by the operating system kernel. The specific implementations of these states vary between operating systems, and the names of the states are not standardized, but the general high-level functionality is the same.
When a process is created, it waits for the process scheduler (of the operating system) to load it into main memory from a secondary storage device (such as a hard disk or a CD-ROM) and set its status to "ready". Once the process has been assigned to a processor by the short-term scheduler, a context switch is performed (loading the process into the processor) and the process state is set to "running", where the processor executes its instructions. If a process needs to wait for a resource (such as user input, or a file to become available), it is moved into the "blocked" state until it no longer needs to wait; then it is moved back into the "ready" state. Once the process finishes execution, or is terminated by the operating system, it is moved to the "terminated" state, where it waits to be removed from main memory. The act of assigning a processor to the first process on the ready list is called dispatching. The OS may use an interval timer to allow a process to run for a specific time interval or quantum.
OPERATIONS ON PROCESS
There are various operations that can be performed on a process and are listed below.
a) create
b) destroy
c) suspend
d) resume
e) change priority
f) block
g) wake up
h) dispatch
i) enable
SUSPEND AND RESUME
The OS could then perform a suspend transition on blocked processes, swapping them out to disk and marking their state as suspended (after all, if they must wait for I/O, they might as well do it out of costly RAM), then load into main memory a previously suspended process, activate it into the ready state, and go on. However, swapping is an I/O operation in itself, and so at first sight things might seem to get even worse this way. Again, the solution is to carefully reconsider the reasons why processes are blocked and swapped, and to recognize that if a process is blocked because it is waiting for I/O and is then suspended, the I/O event might occur while it sits swapped out on the disk.
PROCESS STATE TRANSITIONS WITH SUSPEND AND RESUME
[Figure: process state diagram with suspend and resume, showing the states New, Ready, Running, Blocked, Suspended-Ready, Suspended-Blocked, and Exited]
We can thus classify suspended processes into two classes: ready-suspended for those suspended processes whose restarting condition has occurred, and blocked-suspended for those which must still wait. This classification allows the OS to pick from the pool of ready-suspended processes when it wants to revive the queue in main memory. Provisions must be made for passing processes between the new states. This means allowing for new transitions: activate and suspend between ready and ready-suspended, and between blocked-suspended and blocked as well, and event-occurred transitions from blocked to ready, and from blocked-suspended to ready-suspended as well.
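These extra transitions can likewise be sketched as table entries; the hyphenated state names are the ones used above, and the table is illustrative rather than a model of any real kernel.

```python
# Sketch of the transitions added by suspend/resume.
TRANSITIONS = {
    ("ready", "suspend"): "ready-suspended",
    ("ready-suspended", "activate"): "ready",
    ("blocked", "suspend"): "blocked-suspended",
    ("blocked-suspended", "activate"): "blocked",
    ("blocked", "event occurred"): "ready",
    ("blocked-suspended", "event occurred"): "ready-suspended",
}

# A blocked process that is swapped out, and whose I/O event then occurs
# while it sits on disk, ends up ready-suspended: a good candidate to revive.
s = TRANSITIONS[("blocked", "suspend")]
s = TRANSITIONS[(s, "event occurred")]
print(s)  # → ready-suspended
```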
SUSPENDING A PROCESS
Suspending a process indefinitely removes it from contention for time on a processor without destroying it.
Suspension is useful for detecting security threats and for software debugging purposes.
A suspension may be initiated by the process being suspended or by another process.
A suspended process must be resumed by another process.
INTERRUPT PROCESSING
AIMS AND OBJECTIVES
This lesson focuses on the following concepts
a) Introduction to interrupt processing
b) Interrupt classes
c) Context switching
The main objective of this lesson is to make the student aware of the interrupt processing, classes
and context switching.
INTRODUCTION TO INTERRUPT PROCESSING
An interrupt is an event that alters the sequence in which a processor executes instructions and it
is generated by the hardware of the computer system.
HANDLING INTERRUPTS
After receiving an interrupt, the processor completes execution of the current instruction, then pauses the current process.
The processor will then execute one of the kernel's interrupt-handling functions.
The interrupt handler determines how the system should respond.
Interrupt handlers are stored in an array of pointers called the interrupt vector.
After the interrupt handler completes, the interrupted process is resumed, or the next process is executed.
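The dispatch steps above can be pictured as a toy interrupt vector: an array of handler functions indexed by interrupt number. The handlers and their messages are invented for illustration.

```python
# Toy sketch of dispatch through an interrupt vector. Handler bodies
# are invented; a real kernel would manipulate hardware state here.
def io_handler(ctx):
    return f"I/O done for {ctx}"

def clock_handler(ctx):
    return f"quantum expired for {ctx}"

interrupt_vector = [io_handler, clock_handler]  # index = interrupt number

def handle_interrupt(number, context):
    saved = context                            # 1. save the interrupted context
    result = interrupt_vector[number](saved)   # 2. run the matching handler
    return result                              # 3. dispatcher then resumes a process

print(handle_interrupt(1, "P1"))  # → quantum expired for P1
```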
Ideally, interrupt handlers would be written in high-level languages, so that they are easy to understand and modify; but they must also be written in assembly language for efficiency reasons, and because they manipulate hardware registers and use special call/return sequences that cannot be coded in high-level languages.
To satisfy both goals, some operating systems employ the following two-level strategy. Interrupts branch to low-level interrupt dispatch routines that are written in assembly language. These handle low-level tasks such as saving registers and returning from the interrupt when it has been processed. However, they do little else; they call high-level interrupt routines to do the bulk of interrupt processing, passing them enough information to identify the interrupting device. The OS provides three interrupt dispatchers: one to handle input interrupts, one to handle output interrupts, and one to handle clock interrupts. Input and output dispatchers are separated for convenience, and a special clock dispatcher is provided for efficiency reasons.
NT provides an even more modular structure. A single routine, called the trap handler, handles
both traps (called exceptions by NT) and interrupts, saving and restoring registers, which are
common to both. If the asynchronous event was an interrupt, then it calls an interrupt handler.
The task of this routine is to raise the processor priority to that of the device interrupting (so that
a lower-level device cannot preempt), call either an internal kernel routine or an external routine
called an ISR, and then restore the processor priority. These two routines roughly correspond to
our high-level interrupt routine and the trap handler corresponds to our low-level routine. Thus,
NT trades off the efficiency of 2 levels for the reusability of 3 levels.
The device table entry for an input or output interrupt handler points at the high-level part of the
interrupt handler, which is device-specific, and not the low-level part which is shared by all
devices (except the clock).
A process's stack will hold the PS and PC values for only one interrupt, and there will never be more interrupts pending than the number of processes in the system.
INTERRUPT CLASSES
SVC (supervisor call) interrupts: - These enable a running process to request services from the operating system. They are initiated by a running process that executes the SVC instruction. An SVC is a user-generated request for a particular system service, such as performing input/output, obtaining more storage, or communicating with the system operator. Requiring a user to request services through an SVC helps keep the OS secure from user processes.
I/O interrupts: - These are initiated by the input/output hardware. They signal to the CPU that
the status of a channel or device has changed. For example, I/O interrupts are caused when an
I/O operation completes, when an I/O error occurs, or when a device is made ready.
External interrupts: - These are caused by various events, including the expiration of a quantum on an interrupting clock, the pressing of the console's interrupt key by the operator, or the receipt of a signal from another processor on a multiprocessor system.
Restart interrupts: - These occur when the operator presses the console's restart button, or when a restart SIGP (signal processor) instruction arrives from another processor on a multiprocessor system.
Program check interrupts: - These occur as a program's machine language instructions are executed. The problems include division by zero, arithmetic overflow, data in the wrong format, an attempt to execute an invalid operation code, an attempt to reference beyond the limits of real memory, an attempt by a user process to execute a privileged instruction, and attempts to reference a protected resource.
Machine check interrupts: - These are caused by malfunctioning hardware.
STORAGE MANAGEMENT
AIMS AND OBJECTIVES
The aim of this lesson is to learn the concept of Real storage management strategies.
The objectives of this lesson are to make the student aware of the following concepts
a) Contiguous storage allocation
b) Non Contiguous storage allocation
c) Fixed partition multiprogramming
d) Variable partition multiprogramming
INTRODUCTION
The organization and management of the main memory or primary memory or real memory of a
computer system has been one of the most important factors influencing operating systems
design. Regardless of what storage organization scheme we adopt for a particular system, we must decide what strategies to use to obtain optimal performance. Storage management strategies are of three types, as described below:
a) FETCH STRATEGIES are concerned with when to obtain the next piece of program or data for transfer to main storage from secondary storage.
a. Demand fetch, in which the next piece of program or data is brought into main storage when it is referenced by a running program.
b. Anticipatory fetch, in which we make guesses about future program control that can yield improved system performance.
b) PLACEMENT STRATEGIES are concerned with determining where in main storage to place an incoming program. Examples are first fit, best fit, and worst fit.
c) REPLACEMENT STRATEGIES are concerned with determining which piece of program or data to replace to make room for incoming programs.
CONTIGUOUS STORAGE ALLOCATION
In contiguous storage allocation each program has to occupy a single contiguous block of storage
locations. The simplest memory management scheme is the bare machine concept, where the
user is provided with the complete control over the entire memory space. The next simplest
scheme is to divide memory into two sections, one for the user and one for the resident monitor
of the operating system. Protection hardware can be provided in the form of a fence register to protect the monitor code and data from changes by the user program.
The resident monitor memory management scheme may seem of little use, since it appears to be inherently single-user. Early systems handled multiple users as follows: when the system switched to the next user, the current contents of user memory were written out to backing storage and the memory image of the next user was read in; this is called swapping.
NON-CONTIGUOUS STORAGE ALLOCATION
Memory is divided into a number of regions or partitions. Each region may hold one program to be executed. Thus the degree of multiprogramming is bounded by the number of regions. When a region is free, a program is selected from the job queue and loaded into it. Two major schemes are multiple contiguous fixed partition allocation and multiple contiguous variable partition allocation.
FIXED PARTITIONS MULTIPROGRAMMING
Fixed partitions multiprogramming also called as multiprogramming with fixed number of task
(MFT) or multiple contiguous fixed partition allocation. MFT has the following properties.
Several users simultaneously compete for system resources, switching between I/O jobs and calculation jobs for instance.
Relocation and transfers between partitions are allowed.
Protection is implemented by the use of several boundary registers: low and high boundary registers, or a base register with length.
Fragmentation occurs if user programs cannot completely fill a partition, which is wasteful.
All the jobs that enter the system are put into queues. Each partition has its own job queue, as shown in the following figure. The memory requirements of each job and the available regions are taken into account by the job scheduler in determining which jobs are allocated memory. When a job is allocated space, it is loaded into a region and then competes for the CPU. When a job terminates, it releases its memory region, which the job scheduler may then fill with another job from the job queue. Another way is to allow a single unified queue, where the decision of choosing a job reflects the choice between a best-fit-only and a best-available-fit job memory allocation policy.
However, wastage is still not completely eliminated. The OS keeps a table indicating which parts of memory are available and which are occupied. Initially all memory is available for user programs and is considered as one large block of available memory, a hole. When a job arrives and needs memory, we search for a hole large enough for this job. If we find one, we allocate only as much as is needed, keeping the rest available to satisfy future requests. The most common algorithms for allocating memory are first-fit and best-fit.
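The first-fit and best-fit searches over the hole table can be sketched as follows. The hole list and request size are invented for illustration, and a real allocator would also split the chosen hole and coalesce freed neighbors.

```python
# Each hole is a (start, size) pair; both functions return the index of
# the chosen hole, or None if no hole is large enough.
def first_fit(holes, request):
    for i, (start, size) in enumerate(holes):
        if size >= request:
            return i          # first hole big enough wins
    return None

def best_fit(holes, request):
    candidates = [(size, i) for i, (start, size) in enumerate(holes)
                  if size >= request]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

holes = [(0, 100), (200, 40), (300, 60)]
print(first_fit(holes, 50))  # → 0 (the 100-unit hole is first)
print(best_fit(holes, 50))   # → 2 (the 60-unit hole fits most tightly)
```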
VIRTUAL STORAGE
AIMS AND OBJECTIVES
The aim of this lesson is to learn the concept of virtual storage management strategies. The
objectives of this lesson are to make the student aware of the following concepts
virtual storage management strategies
page replacement strategies
working sets
demand paging
page size
INTRODUCTION
Virtual memory is a technique which allows the execution of processes that may not be completely in memory. The main advantage of this scheme is that user programs can be larger than physical memory. The ability to execute a program which is only partially in memory has many benefits, including:
(a) users can write programs for a very large virtual address space,
(b) more users can be run at the same time, with a corresponding increase in CPU utilization and throughput, but no increase in response time or turnaround time,
(c) less I/O would be needed to load or swap each user into memory, so each user would run faster.
For the reference string 1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7 with 3 frames, the frame contents at each page fault are as follows:
Frame 1: 1 1 1 1 1 1
Frame 2:   5 5 5 5 5
Frame 3:     6 7 6 7
You can see that optimal replacement creates six page faults.
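A short simulation, assuming the same reference string and frame count as the LRU example that follows, reproduces the six faults.

```python
# Optimal (Belady) replacement: evict the page whose next use lies
# farthest in the future (or that is never used again).
def optimal_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < nframes:
            frames.append(page)           # free frame available
        else:
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7]
print(optimal_faults(refs, 3))  # → 6
```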
LEAST RECENTLY USED
In most cases, predicting future page references is difficult, and hence implementing optimal replacement is difficult. Hence there is a need for another scheme which approximates optimal replacement. The least recently used (LRU) scheme approximates future use by past use. In the LRU scheme, we replace the page which has not been used for the longest period of time.
For example for the reference string
1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7
with 3 frames, the frame contents at each page fault will be as follows:
Frame 1: 1 1 1 1 1 6 6 6 7
Frame 2:   5 5 7 7 7 7 5 5
Frame 3:     6 6 5 5 1 1 1
You can see that LRU creates nine page faults.
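The nine faults can be checked with a short LRU simulation of the same reference string.

```python
# LRU replacement: evict the resident page unused for the longest time.
# last_used records the most recent reference position of each page.
def lru_faults(refs, nframes):
    frames, last_used, faults = [], {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) < nframes:
                frames.append(page)       # free frame available
            else:
                victim = min(frames, key=lambda p: last_used[p])
                frames[frames.index(victim)] = page
        last_used[page] = i               # update recency on hit or fault
    return faults

refs = [1, 5, 6, 1, 7, 1, 5, 7, 6, 1, 5, 1, 7]
print(lru_faults(refs, 3))  # → 9
```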
WORKING SETS
If the number of frames allocated to a low-priority process falls below the minimum number required, we must suspend its execution. We should then page out its remaining pages, freeing all of its allocated frames. A process is thrashing if it is spending more time paging than executing. Thrashing can cause severe performance problems. To prevent thrashing, we must provide a process with as many frames as it needs. There are several techniques available to determine how many frames a process needs. The working set is a strategy which starts by looking at what a program is actually using.
The set of the most recent Δ page references is the working set. The accuracy of the working set depends upon the selection of Δ: if it is too small, page faults will increase, and if it is too large, then it is very difficult to allocate the required frames.
For example, one can examine the working set (WS) at two different times for the window size Δ = 11. [The working set refers to the pages the process has used during the window of the last Δ references.] At the maximum, the above example needed at least 5 frames; otherwise page faults will occur. In most cases we will allocate the number of frames to a process depending on the average working set size.
Let Si be the average working set size for each process i in the system. Then
D = Σ Si
is the total demand for frames; process i needs Si frames. If the total demand is greater than the total number of available frames, thrashing will occur.
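A sketch of the working-set computation and the total demand D; the reference string and the per-process working set sizes are invented for illustration.

```python
# Working set over a window of the last `delta` references, and the
# total frame demand D = sum of per-process working set sizes.
def working_set(refs, t, delta):
    """Pages referenced in the window of delta references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 3, 2, 1, 4, 5, 4, 4, 4]
ws = working_set(refs, t=9, delta=5)
print(ws)                # → {4, 5}: only pages 4 and 5 in the last 5 references

demands = [5, 3, 4]      # hypothetical working set sizes S1, S2, S3
D = sum(demands)         # D = Σ Si
print(D)                 # → 12; thrashing if D exceeds the available frames
```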
DEMAND PAGING
Demand paging is the most common virtual memory system. Demand paging is similar to a paging system with swapping. When we need a program, it is swapped in from the backing storage. A lazy swapper never swaps a page into memory unless that page is needed. The lazy swapper decreases the swap time and the amount of physical memory needed, allowing an increased degree of multiprogramming.
PAGE SIZE
There is no single best page size. The designers of an operating system decide the page size for an existing machine. Page sizes are usually powers of two, ranging from 2^8 to 2^12 bytes or words. The size of the pages affects the system in the following ways:
Decreasing the page size increases the number of pages and hence the size of the page table.
Memory is utilized better with smaller pages, since internal fragmentation is reduced.
To reduce the total amount of I/O, we need a smaller page size, since locality is improved.
To minimize the number of page faults, we need a large page size.
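The first trade-off can be checked with simple arithmetic: halving the page size doubles the number of pages and hence the page-table entries. The 1 MiB address space below is an invented example.

```python
# Back-of-envelope: page-table entries needed to map a 1 MiB virtual
# address space at several page sizes.
address_space = 2 ** 20          # 1 MiB virtual address space (illustrative)

entries = {p: address_space // p for p in (2 ** 12, 2 ** 10, 2 ** 8)}
for page_size, npages in entries.items():
    print(f"page size {page_size:5d} -> {npages:5d} page-table entries")
# page size  4096 ->   256 page-table entries
# page size  1024 ->  1024 page-table entries
# page size   256 ->  4096 page-table entries
```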
PROCESSOR MANAGEMENT
AIMS AND OBJECTIVES
A multiprogramming operating system allows more than one process to be loaded into executable memory at a time, and allows the loaded processes to share the CPU using time multiplexing. Part of the reason for using multiprogramming is that the operating system itself is implemented as one or more processes, so there must be a way for the operating system and application processes to share the CPU. Another main reason is the need for processes to perform I/O operations in the normal course of computation. Since I/O operations ordinarily require orders of magnitude more time to complete than CPU instructions, multiprogramming systems allocate the CPU to another process whenever a process invokes an I/O operation.
Make sure your scheduling strategy is good with respect to the following criteria:
Utilization/Efficiency: keep the CPU busy 100% of the time with useful work.
Throughput: maximize the number of jobs processed per hour.
Turnaround time: minimize the time batch users must wait, from the time of submission to the time of completion.
Waiting time: minimize the sum of the times spent in the ready queue.
Response time: minimize the time from submission until the first response is produced, for interactive users.
Fairness: make sure each process gets a fair share of the CPU.
The aim of this lesson is to learn the concept of processor management and related issues. The
objectives of this lesson are to make the student aware of the following concepts
preemptive scheduling
Non preemptive scheduling
Priorities
deadline scheduling
INTRODUCTION
When one or more processes are runnable, the operating system must decide which one to run first. The part of the operating system that makes this decision is called the scheduler; the algorithm it uses is called the scheduling algorithm.
An operating system has three main CPU schedulers, namely the long-term scheduler, the short-term scheduler, and the medium-term scheduler. The long-term scheduler determines which jobs are admitted to the system for processing; it selects jobs from the job pool and loads them into memory for execution. The short-term scheduler selects from among the jobs in memory which are ready to execute and allocates the CPU to one of them. The medium-term scheduler removes processes from main memory and from active contention for the CPU, and thus reduces the degree of multiprogramming.
The CPU scheduler has another component called the dispatcher. It is the module that actually gives control of the CPU to the process selected by the short-term scheduler, which involves loading the registers of the process, switching to user mode, and jumping to the proper location.
Before looking at specific scheduling algorithms, we should think about what the scheduler is trying to achieve. After all, the scheduler is concerned with deciding on policy, not providing a mechanism. Various criteria come to mind as to what constitutes a good scheduling algorithm. Some of the possibilities include:
Fairness: make sure each process gets its fair share of the CPU.
Efficiency (CPU utilization): keep the CPU busy 100 percent of the time.
Response time [time from the submission of a request until the first response is produced]: minimize response time for interactive users.
Turnaround time [the interval from the time of submission to the time of completion]: minimize the time batch users must wait for output.
Throughput [number of jobs that are completed per unit time]: maximize the number of jobs processed per hour.
Waiting time: minimize the waiting time of jobs.
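As a worked example of two of these criteria, the following computes waiting and turnaround times for first-come-first-served service of three jobs; the burst times are invented for illustration.

```python
# FCFS: jobs run in arrival order, each to completion. All jobs are
# assumed to arrive at time 0, so waiting time = start time.
def fcfs_metrics(bursts):
    t, waiting, turnaround = 0, [], []
    for burst in bursts:
        waiting.append(t)        # time spent in the ready queue
        t += burst
        turnaround.append(t)     # submission (t=0) to completion
    return waiting, turnaround

waiting, turnaround = fcfs_metrics([24, 3, 3])
print(sum(waiting) / 3)      # → 17.0  average waiting time
print(sum(turnaround) / 3)   # → 27.0  average turnaround time
```

Note how one long job at the head of the queue inflates the averages; running the short jobs first would cut the average waiting time sharply.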
PREEMPTIVE VS NON-PREEMPTIVE
The strategy of allowing processes that are logically runnable to be temporarily suspended is called preemptive scheduling; i.e., a scheduling discipline is preemptive if the CPU can be taken away. Preemptive algorithms are driven by the notion of prioritized computation. The process with the highest priority should always be the one currently using the processor. If a process is currently using the processor and a new process with a higher priority enters the ready list, the process on the processor should be removed and returned to the ready list until it is once again the highest-priority process in the system.
Run to completion is also called Nonpreemptive Scheduling. ie., a scheduling discipline is
nonpreemptive if, once a process has been given the CPU, the CPU cannot be taken away from
that process. In short, Non-preemptive algorithms are designed so that once a process enters the
running state(is allowed a process), it is not removed from the processor until it has completed its
service time ( or it explicitly yields the processor). This leads to race condition and necessitates
of semaphores, monitors, messages or some other sophisticated method for preventing them. On
the other hand, a policy of letting a process run as long as it is wanted would mean that some
process computing to a billion places could deny service
to all other processes indefinitely.
PRIORITIES
A priority is associated with each job, and the CPU is allocated to the job with the highest priority.
Priorities are generally drawn from some fixed range of numbers, such as 0 to 7 or 0 to 4095;
however, there is no general agreement on whether 0 is the highest or the lowest priority. Priority
can be defined either internally or externally. Examples of internally defined priorities are time
limits, memory requirements, number of open files, average I/O burst time, CPU burst time, etc.
External priorities are assigned by the user.
A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A
solution to this problem is aging. Aging is a technique of gradually increasing the priority of jobs
that wait in the system for a long time.
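Aging can be sketched in a few lines. In this hypothetical example (the job names, priority values, and boost of 1 per tick are assumptions, not from the text), a job that starts at priority 0 gains priority each tick it waits until it outranks a fixed priority-5 job; larger numbers mean higher priority here.

```python
# A minimal sketch of aging: every waiting job's priority is boosted
# each tick, so long-waiting jobs eventually run.

def age(ready, boost=1):
    """ready: dict of job -> priority; boost every waiting job."""
    return {job: prio + boost for job, prio in ready.items()}

ready = {"batch_report": 0}    # hypothetical low-priority job
editor_priority = 5            # hypothetical job that is not aging
ticks = 0
while ready["batch_report"] <= editor_priority:
    ready = age(ready)
    ticks += 1
# after six ticks the waiting job outranks the priority-5 job
```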
DEADLINE SCHEDULING
Certain jobs have to be completed within a specified time and hence must be scheduled based on
their deadlines. If delivered in time, such a job has high value; otherwise it has no value.
Deadline scheduling is complex for the following reasons:
Giving the resource requirements of a job in advance is difficult.
A deadline job should be run without degrading other deadline jobs.
When new jobs arrive, it is very difficult to plan resource requirements carefully.
Resource management for deadline scheduling is a significant overhead.
PROCESSOR SCHEDULING
AIMS AND OBJECTIVES
The aim of this lesson is to learn the concept of processor scheduling and scheduling algorithms.
The objectives of this lesson are to make the student aware of the following concepts
FIFO
Round Robin
Shortest Job first
Shortest remaining time first
Highest response ratio next (HRN)
INTRODUCTION
Different algorithms have different properties and may favor one class of processes over another.
In choosing the best algorithm, the characteristics explained in the previous lesson must be
considered, including CPU utilization, throughput, turnaround time, waiting time, and
response time. The five most important algorithms used in CPU scheduling are
FIFO,
Round Robin,
Shortest Job first,
Shortest remaining time first
and Highest response ratio next (HRN).
The turnaround times are now 4, 9, 15 and 23 minutes for an average of 12.75 minutes.
Shortest job first is provably optimal. Consider the case of four jobs, with run times of a, b, c and
d respectively. The first job finishes at time a, the second at time a+b, and so on. The
mean turnaround time is (4a+3b+2c+d)/4. It is clear that a contributes more to the average than
the others, so it should be the shortest job, with b next, then c, and finally d as the longest, since it
affects only its own turnaround time. The same argument applies equally well to any number of
jobs.
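The optimality argument can be checked by brute force. The sketch below tries every ordering of the four bursts from the example that follows and confirms that the shortest-first ordering minimizes mean turnaround time (all jobs are assumed to arrive at time 0).

```python
from itertools import permutations

def mean_turnaround(bursts):
    """Mean turnaround when jobs run back to back in the given order."""
    finish, total = 0, 0
    for b in bursts:
        finish += b          # this job completes here
        total += finish      # its turnaround, since all arrive at time 0
    return total / len(bursts)

bursts = (20, 10, 5, 15)
best = min(permutations(bursts), key=mean_turnaround)
# the best ordering is the sorted (shortest-first) one
```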
Consider, for example, the following scenario of four jobs with the corresponding CPU burst
times, arriving in the order of job number.

Job   Burst time
1     20
2     10
3     5
4     15
The algorithm allocates the shortest job first, as shown in the Gantt chart:

| JOB 3 | JOB 2 | JOB 4 | JOB 1 |
0       5       15      30      50
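The non-preemptive schedule above can be reproduced in a few lines: sort the jobs by burst time and lay them out back to back. This is a sketch, not a full scheduler.

```python
# Non-preemptive SJF for the four jobs above (all arrive at time 0).
bursts = {1: 20, 2: 10, 3: 5, 4: 15}

order = sorted(bursts, key=bursts.get)          # jobs by burst time
start, chart = 0, []
for job in order:
    chart.append((job, start, start + bursts[job]))
    start += bursts[job]
# chart holds (job, start, finish) triples matching the Gantt chart
```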
A preemptive shortest-job-first algorithm will preempt the currently executing job when a
shorter one arrives, whereas a nonpreemptive shortest-job-first algorithm will allow the
currently running job to finish its CPU burst. The preemptive shortest-job-first algorithm is
also called shortest remaining time first.
Consider, for example, the following scenario of four jobs with the corresponding CPU burst
times and arrival times.

Job   Burst time   Arrival time
1     20           0
2     10           2
3     5            4
4     15           19

The algorithm allocates the jobs as shown in the Gantt chart:

| JOB 1 | JOB 2 | JOB 3 | JOB 2 | JOB 1 | JOB 4 | JOB 1 |
0       2       4       9       17      19      34      50
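As a check, the shortest-remaining-time-first schedule for these jobs can be derived with a one-time-unit-at-a-step simulation. This is a sketch for illustration, not a production scheduler.

```python
# SRTF simulation for the jobs above: {id: (burst, arrival)}.

def srtf(jobs):
    """Return {job id: completion time} under shortest remaining time first."""
    remaining = {j: burst for j, (burst, _) in jobs.items()}
    arrival = {j: arr for j, (_, arr) in jobs.items()}
    t, done = 0, {}
    while remaining:
        ready = [j for j in remaining if arrival[j] <= t]
        if not ready:
            t += 1                          # idle tick: nothing has arrived
            continue
        j = min(ready, key=remaining.get)   # least remaining time wins
        remaining[j] -= 1                   # run it for one time unit
        t += 1
        if remaining[j] == 0:
            del remaining[j]
            done[j] = t
    return done

jobs = {1: (20, 0), 2: (10, 2), 3: (5, 4), 4: (15, 19)}
```

Running `srtf(jobs)` gives completion times 9, 17, 34 and 50 for jobs 3, 2, 4 and 1 respectively, matching the Gantt chart.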
In this lesson we will learn about disk scheduling strategies, RAM, and optical disks. The
objective of this lesson is to make sure that the student understands the
following:
Disk Scheduling,
First Come First Served (FCFS),
Shortest Seek Time First (SSTF)
SCAN
Circular SCAN (C-SCAN)
RAM and
Optical Disks
INTRODUCTION
In multiprogramming systems several different processes may want to use the system's resources
simultaneously. For example, processes will contend to access an auxiliary storage device such
as a disk. The disk drive needs some mechanism to resolve this contention, sharing the resource
between the processes fairly and efficiently.
A magnetic disk consists of a collection of platters which rotate about a central spindle. These
platters are metal disks covered with magnetic recording material on both sides. Each disk
surface is divided into concentric circles called tracks, and each track is divided into sectors,
each of which typically contains 512 bytes. While reading and writing, the head moves over the
surface of the platters until it finds the track and sector it requires. This is like finding someone's
home by first finding the street (track) and then the particular house number (sector). There is
one head for each surface on which information is stored, each on its own arm. In most systems
the arms are connected together so that the heads move in unison, with each head over the same
track on each surface. The term cylinder refers to the collection of all tracks which are under the
heads at any time.
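The cylinder/head/sector layout above implies a simple mapping from a logical block number to a physical location. The sketch below uses a hypothetical geometry (4 surfaces, 32 sectors per track; these numbers are assumptions, not from the text):

```python
# Map a logical block number to (cylinder, head, sector) for an
# assumed geometry of 4 heads and 32 sectors per track.
HEADS, SECTORS = 4, 32

def chs(block):
    cylinder, rest = divmod(block, HEADS * SECTORS)
    head, sector = divmod(rest, SECTORS)
    return cylinder, head, sector
```

For example, block 130 lands on cylinder 1, head 0, sector 2 under this geometry.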
In order to satisfy an I/O request the disk controller must first move the head to the correct track
and sector. Moving the head between cylinders takes a relatively long time so in order to
maximize the number of I/O requests which can be satisfied the scheduling policy should try to
minimize the movement of the head. On the other hand, minimizing head movement by always
satisfying the request of the closest location may mean that some requests have to wait a long
time. Thus, there is a trade-off between throughput (the average number of requests satisfied in
unit time) and response time (the average time between a request arriving and it being satisfied).
Advantages of SSTF are higher throughput and lower response times than FCFS, and it is a
reasonable solution for batch processing systems. The disadvantages of SSTF are:
(i) it does not ensure fairness,
(ii) there is a possibility of indefinite postponement,
(iii) there will be a high variance of response times, and
(iv) the response time generally will be unacceptable for interactive systems.
SCAN
The drive head sweeps across the entire surface of the disk, visiting the outermost cylinders
before changing direction and sweeping back to the innermost cylinders. It selects the next
waiting request whose location it will reach on its path backwards and forwards across the disk.
Thus, the movement time should be less than for FCFS, and the policy is clearly fairer than SSTF.
Circular SCAN (C-SCAN)
C-SCAN is similar to SCAN but I/O requests are only satisfied when the drive head is traveling
in one direction across the surface of the disk. The head sweeps from the innermost cylinder to
the outermost cylinder satisfying the waiting requests in order of their locations. When it reaches
the outermost cylinder it sweeps back to the innermost cylinder without satisfying any requests
and then starts again.
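The trade-offs between these policies can be illustrated by counting total head movement. The sketch below compares FCFS and SSTF on an assumed request queue of cylinder numbers and an assumed starting cylinder (SCAN and C-SCAN would be a straightforward extension):

```python
def fcfs(start, requests):
    """Total head movement when requests are served in arrival order."""
    moves, pos = 0, start
    for r in requests:
        moves += abs(r - pos)
        pos = r
    return moves

def sstf(start, requests):
    """Total head movement when the closest request is always served next."""
    pending, moves, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))  # closest first
        moves += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return moves

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # assumed request queue
head = 53                                     # assumed starting cylinder
```

On this queue FCFS moves the head 640 cylinders while SSTF moves it only 236, showing SSTF's throughput advantage; the fairness problems noted above do not show up in a single total.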
FILE SYSTEMS
INTRODUCTION
In computing, a file system (often also written as filesystem) is a method for storing and
organizing computer files and the data they contain, to make them easy to find and access. File
systems may use a data storage device such as a hard disk or CD-ROM and maintain
the physical location of the files; they might provide access to data on a file server by acting as
clients for a network protocol (e.g., NFS, SMB, or 9P clients); or they may be virtual and exist
only as an access method for virtual data.
More formally, a file system is a set of abstract data types that are implemented for the storage,
hierarchical organization, manipulation, navigation, access, and retrieval of data. File systems
share much in common with database technology, but it is debatable whether a file system can be
classified as a special-purpose database (DBMS).
TYPES OF FILE SYSTEMS
File system types can be classified into disk file systems, flash file systems, network file
systems, database file systems, and special purpose file systems.
DISK FILE SYSTEMS
A disk file system is a file system designed for the storage of files on a data storage device, most
commonly a disk drive, which might be directly or indirectly connected to the computer.
Examples of disk file systems include FAT, FAT32, NTFS, HFS and HFS+, ext2, ext3, ISO 9660,
ODS-5, and UDF. Some disk file systems are journaling file systems or versioning file systems.
FLASH FILE SYSTEMS
A flash file system is a file system designed for storing files on flash memory devices. These are
becoming more prevalent as the number of mobile devices increases and the capacity of flash
memories catches up with that of hard drives. While a block device layer can emulate hard drive
behavior and store regular file systems on a flash device, this is suboptimal for several reasons:
ERASING BLOCKS: Flash memory blocks have to be explicitly erased before they can be
written to. The time taken to erase blocks can be significant, thus it is beneficial to erase unused
blocks while the device is idle.
RANDOM ACCESS: Disk file systems are optimized to avoid disk seeks whenever possible,
due to the high cost of seeking. Flash memory devices impose no seek latency.
WEAR LEVELLING: Flash memory devices tend to "wear out" when a single block is
repeatedly overwritten; flash file systems try to spread out writes as evenly as possible.
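The wear-levelling idea can be sketched in a few lines: track an erase count per block and always recycle the least-worn block. The block numbers and counts here are assumptions for illustration, not a real flash translation layer.

```python
# Wear-levelling sketch: pick the block with the lowest erase count.
erase_counts = {0: 12, 1: 3, 2: 7}      # assumed per-block erase counts

def pick_block(counts):
    """Return the least-worn block, spreading erasures evenly."""
    return min(counts, key=counts.get)

block = pick_block(erase_counts)
erase_counts[block] += 1                # erasing wears the block further
```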
DATABASE FILE SYSTEMS
A new concept for file management is the concept of a database-based file system. Instead of, or
in addition to, hierarchical structured management, files are identified by their characteristics,
like type of file, topic, author, or similar metadata.
To create a file, the system searches the free-space list for the required amount of space and
allocates it to the new file. This space is then removed from the free-space list. When a
file is deleted, its disk space is added to the free-space list.
BIT-VECTOR
Frequently, the free-space list is implemented as a bit map or bit vector, in which each block is
represented by one bit. If the block is free, the bit is 0; if the block is allocated, the bit is 1. For
example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are
free, and the rest of the blocks are allocated. The free-space bit map would be:
11000011000000111001111110001111
The main advantage of this approach is that it is relatively simple and efficient to find n
consecutive free blocks on the disk. Unfortunately, bit vectors are inefficient unless the entire
vector is kept in memory for most accesses. Keeping it in main memory is possible for smaller
disks, such as on microcomputers, but not for larger ones.
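The bit vector above can be reproduced directly from the list of free blocks, using the same convention as the text (free = 0, allocated = 1):

```python
# Build the free-space bit vector for a disk of `total_blocks` blocks.
def bitmap(total_blocks, free):
    """free: set of free block numbers; returns the bit string."""
    return "".join("0" if b in free else "1" for b in range(total_blocks))

free = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}
# bitmap(32, free) reproduces the 32-bit string shown above
```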
CONTIGUOUS ALLOCATION
The contiguous allocation method requires each file to occupy a set of contiguous addresses on the
disk. Disk addresses define a linear ordering on the disk. Notice that, with this ordering,
accessing block b+1 after block b normally requires no head movement. When head movement
is needed (from the last sector of one cylinder to the first sector of the next cylinder), it is only
one track. Thus, the number of disk seeks required for accessing contiguously allocated files is
minimal, as is seek time when a seek is finally needed.
Contiguous allocation of a file is defined by the disk address of the first block and the length of
the file. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.
The directory entry for each file indicates the address of the starting block and the length of the
area allocated for this file.
The difficulty with contiguous allocation is finding space for a new file. If the file to be created is
n blocks long, then the OS must search for n free contiguous blocks. First-fit, best-fit, and worst-fit
strategies (as discussed in Chapter 4 on multiple partition allocation) are the most common
strategies used to select a free hole from the set of available holes. Simulations have shown that
both first-fit and best-fit are better than worst-fit in terms of both time and storage utilization. Neither
first-fit nor best-fit is clearly best in terms of storage utilization, but first-fit is generally faster.
These algorithms also suffer from external fragmentation. As files are allocated and deleted, the
free disk space is broken into little pieces. External fragmentation exists when enough total disk
space exists to satisfy a request, but this space is not contiguous; storage is fragmented into a large
number of small holes.
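The hole-selection step can be sketched as follows. This first-fit sketch scans the free-hole list in order and takes the first hole large enough; the hole list values are assumptions for illustration.

```python
def first_fit(holes, n):
    """holes: list of (start_block, length); allocate n contiguous blocks."""
    for i, (start, length) in enumerate(holes):
        if length >= n:
            if length == n:
                holes.pop(i)                        # hole used up entirely
            else:
                holes[i] = (start + n, length - n)  # shrink the hole
            return start
    return None          # total free space may suffice, but no hole does

holes = [(0, 3), (10, 8), (25, 4)]   # assumed free holes (start, length)
start = first_fit(holes, 5)          # carves the front off the (10, 8) hole
```

After this allocation, 10 blocks are still free in total, yet a 6-block request fails because no single hole is large enough: exactly the external fragmentation described above.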
Another problem with contiguous allocation is determining how much disk space is needed for a
file. When the file is created, the total amount of space it will need must be known and allocated.
How does the creator (program or person) know the size of the file to be created. In some cases,
this determination may be fairly simple (e.g. copying an existing file), but in general the size of
an output file may be difficult to estimate.
[Access-matrix figure: Asset 1: read, write, execute, own; read. Asset 2: execute; read, write, execute, own. file: read. device: write.]