OS MOD 3-Rev21


OPERATING SYSTEMS

MODULE III: MEMORY MANAGEMENT

Memory management - Different address bindings – compile, link and run time bindings. -
Difference between logical address and physical address - Contiguous memory allocation –
fixed partition and variable partition – Allocation Strategies - first fit, best fit and worst fit -
Define fragmentation – internal and external, and solutions - Paging and paging hardware -
Segmentation, advantages of segmentation over paging- Concept of virtual memory -
Demand paging - Page-faults and how to handle page faults. - Page replacement algorithms:
FIFO, optimal, LRU -Thrashing.
Address Binding

Memory consists of a large array of words, each with its own address. The CPU fetches instructions from memory based on the program counter; these instructions may in turn require loading from and storing to specific memory addresses. Address binding is the process of mapping from one address space to another.
Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can
happen at three different stages
• Compile time:
→ If the address of the memory location where the process will reside is known at
compile time, then absolute code can be generated. For example, if it is known
that a user process will reside starting at location R, then the generated
code will start at that location and extend up from there.
• Load time:
→ If the memory location where the process will reside is not known at compile
time, the compiler must generate relocatable code.
→ In this case, final binding is delayed until load time.

• Execution time:
→ If the process can be moved during its execution from one memory segment to
another, then binding must be delayed until run time.
→ Hardware support (e.g., base and limit registers in an MMU) is needed for address maps.
Logical vs. Physical Address Space
→ Logical address – An address generated by the CPU is referred to as a logical address.
→ Physical address – An address seen by the memory unit—that is, the one loaded into the
memory-address register of the memory—is referred to as a physical address.
• Logical and physical addresses are the same in the compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in the execution-time
address-binding scheme.
• The set of all logical addresses generated by a program is a logical address space.
• The set of all physical addresses corresponding to these logical addresses is a physical
address space.
Memory-Management Unit (MMU)

• Hardware device that maps virtual addresses to physical addresses.
• In the MMU scheme, the value in the relocation register is added to every address generated
by a user process at the time it is sent to memory.
• The user program deals with logical addresses; it never sees the real physical addresses.
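As a sketch of the relocation-register scheme described above (the register values here are illustrative, not taken from the notes):

```python
# Hedged sketch: MMU dynamic relocation with a relocation (base) register
# and a limit register. Values below are purely illustrative.

RELOCATION_REGISTER = 14000   # base physical address of the process
LIMIT_REGISTER = 3000         # size of the process's logical address space

def mmu_translate(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    if logical_address >= LIMIT_REGISTER:
        raise MemoryError("trap: address out of bounds")
    return RELOCATION_REGISTER + logical_address

print(mmu_translate(346))   # 14346
```

With a relocation register of 14000, logical address 346 is sent to memory as physical address 14346; the limit check traps out-of-range accesses before they reach memory.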

Swapping
A process can be temporarily swapped out of memory to secondary storage, for example to
make room for the execution of other processes. Later, the swapped-out process is brought
back into memory for continued execution. This technique is called swapping.

Memory Management Techniques:


Memory is usually divided into two partitions: one for the OS and one for user processes.
The OS can be placed in either low memory or high memory. We must decide how to allocate
the available memory to the processes in the input queue waiting to be brought into memory.
Memory management techniques fall into two main categories:
Contiguous memory management schemes
Non-contiguous memory management schemes

Single Contiguous Memory Allocation
→ In this scheme, main memory is divided into two contiguous areas or partitions.
→ The operating system resides permanently in one partition, generally in lower
memory,
→ and the user process is loaded into the other partition.

Multiple partitioning
The single contiguous memory management scheme is inefficient: it limits the computer to
executing only one program at a time, wasting both memory space and CPU time.
The problem of inefficient CPU use can be overcome with multiprogramming, which allows
more than one program to run concurrently.
To switch between two processes, the operating system needs both loaded into main memory,
so it divides the available main memory into multiple parts. Multiple processes can then
reside in main memory simultaneously.

Multiple partitioning schemes can be of two types
• Fixed partitioning
• Variable partitioning (dynamic partitioning)

(Figure) Equal-size partitioning: five 8 MB partitions. Unequal-size partitioning:
partitions of 16 MB, 6 MB, 5 MB, 12 MB, and 10 MB.


Fixed-sized partitions
→ One of the simplest methods for allocating memory is to divide memory into several
fixed-sized partitions.
→ Each partition may contain exactly one process. Thus, the degree of multiprogramming is
bound by the number of partitions.

There are two difficulties with the use of equal size partitions
• A program may be too big to fit into a partition.
• Main memory utilization is extremely inefficient. Any
process, no matter how small, occupies an entire partition.

Variable size partitions

(Figure: four snapshots of variable partitioning. The OS occupies the top of memory in
each snapshot; process 5 and process 2 remain resident throughout, process 8 terminates,
and processes 9 and 10 are allocated into the resulting hole.)
Whenever a running process (say p8) terminates, the system finds a ready process whose size
best fits the resulting hole (say p9) and allocates it from the top of the hole. If part of
the hole remains available, the step is repeated (for p10). Processes of any size (up to the
physical memory size) can be allocated.
Allocation strategies(Dynamic storage allocation)
In general, a set of holes of various sizes is scattered throughout memory at any given time.
When a process arrives and needs memory, the system searches this set for a hole that is
large enough for the process. If the hole is too large, it is split in two: one part is
allocated to the arriving process, and the other is returned to the set of holes.
When a process terminates, it releases its block of memory, which is then placed back in the
set of holes. If the new hole is adjacent to other holes, the adjacent holes are merged to
form one larger hole.
At this point the system may need to check whether there are processes waiting for memory
and whether this newly freed and recombined memory could satisfy the demands of any of
these waiting processes.
First Fit, Best Fit, and Worst Fit Allocation Strategies
• First-fit: Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or at the location where the previous first-fit search
ended. The searching ends as soon as a free hole is found that is large enough.
• Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless
ordered by size. Produces the smallest leftover hole.
• Worst-fit: Allocate the largest hole; must also search entire list. Produces the largest
leftover hole.
• First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
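The three strategies can be sketched over a free list of (start, size) holes. The hole sizes below are illustrative: a 212 KB request against free holes of 100, 500, 200, 300, and 600 KB.

```python
# Hedged sketch of the three hole-selection strategies over a free list.
# `holes` is a list of (start, size) pairs — illustrative, not a real allocator.

def first_fit(holes, request):
    """Return the index of the first hole big enough, or None."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole big enough, or None."""
    fits = [(size, i) for i, (_, size) in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if it is big enough."""
    fits = [(size, i) for i, (_, size) in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None

holes = [(0, 100), (200, 500), (800, 200), (1200, 300), (1600, 600)]
print(first_fit(holes, 212))   # 1 (the 500 KB hole)
print(best_fit(holes, 212))    # 3 (the 300 KB hole, smallest that fits)
print(worst_fit(holes, 212))   # 4 (the 600 KB hole, largest)
```

Note that best-fit and worst-fit must scan the whole list unless the holes are kept ordered by size, while first-fit stops at the first hole that fits.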
Fragmentation
• Whenever processes are loaded into and removed from physical memory, small unusable
holes are created in the memory space; these holes are called fragments.
• The fragmentation is further classified into two categories
Internal and External Fragmentation.
Internal fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition
that is not being used.
Solution: partition the memory into variable-sized blocks and
assign the best-fit block to the process.
External fragmentation – occurs when variable-sized memory blocks are
allocated to processes dynamically. When a process is removed from
memory, it leaves free space in the memory, causing external
fragmentation.
Solutions to external fragmentation: compaction, or
permitting the logical address space of a process to be noncontiguous (as in paging)

Compaction: The goal of compaction is to shuffle the memory contents so as to place all
free memory together in one large block. The simplest compaction algorithm is to move all
processes toward one end of memory; all holes move in the other direction,
producing one large hole of available memory. This scheme can be expensive.
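A minimal sketch of the compaction idea, assuming a simple (name, start, size) block layout: every allocated block slides toward address 0, leaving one large hole at the high end.

```python
# Hedged sketch of compaction: slide all allocated blocks toward address 0
# so the free space coalesces into a single large hole. Layout is illustrative.

def compact(blocks, memory_size):
    """blocks: list of (name, start, size). Returns (relocated blocks, hole)."""
    next_free = 0
    relocated = []
    for name, _, size in sorted(blocks, key=lambda b: b[1]):
        relocated.append((name, next_free, size))   # move block down to next_free
        next_free += size
    hole = (next_free, memory_size - next_free)     # one large hole at the top
    return relocated, hole

blocks = [("P1", 0, 100), ("P2", 300, 200), ("P3", 700, 150)]
print(compact(blocks, 1000))
# ([('P1', 0, 100), ('P2', 100, 200), ('P3', 300, 150)], (450, 550))
```

The cost hinted at in the text is visible here: every moved block implies copying its entire contents, which is why compaction is expensive on real systems.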

What is paging?

→ Paging is a memory-management scheme that permits the physical address space of a
process to be non-contiguous.
→ Paging avoids external fragmentation and the need for compaction.
Explain paging with paging hardware diagram ?

→ Paging is a memory-management scheme that permits the physical address space of a
process to be non-contiguous.
→ Physical memory is broken into fixed sized blocks called frames.
→ Logical memory is also broken into blocks of the same size called pages.
→ When a process is to be executed, its pages are loaded into available memory frames
from the backing store.
→ The backing store is divided into fixed sized blocks that are of the same size as the
memory frames.

Paging Hardware

➢ Every address generated by the CPU is divided into two parts:

1. Page number (p): used as an index into the page table. The page table contains
the base address of each page in physical memory.

2. Page offset (d): combined with the base address to define the physical memory address
that is sent to the memory unit.

Paging Model of Logical and Physical Memory

Paging example:

Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), the figure
shows how the user's view of memory can be mapped into physical memory.

Physical address = (frame number × page size) + offset
Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is
in frame 5. Thus, logical address 0 maps to physical address 20 (= (5 × 4) + 0).
Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 × 4) + 3).
Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to
frame 6. Thus, logical address 4 maps to physical address 24 (= (6 × 4) + 0).
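This worked example can be checked with a short sketch; the page table (page 0 → frame 5, page 1 → frame 6) is taken from the text.

```python
# Hedged sketch reproducing the 4-byte-page example above.
# Page table: page 0 -> frame 5, page 1 -> frame 6 (from the text).

PAGE_SIZE = 4
page_table = {0: 5, 1: 6}

def translate(logical_address):
    p = logical_address // PAGE_SIZE        # page number (index into page table)
    d = logical_address % PAGE_SIZE         # page offset within the page
    frame = page_table[p]
    return frame * PAGE_SIZE + d            # physical address

print(translate(0))   # 20  (= 5*4 + 0)
print(translate(3))   # 23  (= 5*4 + 3)
print(translate(4))   # 24  (= 6*4 + 0)
```

Because the page size is a power of two, hardware extracts p and d by simply splitting the bits of the logical address rather than dividing.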

Page Table
• The page table is kept in main memory.
• The page-table base register (PTBR) points to the page table.
• The page-table length register (PTLR) indicates the size of the page table.
• In this scheme, every data/instruction access requires two memory accesses: one for the
page table and one for the data/instruction.

Advantages of Paging:

● Eliminates external fragmentation (some internal fragmentation may remain in a
process's last frame)
● Allows a higher degree of multiprogramming, which results in increased memory and
processor utilization
● Eliminates compaction overhead
Disadvantages of Paging

● Page-address mapping hardware increases the cost of the computer and slows down the
processor.

Segmentation :

→ Segmentation is a technique to break memory into logical pieces where each piece
represents a group of related information.

→ For example, data segments or code segment for each process, data segment for
operating system and so on.

→ Segmentation can be implemented with or without paging.

→ Segmentation is a Memory-management scheme that supports user view of memory.

→ A program is a collection of segments. A segment is a logical unit such as:
main program, procedure, function, local variables, global variables,
common block, stack, symbol table, arrays

→ Each of these segments is of variable length; the length is defined by the purpose of the
segment in the program.
→ Unlike pages, segments have varying sizes, which eliminates internal fragmentation.
External fragmentation still exists, but to a lesser extent.

User’s View of a Program

Segmentation hardware

Segment table – maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:

Segment base – contains the starting physical address where the segment resides in memory

Segment limit – specifies the length of the segment

A logical address consists of two parts: a segment number, s, and an offset into that segment, d

The segment number is used as an index to the segment table.

The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the OS

Example:

The address generated by the CPU is divided into:

Segment number (s) – used as an index into the segment table, which contains the base
address of each segment in physical memory and the limit of the segment.

Segment offset (d) – first checked against the limit and then combined with the base
address to define the physical memory address.
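This translation can be sketched over an illustrative segment table of (base, limit) pairs; the values below are not from the notes.

```python
# Hedged sketch of segment-table translation with the limit check.
# Base/limit values are illustrative.

segment_table = [
    (1400, 1000),   # segment 0: (base, limit)
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(s, d):
    """Translate logical address <s, d> to a physical address, or trap."""
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d

print(translate(2, 53))    # 4353  (= 4300 + 53)
print(translate(0, 999))   # 2399  (= 1400 + 999)
```

An offset at or beyond the segment limit (e.g., translate(1, 400)) would raise the trap, modeling the addressing-error trap to the OS described above.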

→ A logical address space is a collection of segments.


→ Each segment has a name and a length.
→ The addresses specify both the segment name and the offset within the segment.
→ The user therefore specifies each address by two quantities: a segment name and an
offset
Segmentation Architecture

• Logical address consists of a two tuple:

<segment-number, offset>,

• Segment table – maps two-dimensional logical addresses into one-dimensional physical
addresses; each table entry has:

– base – contains the starting physical address where the segment resides in
memory.
– limit – specifies the length of the segment.

• Segment-table base register (STBR) points to the segment table’s location in memory.

• Segment-table length register (STLR) indicates number of segments used by a program;

segment number s is legal if s < STLR.

Segmentation Hardware

List advantages of segmentation over paging

● No internal fragmentation
● Improved memory utilization
● Supports virtual memory
● Supports dynamic linking and loading
● Facilitates shared segments
● Offers controlled access (protection can be applied per logical unit)
● Average segment size >> average page size
● Less overhead (smaller tables) compared to variable partitioning
Disadvantage: External fragmentation

Virtual Memory

➢ Virtual memory – separation of user logical memory from physical memory.


o Only part of the program needs to be in memory for execution
o Logical address space can therefore be much larger than physical address space
o Allows address spaces to be shared by several processes
o Allows for more efficient process creation
➢ Virtual memory can be implemented via:
o Demand paging
o Demand segmentation

Advantages of Virtual Memory

● More processes may be maintained in main memory.
● A process may be larger than all of main memory and still be executed.
● Virtual memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of main memory.
● Because each user program could take less physical memory, more programs could be
run at the same time and thereby increasing CPU utilization and throughput.
● Less I/O would be needed to load or swap each program into memory, so each user
program would run faster.
● Virtual memory also allows files and memory to be shared by several different processes
through page sharing.
Demand paging

● With demand-paged virtual memory, pages are only loaded when they are demanded
during program execution; pages that are never accessed are thus never loaded into
physical memory.
● A demand-paging system is similar to a paging system with swapping where processes
reside in secondary memory (usually a disk).
● When we want to execute a process, we swap it into memory.
● Rather than swapping the entire process into memory, however, we use a lazy swapper.
● Lazy swapper never swaps a page into memory unless that page will be needed.
● A swapper manipulates entire processes, whereas a pager is concerned with the
individual pages of a process.
● We thus use the term pager, rather than swapper, in connection with demand paging.
Transfer of a Paged Memory to Contiguous Disk Space

Valid-Invalid Bit

● With each page table entry a valid–invalid bit is associated


(v ⇒ in-memory, i ⇒ not-in-memory)

● Initially valid–invalid bit is set to i on all entries

● During address translation, if valid–invalid bit in page table entry is i ⇒ page fault

Steps to handle page fault with diagram

Access to a page marked invalid causes a page fault.

• Check the page table for the process to determine whether the valid–invalid bit
of the referenced page is 'v' or 'i' (i.e., whether the page is in physical memory or not).
• If the referenced page is in memory (valid–invalid bit 'v'), continue the
execution. If the valid–invalid bit is 'i', the page is still in secondary memory and
has to be brought into main memory.
• We find a free frame
• We schedule a disk operation to read the desired page into the newly allocated
frame.
• When the disk read is complete, we modify the page table to indicate that the
page is now in memory.
• We restart the instruction that was interrupted by the trap.
• The process can now access the page as though it had always been in memory.
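The steps above can be sketched with a toy page table and free-frame list; all structures here are illustrative, and the disk read is only simulated.

```python
# Hedged sketch of page-fault handling: a valid-invalid bit per page,
# a free-frame list, and a simulated "disk read". Structures are illustrative.

page_table = {0: ("i", None), 1: ("i", None)}   # page -> (bit, frame)
free_frames = [3, 7]                            # currently free physical frames

def access(page):
    """Return the frame holding `page`, handling a page fault if needed."""
    bit, frame = page_table[page]
    if bit == "v":
        return frame                   # page already in memory: no fault
    # Page fault: find a free frame, "read" the page from disk into it,
    # mark the page valid, then restart (here: complete) the access.
    frame = free_frames.pop(0)
    page_table[page] = ("v", frame)
    return frame

print(access(0))   # fault: page 0 loaded into frame 3
print(access(0))   # hit: frame 3
print(access(1))   # fault: page 1 loaded into frame 7
```

When the free-frame list is empty, a page-replacement algorithm must first select a victim frame, as described in the next section.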
Page Replacement

● Page replacement takes the following approach.


● If no frame is free, we find one that is not currently being used and free it.
● We can free a frame by writing its contents to swap space and changing the page table
(and all other tables) to indicate that the page is no longer in memory.
● We can then use the freed frame to hold the page for which the process faulted.
● Find a free frame:
o If there is a free frame, use it.
● If there is no free frame, use a page-replacement algorithm to select a victim frame.
o Write the victim frame to the disk; change the page and frame tables accordingly.
● Read the desired page into the newly freed frame; change the page and frame tables.
● Restart the user process.

Illustrate any 3 page replacement algorithms ?

● In all our examples, the reference string is


7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1

For a memory with three frames,

FIFO Page Replacement

● The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm.


● A FIFO replacement algorithm associates with each page the time when that page was
brought into memory.
● When a page must be replaced, the oldest page is chosen.
● Create a FIFO queue to hold all pages in memory.
● Replace the page at the head of the queue. When a page is brought into memory, insert it at
the tail of the queue.
● Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
● 3 frames (3 pages can be in memory at a time per process)
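The FIFO scheme above can be simulated directly; the second reference string below (1,2,3,4,1,2,5,1,2,3,4,5) is the classic illustration of Belady's anomaly.

```python
# Hedged simulation of FIFO page replacement: the queue order (arrival time
# in memory) decides the victim on every fault.
from collections import deque

def fifo_faults(reference_string, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.remove(queue.popleft())   # evict the oldest page
            frames.add(page)
            queue.append(page)                   # newest page joins the tail
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(ref, 3))    # 15 page faults

ref2 = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(ref2, 3))   # 9 page faults
print(fifo_faults(ref2, 4))   # 10 page faults — more frames, more faults
```

The last two lines show Belady's anomaly: giving the process a fourth frame increases the fault count from 9 to 10.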
Disadvantage

Belady's anomaly: for some page-replacement algorithms, the page-fault rate may increase
as the number of allocated frames increases.

Optimal Page Replacement

● Replace the page that will not be used for the longest period of time.
● An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms
and will never suffer from Belady's anomaly.
● Unfortunately, the optimal page-replacement algorithm is difficult to implement, because
it requires future knowledge of the reference string.
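Although OPT cannot be implemented online, it can be simulated offline when the whole reference string is known: on a fault, the victim is the resident page whose next use lies farthest in the future (or that is never used again).

```python
# Hedged offline simulation of the optimal (OPT) algorithm.

def opt_faults(ref, n_frames):
    frames, faults = set(), 0
    for i, page in enumerate(ref):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            future = ref[i + 1:]
            # Victim: resident page used farthest in the future; pages never
            # used again score len(future), the maximum possible distance.
            victim = max(
                frames,
                key=lambda p: future.index(p) if p in future else len(future),
            )
            frames.remove(victim)
        frames.add(page)
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(ref, 3))   # 9 page faults
```

On the reference string used throughout this section, OPT incurs only 9 faults with 3 frames, against 15 for FIFO, which is why it serves as the lower-bound benchmark.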

LRU Page Replacement

● The key distinction between the FIFO and OPT algorithms is that the FIFO algorithm uses
the time when a page was brought into memory, whereas the OPT algorithm uses the
time when a page is to be used.
● LRU replacement associates with each page the time of that page's last use.
● When a page must be replaced, LRU chooses the page that has not been used for the
longest period of time.
● We can think of this strategy as the optimal page-replacement algorithm looking
backward in time, rather than forward.
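LRU can be sketched by time-stamping every use of a page and evicting the page with the oldest stamp.

```python
# Hedged simulation of LRU: on a fault, evict the resident page whose most
# recent use is furthest in the past.

def lru_faults(ref, n_frames):
    frames, last_used, faults = set(), {}, 0
    for t, page in enumerate(ref):
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                # Victim: the least recently used resident page.
                victim = min(frames, key=lambda p: last_used[p])
                frames.remove(victim)
            frames.add(page)
        last_used[page] = t        # record this page's most recent use
    return faults

ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(ref, 3))   # 12 page faults
```

With 3 frames on this reference string, LRU's 12 faults fall between FIFO's 15 and OPT's 9; real hardware approximates the time stamps with counters or reference bits rather than storing them exactly.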

Thrashing

● If the process does not have the number of frames it needs to support pages in active
use, it will quickly page-fault.
● At this point, it must replace some page.
● However, since all its pages are in active use, it must replace a page that will be needed
again right away.
● Consequently, it quickly faults again, and again, and again, replacing pages that it must
bring back in immediately.
● This high paging activity is called thrashing.
● A process is thrashing if it is spending more time paging than executing.

Cause of Thrashing

● The operating system monitors CPU utilization.


● If CPU utilization is too low, we increase the degree of multiprogramming by introducing a
new process to the system.
● A global page-replacement algorithm is used; it replaces pages without regard to the
process to which they belong.
● Now suppose that a process enters a new phase in its execution and needs more frames.
● It starts faulting and taking frames away from other processes.
● These processes need those pages, however, and so they also fault, taking frames from
other processes. These faulting processes must use the paging device to swap pages in
and out.
● As they queue up for the paging device, the ready queue empties.
● As processes wait for the paging device, CPU utilization decreases.
● The CPU scheduler sees the decreasing CPU utilization and increases the degree of
multiprogramming as a result.
● The new process tries to get started by taking frames from running processes, causing
more page faults and a longer queue for the paging device.
● As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the
degree of multiprogramming even more.
● Thrashing has occurred, and system throughput plunges.
● The page fault rate increases tremendously. As a result, the effective memory-access
time increases.
● No work is getting done, because the processes are spending all their time paging.
● As the degree of multiprogramming increases, CPU utilization also increases, although
more slowly, until a maximum is reached.

● If the degree of multiprogramming is increased even further, thrashing sets in, and CPU
utilization drops sharply.
● At this point, to increase CPU utilization and stop thrashing, we must decrease the degree
of multiprogramming.
