

BC0 008B Operating Systems
Unit-III
Process
Synchronization
and IPC

Submitted by
Dr Bhavna Sharma
Outline
Introduction of IPC
The Critical-Section Problem
Peterson’s Solution
Hardware Support for Synchronization
Mutex Locks
Semaphores
Monitors
System Model for Deadlock
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Combined Approach to Deadlock Handling
Inter Process Communication
(IPC)
Inter-process communication (IPC) is used for
exchanging data between multiple threads in one or more
processes or programs. The processes may be running on a
single computer or on multiple computers connected by a
network.
It is a set of programming interfaces that allow a
programmer to coordinate activities among various
program processes that can run concurrently in an
operating system. This allows a specific program to
handle many user requests at the same time.
Since every single user request may result in multiple
processes running in the operating system, the processes
may need to communicate with each other. Each IPC
approach has its own advantages and limitations,
so it is not unusual for a single program to use several
IPC methods.
Pipes
A pipe is widely used for communication between two related processes. It is a half-duplex method:
data flows in only one direction, from the first process to the second. To achieve full-duplex
communication, a second pipe is needed.
Message Passing:
A mechanism for processes to communicate and synchronize. Using message passing, processes
communicate with each other without resorting to shared variables.
The IPC mechanism provides two operations:
send(message) – message size fixed or variable
receive(message)
Message Queues:
A message queue is a linked list of messages stored within the kernel and identified by a message
queue identifier. This method offers full-duplex communication between one or more processes.
Direct Communication:
In this type of inter-process communication, processes must name each other explicitly. A link is
established between exactly one pair of communicating processes, and between each pair only one
link exists.
Indirect Communication:
In indirect communication, a link is established only when processes share a common mailbox. Each
pair of processes may share several communication links, and a link may be associated with many
processes. The link may be unidirectional or bidirectional.
Shared Memory:
Shared memory is a region of memory established by one process and shared with one or more other
processes. Access to this memory must be synchronized across all the processes so that they do not
corrupt each other's data.
FIFO:
A FIFO (named pipe) allows communication between two unrelated processes. It is a full-duplex
method, which means that the first process can communicate with the second process, and the
opposite can also happen.
Background
Processes can execute concurrently
◦ May be interrupted at any time, partially
completing execution
Concurrent access to shared data may
result in data inconsistency
Maintaining data consistency requires
mechanisms to ensure the orderly
execution of cooperating processes
Example: the bounded-buffer problem, where a counter is
updated concurrently by the producer and the consumer,
which leads to a race condition.
Race Condition
Processes P0 and P1 are creating child processes using the fork()
system call
Race condition on kernel variable next_available_pid which
represents the next available process identifier (pid)
Unless there is a mechanism to prevent P0 and P1 from
accessing the variable next_available_pid, the same pid
could be assigned to two different processes!
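A race of this kind can be reproduced with two threads incrementing a shared counter. This sketch is not from the notes: the unprotected counter may lose updates (like two processes reading the same `next_available_pid`), while the mutex-protected counter never does.

```c
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000

static long unsafe_count;               /* incremented with no protection */
static long safe_count;                 /* incremented under a mutex */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        unsafe_count++;                 /* read-modify-write: racy */
        pthread_mutex_lock(&m);
        safe_count++;                   /* orderly execution */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

/* Runs two workers; the protected count is always 2*ITERS,
 * while the unprotected count may come out smaller. */
long run_counters(void) {
    unsafe_count = safe_count = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("unsafe=%ld safe=%ld\n", unsafe_count, safe_count);
    return safe_count;
}
```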
Critical Section Problem
Consider system of n processes {p0, p1, … pn-1}
Each process has critical section segment of
code
◦ Process may be changing common variables, updating
table, writing file, etc.
◦ When one process is in its critical section, no other may be
in its critical section
Critical section problem is to design protocol
to solve this
Each process must ask permission to enter
critical section in entry section, may follow
critical section with exit section, then
remainder section
Critical Section

General structure of process Pi


Critical-Section Problem (Cont.)
Requirements for solution to critical-section problem

1. Mutual Exclusion - If process Pi is executing in its critical


section, then no other processes can be executing in their
critical sections
2. Progress - If no process is executing in its critical section
and there exist some processes that wish to enter their
critical section, then the selection of the process that will
enter the critical section next cannot be postponed
indefinitely
3. Bounded Waiting - A bound must exist on the number of
times that other processes are allowed to enter their
critical sections after a process has made a request to
enter its critical section and before that request is granted

◦ Assume that each process executes at a nonzero speed


◦ No assumption concerning relative speed of the n processes
Problem with Interrupt-based
Solution
Entry section: disable interrupts
Exit section: enable interrupts
Will this solve the problem?

• What if the critical section is code that runs for an hour?
• Can some processes starve, never entering their critical
section?
• What if there are two CPUs?
Peterson’s Solution
Two process solution
Assume that the load and store
machine-language instructions are
atomic; that is, cannot be interrupted
The two processes share two variables:
◦ int turn;
◦ boolean flag[2]
The variable turn indicates whose turn
it is to enter the critical section
The flag array is used to indicate if a
process is ready to enter the critical
section.
◦ flag[i] = true implies that process Pi is
ready!
Algorithm for Process Pi

while (true){

flag[i] = true;
turn = i;
while (flag[j] && turn == i)
;

/* critical section */

flag[i] = false;

/* remainder section */

}
Correctness of Peterson’s
Solution
Provable that the three CS requirement
are met:
1. Mutual exclusion is preserved
Pi enters its CS only if:
either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is
met
Peterson’s Solution and Modern Architecture

Although useful for demonstrating an


algorithm, Peterson’s Solution is not
guaranteed to work on modern architectures.
◦ To improve performance, processors and/or
compilers may reorder operations that have no
dependencies
Understanding why it will not work is useful
for better understanding race conditions.
For single-threaded programs this is fine, as the result will
always be the same.
For multithreaded programs the reordering may produce
inconsistent or unexpected results!
Process Synchronization
When multiple processes execute concurrently
sharing system resources, then inconsistent
results might be produced.
Process Synchronization is a mechanism that
deals with the synchronization of processes.
It controls the execution of processes running
concurrently to ensure that consistent results
are produced.
Process synchronization is needed-
When multiple processes execute concurrently
sharing some system resources.
To avoid the inconsistent results.
Hardware Instructions
Special hardware instructions that allow
us to either test-and-modify the content
of a word, or swap the contents of
two words, atomically (uninterruptibly):
◦ Test-and-Set instruction
◦ Compare-and-Swap instruction
Test-and-Set Instruction

It is an instruction that returns the old value


of a memory location and sets the memory
location value to 1 as a single atomic
operation.
If one process is currently executing a test-
and-set, no other process is allowed to begin
another test-and-set until the first process
test-and-set is finished.
The test_and_set Instruction
Definition

boolean test_and_set (boolean *target)


{
boolean rv = *target;
*target = true;
return rv;
}

Properties
◦ Executed atomically
◦ Returns the original value of passed parameter
◦ Set the new value of passed parameter to true
Solution Using test_and_set()
Shared boolean variable lock, initialized to false

Solution:
do {
while (test_and_set(&lock))
; /* do nothing */

/* critical section */

lock = false;

/* remainder section */
} while (true);
Does it solve the critical-section problem?
Scene-01:

Process P0 arrives.
It executes the test-and-set(lock) instruction.
Since the lock value is 0, it returns 0 to the
while loop and sets the lock value to 1.
The returned value 0 breaks the while loop condition.
Process P0 enters the critical section and executes.
Now, even if process P0 gets preempted in the middle, no
other process can enter the critical section.
Any other process can enter only after process
P0 completes and sets the lock value to 0.
Scene-02:

Another process P1 arrives.


It executes the test-and-set(Lock) instruction.
Since lock value is now 1, so it returns value 1 to
the while loop and sets the lock value to 1.
The returned value 1 does not break the while
loop condition.
The process P1 is trapped inside an infinite while
loop.
The while loop keeps the process P1 busy until the
lock value becomes 0 and its condition breaks.
Scene-03:

Process P0 comes out of the critical section and sets


the lock value to 0.
The while loop condition breaks.
Now, process P1 waiting for the critical section
enters the critical section.
Now, even if process P1 gets preempted in the
middle, no other process can enter the critical
section.
Any other process can enter only after process
P1 completes and sets the lock value to 0.
Characteristics
The characteristics of this synchronization
mechanism are-
It ensures mutual exclusion.
It is deadlock free.
It does not guarantee bounded waiting and may
cause starvation.
It is a spinlock: a busy-waiting solution that keeps the
CPU busy while the process is actually waiting.
It is not architecturally neutral, since it requires the
hardware to support the test-and-set
instruction.
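The compare-and-swap instruction mentioned alongside test-and-set can be defined the same way. The C function below is only a sketch of its semantics; on real hardware the whole body executes as a single uninterruptible instruction.

```c
/* Semantics of compare_and_swap: set *value to new_value only if it
 * currently equals expected; always return the original value.
 * On real hardware this whole function is one atomic instruction. */
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}
```

A lock can then be acquired with `while (compare_and_swap(&lock, 0, 1) != 0);` and released with `lock = 0;`, mirroring the test_and_set solution above.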
Mutex Locks
Previous solutions are complicated and
generally inaccessible to application
programmers
OS designers build software tools to solve
critical section problem
Simplest is mutex lock
◦ Boolean variable indicating if lock is available or not
Protect a critical section by
◦ First acquire() a lock
◦ Then release() the lock
Calls to acquire() and release() must be atomic
◦ Usually implemented via hardware atomic instructions
such as compare-and-swap.
But this solution requires busy waiting
◦ This lock therefore called a spinlock
Solution to CS Problem Using Mutex
Locks

while (true) {
acquire lock

critical section

release lock

remainder section
}
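A spinlock of this kind can be built directly on C11's atomic test-and-set. In this sketch the names `acquire`/`release` follow the slide and the rest is assumed; two threads increment a shared counter safely through the lock.

```c
#include <pthread.h>
#include <stdatomic.h>

#define ITERS 100000

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = available */
static long counter;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy wait: spinlock */
}

void release(void) {
    atomic_flag_clear(&lock);
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        acquire();
        counter++;                            /* critical section */
        release();
    }
    return NULL;
}

/* Returns the final counter value: 2*ITERS when mutual exclusion holds. */
long run_spinlock_demo(void) {
    counter = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```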
Semaphore
Synchronization tool that provides a more sophisticated way for
processes to synchronize their activities.
Semaphore S – integer variable
Can only be accessed via two indivisible (atomic) operations
◦ wait() and signal()
● Originally called P() and V()

Definition of the wait() operation


wait(S)
{
while (S <= 0) ; // busy wait
S--;
}

Definition of the signal() operation


signal(S)
{
S++;
}
The word semaphore was coined in 1801 by the French
inventor of the semaphore line itself, Claude Chappe.
This is a language of the ocean, one that is used in
emergency situations in order to communicate distress
(lighted wands may be used instead of flags at night).
What is Semaphore ? Like most words, semaphore comes
from the root of a Greek word. Sema, meaning sign and
phero meaning to bear.
There are three types of semaphores, namely binary, counting,
and mutex semaphores. A binary semaphore exists in two
states, i.e., acquired (take) and released (give).
In computer science, a semaphore is a variable or abstract
data type used to control access to a common resource by
multiple processes and avoid critical section problems in a
concurrent system such as a multitasking operating system.
Semaphore (Cont.)
Counting semaphore – integer value
can range over an unrestricted domain
Binary semaphore – integer value can
range only between 0 and 1
◦ Same as a mutex lock
Can implement a counting semaphore S
as a binary semaphore
With semaphores we can solve various
synchronization problems
Semaphore Usage Example
Solution to the CS Problem
◦ Create a semaphore “mutex” initialized to 1
wait(mutex);
CS
signal(mutex);
Consider P1 and P2 with two statements S1 and S2,
and the requirement that S1 happen before S2
◦ Create a semaphore “synch” initialized to 0
P1:
S1 ;
signal(synch);
P2:
wait(synch);
S2 ;
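The S1-before-S2 pattern above can be tried with POSIX semaphores, with `sem_wait`/`sem_post` playing the roles of wait/signal. This is an illustrative sketch assuming a Linux-style `<semaphore.h>`; the helper names are chosen here.

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t synch;          /* initialized to 0 */
static int s1_done;
static int order_ok;         /* 1 if S2 observed S1's effect */

static void *p1(void *arg) {
    (void)arg;
    s1_done = 1;             /* S1 */
    sem_post(&synch);        /* signal(synch) */
    return NULL;
}

static void *p2(void *arg) {
    (void)arg;
    sem_wait(&synch);        /* wait(synch): blocks until P1 signals */
    order_ok = s1_done;      /* S2: must see S1 already done */
    return NULL;
}

/* Returns 1 when S1 provably ran before S2. */
int run_ordering_demo(void) {
    s1_done = order_ok = 0;
    sem_init(&synch, 0, 0);
    pthread_t t1, t2;
    pthread_create(&t2, NULL, p2, NULL);   /* start the waiter first */
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return order_ok;
}
```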
Semaphore Implementation
Must guarantee that no two processes can
execute the wait() and signal() on the same
semaphore at the same time
Thus, the implementation becomes the critical
section problem where the wait and signal code
are placed in the critical section
Could now have busy waiting in critical
section implementation
◦ But implementation code is short
◦ Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time
in critical sections and therefore this is not a
good solution
Semaphore Implementation with no Busy waiting

With each semaphore there is an associated


waiting queue
Each entry in a waiting queue has two data
items:
◦ Value (of type integer)
◦ Pointer to next record in the list
Two operations:
◦ block – place the process invoking the operation on
the appropriate waiting queue
◦ wakeup – remove one of processes in the waiting
queue and place it in the ready queue
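The block/wakeup idea can be sketched with a pthread mutex and condition variable: the condition variable's queue plays the role of the semaphore's waiting list, `pthread_cond_wait` is the block operation, and `pthread_cond_signal` is the wakeup. The type name `csem` is chosen here for the example.

```c
#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t  c;       /* its wait queue holds blocked processes */
} csem;

void csem_init(csem *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void csem_wait(csem *s) {
    pthread_mutex_lock(&s->m);
    while (s->value <= 0)
        pthread_cond_wait(&s->c, &s->m);   /* block: no spinning */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void csem_signal(csem *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->c);            /* wakeup one waiter */
    pthread_mutex_unlock(&s->m);
}
```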
Classical Problems of
Synchronization
Classical problems used to test newly-
proposed synchronization schemes
◦ Bounded-Buffer Problem
◦ Readers and Writers Problem
◦ Dining-Philosophers Problem
Bounded-Buffer Problem

n buffers, each can hold one item


Semaphore mutex initialized to the value
1
Semaphore full initialized to the value
0
Semaphore empty initialized to the
value n
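With those three semaphores the bounded-buffer solution can be sketched as a runnable program. This is an illustration using POSIX semaphores (buffer size and item count are chosen here); the consumer's sum checks that no item is lost or duplicated.

```c
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 5
#define N_ITEMS  100

static int buffer[BUF_SIZE];
static int in, out;                  /* producer / consumer positions */
static sem_t empty_slots;            /* initialized to BUF_SIZE (n) */
static sem_t full_slots;             /* initialized to 0 */
static sem_t mutex;                  /* initialized to 1 */
static long consumed_sum;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&empty_slots);      /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = i;              /* add item to buffer */
        in = (in + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&full_slots);       /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full_slots);       /* wait for an item */
        sem_wait(&mutex);
        consumed_sum += buffer[out]; /* remove item from buffer */
        out = (out + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&empty_slots);      /* one more empty slot */
    }
    return NULL;
}

/* Returns the sum of consumed items: 0+1+...+99 = 4950 if none is lost. */
long run_bounded_buffer(void) {
    in = out = 0;
    consumed_sum = 0;
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;
}
```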
Readers-Writers Problem
A data set is shared among a number of concurrent processes
◦ Readers – only read the data set; they do not perform any
updates
◦ Writers – can both read and write
Problem – allow multiple readers to read at the same time
◦ Only one single writer can access the shared data at the
same time
Several variations of how readers and writers are considered –
all involve some form of priorities
Problem parameters:
One set of data is shared among a number of processes
Once a writer is ready, it performs its write; only one writer
may write at a time
If a process is writing, no other process can read the data
If at least one reader is reading, no other process can write
Readers only read; they never write
Readers-Writers Problem
(Cont.)
The structure of a writer process

do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);
Readers-Writers Problem
(Cont.)
The structure of a reader process
do {
   wait(mutex); // only one process at a time updates read_count
   read_count++;
   if (read_count == 1)
      wait(rw_mutex); // first reader blocks writers
   signal(mutex);
   ...
   /* reading is performed */
   ...
   wait(mutex);
   read_count--;
   if (read_count == 0)
      signal(rw_mutex); // last reader out, unblock writers
   signal(mutex);
} while (true);
Advantages of Semaphores

Some of the advantages of semaphores are


Semaphores allow only one process into
the critical section. They follow the mutual
exclusion principle strictly and are much
more efficient than some other methods of
synchronization.
There is no resource wastage because of
busy waiting in semaphores as processor
time is not wasted unnecessarily to check
if a condition is fulfilled to allow a process
to access the critical section.
Semaphores are implemented in the
machine independent code of the
microkernel. So they are machine
independent.
Disadvantages of Semaphores

Some of the disadvantages of semaphores are


Semaphores are complicated so the wait and
signal operations must be implemented in the
correct order to prevent deadlocks.
Semaphores are impractical for large-scale use, as
their use leads to loss of modularity. This
happens because the wait and signal operations
prevent the creation of a structured layout for
the system.
Semaphores may lead to a priority inversion
where low priority processes may access the
critical section first and high priority processes
later.
Monitors
A high-level abstraction that provides a
convenient and effective mechanism for
process synchronization
Abstract data type; internal variables are only
accessible by code within the monitor's procedures
Only one process may be active within the
monitor at a time
Pseudocode syntax of a monitor:
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }

procedure P2 (…) { …. }
procedure Pn (…) {……}

initialization code (…) { … }


}
Schematic view of a Monitor
Monitor Implementation Using
Semaphores

Variables
semaphore mutex
mutex = 1
Each procedure P is replaced by
wait(mutex);

body of P;

signal(mutex);
Mutual exclusion within a monitor is
ensured
Monitor with Condition
Variables
Monitor Implementation Using
Semaphores

Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0; // number of processes waiting inside the monitor
Each function P will be replaced by
wait(mutex);

body of P;

if (next_count > 0)
signal(next)
else
signal(mutex);
Mutual exclusion within a monitor is ensured
Implementation – Condition
Variables
For each condition variable x, we have :
semaphore x_sem; // (initially = 0)
int x_count = 0;
The operation x.wait() can be implemented as:
x_count++;
if (next_count > 0)
signal(next);
else
signal(mutex);
wait(x_sem);
x_count--;
Advantages of Monitors:
Monitors are easier to implement than
semaphores.
Mutual exclusion in monitors is automatic
while in semaphores, mutual exclusion
needs to be implemented explicitly.
Monitors can overcome the timing errors
that occur while using semaphores.
Shared variables are hidden inside the monitor and
accessible only through its procedures, while with
semaphores the shared variables are exposed to all processes.
Key Differences Between
Semaphore and Monitor
The basic difference between a semaphore and a monitor is that
a semaphore is an integer variable S which indicates the
number of resources available in the system, whereas
a monitor is an abstract data type which allows only one
process to execute in its critical section at a time.
The value of a semaphore can be modified
by the wait() and signal() operations only. On the other hand, a
monitor has shared variables and procedures, and only
through those procedures can the shared variables be accessed by the
processes.
With semaphores, when a process wants to access shared resources
it performs the wait() operation and blocks the resources,
and when it releases the resources it performs the signal() operation.
In monitors, when a process needs to access shared resources, it
has to access them through the monitor's procedures.
Monitor type has condition variables which semaphore does
not have.
Usage of Condition Variable
Example
Consider P1 and P2 that need to
execute two statements S1 and S2, with the
requirement that S1 happen before S2
◦ Create a monitor with two procedures F1 and F2
that are invoked by P1 and P2 respectively
◦ One condition variable “x” initialized to 0
◦ One Boolean variable “done”
◦ F1:
S1;
done = true;
x.signal();
◦ F2:
if (done == false)
x.wait();
S2;
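The F1/F2 ordering above maps naturally onto a pthread mutex plus condition variable, which is one common way monitor condition variables are realized. A sketch (a `while` loop replaces the slide's `if`, since pthread condition variables have Mesa-style semantics and allow spurious wakeups; the helper names are chosen here):

```c
#include <pthread.h>

static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER; /* monitor lock */
static pthread_cond_t  x   = PTHREAD_COND_INITIALIZER;  /* condition variable */
static int done;         /* the Boolean "done" from the slide */
static int order_ok;     /* 1 if S2 observed S1 already done */

static void *f1(void *arg) {         /* invoked by P1 */
    (void)arg;
    pthread_mutex_lock(&mon);
    done = 1;                        /* S1; done = true */
    pthread_cond_signal(&x);         /* x.signal() */
    pthread_mutex_unlock(&mon);
    return NULL;
}

static void *f2(void *arg) {         /* invoked by P2 */
    (void)arg;
    pthread_mutex_lock(&mon);
    while (!done)                    /* if (done == false) x.wait() */
        pthread_cond_wait(&x, &mon);
    order_ok = 1;                    /* S2 runs only after S1 */
    pthread_mutex_unlock(&mon);
    return NULL;
}

/* Returns 1 when S1 provably ran before S2. */
int run_monitor_demo(void) {
    done = order_ok = 0;
    pthread_t t1, t2;
    pthread_create(&t2, NULL, f2, NULL);
    pthread_create(&t1, NULL, f1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return order_ok;
}
```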
Deadlocks
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Combined Approach to Deadlock Handling
The Deadlock Problem
A set of blocked processes each holding a
resource and waiting to acquire a resource held
by another process in the set.
Example
◦ System has 2 tape drives.
◦ P1 and P2 each hold one tape drive and each needs
another one.
Example
◦ semaphores A and B, initialized to 1

P0:           P1:
wait(A);      wait(B);
wait(B);      wait(A);
Bridge Crossing Example

Traffic only in one direction.


Each section of a bridge can be viewed as a
resource.
If a deadlock occurs, it can be resolved if
one car backs up (preempt resources and
rollback).
Several cars may have to be backed up if a
deadlock occurs.
Starvation is possible.
System Model
Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
◦ request
◦ use
◦ release
Deadlock Characterization
Deadlock can arise if four conditions hold
simultaneously.
Mutual exclusion: only one process at a time
can use a resource.
Hold and wait: a process holding at least one
resource is waiting to acquire additional
resources held by other processes.
No preemption: a resource can be released
only voluntarily by the process holding it, after
that process has completed its task.
Circular wait: there exists a set {P0, P1, …, Pn}
of waiting processes such that P0 is waiting for a
resource that is held by
P1, P1 is waiting for a resource that is held by P2,
…, Pn–1 is waiting for a resource that is held by Pn,
and Pn is waiting for a resource that is held by P0.
Resource-Allocation Graph
A set of vertices V and a set of
edges E.
V is partitioned into two types:
◦ P = { P 1, P 2, …, P n}, the set consisting of all the
processes in the system.

◦ R = { R1, R2, …, Rm}, the set consisting of all


resource types in the system.
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi
Resource-Allocation Graph
(Cont.)
Process: Pi
Resource type with 4 instances: Rj
Pi requests an instance of Rj: Pi → Rj
Pi is holding an instance of Rj: Rj → Pi
Example of a Resource
Allocation Graph
Resource Allocation Graph
With A Deadlock
Resource Allocation Graph With A Cycle But No
Deadlock
Basic Facts
If graph contains no cycles ⇒ no deadlock.
If graph contains a cycle ⇒
◦ if only one instance per resource type, then
deadlock.
◦ if several instances per resource type, possibility of
deadlock.
Methods for Handling
Deadlocks
Ensure that the system will never enter a
deadlock state.
Allow the system to enter a deadlock state
and then recover.
Ignore the problem and pretend that
deadlocks never occur in the system; used
by most operating systems, including UNIX.
Deadlock Prevention
Restrain the ways request can be
made.
Mutual Exclusion – not required for
sharable resources; must hold for
nonsharable resources.
Hold and Wait – must guarantee that
whenever a process requests a resource,
it does not hold any other resources.
◦ Require process to request and be allocated
all its resources before it begins execution, or
allow process to request resources only when
the process has none.
◦ Low resource utilization; starvation possible.
Deadlock Prevention (Cont.)
No Preemption –
◦ If a process that is holding some resources requests
another resource that cannot be immediately allocated
to it, then all resources currently being held are
released.
◦ Preempted resources are added to the list of resources
for which the process is waiting.
◦ Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
Circular Wait – impose a total ordering of all
resource types, and require that each process
requests resources in an increasing order of
enumeration.
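The circular-wait rule can be enforced in code by always acquiring resources in one fixed global order. An illustrative sketch (names chosen here): both threads need both locks, but because every thread takes the lower-numbered lock first, a circular wait can never form.

```c
#include <pthread.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER; /* resource 1 */
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER; /* resource 2 */
static long shared;

/* Lock both resources in increasing order of enumeration (r1 before
 * r2), regardless of which resource the caller "wants" first. */
static void locked_update(void) {
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    shared++;                        /* work that uses both resources */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++)
        locked_update();
    return NULL;
}

/* Two threads, both needing both locks: with a total lock order there
 * is no circular wait, hence no deadlock. Returns the final count. */
long run_ordered_locking(void) {
    shared = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared;
}
```

If one thread instead locked r2 before r1, the hold-and-wait plus circular-wait conditions could both hold and the two threads could deadlock.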
Deadlock Avoidance
Requires that the system has some additional a priori
information
available.
Simplest and most useful model requires
that each process declare the maximum
number of resources of each type that it
may need.
The deadlock-avoidance algorithm
dynamically examines the resource-
allocation state to ensure that there can
never be a circular-wait condition.
Resource-allocation state is defined by
the number of available and allocated
resources, and the maximum demands
of the processes.
Safe State
When a process requests an available resource,
system must decide if immediate allocation leaves
the system in a safe state.
System is in safe state if there exists a safe sequence
of all processes.
Sequence <P1, P2, …, Pn> is safe if for each Pi, the
resources that Pi can still request can be satisfied by the
currently available resources plus the resources held by all
the Pj, with j < i.
◦ If Pi's resource needs are not immediately available, then Pi
can wait until all Pj have finished.
◦ When Pj is finished, Pi can obtain the needed resources, execute,
return the allocated resources, and terminate.
◦ When Pi terminates, Pi+1 can obtain its needed resources, and
so on.
Basic Facts
If a system is in safe state ⇒ no deadlocks.
If a system is in unsafe state ⇒ possibility of
deadlock.
Avoidance ⇒ ensure that a system will never
enter an unsafe state.
Safe, unsafe , deadlock state
spaces
Resource-Allocation Graph
Algorithm
Claim edge Pi → Rj indicates that process Pi
may request resource Rj; represented by a
dashed line.
Claim edge converts to request edge when a
process requests a resource.
When a resource is released by a process,
assignment edge reconverts to a claim edge.
Resources must be claimed a priori in the
system.
Resource-Allocation Graph For Deadlock
Avoidance
Unsafe State In A Resource-
Allocation Graph
Banker’s Algorithm
Multiple instances.
Each process must a priori claim maximum
use.
When a process requests a resource it may
have to wait.
When a process gets all its resources it must
return them in a finite amount of time.
Data Structures for the
Banker’s Algorithm
Let n = number of processes, and m = number of
resource types.
Available: Vector of length m. If available [j]
= k, there are k instances of resource type Rj
available.
Max: n x m matrix. If Max [i,j] = k, then
process Pi may request at most k instances
of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] =
k then Pi is currently allocated k instances of
Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi
may need k more instances of Rj to complete
its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Safety Algorithm
1. Let Work and Finish be vectors of length m and
n, respectively. Initialize:
Work := Available
Finish[i] := false for i = 1, 2, …, n
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi
Finish[i] := true
go to step 2.
4. If Finish[i] = true for all i, then the system is in
a safe state.
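The four steps above can be sketched directly in C. This illustrative `is_safe` function follows the algorithm literally: Work := Available, then repeatedly find an unfinished process whose Need fits in Work and reclaim its allocation. The matrix sizes match the 5-process example that follows.

```c
#include <string.h>

#define NPROC 5
#define NRES  3

/* Safety algorithm: returns 1 if the state is safe, 0 otherwise. */
int is_safe(const int available[NRES],
            int alloc[NPROC][NRES],
            int need[NPROC][NRES]) {
    int work[NRES];
    int finish[NPROC] = {0};
    memcpy(work, available, sizeof work);          /* Work := Available */

    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < NPROC; i++) {          /* find i: !Finish[i], Need_i <= Work */
            if (finish[i]) continue;
            int fits = 1;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { fits = 0; break; }
            if (fits) {
                for (int j = 0; j < NRES; j++)     /* Work := Work + Allocation_i */
                    work[j] += alloc[i][j];
                finish[i] = 1;
                progress = 1;
            }
        }
    }
    for (int i = 0; i < NPROC; i++)                /* safe iff all can finish */
        if (!finish[i]) return 0;
    return 1;
}
```

Run on the banker's snapshot in the following example, it reports a safe state.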
Resource-Request Algorithm
for Process Pi
Requesti = request vector for process Pi. If
Requesti [j] = k then process Pi wants k instances
of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an
error condition, since the process has exceeded its
maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi
must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by
modifying the state as follows:
Available := Available – Requesti;
Allocationi := Allocationi + Requesti;
Needi := Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi.
• If unsafe ⇒ Pi must wait, and the old resource-allocation
state is restored.
Example of Banker’s
Algorithm
5 processes P0 through P4; 3 resource types A
(10 instances), B (5 instances), and C (7 instances).
Snapshot at time T0:
     Allocation   Max     Available
     A B C        A B C   A B C
P0   0 1 0        7 5 3   3 3 2
P1   2 0 0        3 2 2
P2   3 0 2        9 0 2
P3   2 1 1        2 2 2
P4   0 0 2        4 3 3
Example (Cont.)
The content of the matrix Need is defined to
be Max – Allocation.
     Need
     A B C
P0   7 4 3
P1   1 2 2
P2   6 0 0
P3   0 1 1
P4   4 3 1
The system is in a safe state since the sequence
< P1 , P3 , P4 , P2 , P0 > satisfies safety criteria.
Example (Cont.): P1 request
(1,0,2)
Check that Request1 ≤ Available, that is, (1,0,2) ≤
(3,3,2) ⇒ true.
     Allocation   Need    Available
     A B C        A B C   A B C
P0   0 1 0        7 4 3   2 3 0
P1   3 0 2        0 2 0
P2   3 0 2        6 0 0
P3   2 1 1        0 1 1
P4   0 0 2        4 3 1
Executing safety algorithm shows that sequence
<P1, P3, P4, P0, P2> satisfies safety requirement.
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?
Deadlock Detection
Allow system to enter deadlock state
Detection algorithm
Recovery scheme
Single Instance of Each
Resource Type
Maintain wait-for graph
◦ Nodes are processes.
◦ Pi → Pj if Pi is waiting for Pj.
Periodically invoke an algorithm that
searches for a cycle in the graph.
An algorithm to detect a cycle in a graph
requires an order of n² operations, where n
is the number of vertices in the graph.
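The cycle search over the wait-for graph is a standard depth-first search. An illustrative sketch on an adjacency matrix, where edge[i][j] = 1 means Pi waits for Pj (names and sizes are chosen here):

```c
#define MAXN 8

/* DFS colours: 0 = unvisited, 1 = on current path, 2 = done. */
static int dfs(int n, int edge[MAXN][MAXN], int colour[MAXN], int u) {
    colour[u] = 1;
    for (int v = 0; v < n; v++) {
        if (!edge[u][v]) continue;
        if (colour[v] == 1) return 1;          /* back edge: cycle => deadlock */
        if (colour[v] == 0 && dfs(n, edge, colour, v)) return 1;
    }
    colour[u] = 2;
    return 0;
}

/* Returns 1 if the wait-for graph contains a cycle. */
int has_cycle(int n, int edge[MAXN][MAXN]) {
    int colour[MAXN] = {0};
    for (int u = 0; u < n; u++)
        if (colour[u] == 0 && dfs(n, edge, colour, u))
            return 1;
    return 0;
}
```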
Resource-Allocation Graph And Wait-for Graph

Resource-allocation graph and its corresponding wait-for graph
Several Instances of a
Resource Type
Available: A vector of length m indicates
the number of available resources of
each type.
Allocation: An n x m matrix defines the
number of resources of each type
currently allocated to each process.
Request: An n x m matrix indicates the
current request of each process. If
Request[i,j] = k, then process Pi is
requesting k more instances of resource
type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m
and n, respectively. Initialize:
(a) Work := Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
Finish[i] := false; otherwise, Finish[i] := true.
2. Find an index i such that both:
(a) Finish[i] = false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
Detection Algorithm (Cont.)
3. Work := Work + Allocationi
Finish[ i ] := true
go to step 2.
4. If Finish[ i ] = false, for some i , 1 ≤ i ≤ n, then
the system is in deadlock state. Moreover, if
Finish[ i ] = false, then Pi is deadlocked.

The algorithm requires an order of m × n² operations to
detect whether the system is in a deadlocked state.
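The detection algorithm differs from the safety algorithm only in using Request instead of Need, and in immediately finishing processes with zero allocation. An illustrative C version (names and matrix sizes chosen here to match the example that follows):

```c
#include <string.h>

#define NP 5
#define NR 3

/* Deadlock detection: returns the number of deadlocked processes
 * (0 means no deadlock). */
int count_deadlocked(const int available[NR],
                     int alloc[NP][NR], int request[NP][NR]) {
    int work[NR], finish[NP];
    memcpy(work, available, sizeof work);       /* Work := Available */
    for (int i = 0; i < NP; i++) {              /* Finish[i] = true iff Allocation_i = 0 */
        finish[i] = 1;
        for (int j = 0; j < NR; j++)
            if (alloc[i][j] != 0) finish[i] = 0;
    }
    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < NP; i++) {          /* find i: !Finish[i], Request_i <= Work */
            if (finish[i]) continue;
            int fits = 1;
            for (int j = 0; j < NR; j++)
                if (request[i][j] > work[j]) { fits = 0; break; }
            if (fits) {                         /* Pi can finish; reclaim its resources */
                for (int j = 0; j < NR; j++)
                    work[j] += alloc[i][j];
                finish[i] = 1;
                progress = 1;
            }
        }
    }
    int deadlocked = 0;
    for (int i = 0; i < NP; i++)
        if (!finish[i]) deadlocked++;           /* Finish[i] = false => Pi deadlocked */
    return deadlocked;
}
```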
Example of Detection
Algorithm
Five processes P0 through P4 ; three resource types
A (7 instances), B (2 instances), and C (6 instances).
Snapshot at time T0 :
     Allocation   Request   Available
     A B C        A B C     A B C
P0   0 1 0        0 0 0     0 0 0
P1   2 0 0        2 0 2
P2   3 0 3        0 0 0
P3   2 1 1        1 0 0
P4   0 0 2        0 0 2
Sequence <P0 , P2 , P3 , P1 , P4 > will result in Finish[i] =
true for all i.
Example (Cont.)
P2 requests an additional instance of type C.
     Request
     A B C
P0   0 0 0
P1   2 0 1
P2   0 0 1
P3   1 0 0
P4   0 0 2
State of system?
◦ Can reclaim the resources held by process P0, but
insufficient resources to fulfill the other processes'
requests.
◦ Deadlock exists, consisting of processes P1, P2, P3, and
P4.
Detection-Algorithm Usage
When, and how often, to invoke depends on:
◦ How often a deadlock is likely to occur?
◦ How many processes will need to be rolled back?
● one for each disjoint cycle
If detection algorithm is invoked arbitrarily,
there may be many cycles in the resource
graph and so we would not be able to tell
which of the many deadlocked processes
“caused” the deadlock.
Recovery from Deadlock: Process Termination

Abort all deadlocked processes.


Abort one process at a time until the
deadlock cycle is eliminated.
In which order should we choose to abort?
◦ Priority of the process.
◦ How long process has computed, and how much
longer to completion.
◦ Resources the process has used.
◦ Resources process needs to complete.
◦ How many processes will need to be terminated.
◦ Is process interactive or batch?
Recovery from Deadlock: Resource Preemption

Selecting a victim – minimize cost.
Rollback – return to some safe state, restart the
process from that state.
Starvation – the same process may always be
picked as victim; include the number of rollbacks
in the cost factor.
Combined Approach to
Deadlock Handling
Combine the three basic approaches
◦ prevention
◦ avoidance
◦ detection
allowing the use of the optimal approach for
each class of resources in the system.
Partition resources into hierarchically
ordered classes.
Use most appropriate technique for handling
deadlocks within each class.
