Operating Systems
Unit-III
Process Synchronization and IPC
Submitted by
Dr Bhavna Sharma
Outline
Introduction of IPC
The Critical-Section Problem
Peterson’s Solution
Hardware Support for Synchronization
Mutex Locks
Semaphores
Monitors
System Model for Deadlock
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Combined Approach to Deadlock Handling
Inter Process Communication (IPC)
Inter-process communication (IPC) is used for exchanging data between multiple threads in one or more processes or programs. The processes may be running on a single computer or on multiple computers connected by a network.
IPC is a set of programming interfaces that allow a programmer to coordinate activities among program processes that run concurrently in an operating system. This allows a specific program to handle many user requests at the same time.
Since a single user request may result in multiple processes running in the operating system, those processes may need to communicate with each other. Each IPC approach has its own advantages and limitations, so it is not unusual for a single program to use more than one IPC method.
Pipes:
A pipe is widely used for communication between two related processes. It is a half-duplex method: data flows in only one direction, so the first process communicates with the second process. In order to achieve full-duplex communication, another pipe is needed.
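As a rough illustration (the program and message text are not from the slides), a minimal C sketch of a parent and child communicating over a pipe:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) {            /* create the pipe */
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {               /* child: reads from the pipe */
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else {                         /* parent: writes into the pipe */
        close(fd[0]);
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}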
Message Passing:
It is a mechanism for processes to communicate and synchronize. Using message passing, processes communicate with each other without resorting to shared variables.
The IPC facility provides two operations:
send(message) – message size can be fixed or variable
receive(message)
Message Queues:
A message queue is a linked list of messages stored within the kernel. It is identified by a message
queue identifier. This method offers communication between single or multiple processes with full-
duplex capacity.
Direct Communication:
In this type of inter-process communication, processes must name each other explicitly. In this method, a link is established between exactly one pair of communicating processes, and between each pair only one link exists.
Indirect Communication:
In indirect communication, a link is established only when processes share a common mailbox. Each pair of processes may share several communication links, and a single link may be associated with many processes. The link may be unidirectional or bi-directional.
Shared Memory:
Shared memory is a region of memory that is shared between two or more processes and established so that all of them can access it. Access to the shared region must be synchronized so that the processes are protected from each other.
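A minimal POSIX shared-memory sketch (illustration only; the object name /demo_shm and the message are made up, and a real producer and consumer would each map the same named object from separate processes):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    const char *name = "/demo_shm";
    const int SIZE = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);   /* create/open the shared object */
    ftruncate(fd, SIZE);                               /* set its size */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    sprintf(ptr, "written through shared memory");     /* "producer" writes */
    printf("%s\n", ptr);                               /* "consumer" reads the same region */

    munmap(ptr, SIZE);
    shm_unlink(name);                                  /* remove the shared object */
    return 0;
}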
FIFO (named pipe):
Used for communication between two unrelated processes. It is a full-duplex method, which means that the first process can communicate with the second process, and the opposite can also happen.
Background
Processes can execute concurrently
◦ May be interrupted at any time, partially
completing execution
Concurrent access to shared data may
result in data inconsistency
Maintaining data consistency requires
mechanisms to ensure the orderly
execution of cooperating processes
Example: the Bounded-Buffer problem with a counter that is updated concurrently by the producer and the consumer, which leads to a race condition.
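As a concrete illustration (this particular program is an assumption, not from the slides), a small POSIX-threads program in which a producer thread increments and a consumer thread decrements a shared counter without any synchronization; the final value is usually not 0, which exposes the race condition:

#include <stdio.h>
#include <pthread.h>

#define N 1000000
int counter = 0;                     /* shared variable, deliberately unprotected */

void *producer(void *arg) {          /* increments the counter N times */
    for (int i = 0; i < N; i++) counter++;
    return NULL;
}

void *consumer(void *arg) {          /* decrements the counter N times */
    for (int i = 0; i < N; i++) counter--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d\n", counter);   /* expected 0, but typically differs */
    return 0;
}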
Race Condition
Processes P0 and P1 are creating child processes using the fork()
system call
Race condition on kernel variable next_available_pid which
represents the next available process identifier (pid)
Peterson’s Solution
Two-process solution for processes Pi and Pj; the processes share two variables: int turn and boolean flag[2]. The algorithm for process Pi:

while (true) {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;    /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
}
Correctness of Peterson’s
Solution
Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
   Pi enters its critical section only if either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is
met
Peterson’s Solution and Modern Architecture
Peterson’s solution is not guaranteed to work on modern architectures, because processors and compilers may reorder independent reads and writes; modern systems therefore rely on hardware support for synchronization.
Hardware Support: the test_and_set() Instruction
Properties:
◦ Executed atomically
◦ Returns the original value of the passed parameter
◦ Sets the new value of the passed parameter to true
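The properties above correspond to the textbook definition of test_and_set(), which can be sketched in C as follows (the real instruction performs these steps atomically in hardware):

#include <stdbool.h>

bool test_and_set(bool *target) {
    bool rv = *target;   /* remember the original value of the passed parameter */
    *target = true;      /* set the new value of the passed parameter to true */
    return rv;           /* return the original value */
}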
Solution Using test_and_set()
Shared boolean variable lock, initialized to false
Solution:
do {
    while (test_and_set(&lock))
        ;    /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);
Does it solve the critical-section problem?
Scene-01:
Process P0 arrives.
It executes the test_and_set(lock) instruction.
Since the lock value is set to 0, it returns value 0 to the while loop and sets the lock value to 1.
The returned value 0 breaks the while loop condition.
Process P0 enters the critical section and executes.
Now, even if process P0 gets preempted in the middle, no other process can enter the critical section.
Any other process can enter only after process P0 completes and sets the lock value back to 0.
Scene-02:
Process P1 arrives while P0 is inside the critical section.
It executes the test_and_set(lock) instruction.
Since the lock value is now 1, it returns value 1 to the while loop and the lock value remains 1.
The returned value 1 keeps the while loop condition true, so P1 busy waits until P0 leaves the critical section and resets the lock value to 0.
Mutex Locks
A process must acquire the lock before entering its critical section and release it when it exits. The structure of a process using a mutex lock:

while (true) {
    acquire lock
        critical section
    release lock
        remainder section
}
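A busy-waiting sketch of acquire() and release() over a shared boolean available (this mirrors the textbook spinlock idea; the test of available and its update must be done atomically, for example with test_and_set()):

#include <stdbool.h>

bool available = true;    /* true when the lock is free */

void acquire(void) {
    while (!available)
        ;                 /* busy wait until the lock is released */
    available = false;    /* take the lock */
}

void release(void) {
    available = true;     /* give the lock back */
}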
Semaphore
A synchronization tool that provides more sophisticated ways for processes to synchronize their activities.
Semaphore S – integer variable
Can only be accessed via two indivisible (atomic) operations
◦ wait() and signal()
● Originally called P() and V()
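The classic busy-waiting definitions of the two operations (shown as a sketch; each must execute atomically):

wait(S) {
    while (S <= 0)
        ;      /* busy wait */
    S--;
}

signal(S) {
    S++;
}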
Readers-Writers Problem
A data set is shared among a number of concurrent processes: readers only read the data set, while writers can both read and write it. Shared data: the data set, semaphore rw_mutex initialized to 1, semaphore mutex initialized to 1, and integer read_count initialized to 0.
The structure of a writer process:

do {
    wait(rw_mutex);
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
} while (true);
Readers-Writers Problem
(Cont.)
The structure of a reader process:

do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);
    signal(mutex);
} while (true);
Monitors
A high-level abstraction that provides a convenient and effective mechanism for process synchronization; only one process may be active within the monitor at a time.

monitor monitor_name
{
    /* shared variable declarations */
    procedure P1 (…) { …. }
    procedure P2 (…) { …. }
    …
    procedure Pn (…) { …. }
    initialization code (…) { … }
}
A simple implementation of monitor mutual exclusion uses a single semaphore:
Variables
    semaphore mutex;   // (initially = 1)
Each procedure P is replaced by
    wait(mutex);
    …
    body of P;
    …
    signal(mutex);
Mutual exclusion within a monitor is ensured
Monitor with Condition Variables
condition x, y;
Two operations are allowed on a condition variable:
◦ x.wait() – the invoking process is suspended until another process invokes x.signal()
◦ x.signal() – resumes one of the processes (if any) that invoked x.wait(); if no process is suspended, it has no effect
Monitor Implementation Using Semaphores
Variables
    semaphore mutex;      // (initially = 1)
    semaphore next;       // (initially = 0)
    int next_count = 0;   // number of processes waiting inside the monitor
Each function P will be replaced by
    wait(mutex);
    …
    body of P;
    …
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
Mutual exclusion within a monitor is ensured
Implementation – Condition Variables
For each condition variable x, we have:
    semaphore x_sem;   // (initially = 0)
    int x_count = 0;
The operation x.wait() can be implemented as:
    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;
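The companion x.signal() operation in the same scheme (the standard textbook implementation, completing the slide above):

The operation x.signal() can be implemented as:
    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }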
Advantages of Monitors:
Monitors are easier to implement than semaphores.
Mutual exclusion in monitors is automatic, while in semaphores mutual exclusion needs to be implemented explicitly.
Monitors can overcome the timing errors that occur when semaphores are used.
Shared variables are encapsulated (hidden) inside the monitor and can be accessed only through its procedures, while with semaphores the shared variables are global to the processes.
Key Differences Between Semaphore and Monitor
The basic difference between a semaphore and a monitor is that a semaphore is an integer variable S which indicates the number of resources available in the system, whereas a monitor is an abstract data type which allows only one process to execute in its critical section at a time.
The value of a semaphore can be modified only by the wait() and signal() operations. On the other hand, a monitor has shared variables and procedures, and the shared variables can be accessed by processes only through those procedures.
With a semaphore, when a process wants to access shared resources it performs the wait() operation and blocks the resources, and when it releases the resources it performs the signal() operation. With monitors, when a process needs to access shared resources, it has to access them through the procedures of the monitor.
A monitor has condition variables, which a semaphore does not have.
Usage of Condition Variable
Example
Consider processes P1 and P2 that need to execute two statements, S1 and S2, with the requirement that S1 happen before S2.
◦ Create a monitor with two procedures F1 and F2 that are invoked by P1 and P2 respectively
◦ One condition variable “x” initialized to 0
◦ One Boolean variable “done”
◦ F1:
    S1;
    done = true;
    x.signal();
◦ F2:
    if (done == false)
        x.wait();
    S2;
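As a concrete rendering of the same idea (an assumption: POSIX threads are used here instead of a monitor), P2 blocks on a condition variable until P1 has executed S1 and set done:

#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
bool done = false;

void *F1(void *arg) {                 /* invoked by P1: run S1, then signal */
    printf("S1\n");                   /* S1 */
    pthread_mutex_lock(&m);
    done = true;
    pthread_cond_signal(&x);
    pthread_mutex_unlock(&m);
    return NULL;
}

void *F2(void *arg) {                 /* invoked by P2: wait for S1, then run S2 */
    pthread_mutex_lock(&m);
    while (!done)
        pthread_cond_wait(&x, &m);
    pthread_mutex_unlock(&m);
    printf("S2\n");                   /* S2 */
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p2, NULL, F2, NULL);
    pthread_create(&p1, NULL, F1, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    return 0;
}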
Deadlocks
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Combined Approach to Deadlock Handling
The Deadlock Problem
A set of blocked processes each holding a
resource and waiting to acquire a resource held
by another process in the set.
Example
◦ System has 2 tape drives.
◦ P1 and P2 each hold one tape drive and each needs another one.
Example
◦ semaphores A and B, initialized to 1
    P0:  wait(A);  wait(B);
    P1:  wait(B);  wait(A);
Bridge Crossing Example
Resource-Allocation Graph
◦ Request edge Pi → Rj : process Pi requests an instance of Rj
◦ Assignment edge Rj → Pi : process Pi is holding an instance of Rj
Example of a Resource
Allocation Graph
Resource Allocation Graph
With A Deadlock
Resource Allocation Graph With A Cycle But No
Deadlock
Basic Facts
If graph contains no cycles ⇒ no deadlock.
If graph contains a cycle ⇒
◦ if only one instance per resource type, then
deadlock.
◦ if several instances per resource type, possibility of
deadlock.
Methods for Handling
Deadlocks
Ensure that the system will never enter a
deadlock state.
Allow the system to enter a deadlock state
and then recover.
Ignore the problem and pretend that
deadlocks never occur in the system; used
by most operating systems, including UNIX.
Deadlock Prevention
Restrain the ways requests can be made.
Mutual Exclusion – not required for
sharable resources; must hold for
nonsharable resources.
Hold and Wait – must guarantee that
whenever a process requests a resource,
it does not hold any other resources.
◦ Require process to request and be allocated
all its resources before it begins execution, or
allow process to request resources only when
the process has none.
◦ Low resource utilization; starvation possible.
Deadlock Prevention (Cont.)
No Preemption –
◦ If a process that is holding some resources requests
another resource that cannot be immediately allocated
to it, then all resources currently being held are
released.
◦ Preempted resources are added to the list of resources
for which the process is waiting.
◦ Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
Circular Wait – impose a total ordering of all
resource types, and require that each process
requests resources in an increasing order of
enumeration.
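For example (a sketch, not from the slides), applying this ordering to the earlier two-semaphore example with A numbered before B: both processes must now request A before B, so the circular wait can no longer form.

P0:  wait(A);  wait(B);   …   signal(B);  signal(A);
P1:  wait(A);  wait(B);   …   signal(B);  signal(A);

Whichever process obtains A first will eventually obtain B as well, so neither process can end up holding one semaphore while waiting for the other.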
Deadlock Avoidance
Requires that the system has some additional a priori
information
available.
Simplest and most useful model requires
that each process declare the maximum
number of resources of each type that it
may need.
The deadlock-avoidance algorithm
dynamically examines the resource-
allocation state to ensure that there can
never be a circular-wait condition.
Resource-allocation state is defined by
the number of available and allocated
resources, and the maximum demands
of the processes.
Safe State
When a process requests an available resource,
system must decide if immediate allocation leaves
the system in a safe state.
System is in safe state if there exists a safe sequence
of all processes.
Sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
◦ If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
◦ When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate.
◦ When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Basic Facts
If a system is in safe state ⇒ no deadlocks.
If a system is in unsafe state ⇒ possibility of
deadlock.
Avoidance ⇒ ensure that a system will never
enter an unsafe state.
Safe, unsafe, and deadlock state spaces
Resource-Allocation Graph
Algorithm
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line.
Claim edge converts to request edge when a
process requests a resource.
When a resource is released by a process,
assignment edge reconverts to a claim edge.
Resources must be claimed a priori in the
system.
Resource-Allocation Graph For Deadlock
Avoidance
Unsafe State In A Resource-
Allocation Graph
Banker’s Algorithm
Multiple instances.
Each process must a priori claim maximum
use.
When a process requests a resource it may
have to wait.
When a process gets all its resources it must
return them in a finite amount of time.
Data Structures for the
Banker’s Algorithm
Let n = number of processes, and m = number of resource types.
Available: Vector of length m. If available [j]
= k, there are k instances of resource type Rj
available.
Max: n x m matrix. If Max [i,j] = k, then
process Pi may request at most k instances
of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] =
k then Pi is currently allocated k instances of
Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi
may need k more instances of Rj to complete
its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
    Work := Available
    Finish[i] := false for i = 1, 2, …, n.
2. Find an i such that both:
    (a) Finish[i] = false
    (b) Needi ≤ Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi
   Finish[i] := true
   go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm
for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
    Available := Available – Requesti;
    Allocationi := Allocationi + Requesti;
    Needi := Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi.
• If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
Example of Banker’s
Algorithm
5 processes P0 through P4; 3 resource types: A (10 instances), B (5 instances), and C (7 instances).
Snapshot at time T0:
        Allocation    Max       Available
        A  B  C       A  B  C   A  B  C
P0      0  1  0       7  5  3   3  3  2
P1      2  0  0       3  2  2
P2      3  0  2       9  0  2
P3      2  1  1       2  2  2
P4      0  0  2       4  3  3
Example (Cont.)
The content of the matrix Need is defined to be Max – Allocation.
Need
ABC
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
The system is in a safe state since the sequence
< P1 , P3 , P4 , P2 , P0 > satisfies safety criteria.
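As an illustration (not part of the original notes), a small C sketch that runs the safety algorithm on this snapshot and prints a safe sequence (safe sequences are not unique, so it may differ from the one above):

#include <stdio.h>
#include <stdbool.h>

#define N 5   /* number of processes */
#define M 3   /* number of resource types A, B, C */

int Available[M]     = {3, 3, 2};
int Allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
int Need[N][M]       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};

int main(void) {
    int Work[M];
    bool Finish[N] = {false};
    int order[N], count = 0;

    for (int j = 0; j < M; j++) Work[j] = Available[j];      /* step 1 */

    bool progress = true;
    while (progress) {                                        /* steps 2 and 3 */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (Finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (Need[i][j] > Work[j]) { fits = false; break; }
            if (fits) {                                       /* Pi can run to completion */
                for (int j = 0; j < M; j++) Work[j] += Allocation[i][j];
                Finish[i] = true;
                order[count++] = i;
                progress = true;
            }
        }
    }

    if (count == N) {                                         /* step 4 */
        printf("Safe. One safe sequence:");
        for (int i = 0; i < N; i++) printf(" P%d", order[i]);
        printf("\n");
    } else {
        printf("The state is unsafe.\n");
    }
    return 0;
}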
Example (Cont.): P1 requests (1,0,2)
Check that Requesti ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true.
        Allocation    Need      Available
        A  B  C       A  B  C   A  B  C
P0      0  1  0       7  4  3   2  3  0
P1      3  0  2       0  2  0
P2      3  0  2       6  0  0
P3      2  1  1       0  1  1
P4      0  0  2       4  3  1
Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement.
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?
Deadlock Detection
Allow system to enter deadlock state
Detection algorithm
Recovery scheme
Single Instance of Each
Resource Type
Maintain wait-for graph
◦ Nodes are processes.
◦ Pi → Pj if Pi is waiting for Pj.
Periodically invoke an algorithm that searches for a cycle in the graph.
An algorithm to detect a cycle in a graph requires an order of n² operations, where n is the number of vertices in the graph.
Resource-Allocation Graph And Wait-for Graph
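As a rough sketch (an illustration, not from the slides) of the cycle search over a wait-for graph, using depth-first search; the example edges below are made up and form the cycle P0 → P1 → P2 → P0:

#include <stdio.h>
#include <stdbool.h>

#define N 4                          /* number of processes in the example */

/* wait_for[i][j] = true means Pi is waiting for Pj (edge Pi -> Pj) */
bool wait_for[N][N] = {
    {0, 1, 0, 0},                    /* P0 -> P1 */
    {0, 0, 1, 0},                    /* P1 -> P2 */
    {1, 0, 0, 0},                    /* P2 -> P0 closes a cycle */
    {0, 0, 0, 0}
};

int state[N];                        /* 0 = unvisited, 1 = on DFS stack, 2 = finished */

bool dfs(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v]) continue;
        if (state[v] == 1) return true;            /* back edge: cycle => deadlock */
        if (state[v] == 0 && dfs(v)) return true;
    }
    state[u] = 2;
    return false;
}

int main(void) {
    bool deadlock = false;
    for (int i = 0; i < N; i++)
        if (state[i] == 0 && dfs(i)) { deadlock = true; break; }
    printf(deadlock ? "Deadlock detected\n" : "No deadlock\n");
    return 0;
}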