Unit 2


PROCESS CONCEPT:

Processes are categorized into two types on the basis of synchronization, as given below:

 Independent Process
 Cooperative Process

Independent Processes

Two processes are said to be independent if the execution of one process does not affect the execution of another process.

Cooperative Processes

Two processes are said to be cooperative if the execution of one process affects the execution of another process. These processes need to be synchronized so that the order of execution can be guaranteed.

Process Synchronization
It is the task of coordinating the execution of processes in such a way that no two processes can access the same shared data and resources at the same time.

 It is a procedure that is followed in order to preserve the appropriate order of execution of cooperative processes.
 In order to synchronize the processes, there are various synchronization mechanisms.
 Process Synchronization is mainly needed in a multi-process system when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.
Race Condition
When more than one process is executing the same code, or accessing the same memory or shared variable, there is a possibility that the output or the value of the shared variable is wrong. All of the processes are effectively racing to have their result take effect, and this situation is commonly known as a race condition. Because several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the data accesses take place.

This condition mainly occurs inside the critical section. A race condition in the critical section happens when the result of multiple threads' execution differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.
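As an illustration (this program is not part of the original notes), the following C code lets two threads increment a shared counter with no synchronization; because the threads race on the shared variable, the final value depends on the interleaving and is usually less than the expected 2,000,000:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                          /* shared variable */

void *work(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                         /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);    /* expected 2000000, often less */
    return 0;
}

Protecting counter++ with a lock or declaring the counter as an atomic variable removes the race, as noted above.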
Sections of a Program
Here are the four essential sections of a program:

 Entry Section: It is the part of the process which decides the entry of a particular process into the critical section.
 Critical Section: This part allows one process to enter and modify the shared variable.
 Exit Section: The exit section allows the other processes that are waiting in the entry section to enter into the critical section. It also ensures that a process that has finished its execution is removed through this section.
 Remainder Section: All other parts of the code, which are not in the critical, entry, or exit section, are known as the remainder section.
Critical Section Problem
A Critical Section is a code segment that accesses shared variables
and has to be executed as an atomic action. It means that in a
group of cooperating processes, at a given point of time, only one
process must be executing its critical section. If any other process
also wants to execute its critical section, it must wait until the first
one finishes. The entry to the critical section is mainly handled
by the wait() function, while the exit from the critical section is controlled
by the signal() function.
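As a rough sketch (not from the original text), the typical structure of such a cooperating process can be written as follows, where wait() and signal() are the primitives mentioned above and mutex is an assumed shared lock:

do {
    wait(mutex);        /* entry section: request permission to enter       */

    /* critical section: access and modify the shared data                  */

    signal(mutex);      /* exit section: allow a waiting process to enter   */

    /* remainder section: the rest of the program                           */
} while (1);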

The Solution to the Critical Section Problem

A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion

Out of a group of cooperating processes, only one process can be in its critical section at a given point of time.

2. Progress

If no process is in its critical section, and one or more processes want to execute their critical sections, then any one of these processes must be allowed to get into its critical section.

3. Bounded Waiting

After a process makes a request to get into its critical section, there is a limit on how many other processes can get into their critical sections before this process's request is granted. After the limit is reached, the system must grant the process permission to get into its critical section.

Solutions for the Critical Section Problem

The critical section plays an important role in process synchronization, so the critical section problem must be solved.

Some widely used methods to solve the critical section problem are as follows:

1. Peterson's Solution

This is a widely used, software-based solution to the critical section problem. Peterson's solution was developed by the computer scientist Peterson, which is why it is named so.

With the help of this solution, whenever a process is executing in its critical section, the other process only executes the rest of the code, and vice versa. This method also helps to make sure that only a single process can run in the critical section at a specific time.

PROCESS Pi:
FLAG[i] = true                      // Pi wants to enter the critical section
while ( (turn != i) AND (CS is not free) )
{
    wait;                           // busy wait
}
CRITICAL SECTION
FLAG[i] = false                     // Pi has finished its critical section
turn = j;                           // choose another process to go to the CS

The above shows the structure of process Pi in Peterson's solution.

 Suppose there are N processes (P1, P2, ..., PN) and, at some point of time, every process requires to enter the critical section.
 A FLAG[] array of size N is maintained, which is false by default. Whenever a process wants to enter the critical section, it has to set its flag to true. Example: if Pi wants to enter, it will set FLAG[i] = TRUE.
 Another variable, called TURN, indicates the process number whose turn it is to enter the critical section.
 The process that enters the critical section, while exiting, changes TURN to another number from the list of ready processes.
 Example: if the turn is 3, then P3 enters the critical section, and while exiting it sets turn = 4; therefore P4 breaks out of its wait loop.

Peterson's Solution preserves all three conditions:

 Mutual Exclusion is assured, as only one process can access the critical section at any time.
 Progress is also assured, as a process outside the critical section does not block other processes from entering the critical section.
 Bounded Waiting is preserved, as every process gets a fair chance.
Disadvantages of Peterson’s Solution
 It involves Busy waiting
 It is limited to 2 processes.
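For the classic two-process case, a minimal sketch in C11 is shown below (the atomics, the function names, and the 0/1 process indices are illustrative assumptions, not part of the original notes):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];   /* flag[i] == true means process i wants to enter */
atomic_int  turn;      /* whose turn it is to wait                       */

void enter_critical(int i)          /* i is 0 or 1 */
{
    int j = 1 - i;                  /* the other process            */
    atomic_store(&flag[i], true);   /* announce the intent to enter */
    atomic_store(&turn, j);         /* politely give the turn away  */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                           /* busy wait                    */
}

void exit_critical(int i)
{
    atomic_store(&flag[i], false);  /* no longer interested         */
}

Atomic loads and stores are used because, on modern hardware, plain memory operations can be reordered, which would break the algorithm.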

Dekker's Solution
The mutual exclusion problem has different solutions. Dekker was a Dutch mathematician who introduced a software-based solution for the mutual exclusion problem. This algorithm is commonly called Dekker's algorithm. Dekker's algorithm was developed to provide mutual exclusion between two processes.

Dekker's algorithm was the first provably correct solution to the critical section problem. It requires both an array of boolean values and an integer variable:
var flag: array [0..1] of boolean;   { flag[i] = true means process i wants to enter }
    turn: 0..1;                      { which process must back off on a conflict }

repeat
        flag[i] := true;             { announce the intent to enter }
        while flag[j] do             { the other process also wants to enter }
                if turn = j then
                begin
                        flag[i] := false;        { back off }
                        while turn = j do no-op; { wait until it is our turn }
                        flag[i] := true;         { try again }
                end;

        { critical section }

        turn := j;                   { hand the turn to the other process }
        flag[i] := false;            { no longer interested }

        { remainder section }

until false;
Hardware Synchronization Algorithms:
Lock, Test and Set
Process synchronization problems occur when two processes running concurrently share the same data or the same variable. The value of that variable may not be updated correctly before it is used by a second process. Such a condition is known as a race condition. There are software as well as hardware solutions to this problem. This section discusses an efficient hardware solution to the process synchronization problem and its implementation: the Test and Set instruction.

Hardware instructions in many operating systems help in the effective solution of critical section problems.
1. USING A LOCK TO MAINTAIN SYNCHRONISATION:
2. Test and Set:

Here, the shared variable is lock, which is initialized to false. The TestAndSet(lock) operation works in this way: it returns the current value of lock and sets lock to true, and it does both as a single atomic step. The first process will enter the critical section at once, as TestAndSet(lock) will return false and it will break out of the while loop. The other processes cannot enter now, as lock is set to true, so their while loops continue to be true. Mutual exclusion is ensured. Once the first process gets out of the critical section, lock is changed to false, so the other processes can enter one by one. Progress is also ensured. However, after the first process, any process can go in. There is no queue maintained, so any new process that finds the lock to be false can enter. So bounded waiting is not ensured.

Test and Set Pseudocode –

// Shared variable lock, initialized to false
bool lock = false;

// Executed atomically by the hardware: returns the old value of *target
// and sets *target to true.
bool TestAndSet(bool *target) {
    bool rv = *target;
    *target = true;
    return rv;
}

while (1) {
    while (TestAndSet(&lock))
        ;                      // busy wait until the returned value is false
    // critical section
    lock = false;              // release the lock
    // remainder section
}
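On real hardware the same idea is available to C programs through C11's atomic_flag; the following sketch (an illustration using standard library names, not part of the original notes) builds the same kind of spinlock:

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear means unlocked */

void acquire(void)
{
    /* atomic_flag_test_and_set returns the previous value and sets the
       flag in one atomic step, just like the TestAndSet described above */
    while (atomic_flag_test_and_set(&lock_flag))
        ;                                   /* busy wait (spinlock) */
}

void release(void)
{
    atomic_flag_clear(&lock_flag);          /* corresponds to lock = false */
}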

CLASSICAL PROBLEM IN IPC

Producer-Consumer Problem

The Producer-Consumer problem is a classical multi-process synchronization problem; that is, we are trying to achieve synchronization between more than one process.

There is one Producer in the producer-consumer problem; the Producer produces some items, whereas there is one Consumer that consumes the items produced by the Producer. The same memory buffer, which is of a fixed size, is shared by both the producer and the consumer.
The task of the Producer is to produce an item, put it into the memory buffer, and again start producing items, whereas the task of the Consumer is to consume an item from the memory buffer.

Let's understand what the problem is.

Below are a few points that are considered as the problems that occur in Producer-Consumer:

o The producer should produce data only when the buffer is not full. In case it is found that
the buffer is full, the producer is not allowed to store any data into the memory buffer.

o Data can be consumed by the consumer only if the memory buffer is not empty.
In case it is found that the buffer is empty, the consumer is not allowed to use any data from
the memory buffer.

o The producer and the consumer should not be allowed to access the memory buffer at the same time.

Let's see the code for the above problem: the producer code, the consumer code, and the problem case.
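A minimal sketch of the unsynchronized producer and consumer is given below (the buffer size, the shared count variable, and the helpers produce_item/consume_item are illustrative assumptions, not taken from the original notes); it shows where the problem case comes from:

#define N 8                         /* buffer size (assumed) */

int buffer[N];
int in = 0, out = 0;
int count = 0;                      /* number of filled slots (shared) */

void producer(void)
{
    while (1) {
        int item = produce_item();  /* assumed helper */
        while (count == N)
            ;                       /* busy wait while the buffer is full */
        buffer[in] = item;
        in = (in + 1) % N;
        count++;                    /* not atomic: a context switch here causes a race */
    }
}

void consumer(void)
{
    while (1) {
        while (count == 0)
            ;                       /* busy wait while the buffer is empty */
        int item = buffer[out];
        out = (out + 1) % N;
        count--;                    /* not atomic: races with count++ */
        consume_item(item);         /* assumed helper */
    }
}

Because count++ and count-- each compile into a separate load, update, and store, an ill-timed context switch between those steps leaves count with an inconsistent value; this is the problem case that the semaphore-based solution below removes.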
The Solution of the Producer-Consumer Problem using Semaphores
The above problems of the Producer and Consumer, which occur due to a context switch and produce inconsistent results, can be solved with the help of semaphores.

To solve the race condition described above, we are going to use a Binary Semaphore and a Counting Semaphore.

Binary Semaphore: A binary semaphore can take only the values 0 and 1. It ensures that only one process is in its CRITICAL SECTION at any point in time, so the condition of mutual exclusion is preserved.

Counting Semaphore: A counting semaphore can take any non-negative integer value. It is used to count a resource with many instances (such as the empty and full slots of the buffer), so more than two processes can compete for the resource at any point of time, while mutual exclusion on the buffer itself is still preserved.
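For illustration only (these POSIX names are an assumption, not part of the notes), a binary semaphore and a counting semaphore differ only in the value they are initialized with:

#include <semaphore.h>

#define N 8                        /* number of buffer slots (assumed) */

sem_t mutex;                       /* binary semaphore: initial value 1   */
sem_t empty_slots;                 /* counting semaphore: initial value N */

void init_semaphores(void)
{
    sem_init(&mutex, 0, 1);        /* second argument 0: shared between threads */
    sem_init(&empty_slots, 0, N);
    /* sem_wait() plays the role of wait()/P(), sem_post() of signal()/V() */
}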

Semaphore: A semaphore is an integer variable S that, apart from initialization, is accessed by only two standard atomic operations, wait and signal, whose definitions are as follows:

wait(S) / P(S) / down(S)
{
    while (S <= 0)
        ;          // busy wait
    S--;
}

signal(S) / V(S) / up(S)
{
    S++;
}

From the above definition of wait, it is clear that if the value of S <= 0 the process keeps looping (because of the lone semicolon that forms the body of the while loop) until another process increments S. The job of signal is to increment the value of S.

Let's see the code for the solution of the producer and consumer problem using semaphores (both binary and counting semaphores):

Producer Code - Solution

void producer(void)
{
    wait(empty);              // wait until there is at least one empty slot
    wait(S);                  // lock the buffer (binary semaphore)
    produce_item(itemP);      // produce the item P
    buffer[in] = itemP;       // place it in the next empty slot
    in = (in + 1) % n;
    signal(S);                // unlock the buffer
    signal(full);             // one more filled slot
}

Consumer Code - Solution

void consumer(void)
{
    wait(full);               // wait until there is at least one filled slot
    wait(S);                  // lock the buffer (binary semaphore)
    itemC = buffer[out];      // take the item from the first filled slot
    out = (out + 1) % n;
    signal(S);                // unlock the buffer
    signal(empty);            // one more empty slot
}

Let's understand the above solution of the producer and consumer code.

Before starting the explanation of the code, first understand the few terms used in it:

1. "in", used in the producer code, represents the next empty buffer slot.

2. "out", used in the consumer code, represents the first filled buffer slot.

3. "empty" is a counting semaphore which keeps count of the number of empty buffer slots.

4. "full" is a counting semaphore which keeps count of the number of filled buffer slots.

5. "S" is a binary semaphore that guards the BUFFER.

If we look at the current situation of the buffer:

S = 1 (initial value of the binary semaphore)

in = 5 (next empty buffer slot)

out = 0 (first filled buffer slot)

As we can see from the figure, the buffer has a total of 8 slots, of which the first 5 are filled; in = 5 points to the next empty position and out = 0 points to the first filled position, so empty = 3 and full = 5.

Semaphores used in the Producer Code:

6. wait(empty) decreases the value of the counting semaphore variable empty by 1; that is, when the producer produces an item, the number of empty slots in the buffer is decreased by one. In case the buffer is full, that is, the value of the counting semaphore variable empty is 0, wait(empty) traps the process (as per the definition of wait) and does not allow it to go further.

7. wait(S) decreases the binary semaphore variable S to 0, so that no other process that wants to enter its critical section is allowed.

8. signal(S) increases the binary semaphore variable S to 1, so that other processes that want to enter their critical sections can now be allowed.

9. signal(full) increases the counting semaphore variable full by 1, as on adding an item into the buffer one more slot is occupied, and the variable full must be updated.

Semaphores used in the Consumer Code:

10. wait(full) decreases the value of the counting semaphore variable full by 1; that is, when the consumer consumes an item, the number of filled slots in the buffer is decreased by one. In case the buffer is empty, that is, the value of the counting semaphore variable full is 0, wait(full) traps the process (as per the definition of wait) and does not allow it to go further.

11. wait(S) decreases the binary semaphore variable S to 0, so that no other process that wants to enter its critical section is allowed.

12. signal(S) increases the binary semaphore variable S to 1, so that other processes that want to enter their critical sections can now be allowed.

13. signal(empty) increases the counting semaphore variable empty by 1, as on removing an item from the buffer one more slot becomes vacant, and the variable empty must be updated accordingly.
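Tying the pieces together, the initialization implied by the discussion above can be sketched as follows (a sketch only; the slot count of 8 matches the example, and semaphores are plain integer variables as defined earlier):

#define n 8                   /* total buffer slots, as in the example above */

int buffer[n];
int in  = 0;                  /* next empty slot                             */
int out = 0;                  /* first filled slot                           */

/* Semaphores, per the definition above, are integer variables accessed
   only through wait() and signal(). */
int S     = 1;                /* binary semaphore guarding the buffer        */
int empty = n;                /* counting semaphore: number of empty slots   */
int full  = 0;                /* counting semaphore: number of filled slots  */

With these initial values, producer() and consumer() above preserve the three constraints listed earlier: no production into a full buffer, no consumption from an empty buffer, and no simultaneous access to the buffer.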