CH 6


Operating Systems

By
Hamna Khalid

Department of Software Engineering


University of Sahiwal

1
Synchronization Tools

2
Process Synchronization
• Process synchronization is the coordination of the execution of
multiple processes in a multi-process system so that they access
shared resources in a controlled and predictable manner. It aims
to resolve race conditions and other synchronization problems in
a concurrent system.

• The main objective of process synchronization is to ensure
that multiple processes access shared resources without
interfering with each other, and to prevent inconsistent data
caused by concurrent access. To achieve this, various
synchronization techniques are used, such as semaphores,
monitors, and critical sections.
3
Process Synchronization
• In a multi-process system, synchronization is necessary to
ensure data consistency and integrity, and to avoid the risk of
deadlocks and other synchronization problems. Process
synchronization is an important aspect of modern operating
systems, and it plays a crucial role in ensuring the correct and
efficient functioning of multi-process systems.

4
Process Synchronization
Based on synchronization, processes are categorized as one of
the following two types:
• Independent Process: The execution of one process does not
affect the execution of other processes.
• Cooperative Process: A process that can affect or be affected
by other processes executing in the system.
The process synchronization problem arises with cooperative
processes, because cooperative processes share resources.

5
Race Condition
When more than one process executes the same code or accesses the same
memory or shared variable, the output or the value of the shared
variable may be wrong; because each process is effectively racing to
have its result stick, this condition is known as a race condition.
When several processes access and manipulate the same data
concurrently, the outcome depends on the particular order in which the
accesses occur. A race condition is a situation that may occur inside a
critical section: the result of multiple threads executing in the
critical section differs according to the order in which the threads
run. Race conditions in critical sections can be avoided if the
critical section is treated as an atomic instruction; proper thread
synchronization using locks or atomic variables also prevents race
conditions.
6
Example

7
Example
• Let’s walk through one example to understand race conditions
better:
Say there are two processes, P1 and P2, which share a common variable
(shared=10); both processes are in the ready queue waiting for their
turn to execute. Suppose process P1 runs first: the CPU copies the
shared variable (shared=10) into P1’s local variable (X=10) and
increments it by 1 (X=11). Then, when the CPU reaches the line
sleep(1), it switches from the current process P1 to process P2 in the
ready queue, and P1 goes into a waiting state for 1 second.

8
Example
Now the CPU executes process P2 line by line: it copies the shared
variable (shared=10) into P2’s local variable (Y=10) and decrements Y
by 1 (Y=9). Then, when the CPU reaches sleep(1), P2 goes into a
waiting state and the CPU sits idle for a while, since the ready queue
is empty. After P1’s 1 second elapses and it returns to the ready
queue, the CPU resumes P1 and executes its remaining line of code,
storing the local variable (X=11) into the shared variable
(shared=11). The CPU then idles again until P2’s 1 second elapses;
when P2 returns to the ready queue, the CPU executes P2’s remaining
line, storing the local variable (Y=9) into the shared variable
(shared=9).

9
Example
Note:
We expect the final value of the shared variable after process P1 and
process P2 execute to be 10 (P1 increments the variable (shared=10) by
1 and P2 decrements the result by 1, so it should end up back at
shared=10). Instead we get an undesired value (shared=9) due to the
lack of proper synchronization.
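The lost update above can be reproduced deterministically. The sketch below is an illustration, not code from the slides: it uses Python threads in place of processes, and threading events in place of sleep(1) to force exactly the interleaving the walkthrough describes.

```python
import threading

shared = 10                      # the common variable from the example
p1_copied = threading.Event()    # P1 has read shared into X
p2_copied = threading.Event()    # P2 has read shared into Y
p1_wrote = threading.Event()     # P1 has written X back

def p1():
    global shared
    x = shared          # X = 10
    x += 1              # X = 11
    p1_copied.set()     # models sleep(1): hand the CPU to P2
    p2_copied.wait()    # resume only after P2 has also read shared
    shared = x          # shared = 11
    p1_wrote.set()

def p2():
    global shared
    p1_copied.wait()    # P2 runs after P1's read/increment
    y = shared          # Y = 10 (P1 has not written back yet!)
    y -= 1              # Y = 9
    p2_copied.set()     # models sleep(1)
    p1_wrote.wait()     # resume after P1 writes back
    shared = y          # shared = 9, silently overwriting P1's update

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)           # 9, not the expected 10: P1's increment is lost
```

The final value is 9 rather than 10 because P2 read the shared variable before P1 wrote its update back, exactly as in the slide's walkthrough.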

10
Critical Section
A critical section is a code segment
that can be accessed by only one
process at a time. The critical
section contains shared variables
that need to be synchronized to
maintain the consistency of data
variables. So the critical section
problem means designing a way
for cooperative processes to access
shared resources without creating
data inconsistencies.

11
Critical Section
In the entry section, the process requests entry to the critical section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other
processes are waiting outside the critical section, then only those
processes that are not executing in their remainder section can participate
in deciding which will enter the critical section next, and the selection can
not be postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.
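As a rough illustration of these requirements, the sketch below (not from the slides; it uses Python's threading.Lock as the entry/exit mechanism) shows the entry section, critical section, and exit section structure. Mutual exclusion holds because only one thread can hold the lock at a time.

```python
import threading

counter = 0                      # shared data touched in the critical section
lock = threading.Lock()          # guards entry to the critical section

def worker():
    global counter
    for _ in range(50_000):
        with lock:               # entry section: acquire the lock
            counter += 1         # critical section: update shared data
        # exit section: the lock is released when the with-block ends
        # remainder section: non-shared work would go here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 200000: no updates are lost
```

Without the lock, the four threads' read-modify-write sequences could interleave and lose updates, just as in the race condition example.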

12
Synchronization Problems

• The Producer-Consumer Problem (also called the Bounded-Buffer
Problem)
• The Readers–Writers Problem
• The Dining Philosophers Problem.
Synchronization Mechanism(Solution)
• Semaphores.
• Lock Variables (Hardware Synchronization).
• Test-and-Set Instructions (Hardware Synchronization).

13
Producer-Consumer Problem
The producer-consumer problem is an example of a
multi-process synchronization problem. The problem describes
two processes, the producer and the consumer that share a
common fixed-size buffer and use it as a queue.
• The producer’s job is to generate data, put it into the buffer,
and start again.
• At the same time, the consumer is consuming the data (i.e.,
removing it from the buffer), one piece at a time.
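A common semaphore-based solution (sketched here in Python as an illustration; the buffer size and item count are arbitrary choices) uses one semaphore to count free slots, one to count filled slots, and a mutex to protect the buffer itself:

```python
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()                        # the fixed-size shared buffer
mutex = threading.Lock()                # mutual exclusion on the buffer
empty = threading.Semaphore(BUF_SIZE)   # counts free slots
full = threading.Semaphore(0)           # counts filled slots

consumed = []

def producer():
    for item in range(10):
        empty.acquire()        # wait until a slot is free
        with mutex:
            buffer.append(item)
        full.release()         # signal: one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()         # wait until an item is available
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # signal: one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)                # items arrive in FIFO order: [0, 1, ..., 9]
```

The producer blocks when the buffer is full (empty reaches 0), and the consumer blocks when it is empty (full reaches 0), so neither overruns the other.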

14
Semaphores
• A semaphore S is an integer variable used to coordinate the
activities of multiple processes in a computer system. It can only
be accessed via two indivisible (atomic) operations
– wait() and signal()
• Originally called P() and V().
• Semaphores are used to enforce mutual exclusion, avoid race
conditions, and implement synchronization between processes.
• A semaphore provides two operations: wait (P) and signal (V). The
wait operation decrements the value of the semaphore, and the
signal operation increments it. When the value of the semaphore
is zero, any process that performs a wait operation is blocked
until another process performs a signal operation.

15
Semaphores
• Semaphores are used to implement critical sections, which are
regions of code that must be executed by only one process at a
time.
• When a process performs a wait operation on a semaphore, the
operation checks whether the value of the semaphore is >0. If
so, it decrements the value of the semaphore and lets the
process continue its execution; otherwise, it blocks the process
on the semaphore. A signal operation on a semaphore activates
a process blocked on the semaphore if any, or increments the
value of the semaphore by 1. Due to these semantics,
semaphores are also called counting semaphores.

16
Semaphores
Semaphores are of two types:
1. Binary Semaphore –
This is also known as a mutex lock. It can have only two values –
0 and 1. Its value is initialized to 1. It is used to implement the
solution of critical section problems with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.
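To illustrate a counting semaphore guarding a resource with multiple instances, the sketch below (an illustration, not from the slides; the pool size of 3 and thread count of 8 are arbitrary) limits how many threads can hold a resource at once:

```python
import threading
import time

pool = threading.Semaphore(3)    # counting semaphore: 3 resource instances
active = 0                       # how many threads currently hold a resource
peak = 0                         # highest concurrency observed
meter = threading.Lock()         # protects the two counters above

def use_resource():
    global active, peak
    pool.acquire()               # wait(): blocks if all 3 instances are in use
    with meter:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)             # hold the resource briefly
    with meter:
        active -= 1
    pool.release()               # signal(): return the instance to the pool

threads = [threading.Thread(target=use_resource) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                      # never exceeds 3
```

Initializing the semaphore to 1 instead of 3 would reduce this to a binary semaphore, i.e. a mutex lock.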

17
Semaphores
First, look at two
operations that can be
used to access and
change the value of the
semaphore variable.
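The original slide shows these two operations as an image; as a stand-in, here is a minimal counting semaphore sketched in Python, with wait (P) and signal (V) made atomic via a condition variable (an implementation choice of this sketch, not something the slides prescribe):

```python
import threading

class Semaphore:
    """Counting semaphore with the classic P/V (wait/signal) operations."""

    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()  # makes both operations atomic

    def wait(self):                  # P / down: decrement, blocking at zero
        with self._cond:
            while self._value == 0:  # block until some process signals
                self._cond.wait()
            self._value -= 1

    def signal(self):                # V / up: increment and wake one waiter
        with self._cond:
            self._value += 1
            self._cond.notify()
```

A process brackets its critical section with these calls: s.wait() before entering and s.signal() after leaving.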

18
Semaphores
Some points regarding the P and V operations:
1. The P operation is also called wait, sleep, or down; the V
operation is also called signal, wake-up, or up.
2. Both operations are atomic, and the semaphore S is initialized
to one (for mutual exclusion). Here atomic means that the read,
modify, and update of the variable happen as one step, with no
preemption: in between the read, modify, and update, no other
operation that might change the variable is performed.
3. A critical section is surrounded by both operations to implement
process synchronization. The critical section of a process lies
between its P and V operations.

19
Semaphores
• Now, let us see how a semaphore implements
mutual exclusion. Let there be two
processes P1 and P2 and a semaphore
s initialized to 1. If P1 enters its
critical section, the value of semaphore
s becomes 0. If P2 now wants to enter its
critical section, it must wait until s > 0,
which can only happen when P1 finishes its
critical section and calls the V operation
on semaphore s.
• This way, mutual exclusion is achieved.
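The P1/P2 scenario above can be sketched directly with a binary semaphore (here Python's threading.Semaphore initialized to 1, standing in for s; the overlap flag is an instrumentation detail of this sketch, used only to verify that both threads are never in the critical section at once):

```python
import threading
import time

s = threading.Semaphore(1)   # semaphore s initialized to 1
inside = 0                   # how many threads are in the critical section
overlap = False              # set if mutual exclusion were ever violated

def process(name):
    global inside, overlap
    s.acquire()              # P(s): s becomes 0, any other caller blocks
    inside += 1
    if inside > 1:
        overlap = True       # would mean two processes are inside at once
    time.sleep(0.05)         # simulate work in the critical section
    inside -= 1
    s.release()              # V(s): s back to 1, a blocked waiter may enter

t1 = threading.Thread(target=process, args=("P1",))
t2 = threading.Thread(target=process, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(overlap)               # False: mutual exclusion held
```

Whichever thread acquires s first plays the role of P1 in the slide's narrative; the other blocks in acquire() until the V operation runs.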

20
