Unit2 OS
Operating systems that support concurrency can execute multiple tasks simultaneously, leading to better
resource utilization, improved responsiveness, and enhanced user experience.
Concurrency is essential in modern operating systems due to the increasing demand for multitasking,
real-time processing, and parallel computing. It is used in a wide range of applications, including web
servers, databases, scientific simulations, and multimedia processing.
However, concurrency also introduces new challenges such as race conditions, deadlocks, and priority inversion,
which need to be managed effectively to ensure the stability and reliability of the system.
Principles of Concurrency
The principles of concurrency in operating systems are designed to ensure that multiple processes or
threads can execute efficiently and effectively, without interfering with each other or causing deadlock.
● Interleaving − Interleaving refers to the interleaved execution of multiple processes or threads. The
operating system uses a scheduler to determine which process or thread to execute at any given
time. Interleaving allows for efficient use of CPU resources and ensures that all processes or
threads get a fair share of CPU time.
● Synchronization − Synchronization refers to the coordination of multiple processes or threads to
ensure that they do not interfere with each other. This is done through the use of synchronization
primitives such as locks, semaphores, and monitors. These primitives allow processes or threads to
coordinate access to shared resources such as memory and I/O devices.
● Mutual exclusion − Mutual exclusion refers to the principle of ensuring that only one process or
thread can access a shared resource at a time. This is typically implemented using locks or
semaphores to ensure that multiple processes or threads do not access a shared resource
simultaneously.
● Deadlock avoidance − Deadlock is a situation in which two or more processes or threads are
waiting for each other to release a resource, resulting in a deadlock. Operating systems use various
techniques such as resource allocation graphs and deadlock prevention algorithms to avoid
deadlock.
● Process or thread coordination − Processes or threads may need to coordinate their activities to
achieve a common goal. This is typically achieved using synchronization primitives such as
semaphores or message passing mechanisms such as pipes or sockets.
● Resource allocation − Operating systems must allocate resources such as memory, CPU time,
and I/O devices to multiple processes or threads in a fair and efficient manner. This is typically
achieved using scheduling algorithms such as round-robin, priority-based, or real-time scheduling.
Concurrency Mechanisms
● Processes vs. Threads − An operating system can support concurrency using processes or
threads. A process is an instance of a program that can execute independently, while a thread is a
lightweight process that shares the same memory space as its parent process.
● Synchronization primitives − Operating systems provide synchronization primitives to coordinate
access to shared resources between multiple processes or threads. Common synchronization
primitives include semaphores, mutexes, and condition variables.
● Scheduling algorithms − Operating systems use scheduling algorithms to determine which
process or thread should execute next. Common scheduling algorithms include round-robin,
priority-based, and real-time scheduling.
● Message passing − Message passing is a mechanism used to communicate between processes or
threads. Messages can be sent synchronously or asynchronously and can include data, signals, or
notifications.
● Memory management − Operating systems provide memory management mechanisms to allocate
and manage memory resources. These mechanisms ensure that each process or thread has its own
memory space and can access memory safely without interfering with other processes or threads.
● Interrupt handling − Interrupts are signals sent by hardware devices to the operating system,
indicating that they require attention. The operating system uses interrupt handling mechanisms to
stop the current process or thread, save its state, and execute a specific interrupt handler to handle
the device's request.
Difficulties of Concurrency: A Simple Example
Consider the following procedure, where chin and chout are global (shared) variables, and suppose two processes P1 and P2 both invoke it:
void echo()
{
    chin = getchar();   /* read a character into shared chin */
    chout = chin;       /* copy it to shared chout */
    putchar(chout);     /* display the character */
}
1. P1 invokes the echo procedure and is interrupted immediately after getchar returns its value and
stores it in chin. At this point, the most recently entered character, x, is stored in variable chin.
2. Process P2 is activated and invokes the echo procedure, which runs to conclusion, inputting and
then displaying a single character, y, on the screen.
3. Process P1 is resumed. By this time, the value x has been overwritten in chin and therefore lost.
Instead, chin contains y, which is transferred to chout and displayed.
Key Terms related to concurrency
Critical Section: A section of code within a process that requires access to shared resources and that
may not be executed while another process is in a corresponding section of code.
Deadlock: A situation in which two or more processes are unable to proceed because each is waiting for
one of the others to do something.
Livelock: A situation in which two or more processes continuously change their state in response to
changes in the other processes without doing any useful work.
Mutual exclusion: The requirement that when one process is in a critical section that accesses shared
resources, no other process may be in a critical section that accesses any of those shared resources.
Race Condition: A situation in which multiple threads and processes read and write a shared data item
and the final result depends on the relative timing of their execution.
Starvation: A situation in which a runnable process is overlooked indefinitely by the scheduler; although it
is able to proceed, it is never chosen.
Race Condition
A race condition is a problem that occurs in an operating system (OS) where two or more processes or
threads are executing concurrently. The outcome of their execution depends on the order in which they
are executed. In a race condition, the exact timing of events is unpredictable, and the outcome of the
execution may vary based on the timing. This can result in unexpected or incorrect behavior of the
system.
For example:
If two threads are simultaneously accessing and changing the same shared resource, such as a variable
or a file, the final state of that resource depends on the order in which the threads execute. If the threads
are not correctly synchronized, they can overwrite each other's changes, causing incorrect results or even
system crashes.
OS Concerns
Design and management issues are raised by the existence of concurrency. The OS must:
● keep track of the various processes;
● allocate and deallocate resources (processor time, memory, files, I/O devices) to active processes;
● protect the data and physical resources of each process against interference by other processes;
● ensure that the results of a process are independent of its relative execution speed.
Mutual Exclusion: Hardware Support
Interrupt Disabling
● In a uniprocessor system, concurrent processes only interleave; disabling interrupts around a critical section therefore guarantees mutual exclusion.
Disadvantages:
● Execution efficiency is degraded, because the processor cannot interleave other work while interrupts are disabled.
● The approach does not work in a multiprocessor system, where disabling interrupts on one processor does not stop processes running on the others.
Machine code or machine language refers to a computer programming language consisting of strings of
ones and zeros, i.e., binary code. Computers can execute machine code directly, without any translation
or conversion.
Special Machine Instructions
● At the hardware level, access to a memory location excludes any other access to that same location.
● Extensions: designers have proposed machine instructions that perform two actions atomically (indivisibly) on the same memory location (e.g., reading and writing).
● The execution of such an instruction is also mutually exclusive (even on multiprocessors).
● They can be used to provide mutual exclusion, but need to be complemented by other mechanisms to satisfy the other requirements of the critical section problem.
Two hardware techniques are used to address the critical section problem:
1. Test and Set Instruction
2. Swap (Exchange) Instruction
1. Test and Set Instruction:
Atomically sets a memory location to 1 and returns the previous value of the location. If the return value
is 1, the lock is already held by someone else; if it is 0, the lock was free and has now been set to 1. In
other words, it tests and modifies the content of a word atomically:
int test_and_set(int *target)
{
    int tmp;
    tmp = *target;   /* remember previous value */
    *target = 1;     /* set the location to 1 (true) */
    return tmp;      /* 0 => lock was free, 1 => already held */
}
Implementing mutual exclusion with test and set (shared int lock = 0):
do {
    while (test_and_set(&lock))
        ;                      /* busy wait until the lock is acquired */
    critical_section();
    lock = 0;                  /* release the lock */
    remainder_section();
} while (1);
2. Swap (Exchange) Instruction:
Atomically exchanges the contents of two memory locations:
void swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Shared data: boolean lock = false;
Process Pi (with local boolean key):
do {
    key = true;
    while (key == true)
        swap(&lock, &key);     /* busy wait: key becomes false only when lock was free */
    critical_section();
    lock = false;              /* release the lock */
    remainder_section();
} while (1);
Advantages of Special Machine Instruction
1. Applicable to any number of processes on either a single processor or
multiple processors sharing main memory
2. Simple and easy to verify
3. It can be used to support multiple critical sections; each critical section can be
defined by its own variable.
Disadvantages of Special Machine Instruction
1. Busy waiting is employed, thus while a process is waiting for access to critical
section, it continues to consume processor time.
2. Starvation is possible, when a process leaves a critical section and more than
one process is waiting, the selection of waiting process is arbitrary. Thus,
some process could indefinitely be denied access.
3. Deadlock is possible.
Semaphores
Semaphores are integer variables used to solve the critical section problem by means of three
operations that are used for process synchronization:
1. A semaphore may be initialized to a nonnegative integer value.
2. The semWait operation decrements the semaphore value. If the value becomes negative, the
process executing the semWait is blocked. Otherwise, the process continues execution.
3. The semSignal operation increments the semaphore value. If the resulting value is less than or
equal to zero, a process blocked by a semWait operation is unblocked.
Types of Semaphore
Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores are
used to coordinate the resource access, where the semaphore count is the number of available
resources. If the resources are added, semaphore count automatically incremented and if the resources
are removed, the count is decremented.
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait
operation proceeds only when the semaphore is 1 (setting it to 0), and the signal operation sets the
semaphore back to 1, unblocking a waiting process if there is one. Binary semaphores are sometimes
easier to implement than counting semaphores.
Strong/Weak Semaphores
A queue is used to hold processes waiting on the semaphore
• Strong Semaphores - The process that has been blocked the longest is released
from the queue first (FIFO)
• Weak Semaphores - The order in which processes are removed from the queue
is not specified
Semaphore Mechanism
Example of strong Semaphore
● Here processes A,B,C depend on a result from process D.
1. Initially, A is running; B,C,D are ready; and the semaphore count is 1, indicating D’s result is
available. When A issues a semWait instruction on semaphore s,the semaphore decrements to 0,
and A can continue to execute; subsequently it rejoins the ready queue.
2. Then B runs and issues semWait instruction, and is blocked, allowing D to run.
3. When D completes a new result, it issues a semSignal instruction, which allows B to move to the
ready queue.
4. D rejoins the ready queue and C begins to run.
5. C is blocked when it issues a semWait instruction. Similarly, A and B are blocked on the semaphore,
allowing D to resume execution.
6. When D has a result, it issues a semSignal instruction, which transfers C to the ready queue. Later
cycles of D will release A and B from the Blocked state.
Mutual Exclusion using Semaphore
The Producer/Consumer Problem
One of the most common problems faced in concurrent processing:
● There are one or more producers generating some type of data (records, characters) and placing
it in a buffer.
● There is a single consumer taking items out of the buffer one at a time.
● Only one producer or consumer may access the buffer at any one time.
The problem: ensure that the producer cannot add data to a full buffer and the consumer cannot remove
data from an empty buffer.
● The figure illustrates the structure of buffer b.
● The producer can generate items and store them in the buffer at its own pace.
● Each time an item is added, the index in into the buffer is incremented. The consumer proceeds in a
similar fashion with its own index out,
● but must make sure that it does not attempt to read from an empty buffer.
● Hence, the consumer makes sure that the producer has advanced beyond it (in > out) before
proceeding.
Producer code for bounded buffer
/* b is the buffer, n its size, v the item to be added */
while (true) {
    /* produce item v */
    while ((in + 1) % n == out)
        ;                      /* do nothing: buffer is full */
    b[in] = v;
    in = (in + 1) % n;
}
Semaphore solution with s, empty, and full semaphores
/* semaphore s = 1 (mutual exclusion), empty = n (free slots), full = 0 (filled slots) */
void producer(void)
{
    while (true) {
        produce_item(v);
        wait(empty);           /* wait for a free slot */
        wait(s);
        buffer[in] = v;
        in = (in + 1) % n;
        signal(s);
        signal(full);          /* announce a filled slot */
    }
}
void consumer(void)
{
    while (true) {
        wait(full);            /* wait for a filled slot */
        wait(s);
        w = buffer[out];
        out = (out + 1) % n;
        signal(s);
        signal(empty);         /* announce a free slot */
        consume_item(w);
    }
}
Monitors
Programming language construct that provides equivalent functionality to that of
semaphores and is easier to control
Only one process may be executing in the monitor at a time; any other process that has invoked the
monitor is blocked, waiting for the monitor to become available.
Monitors : Mutual Exclusion
If there is a process executing in a
monitor, any process that calls a
monitor procedure is blocked outside
of the monitor.
Condition Variables
Synchronisation is achieved by the use of condition variables that are contained within the monitor and
accessible only within the monitor. They are operated on by two functions:
● cwait(c) − suspends execution of the calling process on condition c.
● csignal(c) − resumes execution of some process blocked after a cwait on the same condition.
For a bounded-buffer monitor, for example:
● notfull − is true when there is room to add at least one character to the buffer.
● notempty − is true when there is at least one character in the buffer.
When a process issues csignal, either the signaling process or the released process must give up the
monitor. There are two common and popular approaches to address this problem:
● Hoare Type: The signaling process waits and the released process runs in the monitor immediately.
● Mesa Type (proposed by Lampson and Redell): The released process waits somewhere and the
signaling process continues to use the monitor.
What do you mean by “waiting somewhere”?
The signaling process (Hoare type) or the released process (Mesa type) must wait somewhere. You
could consider that there is a waiting bench in the monitor for these processes to wait on. As a result,
each process involved in a monitor call can be in one of four states:
Active: The running one
Entering: Those blocked by the monitor
Waiting: Those waiting on a condition variable
Inactive: Those waiting on the waiting bench
Drawback of Hoare’s Monitor
1. If the process issuing the csignal has not finished with the monitor, then two
additional process switches are required:
● One to block this process
● another to resume it when the monitor becomes available.
2. Process scheduling associated with a signal must be perfectly reliable. When
csignal is issued, a process from the corresponding condition queue must be
activated immediately and the scheduler must ensure that no other process
enters the monitor before activation.
Otherwise, the condition under which the process was activated could
change.