
BRAINWARE UNIVERSITY

BNCSC201 CLASS NOTES Computer Networks

CONGESTION CONTROL

Congestion in a network may occur if the load on the network (the number of packets sent to the
network) is greater than the capacity of the network (the number of packets a network can handle).
Congestion control refers to the mechanisms and techniques to control the congestion and keep
the load below the capacity.
• When too many packets are pumped into the system, congestion occurs, leading to degradation of performance.
• Congestion tends to feed upon itself and become worse.
• Congestion indicates a lack of balance among the various pieces of networking equipment.
• It is a global issue.
In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal), as shown in the figure below.

Open-Loop Congestion Control:


In open-loop congestion control, policies are applied to prevent congestion before it happens. In
these mechanisms, congestion control is handled by either the source or the destination.
Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or corrupted,
the packet needs to be retransmitted. Retransmission in general may increase congestion in the
network. However, a good retransmission policy can prevent congestion. The retransmission
policy and the retransmission timers must be designed to optimize efficiency and at the same time
prevent congestion. For example, the retransmission policy used by TCP is designed to prevent or
alleviate congestion.
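To make the idea concrete, here is a minimal Python sketch of a retransmission timer that backs off exponentially after every timeout, in the spirit of the TCP policy mentioned above. The RetransmitTimer class and its timeout values are illustrative assumptions, not part of any real protocol stack.

class RetransmitTimer:
    def __init__(self, initial_rto=1.0, max_rto=60.0):
        self.initial_rto = initial_rto
        self.rto = initial_rto        # current retransmission timeout, in seconds
        self.max_rto = max_rto

    def on_timeout(self):
        # Packet presumed lost: wait longer before retransmitting so that
        # retransmissions do not add to an already congested network.
        self.rto = min(self.rto * 2, self.max_rto)
        return self.rto

    def on_ack(self):
        # Acknowledgment received: the network looks healthy, reset the timer.
        self.rto = self.initial_rto

timer = RetransmitTimer()
print(timer.on_timeout())   # 2.0 -> back off after the first loss
print(timer.on_timeout())   # 4.0 -> keep backing off while loss persists
timer.on_ack()              # recovery: return to the normal timeout

Backing off in this way means that a sender facing repeated losses injects fewer and fewer retransmissions, instead of flooding a network that is already struggling.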


Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat window is
better than the Go-Back-N window for congestion control. In the Go-Back-N window, when the
timer for a packet times out, several packets may be resent, although some may have arrived safe
and sound at the receiver. This duplication may make the congestion worse. The Selective Repeat
window, on the other hand, resends only the specific packets that have actually been lost or corrupted.
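The difference can be seen in a small Python sketch (not a full protocol implementation) that compares which packets each policy retransmits after a timeout; the sequence numbers below are made up for illustration.

def go_back_n_resend(base, next_seq):
    # Go-Back-N: resend every outstanding packet starting from the
    # oldest unacknowledged one, even those that arrived correctly.
    return list(range(base, next_seq))

def selective_repeat_resend(outstanding, acked):
    # Selective Repeat: resend only the packets that were never acknowledged.
    return [seq for seq in outstanding if seq not in acked]

outstanding = [4, 5, 6, 7]     # packets sent but not yet acknowledged
acked = {5, 6, 7}              # packet 4 was lost; 5-7 arrived safely

print(go_back_n_resend(4, 8))                        # [4, 5, 6, 7] -> extra load
print(selective_repeat_resend(outstanding, acked))   # [4] -> only the lost packet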
Acknowledgment Policy:
The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver
does not acknowledge every packet it receives, it may slow down the sender and help prevent
congestion. Several approaches are used in this case. A receiver may send an acknowledgment
only if it has a packet to be sent or a special timer expires. A receiver may decide to
acknowledge only N packets at a time. Note that acknowledgments are also part of the load on a
network; sending fewer acknowledgments imposes less load on the network.
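A minimal Python sketch of such an acknowledgment policy is given below: the receiver holds back its ACK until it has received N packets or its ACK timer expires, and then sends one cumulative acknowledgment. The Receiver class, the value of ACK_EVERY_N and the timer hook are assumptions made for illustration.

ACK_EVERY_N = 2   # acknowledge at most every second packet (illustrative value)

class Receiver:
    def __init__(self):
        self.unacked = 0        # packets received but not yet acknowledged
        self.highest_seq = -1

    def on_packet(self, seq):
        self.highest_seq = max(self.highest_seq, seq)
        self.unacked += 1
        if self.unacked >= ACK_EVERY_N:
            return self.send_ack()
        return None             # hold the ACK -> fewer packets on the network

    def on_ack_timer(self):
        # The special timer expired: acknowledge whatever has arrived so far.
        return self.send_ack() if self.unacked else None

    def send_ack(self):
        self.unacked = 0
        return ("ACK", self.highest_seq)   # one cumulative ACK covers several packets

rx = Receiver()
print(rx.on_packet(0))   # None -> ACK withheld
print(rx.on_packet(1))   # ('ACK', 1) -> a single ACK for two packets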
Discarding Policy:
A good discarding policy by the routers may prevent congestion and at the same time may not
harm the integrity of the transmission. For example, in audio transmission, if the policy is to
discard less sensitive packets when congestion is likely to happen, the quality of sound is still
preserved and congestion is prevented or alleviated.
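The sketch below shows one possible shape of such a policy in Python: when its buffer is full, a router discards the least sensitive packet first so that more important packets survive. The Packet priorities, the queue limit and the audio example are illustrative assumptions.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    priority: int                       # lower value = less sensitive, dropped first
    payload: str = field(compare=False)

class RouterQueue:
    def __init__(self, limit=3):
        self.limit = limit
        self.buffer = []                # min-heap ordered by priority

    def enqueue(self, pkt):
        if len(self.buffer) < self.limit:
            heapq.heappush(self.buffer, pkt)
            return None
        # Buffer full (congestion likely): discard the least sensitive packet.
        if self.buffer[0].priority < pkt.priority:
            return heapq.heapreplace(self.buffer, pkt)
        return pkt

q = RouterQueue()
for p in [Packet(2, "audio key frame"), Packet(1, "audio fill-in"),
          Packet(2, "audio key frame"), Packet(3, "control")]:
    dropped = q.enqueue(p)
    if dropped:
        print("discarded:", dropped.payload)   # the low-priority fill-in is dropped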
Admission Policy:
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in
virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before
admitting it to the network. A router can deny establishing a virtual-circuit connection if there is
congestion in the network or if there is a possibility of future congestion.
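A minimal Python sketch of an admission check is shown below: a switch reserves bandwidth and buffer space for a flow only if enough is left, and otherwise refuses the virtual-circuit setup. The Switch class and the capacity figures are illustrative assumptions.

class Switch:
    def __init__(self, bandwidth_kbps=1000, buffers=100):
        self.free_bandwidth = bandwidth_kbps
        self.free_buffers = buffers

    def admit(self, required_kbps, required_buffers):
        # Admit the flow only if its resource requirement can be reserved.
        if (required_kbps <= self.free_bandwidth
                and required_buffers <= self.free_buffers):
            self.free_bandwidth -= required_kbps
            self.free_buffers -= required_buffers
            return True                 # virtual circuit accepted
        return False                    # setup denied: admitting it risks congestion

sw = Switch()
print(sw.admit(800, 40))   # True  -> resources reserved for the flow
print(sw.admit(500, 40))   # False -> would exceed the remaining bandwidth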

Closed-Loop Congestion Control


Closed-loop congestion control mechanisms try to alleviate congestion after it happens. Several
mechanisms have been used by different protocols.
Backpressure:
The technique of backpressure refers to a congestion control mechanism in which a congested
node stops receiving data from the immediate upstream node or nodes. This may cause the
upstream node or nodes to become congested, and they, in turn, reject data from their own upstream
nodes, and so on. Backpressure is a node-to-node congestion control that starts with a
node and propagates, in the opposite direction of data flow, to the source. The backpressure
technique can be applied only to virtual circuit networks, in which each node knows the upstream
node from which a flow of data is coming.


Node III in the figure has more input data than it can handle. It drops some packets in its input
buffer and informs node II to slow down. Node II, in turn, may be congested because it is slowing
down the output flow of data. If node II is congested, it informs node I to slow down, which in
turn may create congestion. If so, node I informs the source of data to slow down. This, in time,
alleviates the congestion. Note that the pressure on node III is moved backward to the source to
remove the congestion. None of the virtual-circuit networks discussed here use backpressure. It
was, however, implemented in the first virtual-circuit network, X.25. The
technique cannot be implemented in a datagram network because in this type of network, a node
(router) does not have the slightest knowledge of the upstream router.
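The hop-by-hop propagation can be sketched in a few lines of Python; the Node class, its spare_capacity field and the print statements are purely illustrative.

class Node:
    def __init__(self, name, upstream=None, spare_capacity=0):
        self.name = name
        self.upstream = upstream              # the node this one receives data from
        self.spare_capacity = spare_capacity  # how much slowdown it can absorb

    def slow_down(self):
        # A downstream neighbour has asked this node to reduce its output rate.
        if self.upstream is None:
            print(f"{self.name}: source reduces its sending rate")
            return
        print(f"{self.name}: slowing down output")
        if self.spare_capacity == 0:
            # Slowing the output lets this node's own queue build up, so it
            # becomes congested too and pushes the pressure one hop upstream.
            self.upstream.slow_down()

source = Node("Source")                 # no upstream: this is the data source
node1 = Node("Node I", upstream=source)
node2 = Node("Node II", upstream=node1)
node3 = Node("Node III", upstream=node2)

# Node III has more input than it can handle: it drops some packets and starts
# the backpressure chain by asking its upstream node, Node II, to slow down.
node3.upstream.slow_down()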
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. Note the
difference between the backpressure and choke packet methods. In back-pressure, the warning is
from one node to its upstream node, although the warning may eventually reach the source station.
In the choke packet method, the warning is from the router, which has encountered congestion, to
the source station directly. The intermediate nodes through which the packet has travelled are not
warned. We have seen an example of this type of control in ICMP. When a router in the Internet
is overwhelmed with datagrams, it may discard some of them, but it informs the source host using
an ICMP source-quench message. The warning message goes directly to the source station; the
intermediate routers do not take any action. The figure shows the idea of a choke packet.
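The sketch below captures the choke-packet idea in Python: a congested router warns the source directly, and the source reacts by cutting its sending rate. The message format, the halving of the rate and the queue limit are assumptions made for illustration (the ICMP source-quench message itself has since been deprecated).

class Source:
    def __init__(self, rate_pps=100):
        self.rate_pps = rate_pps          # current sending rate, packets per second

    def on_choke_packet(self, from_router):
        # Halving the rate is only an illustrative reaction to the warning.
        self.rate_pps = max(1, self.rate_pps // 2)
        print(f"choke packet from {from_router}: rate cut to {self.rate_pps} pps")

class Router:
    def __init__(self, name, queue_limit=2):
        self.name = name
        self.queue_limit = queue_limit
        self.queue_len = 0

    def on_datagram(self, source):
        self.queue_len += 1
        if self.queue_len > self.queue_limit:
            # Overwhelmed: discard the datagram and warn the source directly;
            # the routers between them are not involved in the warning.
            source.on_choke_packet(self.name)

src = Source()
router = Router("R3")
for _ in range(4):
    router.on_datagram(src)   # the third and fourth datagrams trigger choke packets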

Implicit Signalling
In implicit signalling, there is no communication between the congested node or nodes and the
source. The source guesses that there is congestion somewhere in the network from other
symptoms. For example, when a source sends several packets and there is no acknowledgment for
a while, one assumption is that the network is congested. The delay in receiving an
acknowledgment is interpreted as congestion in the network; the source should slow down. We
will see this type of signalling when we discuss TCP congestion control later in the chapter.
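A small Python sketch of implicit signalling is given below: the sender receives no explicit warning, but treats a long silence without acknowledgments as a sign of congestion and slows down, much as TCP does. The sending rate, the timeout value and the halving step are illustrative assumptions.

import time

class Sender:
    def __init__(self, rate_pps=100, ack_timeout=0.1):
        self.rate_pps = rate_pps
        self.ack_timeout = ack_timeout
        self.last_ack_time = time.monotonic()

    def on_ack(self):
        self.last_ack_time = time.monotonic()

    def check_for_congestion(self):
        # No ACK for a while? Interpret the delay itself as congestion.
        if time.monotonic() - self.last_ack_time > self.ack_timeout:
            self.rate_pps = max(1, self.rate_pps // 2)
            print(f"no ACK within {self.ack_timeout}s: rate reduced to {self.rate_pps} pps")

tx = Sender()
time.sleep(0.2)              # simulate a period with no acknowledgments arriving
tx.check_for_congestion()    # the missing ACKs are the only congestion signal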
Explicit Signalling
The node that experiences congestion can explicitly send a signal to the source or destination. The
explicit signalling method, however, is different from the choke packet method. In the choke packet
method, a separate packet is used for this purpose; in the explicit signalling method, the signal is
included in the packets that carry data. Explicit signalling, as we will see in Frame Relay congestion
control, can occur in either the forward or the backward direction.
Backward signalling: A bit can be set in a packet moving in the direction opposite to the congestion.
This bit can warn the source that there is congestion and that it needs to slow down to avoid the
discarding of packets.
Forward signalling: A bit can be set in a packet moving in the direction of the congestion. This
bit can warn the destination that there is congestion. The receiver in this case can use policies,
such as slowing down the acknowledgments, to alleviate the congestion.
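The two directions can be illustrated with a short Python sketch in the style of Frame Relay's BECN and FECN bits; the Frame layout below is an assumption for illustration, not the actual Frame Relay header format.

from dataclasses import dataclass

@dataclass
class Frame:
    payload: str
    becn: bool = False   # backward bit: set on frames travelling toward the source
    fecn: bool = False   # forward bit: set on frames travelling toward the destination

def congested_switch(forward_frame, backward_frame):
    # A congested switch marks passing data frames in both directions
    # instead of generating separate warning packets.
    forward_frame.fecn = True      # warn the destination ahead of the congestion
    backward_frame.becn = True     # warn the source behind the congestion
    return forward_frame, backward_frame

fwd, bwd = congested_switch(Frame("data toward destination"), Frame("data toward source"))
print(fwd.fecn, bwd.becn)   # True True
# On seeing BECN the source slows down; on seeing FECN the destination can,
# for example, delay its acknowledgments to help relieve the congestion.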

2022-23 Prepared by: Ayan Mukherjee (Brainware University, Barasat)
