Congestion Control Algorithms and How They Manage Data Transmission Rates to Avoid Network Congestion and Improve Overall Throughput


The rapid evolution of technology has driven a sharp growth in the number of internet-connected devices, expanding networks and making it a significant challenge to maintain optimal network performance.
Network congestion occurs when the demand for network resources exceeds their capacity. Congestion control aims to ensure balanced utilization of network resources, minimize latency, prevent packet loss, and enhance the user experience on the network.
Congestion Control Algorithms
Network congestion can be managed through the use of control algorithms which continuously observe the state of the network, identify signs of congestion, and implement suitable mechanisms to mitigate the congestion or prevent it from happening.
Leaky Bucket Algorithm
The working of this algorithm can be understood with a leaking bucket analogy. Picture a water bucket with a hole at the bottom, through which water leaks at a fixed rate.
The bucket represents a temporary buffer for data packets, and the water represents incoming traffic, which may arrive in bursts. When incoming traffic exceeds the bucket's capacity, the excess packets spill over and are lost. No matter how fast water pours into the bucket (the data arrival rate), the leak rate stays constant, producing a uniform output flow.
While this method can prevent network congestion by ensuring a steady, predictable output rate, it does not achieve the maximum possible throughput, since it may leave available bandwidth unused during periods of low network traffic.
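The leaky-bucket behaviour described above can be sketched in a few lines of Python. This is a minimal illustrative model, not a production shaper: the class name, the tick-based drain, and the example capacity and rate are all assumptions made for the sketch.

```python
# Minimal leaky-bucket sketch: packets queue in a fixed-size bucket
# and drain at a constant rate, regardless of how bursty arrivals are.
from collections import deque

class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity      # max packets the bucket can hold
        self.leak_rate = leak_rate    # packets released per tick (constant)
        self.queue = deque()

    def arrive(self, packet):
        """Accept a packet, or drop it if the bucket is full (overflow)."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False                  # bucket overflow: packet is lost

    def leak(self):
        """Release at most leak_rate packets this tick: the steady drain."""
        sent = []
        for _ in range(min(self.leak_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent

bucket = LeakyBucket(capacity=4, leak_rate=2)
accepted = [bucket.arrive(p) for p in range(6)]  # burst of 6 packets
print(accepted)       # the last two packets overflow and are dropped
print(bucket.leak())  # only 2 packets leave per tick, however big the burst
```

Note how the output rate is fixed by `leak_rate` alone, which is exactly why the algorithm smooths bursts but cannot exploit idle bandwidth.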
Token Bucket Algorithm
The token bucket algorithm works by generating tokens at a fixed rate and adding them to the bucket. Whenever a data packet is sent, it consumes a token from the bucket. If the bucket is full of tokens, a burst of data can be sent in one go. This flexibility allows the network to fully utilize available bandwidth when possible, hence optimizing throughput.
However, no packets can be sent when no tokens are available, a property that shapes the traffic to the token generation rate. This algorithm is more adaptable and offers flexible rates of data transfer, ensuring the network traffic conforms to a specified average rate.
TCP Congestion Control Algorithm Variants
Over the years, various variants of TCP congestion control protocols have been introduced to improve the reliability and performance of TCP under diverse network conditions.
These variants include:
TCP Tahoe
TCP Tahoe controls data transmission by adjusting the size of the congestion window based on network conditions. This helps ensure fair bandwidth allocation among connections and enhances network throughput. It does this in three main phases.
The first phase is Slow Start, which lets the sender probe the network's capacity without immediately congesting it. The sender starts with a small congestion window (cwnd) and increases it by one segment for each acknowledgement received, which effectively doubles the window every round-trip time.
This process continues until a packet loss is detected or the window size reaches the Slow Start Threshold (ssthresh), then the second phase begins.
The second phase is Additive Increase Multiplicative Decrease (AIMD), also known as congestion avoidance: the congestion window grows by roughly one segment per round trip (additive increase), gently probing for extra capacity. As soon as a packet loss is detected, the multiplicative decrease reduces ssthresh to half of the current cwnd, and Tahoe restarts from slow start with cwnd set back to one segment.
The third phase is the Fast Retransmit, which is a loss detection algorithm triggered by duplicate acknowledgements. In TCP, each received packet prompts an acknowledgment from the receiver.
However, if packets arrive out of sequence, the receiver sends a duplicate of the last acknowledgment, signaling a potential packet loss to the sender. In this case, the sender resends the lost packets without waiting for a timeout.
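The three phases above can be sketched as a single window-update rule applied once per round trip. This is a simplified teaching model, not an implementation of the TCP state machine: the function name, the tick granularity, and the example loss on round trip 5 are all assumptions.

```python
# Sketch of TCP Tahoe's congestion window evolution (in segments).
def tahoe_next_cwnd(cwnd, ssthresh, loss):
    """Return (cwnd, ssthresh) after one round trip."""
    if loss:                            # loss detected (timeout or dup ACKs)
        ssthresh = max(cwnd // 2, 2)    # multiplicative decrease of ssthresh
        cwnd = 1                        # Tahoe restarts from slow start
    elif cwnd < ssthresh:
        cwnd *= 2                       # slow start: exponential growth
    else:
        cwnd += 1                       # congestion avoidance: +1 per RTT
    return cwnd, ssthresh

cwnd, ssthresh = 1, 8
trace = []
for rtt in range(8):
    loss = (rtt == 5)                   # pretend a loss occurs on round trip 5
    cwnd, ssthresh = tahoe_next_cwnd(cwnd, ssthresh, loss)
    trace.append(cwnd)
print(trace)  # [2, 4, 8, 9, 10, 1, 2, 4]
```

The trace shows the characteristic sawtooth: exponential growth up to ssthresh, linear growth beyond it, then a collapse to one segment at the loss.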
TCP Reno
TCP Reno is an improved version of TCP Tahoe and differs in how it reacts when packet loss is detected through duplicate acknowledgements. In addition to the fast retransmit found in TCP Tahoe, TCP Reno adds a Fast Recovery phase.
Upon detecting packet loss (signaled by three duplicate acknowledgements), TCP Reno reduces the window to half of its current size instead of collapsing it to one segment. For each additional duplicate acknowledgement received, the window is inflated by one segment, allowing TCP Reno to sustain high data transfer rates while still backing off enough to avoid congestion collapse, which yields a higher long-term average throughput.
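Reno's reaction to three duplicate ACKs can be sketched alongside Tahoe's for contrast. This is a simplified model of fast retransmit plus fast recovery entry; the function names and the starting window of 20 segments are illustrative assumptions.

```python
# Sketch of TCP Reno's fast retransmit + fast recovery window arithmetic.
def reno_on_triple_dup_ack(cwnd):
    """React to 3 duplicate ACKs: halve the window, then inflate by 3."""
    ssthresh = max(cwnd // 2, 2)   # multiplicative decrease, as in Tahoe
    cwnd = ssthresh + 3            # +3 for the segments the dup ACKs imply left
    return cwnd, ssthresh          # ...but NOT a reset to 1, unlike Tahoe

def reno_on_extra_dup_ack(cwnd):
    """Each further duplicate ACK inflates the window by one segment."""
    return cwnd + 1

cwnd, ssthresh = reno_on_triple_dup_ack(20)
print(cwnd)   # 13: half of 20, plus 3 for the duplicate acknowledgements
cwnd = reno_on_extra_dup_ack(cwnd)
print(cwnd)   # 14: one more segment per extra duplicate ACK
```

The key difference from the Tahoe rule is the absence of `cwnd = 1`: Reno keeps the pipe roughly half full while it repairs the single loss.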
TCP Vegas
Unlike TCP-Reno and Tahoe, TCP-Vegas prioritizes packet delay over packet loss as an indicator to adjust the packet transmission rate. It identifies potential congestion early on by monitoring the increasing Round-Trip Time (RTT) values of the packets in the connection.
Its effectiveness heavily relies on the precise estimation of the BaseRTT value (the minimum round-trip time observed on the path). If this value is underestimated, the connection's throughput will fall short of the available bandwidth; conversely, if it is overestimated, the sender will inject more traffic than the path can carry.
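Vegas's delay-based adjustment can be sketched as a comparison of expected versus actual throughput. This is an illustrative simplification of the Vegas rule: the function name and the `alpha`/`beta` thresholds are assumed values chosen for the example, and real implementations work in segments per RTT with more bookkeeping.

```python
# Sketch of a Vegas-style, delay-based window adjustment.
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=1, beta=3):
    expected = cwnd / base_rtt                # throughput with empty queues
    actual = cwnd / current_rtt               # throughput actually observed
    diff = (expected - actual) * base_rtt     # est. segments queued in network
    if diff < alpha:
        return cwnd + 1    # path underused: increase the window linearly
    if diff > beta:
        return cwnd - 1    # queues building: back off BEFORE loss occurs
    return cwnd            # within the target band: hold steady

# RTT barely above BaseRTT: almost no queuing, so Vegas grows the window.
print(vegas_adjust(10, base_rtt=0.100, current_rtt=0.105))  # 11
# RTT inflated by queuing delay: Vegas shrinks the window pre-emptively.
print(vegas_adjust(10, base_rtt=0.100, current_rtt=0.150))  # 9
```

Because the signal is rising RTT rather than loss, this rule reacts while queues are still filling, which is exactly why an inaccurate BaseRTT skews the whole computation.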
Conclusion
As networks continue to evolve to meet increasing data demands, the implementation of these congestion control algorithms remains critical for maintaining network stability, fairness, high throughput, and reliability.