Congestion Control in Computer Networks
Last Updated :
21 Feb, 2025
Congestion in a computer network happens when there is too much data being sent at the same time, causing the network to slow down. Just like traffic congestion on a busy road, network congestion leads to delays and sometimes data loss. When the network can't handle all the incoming data, it gets "clogged," making it difficult for information to travel smoothly from one place to another.
Congestion control is a crucial concept in computer networks. It refers to the methods used to prevent network overload and ensure smooth data flow. Congestion control techniques help manage the traffic so that all users can enjoy a stable and efficient network connection. These techniques are essential for maintaining the performance and reliability of modern networks.
Effects of Congestion Control
- Improved Network Stability: Congestion control helps keep the network stable by preventing it from getting overloaded. It manages the flow of data so the network doesn't crash or fail due to too much traffic.
- Reduced Latency and Packet Loss: Without congestion control, data transmission can slow down, causing delays and data loss. Congestion control helps manage traffic better, reducing these delays and ensuring fewer data packets are lost, making data transfer faster and the network more responsive.
- Enhanced Throughput: By avoiding congestion, the network can use its resources more effectively. This means more data can be sent in a shorter time, which is important for handling large amounts of data and supporting high-speed applications.
- Fairness in Resource Allocation: Congestion control ensures that network resources are shared fairly among users. No single user or application can take up all the bandwidth, allowing everyone to have a fair share.
- Better User Experience: When data flows smoothly and quickly, users have a better experience. Websites, online services, and applications work more reliably and without annoying delays.
- Mitigation of Network Congestion Collapse: Without congestion control, a sudden spike in data traffic can overwhelm the network, causing severe congestion and making it almost unusable. Congestion control helps prevent this by managing traffic efficiently and avoiding such critical breakdowns.
Congestion Control Algorithm
Congestion control is a mechanism that regulates the entry of data packets into the network, enabling better use of shared network infrastructure and avoiding congestive collapse. Congestion-avoidance algorithms (CAA) are implemented at the transport layer, most notably in TCP, to prevent congestive collapse in a network. Two widely used traffic-shaping algorithms for congestion control are as follows:
Leaky Bucket Algorithm
- The leaky bucket algorithm is used for network traffic shaping and rate limiting.
- Leaky bucket and token bucket implementations are the two predominant traffic-shaping algorithms.
- This algorithm controls the rate at which traffic enters the network, shaping bursty traffic into a steady stream.
- Its main disadvantage is inefficient use of available network resources.
- Because the output rate is fixed, spare network resources such as bandwidth go unused even when the network could carry more traffic.
Let us consider an example to understand. Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate. When the bucket is full, additional water entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket, and the following steps are involved in the leaky bucket algorithm:
- When a host wants to send a packet, the packet is thrown into the bucket.
- The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
- Bursty traffic is converted to a uniform traffic by the leaky bucket.
- In practice the bucket is a finite queue that outputs at a finite rate.
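The steps above can be sketched as a small Python simulation. This is a minimal, illustrative sketch only: the class name, parameters, and per-tick model are assumptions chosen for clarity, not a standard API.

```python
from collections import deque

class LeakyBucket:
    """Leaky bucket: packets enter a finite queue and leave at a constant rate."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity    # maximum packets the bucket (queue) can hold
        self.leak_rate = leak_rate  # packets transmitted per tick (the constant leak)
        self.queue = deque()

    def add_packet(self, packet):
        """Throw a packet into the bucket; drop it if the bucket is full."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False  # bucket overflowed: the packet is lost

    def tick(self):
        """Simulate one time unit: transmit up to leak_rate packets."""
        sent = []
        for _ in range(min(self.leak_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent
```

For example, a burst of 10 packets offered to a bucket of capacity 5 that leaks 2 packets per tick: the first 5 packets are queued, the remaining 5 are dropped on arrival, and the queued packets drain out at a steady 2 per tick.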
Token Bucket Algorithm
- The leaky bucket algorithm enforces a rigid output rate, independent of how bursty the incoming traffic is.
- In some applications, the output should be allowed to speed up when large bursts arrive. This calls for a more flexible algorithm, preferably one that never loses information. The token bucket algorithm fills this role in network traffic shaping and rate limiting.
- It is a control algorithm that decides when traffic may be sent, based on the availability of tokens in the bucket.
- The bucket holds tokens, each of which represents a packet of a predetermined size. A token is removed from the bucket for each packet that is transmitted.
- A flow may transmit traffic only while tokens are present in the bucket.
- No tokens means no flow can send its packets. Hence, a flow can transmit up to its peak burst rate only if enough tokens have accumulated in the bucket.
Need of Token Bucket Algorithm
The leaky bucket algorithm enforces output at the average rate, no matter how bursty the traffic is. To handle bursty traffic without losing data, we need a more flexible algorithm. One such algorithm is the token bucket algorithm.
Steps of this algorithm can be described as follows:
- At regular intervals, tokens are thrown into the bucket.
- The bucket has a maximum capacity.
- If there is a ready packet, a token is removed from the bucket, and the packet is sent.
- If there is no token in the bucket, the packet cannot be sent.
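The steps above can be sketched as a small Python class. This is an illustrative sketch; the class name, the choice to start with a full bucket, and the one-token-per-packet model are assumptions made for the example.

```python
class TokenBucket:
    """Token bucket: tokens accumulate at a fixed rate; each packet spends one."""

    def __init__(self, capacity, fill_rate):
        self.capacity = capacity    # maximum number of tokens the bucket can hold
        self.fill_rate = fill_rate  # tokens added to the bucket per tick
        self.tokens = capacity      # assume the bucket starts full

    def tick(self):
        # At regular intervals, tokens are thrown into the bucket,
        # but never beyond its maximum capacity.
        self.tokens = min(self.capacity, self.tokens + self.fill_rate)

    def send_packet(self):
        # If a ready packet finds a token, the token is removed and the packet sent.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # no token in the bucket: the packet cannot be sent
```

With a capacity of 3 and a fill rate of 1 token per tick, a freshly created bucket lets 3 packets through back-to-back, after which each further packet must wait for a new token to arrive.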
Let's understand with an example. In figure (A), we see a bucket holding three tokens, with five packets waiting to be transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure (B), we see that three of the five packets have gotten through, but the other two are stuck waiting for more tokens to be generated.
Token Bucket vs Leaky Bucket
The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a certain limit). For an incoming packet to be transmitted, it must capture a token. Hence, bursty packets can be transmitted immediately as long as tokens are available, which introduces some flexibility into the system.
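To make the contrast concrete, the toy simulation below feeds the same 5-packet burst through both shapers and records how many packets leave per tick. The parameter values are arbitrary, chosen only for illustration.

```python
def leaky_bucket_sent(burst_size, capacity, leak_rate, ticks):
    # Leaky bucket: queue the burst (dropping any overflow), drain at a fixed rate.
    queued = min(burst_size, capacity)
    sent = []
    for _ in range(ticks):
        out = min(leak_rate, queued)
        queued -= out
        sent.append(out)
    return sent

def token_bucket_sent(burst_size, tokens, capacity, fill_rate, ticks):
    # Token bucket: spend stored tokens on the burst, then refill each tick.
    waiting = burst_size
    sent = []
    for _ in range(ticks):
        out = min(waiting, tokens)  # each packet sent consumes one token
        waiting -= out
        tokens -= out
        sent.append(out)
        tokens = min(capacity, tokens + fill_rate)
    return sent

print(leaky_bucket_sent(5, capacity=5, leak_rate=1, ticks=5))            # [1, 1, 1, 1, 1]
print(token_bucket_sent(5, tokens=3, capacity=3, fill_rate=1, ticks=5))  # [3, 1, 1, 0, 0]
```

The leaky bucket smooths the burst into a strict one packet per tick, while the token bucket, having three tokens saved up, sends three packets immediately and then falls back to the token-generation rate.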
Advantages
- Stable Network Operation: Congestion control ensures that networks remain stable and operational by preventing them from becoming overloaded with too much data traffic.
- Reduced Delays: It minimizes delays in data transmission by managing traffic flow effectively, ensuring that data packets reach their destinations promptly.
- Less Data Loss: By regulating the amount of data in the network at any given time, congestion control reduces the likelihood of data packets being lost or discarded.
- Optimal Resource Utilization: It helps networks use their resources efficiently, allowing for better throughput and ensuring that users can access data and services without interruptions.
- Scalability: Congestion control mechanisms are scalable, allowing networks to handle increasing volumes of data traffic as they grow without compromising performance.
- Adaptability: Modern congestion control algorithms can adapt to changing network conditions, ensuring optimal performance even in dynamic and unpredictable environments.
Disadvantages
- Complexity: Implementing congestion control algorithms can add complexity to network management, requiring sophisticated systems and configurations.
- Overhead: Some congestion control techniques introduce additional overhead, which can consume network resources and affect overall performance.
- Algorithm Sensitivity: The effectiveness of congestion control algorithms can be sensitive to network conditions and configurations, requiring fine-tuning for optimal performance.
- Resource Allocation Issues: Fairness in resource allocation, while a benefit, can also pose challenges when trying to prioritize critical applications over less essential ones.
- Dependency on Network Infrastructure: Congestion control relies on the underlying network infrastructure and may be less effective in environments with outdated or unreliable equipment.