Getting a Feel for TCP Flow Control and Congestion Control

Imagine you're driving across the country to visit a friend. Along the way, you'll face two distinct challenges: traffic jams at highway intersections where cars overflow the limited waiting space (congestion), and a full parking lot at your destination (receiver capacity). The internet faces these same challenges when sending data, and TCP (Transmission Control Protocol) manages both through two different mechanisms: congestion control and flow control.
The Core Problem: Limited Buffer Space Everywhere
Here's the key insight: both congestion and flow control exist because of the same fundamental limitation—buffer space. Think of buffers as waiting areas that can only hold so many items:
In the network (routers): Like intersection waiting areas that can only hold so many cars
At the destination (receiver): Like a parking lot with a fixed number of spaces
When these buffers fill up, bad things happen—packets get dropped, just like cars being turned away.
Flow Control: The Parking Lot Problem
Let's start with flow control. Imagine you're organizing a convoy of trucks to deliver goods to a warehouse with a small parking lot. The warehouse has only 20 parking spaces for trucks waiting to be unloaded.
The warehouse manager tells you: "My parking lot can only hold 20 trucks. If you send more than that before we unload some, we'll have to turn trucks away at the gate."
This is exactly what flow control prevents. The receiving computer has a limited buffer (parking lot) to temporarily store incoming data packets while processing them. If this buffer fills up, new packets are rejected—data is lost!
How Flow Control Works
The receiver constantly updates the sender about available buffer space:
Receiver: "I have 15 free spaces in my buffer"
Sender: "OK, I'll send at most 15 packets"
Receiver (after processing some): "I've freed up 8 more spaces"
Sender: "Great, I can send 8 more packets"
This "receive window" prevents buffer overflow at the destination. It's like the parking lot manager calling to say "5 trucks just left, you can send 5 more!"
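To make the window exchange concrete, here is a minimal Python sketch of the idea. It is a toy model, not real TCP: the class name, the packet-sized units, and the 20-slot buffer are all illustrative.

```python
# Toy model of TCP flow control: the receiver advertises how much buffer
# space it has left, and the sender never sends more than that window.
# Everything is counted in whole packets to keep the illustration simple.

class Receiver:
    def __init__(self, buffer_size=20):
        self.buffer_size = buffer_size   # total "parking spaces"
        self.buffered = 0                # packets waiting to be processed

    def advertised_window(self):
        return self.buffer_size - self.buffered

    def accept(self, n_packets):
        # Packets that fit are buffered; anything beyond the window is lost.
        accepted = min(n_packets, self.advertised_window())
        self.buffered += accepted
        return accepted

    def process(self, n_packets):
        # Processing frees buffer space, so the window grows again.
        self.buffered = max(0, self.buffered - n_packets)


# Start with 5 packets already in the buffer so 15 slots are free,
# mirroring the exchange above.
receiver = Receiver(buffer_size=20)
receiver.buffered = 5

print(receiver.advertised_window())   # 15 -> "I have 15 free spaces"
receiver.accept(15)                   # sender sends at most 15 packets
print(receiver.advertised_window())   # 0  -> buffer is full

receiver.process(8)                   # receiver unloads 8 packets
print(receiver.advertised_window())   # 8  -> "I've freed up 8 more spaces"
```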
Congestion Control: The Highway Intersection Problem
Now here's where congestion control comes in. Between you and the destination warehouse are numerous highway intersections, each controlled by a router. Think of routers as smart traffic lights with small waiting areas.
Here's the crucial part: Each router has a limited buffer to temporarily hold packets while forwarding them. It's like each intersection having room for only 50 cars to wait. When more than 50 cars arrive:
The intersection can't hold them all
Excess cars (packets) are simply turned away (dropped)
Those cars never reach their destination
Why Packets Get Dropped
When too many data streams converge at a router (like rush hour at a major intersection):
The router's buffer fills up with waiting packets
New arriving packets find no space in the buffer
The router has no choice but to drop these packets
The sender never receives acknowledgment for dropped packets
It's like cars arriving at a full intersection being forced to disappear—they simply can't wait anywhere!
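Here is a small Python sketch of that behavior, assuming a toy router with a 50-packet buffer (the buffer size, arrival pattern, and forwarding rate are made up for illustration; real routers work in bytes and handle far more traffic):

```python
from collections import deque

# Toy router with a bounded buffer: packets that arrive when the buffer
# is full are silently dropped, like cars turned away at a full intersection.

class Router:
    def __init__(self, buffer_size=50):
        self.buffer = deque()
        self.buffer_size = buffer_size   # "room for only 50 cars to wait"
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) < self.buffer_size:
            self.buffer.append(packet)
            return True
        self.dropped += 1                # no space left: the packet is lost
        return False

    def forward_one(self):
        # Forwarding a packet onto the next link frees one buffer slot.
        return self.buffer.popleft() if self.buffer else None


router = Router(buffer_size=50)

# Rush hour: 80 packets arrive, but the router only forwards 20 in that time.
for i in range(80):
    router.enqueue(f"packet-{i}")
    if i % 4 == 0:
        router.forward_one()

# The 50-slot buffer fills up and the remaining arrivals are dropped.
print("waiting:", len(router.buffer), "dropped:", router.dropped)   # waiting: 50 dropped: 10
```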
How TCP Detects Congestion
TCP monitors every packet's journey using acknowledgments (ACKs)—like delivery confirmations:
Normal conditions: Send packet → Receive ACK quickly → All good!
Growing congestion: Send packet → ACK takes longer → Buffers filling up at routers (like cars waiting longer at intersections)
Severe congestion: Send packet → No ACK arrives → Packet was dropped due to full buffers somewhere
Duplicate ACKs: When packets arrive out of order, the receiver keeps re-sending the ACK for the last packet it received in order. Three of these duplicate ACKs mean "I'm still missing packet #5!", which usually means that packet was dropped at a congested router
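A rough Python sketch of the two loss signals above. The three-duplicate-ACK threshold is genuine TCP behavior (it triggers fast retransmit); the timeout value, function names, and ACK stream here are illustrative:

```python
import time

# Two simplified congestion signals, mirroring the list above:
#   1. No ACK within the retransmission timeout -> the packet is presumed dropped.
#   2. Three duplicate ACKs                     -> a specific packet is presumed dropped.

RETRANSMISSION_TIMEOUT = 1.0   # seconds; real TCP derives this from measured RTTs
DUP_ACK_THRESHOLD = 3          # the standard fast-retransmit trigger

def ack_timed_out(send_time, now=None):
    """True if no ACK has arrived within the timeout window."""
    now = time.monotonic() if now is None else now
    return (now - send_time) > RETRANSMISSION_TIMEOUT

def first_loss_from_acks(ack_stream):
    """Return the packet number presumed lost after three duplicate ACKs, else None."""
    last_ack, dup_count = None, 0
    for ack in ack_stream:
        if ack == last_ack:
            dup_count += 1
            if dup_count >= DUP_ACK_THRESHOLD:
                return ack           # the receiver is still waiting for this packet
        else:
            last_ack, dup_count = ack, 0
    return None

# The receiver keeps repeating "I'm still waiting for packet 5":
print(first_loss_from_acks([5, 5, 5, 5]))       # -> 5

# No ACK arrived 1.5 seconds after sending: assume the packet was dropped.
print(ack_timed_out(send_time=0.0, now=1.5))    # -> True
```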
TCP's Response to Congestion
When TCP detects congestion (through missing or delayed ACKs), it immediately reduces its sending rate. This is like seeing a traffic report about a jammed intersection ahead and deciding to send fewer trucks to avoid making it worse.
The strategy:
Slow Start: Begin by sending just a few packets (like sending one truck to test the route)
Increase Gradually: If ACKs return promptly, send more (road is clear, send more trucks)
Back Off Quickly: At first sign of packet loss, cut sending rate in half (intersection is full, reduce traffic immediately)
Probe Again: Slowly increase rate again to find the optimal speed
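This strategy can be sketched in a few lines of Python that track a congestion window (cwnd) measured in packets. It is a simplified take on slow start plus additive-increase/multiplicative-decrease; real TCP counts bytes and has extra states such as fast recovery that are left out here:

```python
# Simplified congestion-window logic, in packets.
# Slow start: double the window every round trip until a threshold.
# Congestion avoidance: then grow by 1 per round trip (additive increase).
# On loss: cut the window in half (multiplicative decrease) and probe again.

cwnd = 1          # Slow Start: begin with a single packet ("one truck")
ssthresh = 64     # slow-start threshold; the value here is illustrative

def on_round_trip_ok():
    """Every packet in the last round trip was ACKed promptly."""
    global cwnd
    if cwnd < ssthresh:
        cwnd *= 2        # exponential growth while the road looks clear
    else:
        cwnd += 1        # gentle linear growth near the network's limit

def on_packet_loss():
    """Loss detected via a timeout or three duplicate ACKs."""
    global cwnd, ssthresh
    ssthresh = max(cwnd // 2, 1)
    cwnd = ssthresh      # back off quickly, then probe upward again from here

for _ in range(7):
    on_round_trip_ok()
print("cwnd before loss:", cwnd)    # 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 65

on_packet_loss()
print("cwnd after loss:", cwnd)     # 32: roughly half of what it was
```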
How They Work Together
Both mechanisms protect against buffer overflow, but at different points:
Flow Control: Prevents overwhelming the receiver's buffer (destination parking lot)
Controlled by: Receiver explicitly stating available space
Protects: The destination computer's memory
Congestion Control: Prevents overwhelming router buffers (intersection waiting areas)
Controlled by: Sender detecting packet loss and delays
Protects: The network infrastructure
TCP respects both limits simultaneously. It's like checking both:
"How many parking spaces are available at destination?" (flow control)
"How much traffic can the intersections handle?" (congestion control)
And always using the lower limit.
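In code, "always using the lower limit" is simply a min() over the two windows. A tiny illustrative sketch (the variable names and numbers are made up):

```python
def send_window(cwnd, rwnd):
    """TCP never has more unacknowledged data in flight than the smaller of:
    cwnd - the congestion window (what the network seems able to handle)
    rwnd - the receive window (what the receiver says it can buffer)
    """
    return min(cwnd, rwnd)

# The network looks healthy, but the receiver is nearly out of buffer space:
print(send_window(cwnd=40, rwnd=5))    # -> 5, flow control is the limit

# The receiver has plenty of room, but the network is congested:
print(send_window(cwnd=4, rwnd=30))    # -> 4, congestion control is the limit
```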
Real-World Example: Video Streaming
When you watch Netflix:
Flow Control in Action:
Your smart TV has limited memory to buffer video
It tells Netflix: "I can only buffer 30 seconds of video" (my parking lot holds 30 trucks)
Netflix never sends more than this, preventing your TV from running out of memory
Congestion Control in Action:
Evening comes, everyone starts streaming
Router buffers in your ISP's network start filling up
Some packets get dropped (full intersections)
Netflix's servers detect the drops through missing ACKs
Automatically reduces video quality to send less data
Prevents total network gridlock
Why Both Are Necessary
Without flow control:
Receivers would constantly drop packets due to full buffers
Like sending 100 trucks to a 20-space parking lot—80 get turned away
Without congestion control:
Router buffers would overflow everywhere
Like everyone driving at rush hour—complete gridlock
Nobody's data gets through
The Beauty of the System
TCP handles both challenges automatically:
Monitors receiver buffer space through explicit window advertisements
Detects network congestion through packet loss and delay patterns
Continuously adjusts sending rate to respect both limits
No central controller needed—each connection self-manages
It's like having smart trucks that:
Know exactly how many parking spaces are available at destination
Can sense when intersections are getting crowded
Automatically adjust their departure rate to prevent problems
Next time your internet feels slow, remember: it's either full buffers in the network (congested intersections) or full buffers at the receiver (packed parking lot). TCP is constantly monitoring both, using ACKs as its feedback system, dropping its sending rate whenever buffers anywhere start filling up. This careful balance keeps data flowing as efficiently as possible while preventing the digital equivalent of turned-away trucks and gridlocked intersections.