Iptables Deep Dive: The Story Behind the Packets You Thought You Understood

Table of contents
- Introduction
- The Hidden Architecture I Never Saw Coming
- Mapping the Tables: Where Packet Stories Begin
- Unlocking the Checkpoints: Where Packets Get Judged
- The Hidden Symphony: How iptables Tables Actually Work Together
- Decoding The Hooks: Where The Real Packet Drama Happens
- When the Theory Gets Real: iptables in Action 🚀
- Writing Your Own iptables Rules
- Connection Tracking: The Stateful Superpower
- Tracking a Packet Through Kubernetes: The Real Iptables Adventure
- Following the Kubernetes Packet Trail 🚚
- The Full Picture
- Wrapping It All Up: From Mystery to Mastery

Introduction
It started with a simple question.
One that kept me awake, staring at my screen, more times than I’d like to admit:
"Where exactly is my packet going?"
In the labyrinth of Kubernetes networking and Linux firewalls, I was lost. Sure, I’d written firewall rules. I’d even tinkered with Kubernetes Services. But when it came to tracking a packet’s journey across a Kubernetes NodePort—step by step, chain by chain—I hit a wall.
I couldn’t see the full picture, and that terrified me.
Because if I couldn’t follow the packet, how could I troubleshoot it? How could I explain it? How could I control it?
I needed to understand iptables, not just as a list of cryptic commands—but as a living, breathing map of my network’s heartbeat.
So I rolled up my sleeves and went in deep. What I found was more fascinating—and more essential—than I ever imagined.
The Hidden Architecture I Never Saw Coming
At its core, iptables is the gatekeeper. It’s the frontline defense built right into the Linux kernel, thanks to something called the Netfilter framework.
Every packet that enters or exits a Linux system is examined here.
Accepted. Rejected. Transformed.
But where and when these decisions happen? That’s where most people—including me—get tripped up.
Here’s the core workflow:
A packet enters through a network interface.
It’s evaluated by tables and chains full of filtering rules.
The first matching rule decides the packet’s fate: pass, block, or change it.
Then the packet either moves forward or gets dropped.
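For example (the address 203.0.113.5 is a documentation IP used purely for illustration), rule order is everything here: the DROP below matches first, so the broader ACCEPT never gets a say for that host:
# Block everything from one host, then allow web traffic from everyone else
sudo iptables -A INPUT -s 203.0.113.5 -j DROP
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT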
But understanding how the tables and chains work together? That’s where the magic—and the confusion—lives.
Mapping the Tables: Where Packet Stories Begin
As I dug deeper, I realized iptables doesn’t just toss all rules into a single messy bucket. No, it’s brilliantly organized into tables—each one with a very specific job. And if you’re going to master this tool, understanding these tables is where your real journey begins.
Filter Table – The Gatekeeper
This is the default table, the one standing guard. It’s the first line of defense that decides:
"Hey, should this packet even be allowed inside?"
It checks incoming traffic, outgoing traffic, and the traffic passing through you like you’re just a middleman.
This table has three main checkpoints (chains):
INPUT: For traffic coming into your system.
OUTPUT: For traffic going out from your system.
FORWARD: For traffic simply passing through.
To peek into what’s happening in the filter table, run:
sudo iptables -t filter -L
Network Address Translation (NAT) Table – The Travel Agent
The NAT table is the master of address translation. It rewrites packet addresses, like updating a traveler’s ticket to make sure it gets to the right terminal.
It’s especially important when your Linux machine is doing routing or gateway duties. This table contains PREROUTING, INPUT, OUTPUT, and POSTROUTING chains.
Check out the NAT table rules like this:
sudo iptables -t nat -L
Mangle Table – The Baggage Handler
The mangle table tweaks packets, changes headers, and marks them for special treatment—like tagging VIP luggage for faster handling.
To view mangle table rules:
sudo iptables -t mangle -L
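For instance (the mark value and the SSH port are purely illustrative), you could mark packets so a later routing policy or traffic shaper treats them specially:
sudo iptables -t mangle -A PREROUTING -p tcp --dport 22 -j MARK --set-mark 1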
Raw Table – The Fast Lane
The raw table is the shortcut lane. It decides whether some packets can skip the connection tracking system entirely—perfect when you need speed and don’t want to keep tabs on everything.
Here’s how you can see the raw table:
sudo iptables -t raw -L
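For example (port 80 here is illustrative), telling connection tracking to ignore busy web traffic entirely:
sudo iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK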
Unlocking the Checkpoints: Where Packets Get Judged
Chains represent different points in the packet processing flow where rules can be applied. Each table contains specific chains that correspond to different stages of packet traversal through the system.
INPUT Chain – The Front Door
Packets headed directly for your system have to pass through the INPUT chain. This is where you decide:
"Do I let them knock, come in, or slam the door shut?"
Example: Allowing SSH access to your system:
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
OUTPUT Chain – The Exit Gate
When your system wants to send something out, it walks through the OUTPUT chain. You control what your system is allowed to talk to.
Example: Allowing outbound HTTP traffic:
sudo iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
FORWARD Chain – The Middleman
When your system is just a passageway—when packets aren’t for you and aren’t from you—they pass through the FORWARD chain.
Example: Forwarding traffic between two network interfaces:
sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
PREROUTING Chain – The Early Checkpoint
The PREROUTING chain is the first place packets stop after arriving at your system, even before routing decisions are made. This is where you might redirect or tag them.
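Example (a sketch; the ports are illustrative): Redirecting incoming traffic on port 80 to a local service listening on port 8080:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080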
POSTROUTING Chain – The Final Check
POSTROUTING is where the final touches happen—right before a packet leaves the system. You might tweak the source address here or handle masquerading.
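Example (a sketch, assuming eth0 is your outbound interface): Masquerading everything that leaves through eth0 so it carries the machine’s own address:
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE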
The Hidden Symphony: How iptables Tables Actually Work Together
Just when I thought I had a grip on the iptables tables, I stumbled onto something bigger.
It turns out these tables don’t just work in silos.
They collaborate.
They queue up.
They wait their turn in a well-orchestrated dance inside the Linux kernel.
But here’s the kicker:
If you don’t know who goes first and why, you’ll never trace your packet’s true path.
You’ll miss the crucial handshakes.
You’ll be flying blind.
That’s the mistake I made—until I discovered the processing order.
The Secret Order of Packet Judgment
Iptables isn’t just “one big filter.” It’s more like a queue at airport security. Each table gets its chance to inspect or modify your packet—but there’s a strict sequence.
Here’s the fast-pass lane:
Raw Table: First in line, used to bypass connection tracking.
Connection Tracking: Kicks in after the raw table does its thing.
Mangle Table: Adjusts packet headers and QoS markings.
NAT Table: Handles address translation—think forwarding and port mapping.
Filter Table: The final verdict—accept, drop, or reject.
Security Table: Applies Mandatory Access Control (like SELinux rules).
The key?
The raw table always comes first, and the filter and security tables come last.
That’s how Linux ensures packets are fully modified before the final security verdicts are made.
Decoding The Hooks: Where The Real Packet Drama Happens
So where does this processing actually unfold?
Enter the five critical hooks in the Linux network stack.
These are like security checkpoints, each playing a unique role.
📍 PREROUTING: Where All Journeys Begin
As soon as a packet lands on your system, it doesn’t wait—it’s instantly inspected in this order:
Raw Table: Marks packets with NOTRACK if you want them to skip connection tracking. Perfect for high-traffic scenarios.
Mangle Table: Adjusts packet headers or adds QoS tags.
NAT Table: Redirects packets (Destination NAT) to different internal addresses or ports.
🎯 After this, Linux decides:
Should I keep this packet?
Or should I pass it along?
📍 INPUT: Guarding Your System’s Front Door
If the packet is destined for your machine, it walks through:
Mangle Table: One last chance to tweak the packet.
Filter Table: Accept? Drop? Reject? Your security rules rule here.
Security Table: SELinux or other advanced policies come into play.
NAT Table: Handles local NAT adjustments if needed.
📍 FORWARD: Playing Middleman
Packets just passing through? They hit:
Mangle Table: Adjusts traffic in motion.
Filter Table: Controls what’s allowed to pass.
Security Table: Enforces access control for transiting traffic.
📍 OUTPUT: Sending Packets From Your System
For packets generated by your system, the flow is:
Raw Table: Optionally skips connection tracking.
Mangle Table: Final header tweaks.
NAT Table: Performs NAT for outgoing traffic.
Filter Table: Decides what’s allowed to leave.
Security Table: Enforces outbound security.
📍 POSTROUTING: The Final Farewell
Before packets leave the system, they’re processed here:
Mangle Table: Last-minute packet adjustments.
NAT Table: Source NAT (SNAT) or masquerading happens here.
When the Theory Gets Real: iptables in Action 🚀
Let’s break this down with some real-world examples.
🖥️ Web Server Traffic Flow
Say you’ve got a web server. An incoming HTTP packet might flow like this (a combined rule sketch follows the list):
Raw Table (PREROUTING): You could mark high-traffic HTTP packets with NOTRACK to reduce tracking load.
Mangle Table (PREROUTING): You might tag HTTP packets with QoS for priority handling.
NAT Table (PREROUTING): Maybe you’re redirecting port 80 traffic to port 8080.
Filter Table (INPUT): Here’s where you decide: Is this request allowed in?
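Here’s a minimal sketch of the mangle, NAT, and filter steps (the DSCP class and the 80-to-8080 redirect are illustrative choices). I’ve left the NOTRACK rule out of this particular combination because untracked packets also skip the nat table, so NOTRACK and a redirect don’t mix on the same flow:
# mangle: tag web traffic with a DSCP class for priority handling
sudo iptables -t mangle -A PREROUTING -p tcp --dport 80 -j DSCP --set-dscp-class AF21
# nat: send port 80 to the service actually listening on 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
# filter: allow the redirected traffic in (INPUT sees the post-DNAT port)
sudo iptables -A INPUT -p tcp --dport 8080 -j ACCEPT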
🌐 Router or Gateway Traffic Flow
If your system is acting as a gateway (a rule sketch follows the list):
Mangle Table (PREROUTING): Tag packets for special routing.
NAT Table (PREROUTING): Forward packets to internal servers.
Filter Table (FORWARD): Decide which packets can traverse your system.
NAT Table (POSTROUTING): Masquerade packets so their source IPs are hidden.
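A matching sketch, assuming a made-up layout for illustration: eth0 faces the internet, eth1 faces the LAN, and an internal web server sits at 192.168.1.10:
# nat PREROUTING: forward incoming port 80 to the internal server
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80
# filter FORWARD: allow that forwarded traffic to cross the box
sudo iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT
# nat POSTROUTING: hide internal clients behind the gateway's address
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE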
Writing Your Own iptables Rules
Basic Rule Syntax
The general syntax for creating iptables rules follows this pattern:
sudo iptables -A <chain> -i <interface> -p <protocol> -s <source> --dport <port> -j <target>
Where:
-A appends the rule to the specified chain
-i specifies the input network interface
-p defines the protocol (tcp, udp, icmp, or all)
-s sets the source address
--dport specifies the destination port
-j defines the target action
🛠️ Common Command Examples
Allow loopback traffic (essential for system functionality):
sudo iptables -A INPUT -i lo -j ACCEPT
Allow established and related connections:
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
Allow incoming SSH from a specific network:
sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT
Allow HTTP and HTTPS traffic:
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
Viewing and Managing Rules
To list all current rules with line numbers:
sudo iptables -L --line-numbers
To view rules in a specific table:
sudo iptables -t nat -L -v
To delete a specific rule by line number:
sudo iptables -D INPUT 3
To flush all rules in a chain:
sudo iptables -F INPUT
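If you would rather see every table in one shot (handy later when we trace the Kubernetes chains), iptables-save dumps the full ruleset in a restorable format:
sudo iptables-save
To limit the dump to a single table:
sudo iptables-save -t nat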
Connection Tracking: The Stateful Superpower
Without connection tracking, iptables would be like a goldfish—forgetting every packet it ever saw.
But thanks to stateful filtering, iptables remembers.
It knows when:
A connection is starting (NEW)
A connection is already active (ESTABLISHED)
A connection is related to another one (RELATED)
A packet is suspicious (INVALID)
Example: Let existing connections flow, and allow new SSH sessions:
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -m conntrack --ctstate NEW -p tcp --dport 22 -j ACCEPT
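A common companion rule (a sketch) is to drop anything conntrack flags as INVALID before it gets any further:
sudo iptables -A INPUT -m conntrack --ctstate INVALID -j DROP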
Tracking a Packet Through Kubernetes: The Real Iptables Adventure
Once you step into Kubernetes, iptables doesn’t just filter traffic—it becomes a traffic controller on steroids.
The kube-proxy component rewrites iptables rules on the fly to make service discovery, load balancing, and internal networking possible.
You can actually see this magic yourself.
✔️ To list Kubernetes-related iptables rules:
sudo iptables -t nat -L | grep KUBE
Following the Kubernetes Packet Trail 🚚
When you create a Kubernetes Service, kube-proxy sets up iptables rules that quietly balance traffic across backend pods.
Let’s trace that journey.
Start with the Kubernetes Service:
$> kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.102.84.62 <none> 80:32096/TCP,443:30482/TCP 2d7h
ingress-nginx-controller-admission ClusterIP 10.101.9.18 <none> 443/TCP 2d7h
✔️ Our NodePort service is exposed on port 32096.
Let’s also grab the Node IP:
$> kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
lab Ready control-plane 2d7h v1.30.14 192.168.71.147 <none> Ubuntu 22.04.4 LTS 5.15.0-142-generic docker://28.2.2
So when you hit http://192.168.71.147:32096, where exactly does your packet go?
Let’s walk the path, step by step.
🎯 Step 1: PREROUTING Chain (nat table)
The first stop for incoming packets is the PREROUTING chain in the nat table.
Let’s list the rules:
$> iptables -t nat -L PREROUTING -v -n --line-numbers
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 6 440 CILIUM_PRE_nat all -- * * 0.0.0.0/0 0.0.0.0/0 /* cilium-feeder: CILIUM_PRE_nat */
2 4436 277K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
3 3194 194K DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
The packet will eventually jump to the KUBE-SERVICES chain.
🎯 Step 2: Jump to KUBE-SERVICES Chain
Let’s filter by our service IP to track it:
$> iptables -t nat -L KUBE-SERVICES -v -n --line-numbers | grep 10.102.84.62
15 0 0 KUBE-SVC-EDNDUDH2C75GIR6O tcp -- * * 0.0.0.0/0 10.102.84.62 /* ingress-nginx/ingress-nginx-controller:https cluster IP */
18 0 0 KUBE-SVC-CG5I4G2RS3ZVWGLK tcp -- * * 0.0.0.0/0 10.102.84.62 /* ingress-nginx/ingress-nginx-controller:http cluster IP */
✔️ We’re interested in rule #18, which handles our HTTP traffic. Rule #15 covers HTTPS; tracing that one is left to you.
Let’s dig into that chain:
$> iptables -t nat -L KUBE-SVC-CG5I4G2RS3ZVWGLK -v -n --line-numbers
Chain KUBE-SVC-CG5I4G2RS3ZVWGLK (2 references)
num pkts bytes target prot opt in out source destination
1 3 192 KUBE-SEP-6H6KWTK54XINZYKA all -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http -> 10.0.0.174:80 */ statistic mode random probability 0.50000000000
2 4 252 KUBE-SEP-FVC7R5WK2VSWSO3E all -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http -> 10.0.0.97:80 */
You might be wondering—why two rules?
That’s because we have two running pods for our ingress-nginx-controller.
Here’s where Kubernetes shows off its magic:
Look at the first rule → statistic mode random probability 0.50000000000
That means roughly 50% of packets match this rule at random; the rest fall through to the second KUBE-SEP rule.
This is exactly how Kubernetes achieves iptables-based load balancing across pods—no external load balancer needed.
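If you want to see that balancing in action, watch the pkts counters on the two KUBE-SEP jumps tick up roughly evenly as requests arrive (the chain name below is the one from my cluster’s output above; yours will differ):
watch -n1 'iptables -t nat -L KUBE-SVC-CG5I4G2RS3ZVWGLK -v -n'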
🎯 Step 3: The Final Handoff — Meeting the Pod
Let’s expand the first service endpoint rule:
$> iptables -t nat -L KUBE-SEP-6H6KWTK54XINZYKA -v -n --line-numbers
Chain KUBE-SEP-6H6KWTK54XINZYKA (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ all -- * * 10.0.0.174 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */
2 3 192 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* ingress-nginx/ingress-nginx-controller:http */ tcp to:10.0.0.174:80
✔️ Here’s the final destination rewrite:
The DNAT rule changes the destination IP and port to the pod’s IP and port → 10.0.0.174:80
SNAT: The Masquerade That Guides the Way Back
Also, check the first rule → it jumps to KUBE-MARK-MASQ.
This marks the packet for masquerading (SNAT) so the pod can send the response back through the correct node IP.
This is how Kubernetes ensures return traffic doesn’t get lost.
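If you’re curious where that mark actually triggers the SNAT, kube-proxy hooks a KUBE-POSTROUTING chain into the nat table’s POSTROUTING chain; you can inspect it the same way (exact chain names vary by kube-proxy version and mode):
iptables -t nat -L KUBE-POSTROUTING -v -n --line-numbers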
The Full Picture
From the moment you type a URL in your browser to the second the pod sends back the response—every step is meticulously handled by iptables and kube-proxy.
✔️ NodePort → PREROUTING → KUBE-SERVICES → KUBE-SVC → KUBE-SEP → DNAT → Pod
✔️ On the way back → Masquerading (SNAT) → Correct source IP → Smooth return to the client
You’re not just watching packets anymore.
You’re reading the map. You’re following the trail.
And you’re in complete control.
Wrapping It All Up: From Mystery to Mastery
What started as a restless question—"Where exactly is my packet going?"—has now unfolded into something far more powerful.
This wasn’t just about tracing packets.
It was about pulling back the curtain on the intricate dance between Linux, iptables, and Kubernetes.
I learned that iptables isn’t just a firewall—it’s a map, a conductor, a quiet architect shaping every move your packets make.
I learned that Kubernetes isn’t just orchestrating containers—it’s orchestrating the very traffic highways they rely on.
And most importantly—I learned to stop being afraid of the layers I couldn’t see.
Now, when a packet crosses my system, I don’t just hope it reaches the right place—I know the roads it travels, the rules it faces, and the hands that pass it along.
And that, right there, is the shift.
From guessing to knowing.
From confusion to clarity.
From mystery to mastery.
So the next time you hit a NodePort, the next time a service misbehaves, the next time someone asks,
"But where is the packet, really?"
—you won’t blink.
You’ll trace it.
You’ll explain it.
You’ll own it.
And that’s the real win.
Written by

Deepankar Pal
Certified Azure Solutions Architect and HashiCorp Certified Terraform Associate with extensive experience in cloud computing, automation, and infrastructure management. I specialize in leveraging Azure, Azure DevOps, Terraform, Ansible, Jenkins, and Python to design, automate, and optimize enterprise-grade solutions. With a strong background in Linux, AIX, and cloud infrastructure migration, I help businesses architect scalable, resilient, and efficient systems. Whether it's infrastructure as code, continuous integration/deployment pipelines, or complex cloud migrations, I bring both technical expertise and a passion for innovation.