iptables Made Simple: How Linux & Kubernetes Balance Traffic Like Pros

Deepankar Pal

Introduction

Iptables is a powerful command-line firewall utility that allows system administrators to configure the Linux kernel's built-in firewall functionality. It serves as the primary interface to the Netfilter framework, which has been included in the Linux kernel since version 2.4. Understanding iptables is crucial for anyone managing Linux servers, especially those running containerized applications like Docker or Kubernetes.

At its core, iptables works by examining network packets as they traverse through the system and making decisions based on predefined rules. These decisions can include accepting, dropping, or modifying packets based on various criteria such as source and destination addresses, ports, and protocols.

How Iptables Works: The Foundation

Iptables uses a hierarchical structure consisting of tables, chains, and rules to process network traffic. When a packet arrives at your system, it passes through multiple stages of processing, with each stage having the opportunity to examine and act upon the packet.

The basic workflow follows this pattern:

  1. A packet enters the system through a network interface

  2. The packet is examined against rules in specific tables and chains

  3. Based on the first matching rule, an action (target) is applied

  4. The packet either continues through the system or is dropped/rejected
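
To see the first-match behavior concretely, here is a minimal sketch with two contradictory rules (port 8080 is purely illustrative):

# The DROP rule was appended first, so it matches first; the later
# ACCEPT rule is never reached for packets to port 8080.
sudo iptables -A INPUT -p tcp --dport 8080 -j DROP
sudo iptables -A INPUT -p tcp --dport 8080 -j ACCEPT

Rule order therefore matters: -A appends to the end of a chain, while -I inserts at the top.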

Understanding Tables: The Building Blocks

Iptables organizes its rules into different tables, each serving a specific purpose in packet processing. Linux systems typically have four main tables available (a fifth, the security table, handles Mandatory Access Control and appears later in the processing order):

Filter Table

The filter table is the default table used by iptables and serves as the primary packet filtering mechanism. This table acts as a gatekeeper, determining which packets are allowed to enter or leave the network. The filter table contains three built-in chains: INPUT, OUTPUT, and FORWARD.

To view the current filter table rules, use:

sudo iptables -t filter -L

Network Address Translation (NAT) Table

The NAT table handles network address translation, which is essential for routing packets between different networks. This table is particularly important when your Linux system acts as a router or gateway. The NAT table contains the PREROUTING, OUTPUT, and POSTROUTING chains (modern kernels also provide an INPUT chain in the nat table).

To examine NAT table rules:

sudo iptables -t nat -L

Mangle Table

The mangle table is used for specialized packet alterations, allowing you to modify packet headers and set special marks on packets. This table is useful for advanced routing decisions and Quality of Service implementations.

To view mangle table rules:

sudo iptables -t mangle -L

Raw Table

The raw table is primarily used to configure exemptions from connection tracking. This table processes packets before they enter the connection tracking system, making it useful for performance optimization in high-traffic scenarios.

To check raw table rules:

sudo iptables -t raw -L

Understanding Chains: The Processing Paths

Chains represent different points in the packet processing flow where rules can be applied. Each table contains specific chains that correspond to different stages of packet traversal through the system.

INPUT Chain

The INPUT chain processes all packets destined for the local system. When a packet arrives at your server and is intended for a local process or service, it passes through the INPUT chain. This is where you would typically place rules to control incoming connections to services like SSH, HTTP, or database servers.

Example of allowing SSH access:

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

OUTPUT Chain

The OUTPUT chain handles packets generated by the local system and destined for other hosts. This chain controls outbound traffic from your server to external destinations. Rules in this chain can restrict which external services your server can connect to.

Example of allowing outbound HTTP traffic:

sudo iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT

FORWARD Chain

The FORWARD chain processes packets that are neither generated by nor destined for the local system. Instead, these packets are being routed through your system to reach other destinations. This chain is crucial when your Linux system acts as a router or gateway between different networks.

Example of allowing forwarding between interfaces:

sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

PREROUTING Chain

The PREROUTING chain exists in the NAT, mangle, and raw tables. It processes packets immediately after they arrive at the system, before any routing decisions are made. This chain is commonly used for destination NAT (DNAT) operations.
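
For example, a typical DNAT rule (the internal address 192.168.1.10 is purely illustrative) redirects incoming web traffic to an internal host:

# Rewrite the destination of packets arriving on port 80 to an
# internal server, before the routing decision is made
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:8080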

POSTROUTING Chain

The POSTROUTING chain is found in the NAT and mangle tables. It processes packets just before they leave the system, after routing decisions have been made. This chain is typically used for source NAT (SNAT) operations and masquerading.
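
A common masquerading rule, assuming eth0 is the external interface, looks like this:

# Rewrite the source address of outgoing packets to whatever
# address eth0 currently holds (typical for dynamic IPs)
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

With a static external address, an explicit SNAT target (-j SNAT --to-source <address>) achieves the same result without the per-packet interface address lookup.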

How Iptables Tables Work Together to Control Network Traffic

Iptables functions by interfacing with the netfilter framework built into the Linux kernel. This framework provides five key hooks in the network stack where packet processing occurs: PREROUTING, INPUT, FORWARD, OUTPUT, and POSTROUTING. Each table registers with specific hooks and is assigned a priority value that determines the order of processing when multiple tables operate at the same hook point.

The netfilter system processes operations in order of increasing numerical priority values, with lower numbers executed first. This priority-based system ensures that packet modifications and filtering occur in the correct sequence to maintain network functionality and security.
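
For reference, these are the standard netfilter priority values for IPv4 (taken from the kernel's netfilter headers; lower runs first):

raw:        -300
conntrack:  -200
mangle:     -150
nat (DNAT): -100
filter:        0
security:     50
nat (SNAT):  100

This is why the raw table always sees a packet before connection tracking does, and why DNAT happens before filtering.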

Table Processing Order and Priorities

The different iptables tables are processed in a specific order based on their assigned priority values within the netfilter framework. The standard processing order follows this sequence:

  1. Raw Table : Processed first to bypass connection tracking for specific packets

  2. Connection Tracking: Enabled after raw table processing

  3. Mangle Table : Handles packet alterations and Quality of Service markings

  4. NAT Table : Performs network address translation

  5. Filter Table : Implements primary packet filtering decisions

  6. Security Table : Handles Mandatory Access Control rules

This ordering ensures that packet modifications occur before filtering decisions, and that connection tracking information is available when needed.

How Tables Collaborate at Each Hook

PREROUTING Hook Processing

When a packet first arrives at the system, it encounters multiple tables at the PREROUTING hook in this sequence:

  1. Raw Table: Can mark packets with NOTRACK to bypass connection tracking, useful for high-traffic scenarios where stateful tracking is unnecessary

  2. Mangle Table: Modifies packet headers, sets QoS markings, or applies custom packet marks

  3. NAT Table: Performs Destination NAT (DNAT) to redirect packets to different internal addresses or ports

After PREROUTING processing, the kernel makes a routing decision to determine whether the packet is destined for the local system or should be forwarded.
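
As a sketch of the raw table step (item 1 above), the following rule disables connection tracking for inbound HTTP packets:

# Skip conntrack for port 80 traffic; -j CT --notrack is the
# modern spelling of the older -j NOTRACK target
sudo iptables -t raw -A PREROUTING -p tcp --dport 80 -j CT --notrack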

INPUT Hook Processing

For packets destined to the local system, the INPUT hook processes them through these tables:

  1. Mangle Table: Final opportunity to modify packet characteristics before local delivery

  2. Filter Table: Primary filtering decisions - ACCEPT, DROP, or REJECT packets based on security policies

  3. Security Table: Applies SELinux or other Mandatory Access Control policies

  4. NAT Table: Handles any necessary NAT operations for locally destined traffic

FORWARD Hook Processing

Packets being routed through the system traverse the FORWARD hook with these tables:

  1. Mangle Table: Modifies packets in transit, often for traffic shaping or Quality of Service

  2. Filter Table: Implements forwarding policies to control which traffic can pass through the system

  3. Security Table: Applies access control policies to forwarded traffic

OUTPUT Hook Processing

Locally generated packets pass through the OUTPUT hook in this order:

  1. Raw Table: Can bypass connection tracking for outbound traffic

  2. Mangle Table: Modifies outbound packet characteristics

  3. NAT Table: Performs NAT operations on locally generated traffic

  4. Filter Table: Controls which locally generated traffic is permitted to leave

  5. Security Table: Applies security policies to outbound traffic

POSTROUTING Hook Processing

The final processing stage handles packets leaving the system:

  1. Mangle Table: Final packet modifications before transmission

  2. NAT Table: Performs Source NAT (SNAT) or masquerading to modify source addresses

Practical Interaction Examples

Web Server Scenario

Consider a web server receiving HTTP requests. The packet flow demonstrates table interaction:

  1. Raw Table (PREROUTING): Could mark high-volume HTTP traffic with NOTRACK to reduce connection tracking overhead

  2. Mangle Table (PREROUTING): Might set Quality of Service markings for HTTP traffic prioritization

  3. NAT Table (PREROUTING): Could redirect traffic from port 80 to port 8080 where the web server actually listens (see the sketch after this list)

  4. Filter Table (INPUT): Applies firewall rules to accept HTTP connections from authorized sources
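
A sketch of the redirect from step 3, assuming the web server really listens on port 8080, could look like:

# Redirect locally arriving port 80 traffic to port 8080 on the same host
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080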

Router/Gateway Scenario

For a system acting as a network gateway, tables work together for traffic forwarding:

  1. Mangle Table (PREROUTING): Marks packets for different routing policies or traffic classes

  2. NAT Table (PREROUTING): Performs port forwarding to internal servers

  3. Filter Table (FORWARD): Controls which traffic is permitted to traverse the gateway

  4. NAT Table (POSTROUTING): Performs masquerading to hide internal network addresses
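
Put together, a minimal gateway configuration along these lines might look as follows (all addresses and interfaces are illustrative: eth0 external, eth1 internal, 10.0.0.5 an internal web server):

# Port-forward external port 8080 to the internal web server (step 2)
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.0.5:80

# Permit the forwarded traffic through the gateway (step 3)
sudo iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 10.0.0.5 --dport 80 -j ACCEPT

# Hide internal addresses behind the gateway's external address (step 4)
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE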

Working with Iptables Rules

Basic Rule Syntax

The general syntax for creating iptables rules follows this pattern:

sudo iptables -A <chain> -i <interface> -p <protocol> -s <source> --dport <port> -j <target>

Where:

  • -A appends the rule to the specified chain

  • -i specifies the input network interface

  • -p defines the protocol (tcp, udp, icmp, or all)

  • -s sets the source address

  • --dport specifies the destination port

  • -j defines the target action
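
Putting these options together, a rule that accepts HTTPS connections from one subnet arriving on a particular interface (interface and subnet here are illustrative) would be:

sudo iptables -A INPUT -i eth0 -p tcp -s 203.0.113.0/24 --dport 443 -j ACCEPT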

Common Rule Examples

Allow loopback traffic (essential for system functionality):

sudo iptables -A INPUT -i lo -j ACCEPT

Allow established and related connections:

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Allow incoming SSH from a specific network:

sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT

Allow HTTP and HTTPS traffic:

sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT

Viewing and Managing Rules

To list all current rules with line numbers:

sudo iptables -L --line-numbers

To view rules in a specific table:

sudo iptables -t nat -L -v

To delete a specific rule by line number:

sudo iptables -D INPUT 3

To flush all rules in a chain:

sudo iptables -F INPUT
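
Note that iptables rules live only in kernel memory and are lost on reboot. The usual way to persist them is with iptables-save and iptables-restore (the path below follows the Debian/Ubuntu iptables-persistent convention and may differ on your distribution):

# Dump the current ruleset to a file...
sudo iptables-save > /etc/iptables/rules.v4

# ...and load it back, for example from a boot script
sudo iptables-restore < /etc/iptables/rules.v4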

Connection Tracking and Stateful Filtering

Connection tracking is a powerful feature that allows iptables to maintain information about network connections in memory. This enables stateful packet filtering, which is more secure than simple packet filtering because it understands the context of each packet within a connection.

Connection States

Iptables recognizes several connection states:

  • NEW: The packet starts a new connection

  • ESTABLISHED: The packet belongs to an existing connection

  • RELATED: The packet starts a new connection related to an existing one

  • INVALID: The packet does not belong to any known connection

Example of using connection states:

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -m conntrack --ctstate NEW -p tcp --dport 22 -j ACCEPT
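
It is also common practice to drop INVALID packets early, before any other rules run:

# Packets that conntrack cannot associate with any known
# connection are usually malformed, out-of-window, or spoofed
sudo iptables -A INPUT -m conntrack --ctstate INVALID -j DROP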

Iptables with Kubernetes

Kubernetes networking involves more complex iptables configurations because it needs to handle service discovery, load balancing, and network policies. The kube-proxy component is responsible for programming iptables rules to implement Kubernetes Services.

Kube-proxy and Iptables Mode

In iptables mode, kube-proxy creates iptables rules to implement load balancing for Kubernetes Services. For each Service, kube-proxy creates multiple iptables rules that redirect traffic to the appropriate backend Pods.

To view Kubernetes-related iptables rules:

sudo iptables -t nat -L | grep KUBE

Kubernetes Service Implementation

When you create a Kubernetes Service, kube-proxy generates iptables rules in the NAT table to implement load balancing. The rules use the DNAT target to redirect traffic from the Service's virtual IP address to the actual Pod IP addresses.

Listing Service IPs

When you create a Service in Kubernetes, it gets a ClusterIP—a virtual IP address inside your cluster.

$> kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.102.84.62   <none>        80:32096/TCP,443:30482/TCP   2d7h
ingress-nginx-controller-admission   ClusterIP   10.101.9.18    <none>        443/TCP                      2d7h

Here 10.102.84.62 is the ClusterIP, and the Service type is NodePort, which means that to access the application we need to hit the node's IP address on port 32096. Let's check the node IP address.

Listing Node IP address

$> kubectl get nodes -o wide
NAME   STATUS   ROLES           AGE    VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
lab    Ready    control-plane   2d7h   v1.30.14   192.168.71.147   <none>        Ubuntu 22.04.4 LTS   5.15.0-142-generic   docker://28.2.2

How Traffic Enters the Cluster

When a client sends a request to http://192.168.71.147:32096, the packet arrives at the node’s network interface on port 32096.

iptables: NodePort Rule Matching

Kubernetes’ kube-proxy automatically creates iptables rules to handle NodePort services. Here’s what happens:

PREROUTING Chain (nat table):
The packet first hits the PREROUTING chain in the nat table.

    $> iptables -t nat -L PREROUTING -v -n --line-number
    Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
    num   pkts bytes target     prot opt in     out     source               destination
    1        6   440 CILIUM_PRE_nat  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* cilium-feeder: CILIUM_PRE_nat */
    2     4436  277K KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
    3     3194  194K DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Jump to KUBE-SERVICES Chain:
At this point the traffic is evaluated in the order shown above: first against the rules in the CILIUM_PRE_nat chain, and if no rule matches, the KUBE-SERVICES chain is evaluated next. Let's see which rules are set in the KUBE-SERVICES chain for our Service IP.

    $> iptables -t nat -L KUBE-SERVICES -v -n --line-numbers | grep 10.102.84.62
    15       0     0 KUBE-SVC-EDNDUDH2C75GIR6O  tcp  --  *      *       0.0.0.0/0            10.102.84.62         /* ingress-nginx/ingress-nginx-controller:https cluster IP */
    18       0     0 KUBE-SVC-CG5I4G2RS3ZVWGLK  tcp  --  *      *       0.0.0.0/0            10.102.84.62         /* ingress-nginx/ingress-nginx-controller:http cluster IP */

Here we got two rules: one for HTTPS traffic (rule #15) and one for HTTP traffic (rule #18). The HTTP rule is the point of interest for us, as that is the traffic we are tracing. Let's look further into this chain.

KUBE-SVC-xxxxxx Chain:

    $> iptables -t nat -L KUBE-SVC-CG5I4G2RS3ZVWGLK -v -n --line-numbers
    Chain KUBE-SVC-CG5I4G2RS3ZVWGLK (2 references)
    num   pkts bytes target     prot opt in     out     source               destination
    1        3   192 KUBE-SEP-6H6KWTK54XINZYKA  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:http -> 10.0.0.174:80 */ statistic mode random probability 0.50000000000
    2        4   252 KUBE-SEP-FVC7R5WK2VSWSO3E  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:http -> 10.0.0.97:80 */

This chain in turn contains two rules, but why? That is because we have two running Pods behind the target, in this case the nginx-ingress-controller.

At some point while deep-diving into Kubernetes, you must have wondered how it is able to perform load balancing across Pods via a Service so efficiently. Well, here is the magic trick it uses: observe the end of rule one, which says “statistic mode random probability 0.50000000000”. This option means the rule will match and process approximately 50% of the packets that reach it, chosen at random; packets that fall through are caught by rule two, which has no probability match and therefore takes everything that remains. In general, with N endpoints kube-proxy gives the first rule probability 1/N, the next 1/(N-1), and so on, so each Pod receives an equal share. So this is how Kubernetes does load balancing via iptables.

KUBE-SEP-xxxxxx Chain:
As we noticed, the service chain above uses probabilistic rules to load balance and then jumps to an endpoint-specific chain. Let's expand this chain to view the DNAT rule.

    $> iptables -t nat -L KUBE-SEP-6H6KWTK54XINZYKA -v -n --line-numbers
    Chain KUBE-SEP-6H6KWTK54XINZYKA (1 references)
    num   pkts bytes target     prot opt in     out     source               destination
    1        0     0 KUBE-MARK-MASQ  all  --  *      *       10.0.0.174           0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:http */
    2        3   192 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:http */ tcp to:10.0.0.174:80

DNAT to Pod IP:
In the KUBE-SEP-xxxxxx chain, a DNAT rule rewrites the destination IP and port to the selected pod’s IP and port (e.g., 10.0.0.174:80).
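
If the conntrack utility is installed, you can watch this translation happen for live connections (the Pod IP matches the trace above):

# Each entry shows the original destination (node IP and NodePort)
# alongside the reply source (the Pod IP and port) after DNAT
sudo conntrack -L -p tcp | grep 10.0.0.174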

SNAT the Pod IP

Here, in the same output, if you look at the first rule, you will see that it targets packets originating from the Pod IP (10.0.0.174) and marks them for masquerading (SNAT, or Source NAT) via the KUBE-MARK-MASQ chain. This covers the hairpin case where a Pod reaches itself through the Service: without SNAT, the Pod would see its own address as the packet's source and reply to itself directly, short-circuiting the connection.

Now you’ve seen how Kubernetes and iptables work together behind the scenes to route your request from a browser all the way to the right pod. With each hop—service IP, NodePort, and DNAT—your traffic is efficiently load balanced and delivered to your application, no matter where it runs in the cluster.

Conclusion

Iptables is a fundamental tool for Linux system security and network management. Understanding its table and chain structure, packet flow, and interaction with containerized environments like Docker and Kubernetes is essential for modern system administration. The hierarchical nature of tables and chains, combined with the flexible rule syntax, provides powerful capabilities for controlling network traffic.

When working with containerized applications, remember that Kubernetes extensively modifies iptables configurations to enable its networking features. Always test your firewall rules thoroughly and maintain proper documentation to ensure system security without disrupting application functionality.

By mastering iptables concepts and practical applications, system administrators can build robust, secure network infrastructures that properly support both traditional and containerized workloads.

Happy Learning!!
