Docker Networking


What is Docker Networking?

In simple terms, Docker networking is how containers communicate with:

  • Other containers

  • The host machine

  • The external internet

  • Services running in other environments

Docker abstracts networking infrastructure using virtual networks, giving each container a virtual NIC (vNIC) and a unique IP address, managed by an internal DNS and routing system.
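You can see this per-container view from inside any container; a quick check (assuming a standard Docker install, the exact address will vary):

docker run --rm busybox ip addr
# eth0 carries a private address such as 172.17.0.2/16, assigned by Docker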


๐Ÿ” Core Objectives of Docker Networking:

  1. Isolation
    Each container can have its own network namespace, offering security and isolation.

  2. Connectivity
    Containers can talk to each other via IP, hostname, or DNS, locally or across hosts.

  3. Portability
    Docker's networking model is designed to work across platforms: local machines, VMs, cloud, and Swarm/K8s clusters.

  4. Programmability
    You can define custom networks and apply firewall rules, load balancing, and service discovery.


1. Docker Default Bridge Network

When you install Docker, it automatically creates a default bridge network named bridge:

docker network ls
# Output:
# NETWORK ID     NAME      DRIVER    SCOPE
# abcd1234...    bridge    bridge    local

Technical Architecture: What Happens Under the Hood

a. Virtual Ethernet (veth) Pair

Each time a container is launched into a bridge network:

  • Docker creates a veth pair:

    • One end lives inside the container as eth0

    • The other end connects to the docker0 bridge on the host (user-defined bridges show up as br-xxxxx); see the quick check below
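A quick way to list the host-side veth endpoints (interface names will differ on your machine):

ip link show type veth
# one vethXXXXXXX interface appears per container attached to a bridge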

b. Linux Bridge (docker0)

Docker creates a Linux bridge interface called docker0:

ip link show docker0

This acts like a virtual Layer 2 switch on the host. All container veth interfaces plug into this bridge.
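To see which veth interfaces are plugged into docker0, the bridge tool from iproute2 works well (output varies by system):

bridge link show
# each attached veth shows "master docker0" (or "master br-xxxxx" for user-defined networks)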

๐ŸŒ c. IP Allocation

Docker's built-in IPAM (IP Address Management) driver:

  • Assigns IPs from the 172.17.0.0/16 subnet by default

  • Example: 172.17.0.2, 172.17.0.3, ...

These are private IPs, not accessible from the outside directly.
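To confirm the address Docker assigned to a container, docker inspect with a Go template is handy (the field below is populated for containers on the default bridge):

docker inspect -f '{{.NetworkSettings.IPAddress}}' c1
# e.g. 172.17.0.2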

๐Ÿ” d. NAT and IPTables

To allow containers to access the internet:

  • Docker sets up IP masquerading (NAT) via iptables rules

  • Outbound traffic from the container appears to originate from the host

To expose ports externally, Docker uses iptables DNAT rules.
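You can see both rule sets on the host (requires root; exact output differs across Docker versions):

sudo iptables -t nat -L POSTROUTING -n | grep MASQUERADE
# MASQUERADE rule for 172.17.0.0/16: outbound container traffic is source-NATed to the host

sudo iptables -t nat -L DOCKER -n
# DNAT rules for any published ports appear in this chain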


Container Communication in the Default Bridge

✅ Can:

  • Containers on the same bridge can talk via IP

  • Host ↔ Container via localhost:port (if the port is published)

โŒ Cannot:

  • Containers on different bridges cannot talk

  • Containers do not get DNS names unless they are on a user-defined bridge (more on that later)

Example:

docker run -d --name c1 busybox sleep 1000
docker run -d --name c2 busybox sleep 1000

docker exec c1 ping c2
# ping: unknown host (fails, no DNS)

docker exec c1 ping 172.17.0.X
# works if you know the IP
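Rather than guessing the address, you can look it up and ping it in one step (a shell sketch, assuming the c1/c2 containers from above):

docker exec c1 ping -c 2 "$(docker inspect -f '{{.NetworkSettings.IPAddress}}' c2)"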

๐Ÿ” Port Publishing in Bridge Network

When you use -p or --publish:

docker run -d -p 8080:80 nginx

Docker:

  • Maps port 8080 on the host → container port 80

  • Creates an iptables DNAT rule

  • Allows external traffic to reach your container via <host-ip>:8080 (or localhost:8080 from the host itself)

Use ss -tuln or iptables -t nat -L -n to verify.
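A quick end-to-end check from the host, assuming the nginx container above is still running:

curl -I http://localhost:8080
# HTTP/1.1 200 OK confirms traffic reaches the container through the DNAT rule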


Creating and Using the Default Bridge Network

You don't need to create it; it already exists.

docker network inspect bridge

2. User-Defined Bridge Network

A user-defined bridge network is a Docker network you explicitly create using:

docker network create --driver bridge my_bridge

It uses the same bridge driver as the default bridge network, but with better isolation, built-in DNS-based service discovery, and custom configurability.


๐Ÿ” Differences from the Default Bridge Network

Feature           | Default bridge                  | User-defined bridge
DNS resolution    | ❌ No container-name-based DNS  | ✅ Yes, via internal DNS
Isolation         | ❌ All containers share it      | ✅ Only containers you attach
Custom IP/subnet  | ❌ Fixed 172.17.0.0/16          | ✅ You define IP ranges
Naming            | ❌ Always "bridge"              | ✅ You name it
Configurable      | ❌ Limited                      | ✅ Yes (e.g., MTU, gateway, subnet)

How to Create a User-Defined Bridge Network

✅ Simple:

docker network create mynet

✅ With a custom subnet and gateway:

docker network create \
  --driver bridge \
  --subnet 192.168.50.0/24 \
  --gateway 192.168.50.1 \
  mynet

Inspect it:

docker network inspect mynet

Container Communication: The Power of DNS

Launch two containers in this network:

docker network create mynet   # skip if you created mynet above

docker run -dit --name c1 --network mynet busybox
docker run -dit --name c2 --network mynet busybox

Now from c1:

docker exec -it c1 ping c2
# ✅ Success: resolves 'c2' via the embedded Docker DNS

This is not possible with the default bridge.


Behind the Scenes

  • A Linux bridge device (like br-xxxxx) is created.

  • Docker sets up:

    • Custom IPAM (IP Address Management)

    • Embedded DNS server (reachable at 127.0.0.11 inside each container; verified below)

    • Isolated veth interfaces

  • Each container gets:

    • A unique private IP from the subnet

    • A veth pair connecting container ↔ host bridge
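You can verify the embedded DNS address from inside a container on the user-defined network (using c1 from the earlier example):

docker exec c1 cat /etc/resolv.conf
# shows: nameserver 127.0.0.11 (Docker's embedded DNS)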


๐Ÿ” Port Exposure and NAT

Just like the default bridge:

docker run -d --name web --network mynet -p 8080:80 nginx

  • iptables rules map host 8080 → container 80

  • NAT still applies for external traffic


Real-World Use Cases

Scenario               | Why use a user-defined bridge
Microservices dev      | DNS-based communication like db:5432, api:3000
Isolated environments  | Create separate bridges per stack
Custom networking      | Control IPs, subnets, gateways
Security               | Limit inter-container access by attaching only the containers that need to talk
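A minimal sketch of the microservices scenario, using the official postgres and nginx images as stand-ins (the network name, image tag, and password are placeholders):

docker network create app_net
docker run -d --name db  --network app_net -e POSTGRES_PASSWORD=example postgres:16
docker run -d --name api --network app_net nginx   # stand-in for your API image
# from inside 'api', the database is reachable by name, e.g. db:5432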

Also note: containers on the default bridge network cannot directly communicate with containers on a user-defined bridge network.

Summary of container communication within and across bridges:

From → To                               | Communication
Default → Default                       | ✅ Yes (via IP)
User-defined → User-defined (same net)  | ✅ Yes (via name/IP)
Default → User-defined                  | ❌ No
User-defined → Default                  | ❌ No
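If a container genuinely needs to reach both, you can attach it to an additional network with docker network connect (a sketch; <container> is a placeholder for any existing container name):

docker network connect mynet <container>
# the container now has an interface on both its original bridge and mynet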

3. Host Network

When you run a container with --network host, the container does not get its own network namespace.

Instead, it shares the host's network stack directly, with the same:

  • IP address

  • Network interfaces

  • Open ports

  • Routing table

  • DNS resolution

No virtual NIC, no bridge, no NAT: pure host-level access.


How to Use It

docker run --rm -it --network host nginx

Inside the container:

ip a           # You'll see the host's interfaces like eth0, lo, etc.
hostname -I    # Same IP as the host

๐Ÿ” What Actually Happens?

Normally (bridge):

  • Docker creates a container with its own network namespace

  • Isolated interfaces, IP, and NAT

With --network host:

  • The container shares the host's network namespace (no separate namespace is created)

  • No veth pair

  • No Docker bridge

  • No IPTables NAT

This reduces overhead but removes isolation.
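One way to confirm the shared namespace is to compare network-namespace inodes on the host and inside a host-network container (the inode number below is illustrative):

readlink /proc/self/ns/net
# net:[4026531992]   (on the host)

docker run --rm --network host busybox readlink /proc/self/ns/net
# net:[4026531992]   (same inode, so the same network namespace)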


Implications of Host Networking

Aspect                   | Result
IP address               | Same as the host
DNS                      | Uses the host's /etc/resolv.conf
Ports                    | Bind directly to host ports
NAT / port mapping (-p)  | ❌ Ignored (no effect)
Isolation                | ❌ No network isolation
Performance              | ✅ Maximum (no translation layer)

Behavior Differences Between Bridge and Host Networks

Action        | Bridge                           | Host
-p 8080:80    | Maps host:8080 → container:80    | ❌ Ignored (the container must bind to host port 8080 itself)
Container IP  | Separate from the host           | Same as the host
Interface     | Virtual bridge (e.g., br-xxxxx)  | Host interfaces (e.g., eth0)

Use Case Example

Run NGINX in host mode:

docker run --rm --network host nginx

If NGINX listens on port 80, it binds directly to host port 80.

You can verify using:

ss -tuln | grep :80

Or curl from another machine using the host IP.


๐Ÿ” Security Considerations

  • Zero network isolation from host: container can sniff or interfere with all traffic

  • Should never be used for untrusted containers
