Docker Networking

What is Docker Networking?
In simple terms, Docker networking is how containers communicate with:
- Other containers
- The host machine
- The external internet
- Services running in other environments
Docker abstracts the networking infrastructure using virtual networks, giving each container a virtual NIC (vNIC) and a unique IP address, managed by an internal DNS and routing system.
Core Objectives of Docker Networking
- Isolation: Each container can have its own network namespace, offering security and isolation.
- Connectivity: Containers can talk to each other via IP, hostname, or DNS, locally or across hosts.
- Portability: Docker's networking model is designed to work across platforms: local machines, VMs, cloud, and Swarm/K8s clusters.
- Programmability: You can define custom networks and apply firewall rules, load balancing, and service discovery.
1. Docker Default Bridge Network
When you install Docker, it automatically creates a default bridge network named bridge:

```bash
docker network ls
# Output:
# NETWORK ID     NAME      DRIVER    SCOPE
# abcd1234...    bridge    bridge    local
```
Technical Architecture: What Happens Under the Hood
a. Virtual Ethernet (veth) Pair
Each time a container is launched into a bridge network, Docker creates a veth pair:
- One end lives inside the container as eth0
- The other end plugs into the bridge on the host (docker0 for the default network, br-xxxxx for user-defined ones)
b. Linux Bridge (docker0)
Docker creates a Linux bridge interface called docker0:

```bash
ip link show docker0
```

This acts like a virtual Layer 2 switch on the host. All container veth interfaces plug into this bridge.
c. IP Allocation
Docker's built-in IPAM (IP Address Management) driver, not a DHCP server:
- Assigns IPs from the 172.17.0.0/16 subnet
- Example: 172.17.0.2, 172.17.0.3, ...
These are private IPs, not directly accessible from outside the host.
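As a side note, the /16 prefix leaves 16 host bits, which is why this default pool is so large; the arithmetic can be sketched in shell:

```bash
# Address capacity of a CIDR block is 2^(32 - prefix length).
# For the default 172.17.0.0/16 pool:
prefix=16
echo "$(( 2 ** (32 - prefix) )) addresses"   # prints "65536 addresses"
```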
d. NAT and iptables
To allow containers to access the internet:
- Docker sets up IP masquerading (NAT) via iptables rules
- Outbound traffic from a container appears to originate from the host
To expose ports externally, Docker uses iptables DNAT rules.
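For illustration, the masquerade and DNAT rules typically look like the following (the addresses and ports here are example values, not output from a live host; run sudo iptables -t nat -S on a Docker host to see the real rules):

```bash
# Print illustrative shapes of the two rule types Docker manages:
cat <<'EOF'
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
EOF
```

The MASQUERADE rule rewrites outbound container traffic to the host's address; the DNAT rule forwards inbound traffic on a published port to the container.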
Container Communication in the Default Bridge
✅ Can:
- Containers on the same bridge can talk via IP
- Host → container via localhost:port (if the port is published)
❌ Cannot:
- Containers on different bridges cannot talk to each other
- Containers do not get DNS names unless they are on a user-defined bridge (more on that later)
Example:

```bash
docker run -d --name c1 busybox sleep 1000
docker run -d --name c2 busybox sleep 1000

docker exec c1 ping c2
# ping: unknown host (fails: no DNS on the default bridge)

docker exec c1 ping 172.17.0.X
# works if you know the IP
```
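Rather than guessing the address, you can look it up; a small sketch, assuming the c1 and c2 containers above are still running:

```bash
# Ask Docker for c2's IP on the default bridge, then ping it from c1.
ip=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' c2)
docker exec c1 ping -c 2 "$ip"
```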
Port Publishing in the Bridge Network
When you use -p or --publish:

```bash
docker run -d -p 8080:80 nginx
```

Docker:
- Maps host port 8080 → container port 80
- Creates an iptables DNAT rule
- Allows external traffic to reach your container via localhost:8080

Use ss -tuln or iptables -t nat -L -n to verify.
Creating and Using the Default Bridge Network
You don't need to create it; it already exists.

```bash
docker network inspect bridge
```
2. User-Defined Bridge Network
A user-defined bridge network is a Docker network you explicitly create using:

```bash
docker network create --driver bridge my_bridge
```

It uses the same bridge driver as the default bridge network, but adds better isolation, built-in DNS-based service discovery, and custom configurability.
Differences from the Default Bridge Network

| Feature | Default bridge | User-defined bridge |
| --- | --- | --- |
| DNS resolution | ❌ No container-name-based DNS | ✅ Yes, via internal DNS |
| Isolation | ❌ All containers share it | ✅ Only containers you attach |
| Custom IP/subnet | ❌ Fixed 172.17.0.0/16 | ✅ You define IP ranges |
| Naming | ❌ Always "bridge" | ✅ You name it |
| Configurable | ❌ Limited | ✅ Yes (e.g., MTU, gateway, subnet) |
How to Create a User-Defined Bridge Network
Simple:

```bash
docker network create mynet
```

With a custom subnet and gateway:

```bash
docker network create \
  --driver bridge \
  --subnet 192.168.50.0/24 \
  --gateway 192.168.50.1 \
  mynet
```

Inspect it:

```bash
docker network inspect mynet
```
Container Communication: The Power of DNS
Launch two containers in this network:

```bash
docker network create mynet
docker run -dit --name c1 --network mynet busybox
docker run -dit --name c2 --network mynet busybox
```

Now from c1:

```bash
docker exec -it c1 ping c2
# Success: resolves 'c2' via the embedded Docker DNS server
```

This is not possible on the default bridge.
Behind the Scenes
A Linux bridge device (like br-xxxxx) is created, and Docker sets up:
- Custom IPAM (IP Address Management)
- A built-in DNS service (listening on 127.0.0.11 inside each container)
- Isolated veth interfaces
Each container gets:
- A unique private IP from the subnet
- A veth pair connecting the container to the host bridge
Port Exposure and NAT
Just like the default bridge:

```bash
docker run -d --name web --network mynet -p 8080:80 nginx
```

- iptables rules map host port 8080 → container port 80
- NAT still applies for external traffic
Real-World Use Cases

| Scenario | Why use a user-defined bridge |
| --- | --- |
| Microservices dev | DNS-based communication like db:5432, api:3000 |
| Isolated environments | Create separate bridges per stack |
| Custom networking | Control IPs, subnets, gateways |
| Security | Limit inter-container access via network policies |
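The "separate bridges per stack" idea can be sketched as follows (stack_a, stack_b, and the container names are made-up examples):

```bash
# Two isolated stacks: containers in stack_a cannot reach stack_b.
docker network create stack_a
docker network create stack_b
docker run -dit --name app_a --network stack_a busybox
docker run -dit --name app_b --network stack_b busybox

docker exec app_a ping -c 1 app_b
# fails: different bridge, so no route and no DNS entry for app_b
```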
Also note: containers on the default bridge network cannot directly communicate with containers on a user-defined bridge network.
Summary of container communication within and across bridges:

| From → To | Communication |
| --- | --- |
| Default → Default | ✅ Yes (via IP) |
| User-defined → User-defined (same network) | ✅ Yes (via name/IP) |
| Default → User-defined | ❌ No |
| User-defined → Default | ❌ No |
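If you do need to cross that boundary, a container can be attached to several networks at once with docker network connect; a sketch, assuming the mynet network and the container c1 from the examples above:

```bash
# Attach c1 to mynet in addition to its current network.
docker network connect mynet c1
docker exec c1 ip a    # now shows an extra interface with an address from mynet's subnet
docker network disconnect mynet c1
```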
3. Host Network
When you run a container with --network host, the container does not get its own network namespace.
Instead, it shares the host's network stack directly, including the same:
- IP address
- Network interfaces
- Open ports
- Routing table
- DNS resolution
No virtual NIC, no bridge, no NAT: pure host-level access.
How to Use It

```bash
docker run --rm -it --network host nginx
```

Inside the container:

```bash
ip a          # You'll see the host's interfaces like eth0, lo, etc.
hostname -I   # Same IP as the host
```
What Actually Happens?
Normally (bridge):
- Docker creates the container with its own network namespace
- Isolated interfaces, IP, and NAT
With --network host:
- The container shares the host's network namespace
- No veth pair
- No Docker bridge
- No iptables NAT
This reduces overhead but removes isolation.
Implications of host Networking

| Aspect | Result |
| --- | --- |
| IP address | Same as the host |
| DNS | Uses the host's /etc/resolv.conf |
| Ports | Bind directly to host ports |
| NAT / port mapping (-p) | ❌ Ignored (no effect) |
| Isolation | ❌ No network isolation |
| Performance | ✅ Maximum (no translation layer) |
Behavior Differences Between Bridge and Host Networks

| Action | Bridge | Host |
| --- | --- | --- |
| -p 8080:80 | Maps host:8080 → container:80 | ❌ Ignored (the container must bind to host port 8080 itself) |
| Container IP | Separate from the host | Same as the host |
| Interface | Virtual bridge (e.g., br-xyz) | Host interfaces (e.g., eth0) |
Use Case Example
Run NGINX in host mode:

```bash
docker run --rm --network host nginx
```

If NGINX listens on port 80, it binds directly to host port 80.
You can verify using:

```bash
ss -tuln | grep :80
```

Or curl from another machine using the host's IP.
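One way to see that there really is no separate namespace is to compare interface lists (a sketch; assumes Docker on Linux and the busybox image):

```bash
# In host mode the container sees exactly the host's network interfaces.
docker run --rm --network host busybox ls /sys/class/net
ls /sys/class/net
# The two listings match: same network namespace.
```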
Security Considerations
- Zero network isolation from the host: the container can sniff or interfere with all host traffic
- Should never be used for untrusted containers
Written by Rajesh Gurajala