
Docker Container Networking Explained (2026): Bridge, Host, Overlay

Last updated: March 2026

Docker container networking controls how containers communicate with each other and with the outside world. The default bridge network works for simple cases but does not provide DNS-based container discovery — use a custom bridge network for multi-container apps. This guide covers all six network drivers, container-to-container communication patterns, and the critical difference between EXPOSE and publishing ports.


Docker Network Drivers Overview

Driver Scope Use Case
bridge Single host Default for standalone containers; custom bridges for multi-container apps
host Single host Remove network isolation; container uses host's network stack
none Single host Complete network isolation; container has only loopback
overlay Multi-host Docker Swarm; containers on different hosts communicate
macvlan Single host Assign MAC/IP directly from physical network; legacy app compatibility
ipvlan Single host Similar to macvlan but layer 3; fewer ARP broadcasts

The Default Bridge Network

When Docker is installed, it creates a default network named bridge, backed by the docker0 Linux bridge interface (on Linux). Every container started without a --network flag is connected to this default bridge.

# List all networks
docker network ls

# Inspect the default bridge
docker network inspect bridge

The default bridge assigns containers IPs from 172.17.0.0/16 by default.

Critical limitation of the default bridge: containers cannot reach each other by name, only by IP. The IP can change every time a container restarts.

# Start two containers on the default bridge
docker run -d --name app1 nginx
docker run -d --name app2 alpine sleep 3600

# app2 cannot reach app1 by name on the default bridge
docker exec app2 ping app1
# ping: bad address 'app1'

# You must use the IP address instead
docker inspect app1 --format '{{.NetworkSettings.IPAddress}}'
# 172.17.0.2
docker exec app2 ping 172.17.0.2
# Works, but fragile — IP changes on restart

Custom Bridge Networks (Recommended)

Custom bridge networks solve the DNS limitation. Docker runs an embedded DNS server that resolves container names to IPs automatically.

# Create a custom bridge network
docker network create mynet

# Or with custom subnet and gateway
docker network create \
  --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  mynet

# Start containers on the custom network
docker run -d --name web --network mynet nginx
docker run -d --name app --network mynet myapp:latest
docker run -d --name db --network mynet postgres:16

# Containers find each other by name
docker exec app ping -c 1 web
# PING web (172.20.0.2): 56 data bytes
# Works!

docker exec app ping -c 1 db
# Works too

Connect a running container to a network:

docker network connect mynet existing-container

Disconnect:

docker network disconnect mynet existing-container

Inspect to see which containers are on the network:

docker network inspect mynet

Host Network Mode

Host mode removes network isolation entirely. The container uses the host's network stack directly — same IP, same ports, no NAT.

docker run -d --network host nginx
# Nginx now listens on port 80 of the host directly

When to use host mode:
- Maximum network performance (no NAT overhead)
- Applications that need to listen on dynamic port ranges
- Network monitoring tools that need access to all interfaces
- When you have many port mappings and the NAT overhead is measurable

When not to use host mode:
- When you need network isolation between containers
- When multiple containers need to bind the same port
- In production environments where isolation is a security requirement

Host mode is only available on Linux. On macOS and Windows, containers run inside a VM so --network host connects to the VM's network, not your machine's.
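A quick way to see the difference on a Linux host (a sketch — the container name web-host is arbitrary):

```shell
# Start nginx with the host's network stack — note: no -p flag
docker run -d --name web-host --network host nginx

# Port 80 is bound directly on the host
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:80/

# Clean up
docker rm -f web-host
```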


None Network Mode

The none driver gives the container only a loopback interface. No external connectivity.

docker run --network none alpine ping 8.8.8.8
# PING 8.8.8.8 (8.8.8.8): 56 data bytes
# ping: sendto: Network unreachable

docker run --network none alpine ip addr
# 1: lo: <LOOPBACK,UP,LOWER_UP>
#     inet 127.0.0.1/8 scope host lo
# That's it — only loopback

Use cases: batch processing jobs that process local files and do not need network access, security-sensitive workloads.


Overlay Networks (Docker Swarm)

Overlay networks span multiple Docker hosts. They are used with Docker Swarm to allow containers on different machines to communicate as if they were on the same network.

# Initialize Swarm on the manager node
docker swarm init

# Create an overlay network
docker network create --driver overlay myoverlay

# Deploy a service on the overlay network
docker service create --name web --network myoverlay --replicas 3 nginx

# Containers on any Swarm node can reach each other by service name

Overlay networks use VXLAN encapsulation to tunnel traffic between hosts. Docker handles the routing table and key-value store (using its built-in Raft implementation in Swarm mode).
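Overlay traffic will silently fail if the required ports are blocked between hosts. A firewall sketch, assuming ufw is the firewall in use on each node:

```shell
# Ports Swarm overlay networking needs between nodes:
#   2377/tcp     - cluster management (manager nodes)
#   7946/tcp+udp - node-to-node gossip and discovery
#   4789/udp     - VXLAN data traffic for overlay networks
sudo ufw allow 2377/tcp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp
```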

An overlay network still requires Swarm mode to be initialized, but the --attachable flag lets standalone containers (including those started by Compose) join it:

docker network create --driver overlay --attachable myoverlay

Macvlan Networks

Macvlan assigns each container its own MAC address and IP from the physical network, making containers appear as physical devices to the network switch.

docker network create \
  --driver macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  --opt parent=eth0 \
  macvlan_net

docker run -d --network macvlan_net --ip 192.168.1.100 nginx

Use macvlan when:
- A legacy application needs a specific IP from the corporate network
- You need containers directly accessible from the LAN without port mapping
- Network monitoring requires a real MAC address

Note: most cloud providers and virtual switches do not support macvlan due to MAC address filtering.
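Another macvlan quirk: the host itself cannot reach macvlan containers through the parent interface. A common workaround is to give the host its own macvlan sub-interface (a sketch — the interface names and IPs continue the macvlan_net example above and are assumptions for your environment):

```shell
# Create a macvlan sub-interface on the host, bridge mode, same parent as the Docker network
sudo ip link add macvlan0 link eth0 type macvlan mode bridge

# Give the host an unused address from the same subnet
sudo ip addr add 192.168.1.200/32 dev macvlan0
sudo ip link set macvlan0 up

# Route traffic for the container's IP through the new interface
sudo ip route add 192.168.1.100/32 dev macvlan0
```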


EXPOSE vs Publishing Ports

This is one of the most common Docker networking confusions:

EXPOSE in a Dockerfile: documents which ports the application uses. It does not open the port or make it accessible from outside the container. Think of it as metadata.

EXPOSE 8080  # Documentation only — does nothing to network access

-p / --publish at docker run time: actually binds a port on the host to a port in the container, making it accessible from outside.

# Bind host port 8080 to container port 80
docker run -p 8080:80 nginx

# Bind to a specific host IP
docker run -p 127.0.0.1:8080:80 nginx   # localhost only

# Bind to any host IP (explicit)
docker run -p 0.0.0.0:8080:80 nginx

# Publish all EXPOSE'd ports to random host ports
docker run -P nginx
# Check what ports were assigned
docker port mycontainer

Container-to-container communication does not need published ports. Containers on the same Docker network can reach each other using the container port directly — port publishing is only needed for external access.

# app talks to db on port 5432 directly — no -p needed
docker run -d --name db --network mynet postgres:16
docker run -d --name app --network mynet \
  -e DATABASE_URL=postgresql://db:5432/mydb \
  myapp:latest
# app reaches db:5432 internally; 5432 is never exposed to the host

Debugging Docker Networking

# List all networks
docker network ls

# Inspect a network (see connected containers and their IPs)
docker network inspect mynet

# Inspect container networking
docker inspect mycontainer --format '{{json .NetworkSettings.Networks}}' | python3 -m json.tool

# Check DNS resolution inside a container
docker exec mycontainer nslookup db
docker exec mycontainer cat /etc/resolv.conf

# Check connectivity between containers
docker exec app curl -s http://web:80/
docker exec app ping -c 3 db

# View port mappings for a container
docker port mycontainer

# Check iptables rules Docker created
sudo iptables -L DOCKER -n -v

# Trace network traffic
docker run --network container:mycontainer \
  nicolaka/netshoot \
  tcpdump -i eth0

Docker Compose Networking

Docker Compose automatically creates a custom bridge network for your project. All services in the same compose.yml can reach each other by service name.

services:
  web:
    image: nginx
    ports:
      - "80:80"        # Published to host — accessible externally
    networks:
      - frontend

  app:
    build: .
    networks:
      - frontend
      - backend

  db:
    image: postgres:16
    networks:
      - backend        # db is NOT on frontend — web cannot reach it directly
    # No ports: — db is internal only

networks:
  frontend:
  backend:

Using multiple networks isolates services: web can reach app, app can reach db, but web cannot reach db directly.
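You can verify the isolation from inside the running stack (a sketch assuming the service names from the compose.yml above and that ping is available in the images):

```shell
# app is on both networks, so it can reach db
docker compose exec app ping -c 1 db

# web is only on frontend — db does not even resolve
docker compose exec web ping -c 1 db
# fails: db is not resolvable from the frontend network
```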


Common Networking Issues and Fixes

Problem Diagnosis Fix
Container cannot reach the internet docker exec myapp ping 8.8.8.8 fails Check daemon.json DNS settings; verify iptables FORWARD chain
Containers cannot reach each other by name Default bridge instead of custom Create custom bridge network
Port already in use docker run -p 8080:80 fails Change host port; stop the conflicting service
IP range conflicts with VPN 172.17.0.0/16 used by VPN Set custom bip and default-address-pools in daemon.json
Cannot connect to container from host Container on host network Verify -p flag is set correctly; check firewall
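For the VPN address conflict above, the fix goes in /etc/docker/daemon.json (restart the Docker daemon afterwards). The ranges here are examples — pick ones that are free in your environment:

{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}

bip sets the docker0 subnet; default-address-pools controls the subnets handed to new user-defined networks.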


FAQ

Q: Why can two containers on the same default bridge not resolve each other by name? The default bridge network pre-dates Docker's embedded DNS server. For backward compatibility, Docker kept the default bridge without DNS resolution. All user-created bridge networks (and Compose-created networks) get the embedded DNS server automatically. Always create a custom bridge network for any application with more than one container.
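The fix in practice, continuing the app1/app2 example from earlier (the network name appnet is arbitrary):

```shell
# Move both containers onto a user-defined bridge
docker network create appnet
docker network connect appnet app1
docker network connect appnet app2

# Name resolution now works via the embedded DNS server
docker exec app2 ping -c 1 app1
```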

Q: How do I share a network between multiple Docker Compose projects? Create the network externally and mark it as external in both Compose files:

docker network create shared_net
# compose.yml in project A and project B
networks:
  shared_net:
    external: true

Q: What is the performance difference between bridge and host networking? Host networking eliminates NAT and iptables processing overhead. For most web applications this is negligible (less than 1% latency difference). For high-throughput applications processing millions of packets per second (e.g., network appliances, high-frequency trading), host networking can make a meaningful difference. Benchmark your specific workload before switching to host mode.
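A rough way to benchmark it yourself (a sketch — it assumes a Linux host and installs iperf3 into stock alpine containers at run time):

```shell
# Server using the host's network stack
docker run -d --name iperf-host --network host alpine \
  sh -c 'apk add --no-cache iperf3 && iperf3 -s'

# Client on the default bridge, connecting to the host's LAN IP
# (replace 192.168.1.50 with your host's address)
docker run --rm alpine \
  sh -c 'apk add --no-cache iperf3 && iperf3 -c 192.168.1.50 -t 10'

docker rm -f iperf-host
```

Run the client once against the host-network server and once against an identical server started on a bridge network with -p, then compare throughput.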