
Docker Logging Drivers: Configuration and Log Management (2026)

Last updated: March 2026

Every container produces output. Docker's logging subsystem captures that output and routes it to a configurable backend called a logging driver. The driver you choose determines where logs go, how long they are kept, and what tools can read them. By default, Docker writes logs as JSON files on the host — simple to start with but capable of filling your disk if left unconfigured. This guide covers the default driver, how to configure it system-wide via daemon.json, alternative drivers for production environments, and how to read logs with the docker logs command.


The Default json-file Driver

Docker's default logging driver is json-file. Each container writes its stdout and stderr output to a JSON file on the host at:

/var/lib/docker/containers/<container-id>/<container-id>-json.log

Each log entry is a JSON object:

{"log":"Starting application server on port 8080\n","stream":"stdout","time":"2026-03-26T10:00:00.123456789Z"}

The json-file driver has two major advantages: it is zero-configuration, and it supports the docker logs command. Most other drivers do not support docker logs.
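Because each entry is plain JSON, a json-file log can be inspected programmatically with nothing but the standard library. A minimal Python sketch, using the sample line shown above:

```python
import json

# One raw line from a json-file container log, as shown above
raw = '{"log":"Starting application server on port 8080\\n","stream":"stdout","time":"2026-03-26T10:00:00.123456789Z"}'

entry = json.loads(raw)
print(entry["stream"])        # which stream the line came from
print(entry["log"].rstrip())  # the message without its trailing newline
```

Each line in the file is an independent JSON object, so a real script would simply loop over the file and call json.loads per line.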

The disk growth problem

By default, json-file sets no size limits. A chatty container can write gigabytes of logs before you notice. Always configure limits in production.


Configuring the Logging Driver in daemon.json

The Docker daemon reads its global configuration from /etc/docker/daemon.json. You can set the default logging driver and its options here.

Setting log rotation for json-file

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
  • max-size: Maximum size of a single log file before rotation. Accepts k, m, g suffixes.
  • max-file: Number of rotated log files to keep. At max-file: 3, Docker keeps the current file plus 2 rotated files — a maximum of 3 × max-size disk usage per container.
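To see how the suffixes and the multiplication work out, here is an illustrative size parser in Python. It is a sketch of the arithmetic, not Docker's own parser (which also accepts other forms):

```python
def parse_size(value: str) -> int:
    """Convert a Docker-style size string like '10m' to bytes.

    Illustrative only: handles bare numbers (bytes) and the
    k/m/g suffixes mentioned above, case-insensitively.
    """
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    value = value.strip().lower()
    if value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # bare number means bytes

# max-size "10m" with max-file "3" caps one container at 3 x 10 MiB
cap = parse_size("10m") * 3
print(cap)  # 31457280 bytes
```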

After editing daemon.json, restart the daemon. A SIGHUP reload only applies a limited set of live-reloadable options, and the logging settings are not among them:

sudo systemctl restart docker

This setting applies only to new containers. Existing containers continue using the configuration they were started with.

For the full list of daemon.json options and other daemon settings, see the Docker daemon.json configuration guide.

Per-container override

You can override the logging driver for a single container at run time:

docker run --log-driver json-file \
  --log-opt max-size=5m \
  --log-opt max-file=5 \
  nginx:alpine

In compose.yml:

services:
  web:
    image: nginx:alpine
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "5"

The docker logs Command

docker logs reads a container's stored output through its logging driver. It works with the json-file and local drivers. For other drivers (syslog, journald, fluentd), use the driver's native tools.

Basic usage

# Print all logs for a container
docker logs my-container

# Follow (tail -f equivalent)
docker logs -f my-container
docker logs --follow my-container

# Show only the last N lines
docker logs --tail 100 my-container

# Combine follow with tail
docker logs --tail 50 -f my-container

Filtering by time

# Logs since a specific timestamp
docker logs --since 2026-03-26T10:00:00 my-container

# Logs since a relative time
docker logs --since 1h my-container
docker logs --since 30m my-container

# Logs until a timestamp
docker logs --until 2026-03-26T12:00:00 my-container

# Combine since and until
docker logs --since 2026-03-26T10:00:00 --until 2026-03-26T11:00:00 my-container

Timestamps

Add RFC 3339 timestamps to each log line:

docker logs --timestamps my-container
docker logs -t my-container

Example output:

2026-03-26T10:00:01.234567890Z Starting application...
2026-03-26T10:00:02.345678901Z Listening on port 8080

Filtering stdout and stderr separately

# Only stdout
docker logs my-container 2>/dev/null

# Only stderr
docker logs my-container 1>/dev/null

Alternative Logging Drivers

local Driver

The local driver stores logs in a compact binary format. It uses less disk space than json-file, has log rotation enabled by default, and supports docker logs.

{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

For new deployments where you want docker logs support and efficient storage, local is a better default than json-file.

syslog Driver

Routes container logs to the local or remote syslog daemon. Useful when your infrastructure already centralizes logs via syslog.

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://192.168.1.10:514",
    "syslog-facility": "daemon",
    "tag": "{{.Name}}"
  }
}

For local syslog (rsyslog or syslog-ng):

docker run --log-driver syslog nginx:alpine

Logs appear in /var/log/syslog (Debian/Ubuntu) or /var/log/messages (RHEL/CentOS). The docker logs command is not available with this driver.

journald Driver

On systemd-based hosts, route container logs directly to journald:

{
  "log-driver": "journald"
}

Read logs with journalctl:

# All logs for a specific container
journalctl CONTAINER_NAME=my-container

# Follow live
journalctl -f CONTAINER_NAME=my-container

# By Docker image
journalctl CONTAINER_TAG=nginx:alpine

The journald driver integrates natively with the systemd ecosystem and supports structured fields. docker logs is not supported.

fluentd Driver

Fluentd is a log aggregator that can forward logs to Elasticsearch, S3, BigQuery, and dozens of other destinations.

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "tag": "docker.{{.Name}}"
  }
}

Fluentd must be running and listening before Docker tries to send logs. If the Fluentd endpoint is unavailable, containers using this driver will fail to start unless you set fluentd-async to true:

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "fluentd-async": "true",
    "tag": "docker.{{.Name}}"
  }
}

awslogs Driver

Sends logs directly to AWS CloudWatch Logs:

docker run \
  --log-driver awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-app-logs \
  --log-opt awslogs-stream=my-container \
  myapp:latest

Log Driver Comparison

Driver      docker logs   Rotation           Best for
json-file   Yes           Manual config      Development, simple setups
local       Yes           Built-in           Production single-host
syslog      No            Syslog daemon      Existing syslog infrastructure
journald    No            journald config    systemd hosts
fluentd     No            Fluentd config     Central log aggregation
awslogs     No            CloudWatch policy  AWS deployments
none        No            N/A                Containers that must not log

Log Rotation Best Practices

Always set limits in production

A minimal production configuration:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}

This caps each container at 50 MB of logs total. Adjust based on your log volume and disk capacity.

Calculate capacity

If you have 50 containers, each with max-size=10m and max-file=5:

50 containers × 50 MB = 2.5 GB maximum log disk usage
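The same back-of-the-envelope calculation, sketched in Python with the numbers from the example above:

```python
containers = 50
max_size_mb = 10   # max-size: "10m"
max_file = 5       # max-file: "5"

# Worst case: every container fills all of its rotated files
per_container_mb = max_size_mb * max_file       # 50 MB per container
total_gb = containers * per_container_mb / 1000
print(f"{total_gb} GB maximum log disk usage")  # 2.5 GB
```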

Check current log sizes

# Size of all Docker container logs
sudo du -sh /var/lib/docker/containers/*/*-json.log | sort -h

# Total Docker log disk usage
sudo du -sh /var/lib/docker/containers/

Truncate a log file manually (last resort)

Do not delete log files while the container is running. Truncate instead:

# Find the log file path
LOG_FILE=$(docker inspect --format='{{.LogPath}}' my-container)

# Truncate (does not break the file descriptor)
sudo truncate -s 0 "$LOG_FILE"

Use logrotate for fine-grained control

For the json-file driver, you can also manage rotation with the system logrotate:

/var/lib/docker/containers/*/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}

Place this in /etc/logrotate.d/docker-containers.


Checking the Active Logging Driver

To see which driver a running container is using:

docker inspect --format='{{.HostConfig.LogConfig.Type}}' my-container

To see the daemon-level default:

docker info | grep "Logging Driver"

Structured Logging Tips

Regardless of driver, structured (JSON) application logs are easier to parse and filter downstream:

import json
print(json.dumps({"level": "info", "msg": "request complete", "status": 200, "path": "/api/v1/users"}))

When each log line is valid JSON, tools like jq and log aggregators can query specific fields without regex:

docker logs my-container | jq 'select(.level == "error")'
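If jq is not available, the same filter can be sketched in Python. The field names here match the structured-logging example above; lines that are not valid JSON are skipped rather than crashing the pipeline:

```python
import json

def errors_only(lines):
    """Yield parsed log entries whose 'level' field is 'error'."""
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # ignore non-JSON lines (startup banners, stack traces)
        if entry.get("level") == "error":
            yield entry

sample = [
    '{"level": "info", "msg": "request complete", "status": 200}',
    '{"level": "error", "msg": "upstream timeout", "status": 504}',
]
print(list(errors_only(sample)))
```

In a real pipeline the lines would come from sys.stdin (fed by docker logs) rather than a hard-coded sample.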
