
Docker Compose v2 Tutorial 2026: From Local Dev to Production with Profiles, Health Checks, and Secrets


Docker Compose v2 has matured into an indispensable tool for defining and running multi-container applications. Whether you are spinning up a local development environment or deploying a production-grade stack, Compose gives you a single declarative file to describe every service, network, volume, and secret your application needs.

This tutorial walks through everything you need to know in 2026: the differences between v1 and v2, the compose.yaml file structure, a realistic four-service example (FastAPI + PostgreSQL + Redis + Nginx), health checks with dependency conditions, profiles for environment-specific services, secrets management, volume strategies, override files, and the commands you will reach for every day.


1. Compose v2 vs v1: What Changed and Why It Matters

The command has changed

The original Compose tool was a standalone Python binary installed separately from Docker. You invoked it as:

docker-compose up -d        # v1 — deprecated

Compose v2 ships as a first-class Docker CLI plugin written in Go. The command is now:

docker compose up -d        # v2 — no hyphen

The difference is more than cosmetic. The v1 docker-compose binary reached end-of-life in July 2023. Docker Desktop has bundled v2 since version 3.4.0, and most Linux distributions now install docker-compose-plugin alongside the Docker engine. Running docker compose version should show v2.24 or later in 2026.

If you see docker-compose (with a hyphen) anywhere in scripts or CI pipelines, replace it.
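
If you have many scripts to update, a small rewrite helper can do it mechanically. This is a hypothetical sketch (the function and pattern names are my own); it replaces `docker-compose` command invocations while leaving filenames such as docker-compose.yml and package names such as docker-compose-plugin untouched:

```python
# Hypothetical migration helper: rewrite v1 `docker-compose` invocations to
# the v2 `docker compose` form. The negative lookahead skips occurrences
# followed by "." or "-" (filenames, package names), which must stay as-is.
import re

V1_COMMAND = re.compile(r"\bdocker-compose\b(?![.\-])")

def migrate_text(text: str) -> str:
    """Replace `docker-compose ...` invocations with `docker compose ...`."""
    return V1_COMMAND.sub("docker compose", text)
```

You would run this over each script, e.g. `migrate_text(Path("deploy.sh").read_text())`, and review the diff before committing.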

The file has a new preferred name

Compose v2 prefers compose.yaml over docker-compose.yml. Both names are still supported for backward compatibility, but the Compose Specification (compose-spec.io) defines compose.yaml as the canonical filename. When both exist, compose.yaml wins.

The Compose Specification

Compose v2 implements the open Compose Specification, maintained independently of Docker at compose-spec.io. This means the format is vendor-neutral; Podman Compose and other runtimes can consume the same file. The spec also did away with the old 2.x/3.x file-format version split: you no longer need a version: key at the top of the file (it is ignored if present).


2. The compose.yaml File Structure

The five top-level keys you will use most in a compose.yaml file are:

services:    # required — the containers to run
networks:    # optional — custom networks
volumes:     # optional — named volumes
secrets:     # optional — sensitive data
configs:     # optional — non-sensitive configuration files

Each key maps to a set of named objects. Here is a minimal example:

services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"

Real projects are rarely this simple. The next section builds a production-realistic example from the ground up.


3. A Realistic Example: FastAPI + PostgreSQL + Redis + Nginx

We will build a stack with:

  • api — a FastAPI application, built from a local Dockerfile
  • db — PostgreSQL 16 with a named volume
  • cache — Redis 7 with persistence
  • proxy — Nginx as a reverse proxy

Directory layout

my-app/
├── compose.yaml
├── compose.override.yaml        # dev overrides (covered in section 8)
├── .env                         # environment variables
├── secrets/
│   ├── db_password.txt
│   └── api_secret_key.txt
├── nginx/
│   └── default.conf
└── api/
    ├── Dockerfile
    ├── main.py
    └── requirements.txt

The full compose.yaml

services:

  # ── Reverse proxy ─────────────────────────────────────────────────
  proxy:
    image: nginx:1.27-alpine
    ports:
      - "${HTTP_PORT:-80}:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - frontend
    depends_on:
      api:
        condition: service_healthy
    restart: unless-stopped

  # ── FastAPI application ────────────────────────────────────────────
  api:
    build:
      context: ./api
      dockerfile: Dockerfile
      platforms:
        - linux/amd64
        - linux/arm64
    environment:
      DATABASE_URL: "postgresql+asyncpg://app:${DB_PASSWORD}@db:5432/${DB_NAME:-appdb}"
      REDIS_URL: "redis://cache:6379/0"
    secrets:
      - api_secret_key
    networks:
      - frontend
      - backend
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/healthz"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    restart: unless-stopped

  # ── PostgreSQL database ────────────────────────────────────────────
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
      POSTGRES_DB: ${DB_NAME:-appdb}
    secrets:
      - db_password
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d ${DB_NAME:-appdb}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    restart: unless-stopped

  # ── Redis cache ────────────────────────────────────────────────────
  cache:
    image: redis:7-alpine
    command: ["redis-server", "--appendonly", "yes", "--requirepass", "${REDIS_PASSWORD}"]
    volumes:
      - redis_data:/data
    networks:
      - backend
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  # ── Dev-only tools (activated with --profile dev) ──────────────────
  pgadmin:
    image: dpage/pgadmin4:latest
    profiles: [dev]
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_PASSWORD:-pgadmin}
    ports:
      - "5050:80"
    networks:
      - backend
    depends_on:
      db:
        condition: service_healthy

  redis-commander:
    image: rediscommander/redis-commander:latest
    profiles: [dev]
    environment:
      REDIS_HOSTS: "local:cache:6379:0:${REDIS_PASSWORD}"
    ports:
      - "8081:8081"
    networks:
      - backend

  # ── Production-only exporter (activated with --profile prod) ───────
  postgres-exporter:
    image: prometheuscommunity/postgres-exporter:latest
    profiles: [prod]
    environment:
      DATA_SOURCE_NAME: "postgresql://app:${DB_PASSWORD}@db:5432/${DB_NAME:-appdb}?sslmode=disable"
    networks:
      - backend
    depends_on:
      db:
        condition: service_healthy

networks:
  frontend:
  backend:
    internal: true        # no direct internet access for backend services

volumes:
  db_data:
  redis_data:

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_secret_key:
    file: ./secrets/api_secret_key.txt

There is a lot happening here. The following sections explain each major concept in depth.


4. Health Checks and depends_on with Conditions

One of the most impactful improvements in Compose v2 is the condition key on depends_on. In v1, depends_on only waited for a container to start, not for it to become ready. This caused countless race conditions where the API tried to connect to PostgreSQL before the database had finished initializing.

Defining a health check

The healthcheck block tells Docker how to test whether a service is healthy:

healthcheck:
  test: ["CMD-SHELL", "pg_isready -U app -d appdb"]
  interval: 10s      # how often to run the test
  timeout: 5s        # how long to wait for a result
  retries: 5         # consecutive failures before marking unhealthy
  start_period: 10s  # grace period after container starts (failures don't count)

The test field accepts four forms:

  • NONE: disable a health check inherited from the image
  • CMD: exec form, e.g. ["CMD", "curl", "-f", "http://localhost/health"]
  • CMD-SHELL: run through a shell, e.g. ["CMD-SHELL", "pg_isready -U postgres"]
  • String: "curl -f http://localhost/health" (runs in a shell)

Use CMD when the executable is already in PATH and you do not need shell features. Use CMD-SHELL when you need pipes, &&, or environment variable expansion. In either case the test binary must exist inside the image; slim base images often ship without curl, so install it in your Dockerfile or probe with a tool the image already provides.

Using conditions in depends_on

depends_on:
  db:
    condition: service_healthy    # waits for health check to pass
  migration:
    condition: service_completed_successfully   # waits for a one-shot container to exit 0
  cache:
    condition: service_started    # original behaviour — just waits for start

The three conditions are:

  • service_started — the container has started (default, same as v1 behaviour)
  • service_healthy — the container's health check is passing
  • service_completed_successfully — the container has exited with code 0 (useful for database migrations run as a separate service)

In our example, proxy waits for api to be healthy, and api waits for both db and cache to be healthy. This guarantees the full stack is ready before traffic is accepted.
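
depends_on conditions cover startup ordering inside Compose, but a defensive application can also wait for its dependencies itself, which helps when it runs outside Compose. A minimal stdlib sketch (the helper name is my own, not a Compose API):

```python
# Hypothetical application-level guard: poll a TCP port until it accepts
# connections or a deadline passes. Complements, not replaces, depends_on.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll host:port until a TCP connection succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # port is accepting connections
        except OSError:
            time.sleep(0.5)  # not ready yet; back off briefly and retry
    return False

# Example: wait for PostgreSQL before starting the app
# if not wait_for_port("db", 5432, timeout=60):
#     raise SystemExit("database never became reachable")
```

Note that a reachable port only proves the TCP listener is up; a real readiness probe (like pg_isready) is still the stronger check.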


5. Profiles: One File for Dev, Staging, and Production

Profiles let you mark services as belonging to named groups. Services without a profiles key are always started. Services with a profiles key are started only when that profile is explicitly activated.

Defining profiles

services:
  pgadmin:
    profiles: [dev]
    image: dpage/pgadmin4:latest
    # ...

  postgres-exporter:
    profiles: [prod]
    image: prometheuscommunity/postgres-exporter:latest
    # ...

Activating profiles

# Start core services only (no profiles)
docker compose up -d

# Start core services + dev tools
docker compose --profile dev up -d

# Start core services + production exporters
docker compose --profile prod up -d

# Multiple profiles
docker compose --profile dev --profile monitoring up -d

You can also set profiles via the COMPOSE_PROFILES environment variable:

export COMPOSE_PROFILES=dev
docker compose up -d

Recommended profile strategy

  • (none): api, db, cache, proxy
  • dev: pgAdmin, Redis Commander, live-reload watcher
  • prod: Prometheus exporters, log shippers, backup cron
  • test: test database, mock services, fixtures

This pattern keeps a single compose.yaml as the source of truth for your entire stack, while giving each environment exactly the services it needs.
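
The selection rule is easy to model. This toy sketch (my own function, not Compose code) captures the basic behaviour: a service with no profiles always runs, and a service with profiles runs only when at least one of them is activated. Recent Compose versions add subtleties, such as dependencies pulled in via depends_on, that this ignores:

```python
# Toy model of Compose profile activation, for intuition only.
def active_services(services: dict, activated: set) -> set:
    """Return the names of services that would start for the given profiles."""
    return {
        name
        for name, profiles in services.items()
        if not profiles or set(profiles) & activated  # no profiles = always on
    }

# The example stack from section 3:
stack = {
    "api": [], "db": [], "cache": [], "proxy": [],
    "pgadmin": ["dev"], "redis-commander": ["dev"],
    "postgres-exporter": ["prod"],
}
```

With no profiles activated, only the four core services run; `--profile dev` adds pgadmin and redis-commander on top.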


6. Secrets Management

Docker secrets vs environment variables

Environment variables are the simplest way to pass configuration, but they are visible in docker inspect, process listings (/proc/<pid>/environ), and can be leaked into logs. For credentials and API keys, Docker secrets are safer.

Docker secrets in Compose (non-Swarm mode) are mounted as files inside the container at /run/secrets/<secret-name>. Applications read the file at startup instead of reading an environment variable.

Defining secrets in compose.yaml

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_secret_key:
    file: ./secrets/api_secret_key.txt

The file: source reads the secret from a local file. The file should not be committed to version control — add secrets/ to .gitignore.

Attaching secrets to a service

services:
  db:
    image: postgres:16-alpine
    secrets:
      - db_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password

PostgreSQL, MySQL, and many other official images support _FILE variants of their environment variables, which tell the process to read the value from a file path rather than the variable itself.

For your own application, read the secret in code:

# Python example
from pathlib import Path

def read_secret(name: str) -> str:
    secret_path = Path(f"/run/secrets/{name}")
    if secret_path.exists():
        return secret_path.read_text().strip()
    raise RuntimeError(f"Secret {name} not found")

SECRET_KEY = read_secret("api_secret_key")

When to use .env files

Use .env files for non-sensitive configuration: port numbers, feature flags, database names, image tags. Combine them with secrets for a clean separation:

# .env — committed to version control (no secrets!)
DB_NAME=appdb
HTTP_PORT=80
REDIS_PASSWORD=devpassword123   # OK for dev; use secrets in prod

# secrets/db_password.txt — NOT committed
supersecretproductionpassword

7. Named Volumes vs Bind Mounts

Named volumes

Named volumes are managed by Docker and stored in Docker's data directory (typically /var/lib/docker/volumes/). They are the recommended approach for database data and any state that must persist across container restarts.

volumes:
  db_data:          # Docker-managed, persistent, portable
  redis_data:

Advantages of named volumes:

  • Survive docker compose down (data is only deleted with docker compose down -v)
  • Portable across different host operating systems
  • Can be backed up and restored with docker run --rm -v db_data:/data ...
  • Better I/O performance on macOS and Windows (no filesystem translation layer)

Bind mounts

Bind mounts map a host directory or file into the container. They are ideal for development workflows where you want live code reloading.

services:
  api:
    volumes:
      - ./api:/app          # bind mount: host path → container path
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro   # read-only

The :ro suffix makes the mount read-only inside the container — a good practice for configuration files.

Best practice summary

  • Database data: named volume
  • Redis / cache data: named volume
  • Source code (dev): bind mount
  • Config files: bind mount (:ro)
  • Build artifacts: named volume or tmpfs

Never use a bind mount for database data in production. Permissions, UID/GID mismatches, and filesystem differences between host and container cause subtle corruption bugs.


8. Override Files: compose.override.yaml

Compose automatically merges compose.override.yaml (or docker-compose.override.yml) with compose.yaml when both exist. This is the standard pattern for layering development-specific configuration on top of a production-safe base file.

compose.override.yaml for development

# compose.override.yaml — development overrides, NOT used in production
services:
  api:
    build:
      target: development      # multi-stage build target with dev dependencies
    volumes:
      - ./api:/app             # live code reloading
      - /app/__pycache__       # exclude cache from bind mount
    environment:
      DEBUG: "true"
      LOG_LEVEL: DEBUG
    command: ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]

  db:
    ports:
      - "5432:5432"            # expose DB port for local tools in dev only

  cache:
    ports:
      - "6379:6379"            # expose Redis port for local inspection

In development, run docker compose up -d and both files are merged automatically. In CI or production, specify only the base file explicitly:

docker compose -f compose.yaml up -d

You can also compose multiple override files:

docker compose -f compose.yaml -f compose.staging.yaml up -d

9. Environment Variables and .env Files

The .env file

Compose automatically loads a .env file from the directory where you run docker compose. Variables defined there are available for substitution in compose.yaml:

# .env
DB_NAME=appdb
DB_PASSWORD=devpassword
HTTP_PORT=8080
REDIS_PASSWORD=devredis
PGADMIN_PASSWORD=pgadmin

# compose.yaml
services:
  db:
    environment:
      POSTGRES_DB: ${DB_NAME}           # substituted from .env
      POSTGRES_PASSWORD: ${DB_PASSWORD:-defaultpass}  # with fallback

The ${VARIABLE:-default} syntax provides a default value if the variable is unset or empty.

Passing environment variables to containers

There are three ways to set environment variables on a service. Note that the environment key may appear only once per service, and it takes either a mapping or a list, so forms 1 and 2 below are alternatives:

services:
  api:
    # 1. Inline mapping of key: value pairs
    environment:
      DEBUG: "false"
      LOG_LEVEL: INFO

    # 2. List form; a bare name passes the host's value through
    #    (an alternative to form 1, since environment may appear only once)
    # environment:
    #   - CI
    #   - BUILD_NUMBER

    # 3. Separate env_file, combinable with environment
    env_file:
      - ./config/api.env
      - ./config/api.${ENVIRONMENT:-dev}.env

The env_file approach is useful for managing many variables without cluttering compose.yaml.
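
Conceptually, an env file is just KEY=VALUE lines. This minimal parser sketch illustrates the format; real Compose .env handling has extra rules around quoting and interpolation, so treat it as a teaching aid, not a drop-in replacement:

```python
# Minimal .env parser sketch: KEY=VALUE lines; '#' comments and blanks ignored.
def parse_env_file(text: str) -> dict:
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip comments, blank lines, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env
```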


10. Multi-Architecture Builds with platform

Modern infrastructure runs on both x86-64 (amd64) servers and ARM64 machines (Apple Silicon Macs, AWS Graviton, Raspberry Pi). Compose v2 supports multi-platform builds via the platform key:

services:
  api:
    build:
      context: ./api
      platforms:
        - linux/amd64
        - linux/arm64

To actually build multi-platform images and push them to a registry, use docker buildx bake or docker buildx build --platform linux/amd64,linux/arm64 --push. docker compose build can honour build.platforms, but only when the active builder supports multi-platform output (for example the docker-container buildx driver or the containerd image store). The service-level platform key, by contrast, selects which variant of an existing image is pulled and run.

You can also pin an individual service to a specific platform regardless of the host:

services:
  legacy-service:
    image: some/old-image:latest
    platform: linux/amd64    # force amd64 even on ARM hosts (uses emulation)

11. Essential Commands

Starting and stopping

docker compose up -d                    # start all services in detached mode
docker compose --profile dev up -d      # start with dev profile
docker compose down                     # stop and remove containers + networks
docker compose down -v                  # also remove named volumes (destructive!)
docker compose stop                     # stop containers without removing them
docker compose start                    # restart stopped containers
docker compose restart api              # restart a single service

Inspecting state

docker compose ps                       # list running services and their status
docker compose ps -a                    # include stopped containers
docker compose logs -f                  # stream logs from all services
docker compose logs -f api db           # stream logs from specific services
docker compose logs --tail=100 api      # last 100 lines from api
docker compose top                      # show running processes inside containers

Running commands inside containers

docker compose exec api bash            # open a shell in the running api container
docker compose exec db psql -U app appdb  # connect to PostgreSQL
docker compose run --rm api pytest      # run a one-off command in a new container

Building and updating images

docker compose build                    # build all services with a build context
docker compose build --no-cache api     # rebuild without layer cache
docker compose pull                     # pull latest images for all services
docker compose up -d --pull always      # pull and recreate if image changed

Scaling (for stateless services)

docker compose up -d --scale api=3     # run 3 replicas of the api service

Note: scaling works best with stateless services behind a load balancer. Database services should never be scaled this way.


12. Production Patterns and Hardening

Use explicit image tags

Never use latest in production. Pin to a specific digest or version tag:

image: postgres:16.3-alpine        # good — pinned minor version
image: postgres:16-alpine          # acceptable — pinned major version
image: postgres:latest             # bad — unpredictable updates

Set resource limits

services:
  api:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M

Note: deploy.resources is respected by docker compose up since Compose v2.17. In older versions it was only used in Swarm mode.

Set restart policies

restart: unless-stopped    # recommended for most services
restart: on-failure        # for one-shot or migration services
restart: always            # restarts in all cases, even if manually stopped before a daemon restart
restart: "no"              # default — no automatic restart

Use read-only filesystems where possible

services:
  api:
    read_only: true
    tmpfs:
      - /tmp            # writable tmpfs for temp files
      - /app/cache

Separate networks

The example uses two networks: frontend (proxy + api) and backend (api + db + cache). The backend network is marked internal: true, which means containers on it cannot reach the internet directly. This reduces the blast radius if a backend service is compromised.


13. Putting It All Together: Workflow Summary

For a typical development session:

# 1. Copy and populate the env file
cp .env.example .env && vim .env

# 2. Create secret files
echo "mysecretdbpassword" > secrets/db_password.txt
echo "myfastapisecretkey" > secrets/api_secret_key.txt
chmod 600 secrets/*.txt

# 3. Start the full dev stack
docker compose --profile dev up -d --build

# 4. Tail logs to verify health
docker compose logs -f

# 5. Check service health status
docker compose ps

# 6. Run database migrations (one-shot container)
docker compose run --rm api alembic upgrade head

# 7. Open pgAdmin at http://localhost:5050
# 8. Open Redis Commander at http://localhost:8081

# 9. When done, stop everything (data is preserved in named volumes)
docker compose down
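
Instead of echoing passwords by hand in step 2, you can generate strong random secrets. A hypothetical stdlib helper (the function name is my own; file paths match the example layout from section 3):

```python
# Hypothetical helper for step 2: generate URL-safe random secrets with
# owner-only file permissions, instead of typing passwords by hand.
import secrets
from pathlib import Path

def write_secret(path: str, nbytes: int = 32) -> None:
    """Write a random token of nbytes entropy to `path` and chmod it to 600."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(secrets.token_urlsafe(nbytes))
    p.chmod(0o600)  # equivalent of `chmod 600`

write_secret("secrets/db_password.txt")
write_secret("secrets/api_secret_key.txt")
```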

For a production deployment:

# Pull latest images and rebuild application
docker compose -f compose.yaml --profile prod pull
docker compose -f compose.yaml --profile prod build --no-cache api

# Recreate changed containers with zero-downtime for stateless services
docker compose -f compose.yaml --profile prod up -d --no-deps api proxy

References

  • Docker Compose official documentation: docs.docker.com/compose/
  • Compose Specification: compose-spec.io
  • Docker Blog — Compose v2 GA announcement: docker.com/blog/announcing-compose-v2-general-availability/
  • PostgreSQL Docker image environment variables: hub.docker.com/_/postgres
  • Redis Docker image documentation: hub.docker.com/_/redis
  • Docker Buildx multi-platform documentation: docs.docker.com/buildx/working-with-buildx/

Leonardo Lazzaro

Software engineer and technical writer. 10+ years experience in DevOps, Python, and Linux systems.
