Docker Multi-Stage Builds: From 1.2GB to Production-Ready Images (2026)

A container image that ships a compiler, a package manager, build headers, and gigabytes of development dependencies into production is not just wasteful — it is a security liability. Every tool included in an image is a potential attack surface. Multi-stage builds solve this cleanly: use one or more heavyweight stages to compile and assemble your application, then copy only the finished artifacts into a minimal final image. The result is smaller, faster to pull, and dramatically harder to exploit.

This tutorial covers multi-stage builds for Python, Go, and Node.js, including FROM scratch for zero-overhead Go binaries, cache optimization strategies, build targets for debugging, and integration with GitHub Actions.

The Problem with Single-Stage Images

Consider a naive Python Dockerfile:

FROM python:3.12
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]

python:3.12 is a Debian-based image that includes gcc, make, the full Python development headers, and pip's build isolation tools — everything needed to compile C extensions. That base image alone is around 1.0GB. Add a moderately complex requirements.txt with packages like numpy, cryptography, or Pillow, and you are looking at a 1.2GB image going to every production host.

That image contains:

  • A C compiler that can build exploits
  • Source code alongside binaries
  • Development headers useful for privilege escalation
  • Tools like curl, wget, and git that aid lateral movement

None of those belong in production. Multi-stage builds let you keep them in the build environment only.

The Multi-Stage Concept

A multi-stage Dockerfile has multiple FROM instructions. Each FROM starts a new stage with its own filesystem. Stages can reference artifacts from previous stages using COPY --from=<stage>. Only the final stage becomes the image that gets pushed and deployed.

FROM heavyweight-base AS builder
# ... compile, install, build artifacts

FROM lightweight-base
COPY --from=builder /path/to/artifact /path/to/artifact
CMD ["/path/to/artifact"]

The build tools, intermediate files, and source code from the builder stage are completely absent from the final image. They never existed as far as the produced image is concerned.

Python: From 1.1GB to 180MB

Here is a production-ready multi-stage Dockerfile for a Python application:

FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user -r requirements.txt

FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY . .
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "main.py"]

The builder stage uses the full python:3.12 image to install all packages, including any that require compilation. The installed packages land in /root/.local (because of --user).

The final stage uses python:3.12-slim, which is Debian-slim with only the Python runtime — no gcc, no headers, no build tools. It copies the pre-installed packages directly from the builder and runs the application.

Size comparison:

Approach                            Image Size
Single-stage (python:3.12)          ~1.1 GB
Multi-stage (python:3.12-slim)      ~180 MB
Multi-stage (python:3.12-alpine)    ~120 MB

The Alpine variant shaves another 60MB, but Alpine uses musl libc instead of glibc, which can cause compatibility issues with packages that link against glibc. Test thoroughly before using Alpine in production with compiled dependencies.
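
If your dependencies are pure Python or ship musl-compatible wheels, the Alpine variant follows the same two-stage shape. Here is a sketch (stage and file names mirror the slim example above; build-base is Alpine's metapackage for gcc, make, and musl-dev, needed only if a package compiles from source):

```dockerfile
# Builder: Alpine plus compiler toolchain for any C extensions
FROM python:3.12-alpine AS builder
RUN apk add --no-cache build-base
WORKDIR /app
COPY requirements.txt .
RUN pip install --user -r requirements.txt

# Final: Alpine runtime only -- no compiler, no headers
FROM python:3.12-alpine
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY . .
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "main.py"]
```

Both stages must use the same Alpine base: packages compiled against musl in the builder will not run on a glibc final stage, and vice versa.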

Go: 5MB Binary with FROM scratch

With CGO_ENABLED=0, Go compiles a fully statically linked binary. That binary has no runtime dependencies — it does not need libc, a shell, or any OS utilities. This makes Go ideal for FROM scratch, a completely empty base image.

FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o app .

FROM scratch
COPY --from=builder /app/app .
EXPOSE 8080
CMD ["/app"]

FROM scratch is not a real image — it is a Docker keyword meaning "start with an empty filesystem". The final image contains exactly one file: the compiled Go binary. A typical Go web service produces an image of 5–15MB.

There are two practical considerations:

  1. TLS certificates: If your binary makes HTTPS requests, it needs CA certificates. Copy them from the builder: COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/.
  2. Debugging: A scratch image has no shell, no ls, no ps. You cannot exec into it. For debugging, use a FROM gcr.io/distroless/static base instead — it adds only a couple of megabytes and provides CA certificates, timezone data, and a non-root user, though still no shell (the :debug tag variants add a BusyBox shell for troubleshooting).
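
Putting the TLS consideration into practice, a scratch image whose binary can make HTTPS requests looks like this (a sketch; the binary name app matches the example above, and the CA bundle path is where Debian-based golang images keep it):

```dockerfile
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o app .

FROM scratch
# CA bundle so the binary can verify TLS peers on outbound HTTPS
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/app /app
EXPOSE 8080
CMD ["/app"]
```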

For Rust binaries compiled with --target x86_64-unknown-linux-musl, the same approach applies: a fully static binary in a FROM scratch image.

Node.js: Build Once, Run Lean

A Node.js application with a build step (TypeScript compilation, bundling, asset processing) has a clear multi-stage boundary:

FROM node:22 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:22-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]

The builder stage installs all dependencies (including devDependencies) and runs the build. The final stage installs only production dependencies with --omit=dev and copies the compiled dist/ directory. TypeScript, esbuild, webpack, and any other dev tools are left behind.

For server-side rendered applications where the built output includes static assets, the same pattern applies — copy only the dist/ or .next/ directory, not the source.
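
Because both stages run COPY . ., a .dockerignore file keeps local artifacts out of the build context, which speeds up context transfer and avoids accidentally baking a host node_modules into a layer. A suggested minimal file (adjust the entries to your project):

```
node_modules
dist
.git
npm-debug.log
.env
```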

Named Stages and Build Targets

Stages can be named with AS:

FROM python:3.12 AS builder
# ...

FROM python:3.12-slim AS tester
COPY --from=builder /root/.local /root/.local
COPY . .
RUN python -m pytest

FROM python:3.12-slim AS production
COPY --from=builder /root/.local /root/.local
COPY . .
CMD ["python", "main.py"]

You can build a specific stage with --target:

docker build --target tester .

This is useful in CI pipelines: run tests in the tester stage and only build the full production image if tests pass. It is also useful for debugging — build the builder stage and exec into it to inspect the build environment:

docker build --target builder -t myapp-debug .
docker run --rm -it myapp-debug /bin/bash

COPY --from: Referencing Specific Files

COPY --from can copy individual files, directories, or glob patterns from any named stage or even from an external image:

# Copy a specific binary from a stage
COPY --from=builder /app/bin/server /usr/local/bin/server

# Copy a directory
COPY --from=builder /app/static ./static

# Copy from an external image (useful for pinned tool versions)
COPY --from=alpine:3.19 /usr/bin/curl /usr/local/bin/curl

Copying from external images is a powerful pattern for including pinned versions of tools (like curl or jq) without building them yourself or polluting the final image with a full Alpine layer.

Cache Optimization: Layer Order Matters

Docker builds images layer by layer. Each filesystem-changing instruction in a Dockerfile (RUN, COPY, ADD) produces a layer. If a layer's input has not changed, Docker reuses the cached layer and skips re-executing that instruction.

The golden rule: copy files that change rarely before files that change often.

Slow (invalidates cache on every code change):

FROM python:3.12 AS builder
WORKDIR /app
COPY . .                          # copies everything, including source
RUN pip install --user -r requirements.txt  # re-runs on every code change

Fast (requirements cached independently of source):

FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .           # only invalidated when requirements change
RUN pip install --user -r requirements.txt  # cached unless requirements.txt changes
COPY . .                          # source copied after dependencies are installed

In the optimized version, pip install is only re-run when requirements.txt changes. A code-only change skips straight to the COPY . . layer. On a large project with many dependencies, this saves minutes per build.

The same principle applies to Go (COPY go.mod go.sum ./ and RUN go mod download before COPY . .) and Node.js (COPY package*.json ./ before COPY . .).
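
Applied to the earlier Go builder stage, the dependency-caching version looks like this (a sketch following the same pattern; go mod download pulls modules into the cache before any source is copied):

```dockerfile
FROM golang:1.22 AS builder
WORKDIR /app
COPY go.mod go.sum ./     # only invalidated when dependencies change
RUN go mod download       # cached unless go.mod/go.sum change
COPY . .                  # source copied last
RUN CGO_ENABLED=0 GOOS=linux go build -o app .
```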

Security Benefits

The security improvement from multi-stage builds is concrete and measurable. A production image built with a single stage from python:3.12 includes:

  • gcc and binutils (can compile shellcode)
  • pip and setuptools (can install arbitrary packages)
  • git (can clone repositories)
  • curl and wget (can exfiltrate data or download payloads)
  • Python headers (useful for exploit development)

A python:3.12-slim final stage contains none of these. Vulnerability scanners (Trivy, Grype, Docker Scout) report far fewer CVEs on the slim image because there are fewer packages to have vulnerabilities in. A scratch-based Go image typically reports zero OS-level CVEs.

Fewer tools in the image also means that if an attacker does achieve code execution inside the container, they have far less to work with. Container escape and lateral movement both become harder.

GitHub Actions Integration

The docker/build-push-action GitHub Action handles multi-stage builds transparently — it builds whatever the Dockerfile specifies and pushes the final stage. Cache management is handled with cache-from and cache-to:

- name: Build and push
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: myorg/myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
    target: production   # optional: build a specific named stage

type=gha uses GitHub Actions cache for Docker layer caching. With mode=max, all intermediate layers (including the builder stage) are cached, not just the final image layers. This means even if the final image is tiny, the expensive pip install or go build steps are cached between runs.

To run tests in CI before building the production image, use two separate build steps with different target values:

- name: Run tests
  uses: docker/build-push-action@v6
  with:
    context: .
    target: tester
    push: false
    cache-from: type=gha
    cache-to: type=gha,mode=max

- name: Build and push production image
  uses: docker/build-push-action@v6
  with:
    context: .
    target: production
    push: true
    tags: myorg/myapp:${{ github.sha }}
    cache-from: type=gha

The tester build reuses cached layers from the builder stage. The production build reuses the same cache again. Both steps together are often faster than a single uncached build because the layer cache is shared.

Summary

Multi-stage builds are the standard approach for production Docker images in 2026. The workflow is always the same: use a full-featured base image to build and assemble, then copy only the finished artifacts into a minimal final image. The size and security improvements are significant regardless of language — Python images shrink from over a gigabyte to under 200MB, Go binaries fit in images measured in single-digit megabytes. Combined with careful layer ordering to maximize cache reuse, multi-stage builds make your CI pipeline faster and your production deployments leaner and more secure.