GitHub Actions CI/CD: Python + Docker + VPS Deployment (2026)
GitHub Actions executes over 6 million workflows per day, and 90% of Fortune 100 companies use it as part of their delivery pipeline. If you are shipping Python applications in 2026, knowing how to build a full CI/CD pipeline — automated tests, Docker image build, push to a registry, and deploy to a server — is a baseline skill.
This tutorial walks through an end-to-end pipeline: lint and test Python with pytest, build and push a Docker image to the GitHub Container Registry (GHCR), then deploy to a VPS via SSH. Every optimization that matters in practice — dependency caching, Docker layer caching, matrix builds, OIDC-based secrets — is covered.
Core Concepts
Before writing any YAML, it helps to understand the vocabulary GitHub Actions uses.
Events trigger workflows. The most common are push (code lands on a branch), pull_request (a PR is opened or updated), and schedule (a cron expression, e.g. nightly builds). You can also trigger workflows manually with workflow_dispatch.
Jobs are isolated units of work that run on a runner machine. Jobs in the same workflow run in parallel by default. You make them sequential by declaring needs: [job-name].
Steps are the individual commands inside a job. Each step is either a shell command (run:) or a pre-built action (uses:).
Runners are the machines that execute jobs. GitHub provides hosted runners (ubuntu-latest, macos-latest, windows-latest). You can also register self-hosted runners on your own infrastructure.
Secrets are encrypted values stored in your repository or organization settings. They are injected into workflows as environment variables and never appear in logs.
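The vocabulary above maps onto YAML like this — a minimal sketch with hypothetical job names, just to show where each concept lives:

```yaml
name: Example                   # workflow name shown in the Actions tab

on: [push, workflow_dispatch]   # events that trigger this workflow

jobs:
  build:                        # a job: an isolated unit on its own runner
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4    # a step using a pre-built action
      - run: echo "building"         # a step running a shell command

  notify:
    runs-on: ubuntu-latest
    needs: [build]              # makes this job wait for "build" to succeed
    steps:
      - run: echo "build finished"
```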
Your First Workflow: Running pytest on Every Push
Create the directory .github/workflows/ at the root of your repository, then add ci.yml:
```yaml
name: CI

on:
  push:
    branches: ["main", "develop"]
  pull_request:
    branches: ["main"]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest

      - name: Run tests
        run: pytest tests/ -v
```
Push this file and open the Actions tab on your repository — you will see the workflow queue and then run. This is the foundation every subsequent optimization builds on.
Dependency Caching: 60–80% Faster Jobs
Installing pip dependencies from scratch on every run is wasteful. actions/cache stores the pip download cache between runs, keyed by a hash of your requirements.txt. When the hash does not change, the cache is restored and pip skips downloading packages it already has.
```yaml
- name: Cache pip packages
  uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-
```
Place this step before Install dependencies. In practice this cuts install time from 40–90 seconds down to 5–15 seconds on warm cache hits. The restore-keys fallback means a partial cache is still used when requirements.txt changes — only the new packages are downloaded.
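For simple projects there is a lighter-weight alternative: `actions/setup-python` has pip caching built in, which can replace the explicit `actions/cache` step entirely:

```yaml
- uses: actions/setup-python@v5
  with:
    python-version: "3.12"
    cache: "pip"   # caches the pip cache, keyed on requirements files
```

The explicit `actions/cache` step remains useful when you need a custom cache key — for example one that includes the Python version, as in the matrix example below.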
Matrix Builds: Test Across Python 3.11, 3.12, and 3.13 in Parallel
Testing against a single Python version gives false confidence. The strategy.matrix key tells GitHub Actions to spin up one job per combination of values, all running in parallel:
```yaml
test:
  runs-on: ubuntu-latest
  strategy:
    matrix:
      python-version: ["3.11", "3.12", "3.13"]
    fail-fast: false
  steps:
    - uses: actions/checkout@v4

    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}

    - name: Cache pip packages
      uses: actions/cache@v4
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('requirements.txt') }}
        restore-keys: |
          ${{ runner.os }}-pip-${{ matrix.python-version }}-

    - name: Install dependencies
      run: pip install -r requirements.txt pytest

    - name: Run tests
      run: pytest tests/ -v
```
fail-fast: false keeps all three matrix jobs running even if one fails — useful for seeing which versions are broken before fixing anything.
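The matrix can have more than one dimension. For example, adding an `os` axis multiplies the combinations, and `exclude` removes any you do not need (this sketch assumes you actually want macOS coverage):

```yaml
strategy:
  fail-fast: false
  matrix:
    os: [ubuntu-latest, macos-latest]
    python-version: ["3.11", "3.12", "3.13"]
    exclude:
      - os: macos-latest
        python-version: "3.11"   # example: skip one combination
runs-on: ${{ matrix.os }}
```

This yields five parallel jobs (2 × 3 combinations, minus the one exclusion).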
Building and Pushing a Docker Image to GHCR
GitHub Container Registry (GHCR) at ghcr.io is free for public repositories and integrated directly with your GitHub organization or user account. You authenticate with the GITHUB_TOKEN that GitHub automatically provides to every workflow — no manual secret setup needed for pushing to your own registry.
The three actions you need:
- `docker/login-action` — authenticates to the registry
- `docker/setup-buildx-action` — enables BuildKit for multi-platform and layer-caching features
- `docker/build-push-action` — builds and pushes the image
```yaml
build-and-push:
  runs-on: ubuntu-latest
  needs: [test]
  permissions:
    contents: read
    packages: write
  steps:
    - uses: actions/checkout@v4

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3

    - name: Log in to GHCR
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}

    - name: Build and push Docker image
      uses: docker/build-push-action@v6
      with:
        context: .
        push: true
        tags: |
          ghcr.io/${{ github.repository }}:latest
          ghcr.io/${{ github.repository }}:${{ github.sha }}
        cache-from: type=gha
        cache-to: type=gha,mode=max
```
The package is named after your repository: if your repo is myorg/myapp, the image is ghcr.io/myorg/myapp. You must set the package visibility to public (or grant your server access) before pulling it on the VPS.
Docker Layer Caching: The Critical Optimization
The lines cache-from: type=gha and cache-to: type=gha,mode=max are the most impactful optimization in the entire Docker build step. They store Docker build cache in GitHub's own cache storage (the same backend as actions/cache).
On the first build, every layer is built from scratch. On subsequent builds, unchanged layers are pulled from cache. For a typical Python application image, this reduces build time from 3–5 minutes to 20–40 seconds.
mode=max caches all intermediate layers, not just the final image layers. This is important for multi-stage builds where intermediate stages (e.g., a build stage that compiles native extensions) are expensive.
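Layer caching pays off most when the Dockerfile is ordered so that rarely-changing layers come first. A sketch of a cache-friendly Dockerfile for a typical Python app (the module name `myapp` is a placeholder):

```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Copy only the dependency manifest first: this layer, and the pip
# install below it, stay cached until requirements.txt itself changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application code changes on nearly every commit, so it comes last.
COPY . .

CMD ["python", "-m", "myapp"]
```

With this ordering, a commit that only touches application code reuses the cached dependency layer and rebuilds almost instantly.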
Secrets and OIDC: Never Hardcode Credentials
GitHub Secrets store values like SSH private keys, API tokens, and database URLs. Go to your repository Settings → Secrets and variables → Actions → New repository secret. Reference them in workflows as ${{ secrets.SECRET_NAME }}.
For cloud providers (AWS, GCP, Azure), the modern approach is OIDC (OpenID Connect) rather than long-lived access keys. OIDC lets GitHub Actions authenticate to your cloud provider using a short-lived token that is valid only for the duration of the workflow run.
For AWS this looks like the following — note that the job must also grant the `id-token: write` permission, or GitHub will not issue an OIDC token:

```yaml
permissions:
  id-token: write   # required so the job can request an OIDC token
  contents: read

steps:
  - name: Configure AWS credentials via OIDC
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789:role/GitHubActionsRole
      aws-region: us-east-1
```
No AWS access key or secret is stored anywhere. The GitHub token is exchanged for a temporary AWS session. This eliminates the most common source of credential leaks in CI/CD systems.
Deploying to a VPS via SSH
After the image is pushed to GHCR, the deploy job connects to your VPS over SSH and runs docker compose pull && docker compose up -d.
Store your private SSH key as a secret named VPS_SSH_KEY, the server address as VPS_HOST, and the username as VPS_USER.
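It is best to generate a dedicated key pair for deployments rather than reusing your personal key. A sketch (the filename and VPS address are placeholders):

```shell
# Generate a dedicated Ed25519 key pair with no passphrase,
# so the workflow can use it non-interactively
ssh-keygen -t ed25519 -N "" -C "github-actions-deploy" -f ./github-actions-deploy

# Install the PUBLIC half on the VPS user's authorized_keys, e.g.:
#   ssh-copy-id -i ./github-actions-deploy.pub deploy@203.0.113.10

# The PRIVATE half is the value you paste into the VPS_SSH_KEY secret
cat ./github-actions-deploy
```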
```yaml
deploy:
  runs-on: ubuntu-latest
  needs: [build-and-push]
  environment: production
  steps:
    - name: Deploy to VPS
      uses: appleboy/ssh-action@v1
      with:
        host: ${{ secrets.VPS_HOST }}
        username: ${{ secrets.VPS_USER }}
        key: ${{ secrets.VPS_SSH_KEY }}
        script: |
          cd /opt/myapp
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker compose pull
          docker compose up -d --remove-orphans
          docker image prune -f
```
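The deploy script assumes a compose file already exists on the server at `/opt/myapp`. A minimal hypothetical `docker-compose.yml` referencing the pushed image (substitute your own repository and port):

```yaml
# /opt/myapp/docker-compose.yml (hypothetical)
services:
  app:
    image: ghcr.io/myorg/myapp:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
    env_file: .env
```

Because the tag is `latest`, `docker compose pull` fetches whatever the pipeline most recently pushed, and `up -d` recreates the container only if the image changed.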
The environment: production declaration links this job to a GitHub Environment, which is where protection rules live.
Environment Protection Rules: Require Approval Before Production
In your repository Settings → Environments → production, you can enable Required reviewers. When enabled, the deploy job pauses and sends a notification to the listed reviewers. The job only continues after a reviewer approves it in the GitHub UI.
This gives you a human gate between an automated build and a production deployment — without any third-party tooling. You can also restrict which branches can deploy to the environment (e.g., only main).
Testing Locally with act
Pushing to GitHub just to test a workflow change is slow. act simulates GitHub Actions runners locally using Docker.
Install it:
```shell
# macOS
brew install act

# Linux
curl https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
```
Run your workflow locally:
```shell
# Run all jobs triggered by a push event
act push

# Run a specific job
act push -j test

# Pass a secret
act push -s MY_SECRET=value

# Use a smaller runner image (faster, less compatible)
act push -P ubuntu-latest=catthehacker/ubuntu:act-latest
```
act is not a perfect simulator — some actions behave differently, and OIDC does not work locally — but it catches the majority of syntax errors and logic bugs before you consume Actions minutes.
Reusable Workflows: DRY Pipelines with workflow_call
If you have multiple repositories that all follow the same test → build → deploy pattern, you can extract the workflow into a central repository and call it from others. This is the workflow_call trigger.
In a shared repository (e.g., myorg/.github), create .github/workflows/python-ci.yml:
```yaml
on:
  workflow_call:
    inputs:
      python-version:
        required: false
        type: string
        default: "3.12"
    secrets:
      VPS_HOST:
        required: true
      VPS_SSH_KEY:
        required: true
```
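Inside the same shared workflow file, jobs consume the declared inputs via the `inputs` context. A sketch of how its test job might use the `python-version` input:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ inputs.python-version }}
      - run: pip install -r requirements.txt pytest
      - run: pytest tests/ -v
```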
Then call it from any other repository:
```yaml
jobs:
  ci:
    uses: myorg/.github/.github/workflows/python-ci.yml@main
    with:
      python-version: "3.13"
    secrets:
      VPS_HOST: ${{ secrets.VPS_HOST }}
      VPS_SSH_KEY: ${{ secrets.VPS_SSH_KEY }}
```
Changes to the shared workflow propagate to all consuming repositories automatically. This is the most effective way to maintain CI/CD consistency across a large number of repositories.
Complete Workflow: Lint → Test → Build → Push → Deploy
Here is the full .github/workflows/ci.yml that ties everything together:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install linting tools
        run: pip install ruff
      - name: Lint with ruff
        run: ruff check .

  test:
    runs-on: ubuntu-latest
    needs: [lint]
    strategy:
      matrix:
        python-version: ["3.11", "3.12", "3.13"]
      fail-fast: false
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Cache pip packages
        uses: actions/cache@v4
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-${{ matrix.python-version }}-
      - name: Install dependencies
        run: pip install -r requirements.txt pytest pytest-cov
      - name: Run tests with coverage
        run: pytest tests/ -v --cov=src --cov-report=xml
      - name: Upload coverage report
        uses: actions/upload-artifact@v4
        with:
          name: coverage-${{ matrix.python-version }}
          path: coverage.xml

  build-and-push:
    runs-on: ubuntu-latest
    needs: [test]
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=sha-
            type=raw,value=latest,enable={{is_default_branch}}
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    runs-on: ubuntu-latest
    needs: [build-and-push]
    environment: production
    steps:
      - name: Deploy to VPS via SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /opt/myapp
            echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io \
              -u ${{ github.actor }} --password-stdin
            docker compose pull
            docker compose up -d --remove-orphans
            docker image prune -f
            echo "Deployed $(date)" >> /var/log/myapp-deploy.log
```
This workflow:
- Lints with ruff on every push and PR.
- Tests across Python 3.11, 3.12, and 3.13 in parallel, with pip caching and coverage upload.
- Builds and pushes the Docker image to GHCR with layer caching — only on pushes to main.
- Deploys to the VPS via SSH, gated by an environment approval rule.
Key Takeaways
- Use `actions/cache` with a `hashFiles` key on `requirements.txt` to cut pip install time by 60–80%.
- Use `cache-from: type=gha` and `cache-to: type=gha,mode=max` for Docker layer caching — this is the single biggest Docker build optimization available in GitHub Actions.
- Never put credentials in workflow YAML. Use GitHub Secrets for static values and OIDC for cloud provider authentication.
- Environment protection rules give you a manual approval gate before production deploys — no extra tooling required.
- Use `act` to run workflows locally and avoid burning Actions minutes on syntax errors.
- Extract common pipelines into reusable workflows with `workflow_call` to keep large organizations consistent.
The pipeline above is production-ready as written. Adjust the branch filters, registry namespace, and deploy script to match your project, and you have a complete automated delivery system.