
Platform Engineering with Backstage 2026: Build Your Internal Developer Portal


TL;DR

This tutorial walks you through building a production-ready Internal Developer Portal (IDP) using Backstage in 2026. You will install Backstage, configure its Software Catalog, write TechDocs, scaffold golden-path service templates, wire up GitHub discovery, display Kubernetes cluster state, build a custom React plugin, configure GitHub OAuth, and deploy everything to Kubernetes with Helm. By the end you will have a portal that reduces cognitive load for every developer on your team and gives platform engineers a single pane of glass over the entire software estate.

Key things you will accomplish:

  • Bootstrap a Backstage app in under five minutes with npx @backstage/create-app.
  • Register services, APIs, systems, domains, and resources in the Software Catalog using catalog-info.yaml.
  • Publish auto-generated documentation with TechDocs and MkDocs.
  • Create Scaffolder templates so developers can spin up new services in one click.
  • Enable automatic catalog entity discovery from GitHub repositories.
  • Surface Kubernetes pod and deployment health directly in Backstage.
  • Write a custom Backstage plugin in React and TypeScript from scratch.
  • Deploy Backstage to a Kubernetes cluster using the official Helm chart.

1. What Is Platform Engineering? IDP vs DevOps

Platform engineering is the discipline of designing and building self-service internal developer platforms (IDPs) that abstract away infrastructure complexity and give application developers curated, golden-path workflows. Where traditional DevOps focused on cultural change and breaking down silos between development and operations teams, platform engineering goes one step further: it treats the platform itself as a product with internal customers.

Gartner predicts that 80% of software engineering organisations will establish platform engineering teams by 2027, and that organisations with mature IDPs will see a 40% reduction in time-to-production for new services. The driver is simple economics: every hour a developer spends hunting for runbook locations, figuring out which team owns a service, or waiting for infrastructure to be provisioned by hand is an hour not spent shipping product value.

An IDP is not a replacement for DevOps. It is the concrete implementation of DevOps principles at scale. DevOps tells you what to do (automate, collaborate, measure, share). An IDP tells you how to do it for hundreds of developers simultaneously. Where DevOps was a philosophy, platform engineering is an engineering practice with deliverables: service catalogs, scaffolder templates, golden CI/CD pipelines, curated observability dashboards, and self-service infrastructure provisioning.

Backstage, open-sourced by Spotify in 2020 and donated to the Cloud Native Computing Foundation (CNCF) in 2022, has become the de facto open-source framework for building IDPs. It provides the foundational scaffolding — catalog, search, TechDocs, scaffolder — and an extensible plugin architecture so teams can surface anything from Datadog metrics to PagerDuty incidents inside one coherent developer portal.


2. Backstage Architecture Overview

Backstage is a Node.js monorepo composed of four core features and an infinite plugin surface area.

Software Catalog — A centralised registry of every software component, service, API, library, website, data pipeline, and infrastructure resource in your organisation. Entities are described by catalog-info.yaml files committed alongside source code. Backstage continuously ingests these files and builds a graph of relationships.

TechDocs — A docs-as-code solution that reads MkDocs projects from your repositories and publishes them as a searchable, versioned documentation site inside the portal. Developers write Markdown next to the code; TechDocs renders it beautifully.

Scaffolder — A template engine that lets platform engineers define golden-path templates for new services. Developers fill in a form in the UI; Scaffolder runs a sequence of actions — creating a GitHub repository, copying files, registering the new component in the catalog — without a human platform engineer in the loop.

Plugins — Backstage's superpower. Everything beyond the four core features is a plugin: Kubernetes, PagerDuty, Datadog, GitHub Actions, Argo CD, Vault, and hundreds more from the community. You can also write your own plugins in React and TypeScript with a well-documented SDK.


3. Installing Backstage

You need Node.js 20 LTS, Yarn 4 (via Corepack), and Docker installed locally. Then run:

npx @backstage/create-app@latest

The CLI will prompt for an app name. Use something like my-idp. It will scaffold the full monorepo under my-idp/:

my-idp/
  app-config.yaml          # main configuration
  app-config.local.yaml    # local overrides, git-ignored
  packages/
    app/                   # React frontend
    backend/               # Node.js backend
  plugins/                 # your custom plugins go here

Start the development server:

cd my-idp
yarn install
yarn start

Backstage opens at http://localhost:3000. The backend API runs on port 7007. You now have a fully functional IDP skeleton.


4. Configuring app-config.yaml

The app-config.yaml file is the heart of your Backstage instance. It controls integrations, authentication, catalog ingestion rules, and plugin settings. Here is a representative production-ready configuration:

app:
  title: My Internal Developer Portal
  baseUrl: https://backstage.example.com

organization:
  name: Acme Corp

backend:
  baseUrl: https://backstage.example.com
  listen:
    port: 7007
  csp:
    connect-src: ["'self'", "http:", "https:"]
  cors:
    origin: https://backstage.example.com
    methods: [GET, HEAD, PATCH, POST, PUT, DELETE]
    credentials: true
  database:
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: ${POSTGRES_PORT}
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}
      database: backstage

integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}

auth:
  environment: production
  providers:
    github:
      production:
        clientId: ${GITHUB_CLIENT_ID}
        clientSecret: ${GITHUB_CLIENT_SECRET}

catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, API, System, Domain, Resource, Location, User, Group]
  locations:
    - type: url
      target: https://github.com/acme-corp/backstage-catalog/blob/main/all.yaml
  # Org-wide GitHub discovery is configured under catalog.providers (see section 9).

techdocs:
  builder: external
  generator:
    runIn: docker
  publisher:
    type: googleGcs
    googleGcs:
      bucketName: acme-techdocs
      credentials: ${GOOGLE_APPLICATION_CREDENTIALS}

kubernetes:
  serviceLocatorMethod:
    type: multiTenant
  clusterLocatorMethods:
    - type: config
      clusters:
        - url: ${K8S_CLUSTER_URL}
          name: production
          authProvider: serviceAccount
          skipTLSVerify: false
          serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN}
          caData: ${K8S_CA_DATA}

Store secrets in environment variables and inject them at runtime via your secret manager (AWS Secrets Manager, GCP Secret Manager, Vault, or Kubernetes Secrets). Never commit credentials to the repository.


5. Software Catalog: Registering Components

Every service, library, website, or pipeline in your organisation is represented in the catalog as an entity. You describe an entity by committing a catalog-info.yaml file to the root of the repository. Here is an example for a backend service:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service
  description: Handles all payment processing for checkout flows
  annotations:
    github.com/project-slug: acme-corp/payments-service
    backstage.io/techdocs-ref: dir:.
    pagerduty.com/service-id: P1234AB
    datadoghq.com/service-monitors: payments-service
  tags:
    - java
    - spring-boot
    - payments
  links:
    - url: https://payments.acme-corp.com/health
      title: Health Check
      icon: health
spec:
  type: service
  lifecycle: production
  owner: group:payments-team
  system: checkout
  dependsOn:
    - component:order-service
    - resource:payments-postgres-db
  providesApis:
    - payments-api

Register it in Backstage by navigating to Catalog > Register Existing Component and pasting the URL to the raw catalog-info.yaml file on GitHub. Alternatively, add it to your central catalog all.yaml location and let Backstage discover it automatically.
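If you go the central-catalog route, a single Location entity can index many catalog files at once. A sketch of such an all.yaml (the relative paths are assumptions; adjust them to your repository layout):

apiVersion: backstage.io/v1alpha1
kind: Location
metadata:
  name: acme-services
  description: Central index of Acme Corp service entities
spec:
  targets:
    - ./payments-service/catalog-info.yaml
    - ./order-service/catalog-info.yaml
    - ./inventory-service/catalog-info.yaml

Registering this one Location pulls in every target, so teams only touch the index when a brand-new repository appears.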


6. Entity Types: System, Domain, API, Resource

The catalog schema is deliberately rich. Beyond Component, Backstage supports several other entity kinds that let you model your entire software estate as a graph.

System — A collection of components and resources that together deliver a product capability. A checkout system might contain the cart service, payments service, inventory service, and the PostgreSQL databases they share.

apiVersion: backstage.io/v1alpha1
kind: System
metadata:
  name: checkout
  description: End-to-end checkout experience
spec:
  owner: group:platform-team
  domain: ecommerce

Domain — A high-level grouping of related systems, aligned with business domains. Useful for large organisations with hundreds of services.

apiVersion: backstage.io/v1alpha1
kind: Domain
metadata:
  name: ecommerce
  description: Everything related to selling products online
spec:
  owner: group:engineering-leadership

API — Describes an interface exposed by a component: REST, GraphQL, gRPC, or AsyncAPI. You embed the OpenAPI spec inline or reference it by URL, and Backstage renders an interactive API explorer.

apiVersion: backstage.io/v1alpha1
kind: API
metadata:
  name: payments-api
  description: REST API for initiating and querying payments
spec:
  type: openapi
  lifecycle: production
  owner: group:payments-team
  system: checkout
  definition:
    $text: https://github.com/acme-corp/payments-service/blob/main/openapi.yaml

Resource — Represents infrastructure a component depends on: databases, S3 buckets, message queues, CDNs.

apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: payments-postgres-db
  description: PostgreSQL database for payments service
spec:
  type: database
  owner: group:payments-team
  system: checkout

7. TechDocs: Docs as Code with MkDocs

TechDocs transforms Markdown documentation committed alongside source code into a beautifully rendered site inside Backstage. Developers never leave the portal to find runbooks, architecture decision records, or onboarding guides.

First, make sure your component entity carries the backstage.io/techdocs-ref annotation with the value dir:. (meaning the docs live in this repository, relative to its root). Then add an mkdocs.yml to the repository root:

site_name: Payments Service
site_description: Documentation for the Payments Service
docs_dir: docs
nav:
  - Home: index.md
  - Architecture: architecture.md
  - Runbooks:
      - Deployment: runbooks/deployment.md
      - Rollback: runbooks/rollback.md
  - API Reference: api-reference.md

plugins:
  - techdocs-core

Create a docs/ directory with your Markdown files. A minimal docs/index.md looks like this:

# Payments Service

The payments service handles all payment processing for the Acme Corp checkout flow.
It integrates with Stripe for card processing and supports Apple Pay, Google Pay,
and bank transfers.

## Ownership

- **Team**: Payments Team
- **On-call rotation**: PagerDuty service P1234AB
- **Slack channel**: #payments-eng

In production, configure Backstage to build TechDocs externally (in CI) and publish the generated HTML to a cloud storage bucket (GCS, S3, or Azure Blob Storage). The techdocs-cli handles both the build and publish steps:

npx @techdocs/cli generate --source-dir . --output-dir ./site
npx @techdocs/cli publish --publisher-type googleGcs \
  --storage-name acme-techdocs \
  --entity default/Component/payments-service

Add these commands to your CI pipeline so documentation is always up to date with the code.
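As a sketch, a GitHub Actions workflow that runs those two commands on pushes to main. The bucket name and entity reference match the example above; the secret name and Python version are assumptions:

# .github/workflows/techdocs.yml (illustrative; secret names are assumptions)
name: Publish TechDocs
on:
  push:
    branches: [main]
    paths: ["docs/**", "mkdocs.yml"]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Install MkDocs with the TechDocs plugin so --no-docker works
      - run: pip install mkdocs-techdocs-core
      - run: npx @techdocs/cli generate --source-dir . --output-dir ./site --no-docker
      - run: |
          npx @techdocs/cli publish --publisher-type googleGcs \
            --storage-name acme-techdocs \
            --entity default/Component/payments-service
        env:
          # Path to a service-account key file provisioned by your CI setup
          GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.GCP_SA_KEY_PATH }}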


8. Scaffolder: Golden Path Templates

The Scaffolder is the most impactful Backstage feature for reducing toil. A template encodes your organisation's best practices — repository structure, CI/CD pipeline, observability setup, security scanning — into a form developers fill in. The result is a ready-to-run service with zero manual steps.

Save templates in a dedicated templates/ directory in your catalog repository. Here is a Spring Boot service template:

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: spring-boot-service
  title: Spring Boot Microservice
  description: Creates a production-ready Spring Boot service with CI/CD and observability
  tags:
    - java
    - spring-boot
    - recommended
spec:
  owner: group:platform-team
  type: service

  parameters:
    - title: Service Details
      required: [name, description, owner]
      properties:
        name:
          title: Service Name
          type: string
          description: Unique name for the service (kebab-case)
          pattern: "^[a-z][a-z0-9-]*$"
        description:
          title: Description
          type: string
        owner:
          title: Owning Team
          type: string
          ui:field: OwnerPicker
          ui:options:
            allowedKinds: [Group]
        system:
          title: System
          type: string
          ui:field: EntityPicker
          ui:options:
            allowedKinds: [System]

    - title: Infrastructure
      properties:
        javaVersion:
          title: Java Version
          type: string
          default: "21"
          enum: ["17", "21"]
        enablePostgres:
          title: Include PostgreSQL integration
          type: boolean
          default: true

  steps:
    - id: fetch-template
      name: Fetch Base Template
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          description: ${{ parameters.description }}
          owner: ${{ parameters.owner }}
          system: ${{ parameters.system }}
          javaVersion: ${{ parameters.javaVersion }}
          enablePostgres: ${{ parameters.enablePostgres }}

    - id: create-repo
      name: Create GitHub Repository
      action: publish:github
      input:
        repoUrl: github.com?owner=acme-corp&repo=${{ parameters.name }}
        description: ${{ parameters.description }}
        defaultBranch: main
        repoVisibility: private
        topics: [java, spring-boot, microservice]

    - id: register
      name: Register in Catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps['create-repo'].output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml

  output:
    links:
      - title: Repository
        url: ${{ steps['create-repo'].output.remoteUrl }}
      - title: Open in Catalog
        url: ${{ steps['register'].output.entityRef }}

The skeleton/ directory next to the template YAML contains the file tree that gets copied into the new repository, with Nunjucks template variables (${{ values.name }}) expanded by the Scaffolder.
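For example, the skeleton's own catalog-info.yaml is itself templated, so every generated service arrives in the catalog pre-registered. A sketch matching the parameters defined above:

# skeleton/catalog-info.yaml (Nunjucks variables are expanded at scaffold time)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ${{ values.name }}
  description: ${{ values.description }}
  annotations:
    github.com/project-slug: acme-corp/${{ values.name }}
    backstage.io/techdocs-ref: dir:.
spec:
  type: service
  lifecycle: experimental
  owner: ${{ values.owner }}
  system: ${{ values.system }}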


9. GitHub Integration: Auto-Discover Catalog Entities

Rather than manually registering every repository, Backstage can crawl your GitHub organisation and automatically discover any repository that contains a catalog-info.yaml file. Add the following to your app-config.yaml catalog locations:

catalog:
  providers:
    github:
      acme-corp:
        organization: acme-corp
        catalogPath: /catalog-info.yaml
        filters:
          branch: main
          repository: ".*"   # regex — use "^(payments|orders|inventory).*" to restrict
        schedule:
          frequency: { minutes: 30 }
          timeout: { minutes: 3 }

Then install the GitHub catalog provider in the backend:

yarn --cwd packages/backend add @backstage/plugin-catalog-backend-module-github

Register it in packages/backend/src/index.ts:

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-catalog-backend'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-github')
);

backend.start();

Backstage will now scan the acme-corp GitHub organisation every 30 minutes and register every repository that has a catalog-info.yaml on its main branch. New services appear in the catalog automatically as soon as their first catalog-info.yaml is merged.


10. Kubernetes Plugin: Cluster State in Backstage

The Backstage Kubernetes plugin surfaces pod health, deployment rollout status, replica counts, and recent events directly on the component's page in the portal. Developers no longer need kubectl access to answer "is my service healthy in production?".

Install the frontend and backend packages:

yarn --cwd packages/app add @backstage/plugin-kubernetes
yarn --cwd packages/backend add @backstage/plugin-kubernetes-backend

Add the annotation to your component's catalog-info.yaml so Backstage knows which Kubernetes resources belong to it:

metadata:
  annotations:
    backstage.io/kubernetes-id: payments-service
    backstage.io/kubernetes-namespace: production

Backstage matches these annotations to Kubernetes resources by looking for a backstage.io/kubernetes-id label on Deployments, Pods, Services, and Ingresses. Add the label to your Kubernetes manifests or Helm chart values:

# In your Helm values.yaml
labels:
  backstage.io/kubernetes-id: payments-service

Add the Kubernetes tab to the entity page in packages/app/src/components/catalog/EntityPage.tsx:

import { EntityKubernetesContent } from '@backstage/plugin-kubernetes';

// Inside your serviceEntityPage definition:
<EntityLayout.Route path="/kubernetes" title="Kubernetes">
  <EntityKubernetesContent refreshIntervalMs={30000} />
</EntityLayout.Route>
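The service-account token you reference in app-config.yaml must have read access to the resources the plugin displays. A minimal read-only ClusterRole sketch (the resource list is trimmed to what the plugin commonly shows; extend it if you surface more kinds):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups: [""]
    resources: [pods, pods/log, services, configmaps, limitranges]
    verbs: [get, list, watch]
  - apiGroups: [apps]
    resources: [deployments, replicasets]
    verbs: [get, list, watch]
  - apiGroups: [autoscaling]
    resources: [horizontalpodautoscalers]
    verbs: [get, list, watch]
  - apiGroups: [networking.k8s.io]
    resources: [ingresses]
    verbs: [get, list, watch]

Bind it to the Backstage service account with a ClusterRoleBinding and use that account's token as K8S_SERVICE_ACCOUNT_TOKEN.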

11. Building a Custom Plugin

When the community plugin ecosystem does not cover your needs, you can build your own. Here is a minimal plugin that displays a custom deployment frequency metric fetched from your internal API.

Generate the plugin scaffold:

yarn backstage-cli new --select plugin
# Enter plugin ID: deployment-frequency

This creates plugins/deployment-frequency/ with a standard Backstage plugin structure. Edit the main component at plugins/deployment-frequency/src/components/DeploymentFrequencyCard/DeploymentFrequencyCard.tsx:

import React from 'react';
import { useApi, fetchApiRef } from '@backstage/core-plugin-api';
import { InfoCard, Progress } from '@backstage/core-components';
import { useEntity } from '@backstage/plugin-catalog-react';
import useAsync from 'react-use/lib/useAsync';

type DeploymentStats = {
  deploymentsPerDay: number;
  lastDeployedAt: string;
  successRate: number;
};

export const DeploymentFrequencyCard = () => {
  const { entity } = useEntity();
  const fetchApi = useApi(fetchApiRef);
  const serviceName = entity.metadata.name;

  const { value, loading, error } = useAsync(async (): Promise<DeploymentStats> => {
    const response = await fetchApi.fetch(
      `/api/deployment-frequency/${serviceName}`
    );
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return response.json();
  }, [serviceName]);

  if (loading) return <Progress />;
  if (error) return <p>Error loading deployment stats: {error.message}</p>;
  if (!value) return null;

  return (
    <InfoCard title="Deployment Frequency">
      <dl>
        <dt>Deployments / day</dt>
        <dd>{value.deploymentsPerDay.toFixed(1)}</dd>
        <dt>Last deployed</dt>
        <dd>{value.lastDeployedAt}</dd>
        <dt>Success rate</dt>
        <dd>{(value.successRate * 100).toFixed(1)}%</dd>
      </dl>
    </InfoCard>
  );
};

Export the component from the plugin's index.ts, then import and add it to the entity page in the app package:

// packages/app/src/components/catalog/EntityPage.tsx
import { DeploymentFrequencyCard } from '@internal/plugin-deployment-frequency';

// Inside the overviewContent grid:
<Grid item md={4}>
  <DeploymentFrequencyCard />
</Grid>

For the backend half of the plugin (if you need a proxy or a backend API route), generate a backend plugin module:

yarn backstage-cli new --select backend-plugin
# Enter plugin ID: deployment-frequency-backend

The backend plugin integrates with Backstage's new backend system the same way as the catalog and Kubernetes plugins shown earlier, using createBackendPlugin and registering HTTP routes via the HttpRouterService.
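A rough sketch of that wiring, assuming a hypothetical getStats helper that queries your deployment data store (this compiles only inside a Backstage workspace where these packages are installed):

// plugins/deployment-frequency-backend/src/plugin.ts (illustrative sketch)
import { createBackendPlugin, coreServices } from '@backstage/backend-plugin-api';
import Router from 'express-promise-router';

export const deploymentFrequencyPlugin = createBackendPlugin({
  pluginId: 'deployment-frequency',
  register(env) {
    env.registerInit({
      deps: {
        httpRouter: coreServices.httpRouter,
        logger: coreServices.logger,
      },
      async init({ httpRouter, logger }) {
        const router = Router();
        // Served under /api/deployment-frequency/:serviceName
        router.get('/:serviceName', async (req, res) => {
          logger.info(`Fetching stats for ${req.params.serviceName}`);
          // getStats is a hypothetical helper; replace with your data source.
          res.json(await getStats(req.params.serviceName));
        });
        httpRouter.use(router);
      },
    });
  },
});

Add the plugin to packages/backend/src/index.ts with backend.add(...), the same way the catalog and auth modules are registered.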


12. Auth Providers: GitHub OAuth

Backstage supports many auth providers. GitHub OAuth is the most common for organisations already using GitHub. You already added the auth configuration to app-config.yaml in section 4. Now wire up the frontend.

In packages/app/src/App.tsx, configure the GitHub auth provider:

import { githubAuthApiRef } from '@backstage/core-plugin-api';
import { SignInPage } from '@backstage/core-components';

const app = createApp({
  apis,
  bindRoutes({ bind }) {
    // route bindings
  },
  components: {
    SignInPage: props => (
      <SignInPage
        {...props}
        auto
        provider={{
          id: 'github-auth-provider',
          title: 'GitHub',
          message: 'Sign in using your GitHub account',
          apiRef: githubAuthApiRef,
        }}
      />
    ),
  },
});

In the backend, the new backend system handles auth automatically when you add the GitHub auth module:

yarn --cwd packages/backend add @backstage/plugin-auth-backend-module-github-provider

Register it in packages/backend/src/index.ts:

backend.add(import('@backstage/plugin-auth-backend'));
backend.add(
  import('@backstage/plugin-auth-backend-module-github-provider')
);

Create a GitHub OAuth App at github.com/settings/developers. Set the Authorization callback URL to https://backstage.example.com/api/auth/github/handler/frame. Copy the Client ID and Client Secret into your secret manager and reference them as ${GITHUB_CLIENT_ID} and ${GITHUB_CLIENT_SECRET} in app-config.yaml.
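One gotcha with the new auth backend: sign-in fails unless the provider config also declares a sign-in resolver that maps the GitHub identity to a catalog User entity. A common choice is to match on username:

auth:
  environment: production
  providers:
    github:
      production:
        clientId: ${GITHUB_CLIENT_ID}
        clientSecret: ${GITHUB_CLIENT_SECRET}
        signIn:
          resolvers:
            # Matches the GitHub username to a User entity of the same name
            - resolver: usernameMatchingUserEntityName

This assumes your catalog contains User entities whose names equal the corresponding GitHub usernames; ingest them via the GitHub org data provider or a static users file.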


13. Deploying Backstage to Kubernetes with Helm

The Backstage community maintains a Helm chart at https://backstage.github.io/charts. First, build and push your Backstage Docker image:

# Build the production Docker image
yarn build:all
docker image build . -f packages/backend/Dockerfile \
  --tag ghcr.io/acme-corp/backstage:$VERSION

docker push ghcr.io/acme-corp/backstage:$VERSION

Add the Helm repository and create a values.yaml for your deployment:

helm repo add backstage https://backstage.github.io/charts
helm repo update
# backstage-values.yaml
backstage:
  image:
    registry: ghcr.io
    repository: acme-corp/backstage
    tag: "1.2.3"
  extraEnvVarsSecrets:
    - backstage-secrets

  appConfig:
    app:
      baseUrl: https://backstage.example.com
    backend:
      baseUrl: https://backstage.example.com
      database:
        client: pg
        connection:
          host: ${POSTGRES_HOST}
          port: "5432"
          user: ${POSTGRES_USER}
          password: ${POSTGRES_PASSWORD}
          database: backstage

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: backstage.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: backstage-tls
      hosts:
        - backstage.example.com

postgresql:
  enabled: true
  auth:
    username: backstage
    database: backstage
    existingSecret: backstage-secrets
    secretKeys:
      adminPasswordKey: POSTGRES_PASSWORD
      userPasswordKey: POSTGRES_PASSWORD

serviceAccount:
  create: true
  name: backstage

Create the Kubernetes secret that holds your credentials:

kubectl create secret generic backstage-secrets \
  --from-literal=GITHUB_TOKEN="ghp_..." \
  --from-literal=GITHUB_CLIENT_ID="Iv1...." \
  --from-literal=GITHUB_CLIENT_SECRET="..." \
  --from-literal=POSTGRES_PASSWORD="..." \
  --namespace backstage

Deploy with Helm:

helm upgrade --install backstage backstage/backstage \
  --namespace backstage \
  --create-namespace \
  --values backstage-values.yaml \
  --wait

Verify the rollout:

kubectl rollout status deployment/backstage -n backstage
kubectl get ingress -n backstage

For production, integrate with your GitOps toolchain (Argo CD or Flux) by committing the Helm release manifest to your infrastructure repository. Add a HelmRelease resource if you use Flux, or an Argo CD Application resource. This ensures every Backstage upgrade goes through code review and is automatically reconciled.
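For the Flux route, a HelmRelease sketch (the namespaces, interval, and version constraint are assumptions; the chart and repository names match the Helm setup above):

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: backstage
  namespace: backstage
spec:
  interval: 10m
  chart:
    spec:
      chart: backstage
      sourceRef:
        kind: HelmRepository
        name: backstage
        namespace: flux-system
  valuesFrom:
    - kind: ConfigMap
      name: backstage-values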


FAQ

Q: How do we handle hundreds of repositories? Will catalog ingestion slow down?

Backstage ingests entities incrementally and stores them in PostgreSQL. GitHub discovery uses the GitHub API with rate-limit-aware scheduling. For organisations with thousands of repositories, tune the schedule.frequency and use fine-grained filters.repository regex patterns to limit scope per provider instance. You can run multiple GitHub provider instances with different filters in parallel.

Q: Can we restrict which catalog entities a user can see?

Yes. Backstage's permission framework (backed by the @backstage/plugin-permission-backend) lets you write custom permission policies in TypeScript. You can restrict catalog entity visibility, scaffolder template execution, and TechDocs access based on the authenticated user's group membership.

Q: TechDocs builds are slow in CI. Any tips?

Enable the --no-docker flag on the techdocs-cli generate command in CI to skip the Docker pull step and use the locally installed MkDocs binary instead. Cache the pip/npm dependencies between CI runs. For very large documentation sites, consider generating TechDocs only when the docs/ directory or mkdocs.yml changes, using path-based CI triggers.

Q: How do we keep the catalog accurate? Developers forget to update catalog-info.yaml.

Enforce it with GitHub Actions. Add a workflow that validates catalog-info.yaml on every pull request using an entity validator (for example, the community backstage-entity-validator action). Block merges if the file is missing required fields or references non-existent entities. Combine this with Scaffolder templates that generate a valid catalog-info.yaml automatically, so developers never write it from scratch.

Q: Is Backstage suitable for small teams?

Yes, though the value compounds with scale. A team of 10 engineers will mostly benefit from TechDocs centralisation and the Software Catalog as a single source of truth. A team of 200 engineers will additionally benefit from Scaffolder templates eliminating toil, the Kubernetes plugin reducing ops burden, and custom plugins surfacing cost and security insights. Start small, pick the two or three highest-pain problems your developers have, and build from there.

Q: What database does Backstage support in production?

PostgreSQL is the only supported production database. SQLite is available for local development only. Use a managed PostgreSQL service (RDS, Cloud SQL, Azure Database for PostgreSQL) in production and ensure you have automated backups. The Backstage database schema is migrated automatically on startup via Knex migrations.


Leonardo Lazzaro

Software engineer and technical writer. 10+ years experience in DevOps, Python, and Linux systems.
