Helm Charts Tutorial 2026: Create, Test, and Publish Kubernetes Applications
Deploying applications to Kubernetes means writing a lot of YAML. A single web app might need a Deployment, a Service, an Ingress, a HorizontalPodAutoscaler, ConfigMaps, Secrets, and ServiceAccounts — and you need slightly different versions of all of them for development, staging, and production. Helm solves this by turning Kubernetes manifests into parameterised templates and packaging them into a single, versioned, shareable unit called a chart.
Think of Helm the way you think of apt for Debian or brew for macOS: it installs, upgrades, rolls back, and removes packages, except the packages are Kubernetes applications.
This guide covers everything from first principles to publishing a chart on GitHub Container Registry (GHCR) and ArtifactHub.
What is Helm?
Helm is the official package manager for Kubernetes, maintained by the Cloud Native Computing Foundation (CNCF). A Helm chart is a directory of templates and a values.yaml file. Helm renders the templates against the values and applies the resulting manifests to a Kubernetes cluster.
Key concepts:
- Chart — the package (a directory or `.tgz` archive)
- Release — a deployed instance of a chart in a cluster
- Repository — a server that hosts charts (HTTP or OCI registry)
- Values — the configuration that customises a chart for a specific environment
One chart can be deployed multiple times in the same cluster under different release names with different values — for example, a postgresql chart installed as both db-primary and db-replica.
Helm 3 vs Helm 2: Tiller is Gone
Helm 2 required a server-side component called Tiller running inside the cluster. Tiller had broad cluster-admin privileges and was a common security concern. Helm 3, released in 2019, removed Tiller entirely.
In Helm 3:
- All operations happen client-side, using your local `kubeconfig` credentials
- Release state is stored as Kubernetes Secrets in the release namespace (not in a separate Tiller pod)
- RBAC applies directly to the user running `helm` — no extra permissions needed
- Chart repositories now support OCI registries in addition to HTTP servers
If you are reading old tutorials that mention helm init or tiller, they are describing Helm 2. Everything in this guide is Helm 3.
Installation
# macOS
brew install helm
# Linux (official script)
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Verify
helm version
# version.BuildInfo{Version:"v3.17.0", ...}
You also need a working kubectl context pointing at a cluster. For local development, kind or k3d can create a cluster in Docker in under a minute:
kind create cluster --name dev
kubectl cluster-info
Chart Structure
Running helm create myapp scaffolds a chart with the standard layout:
myapp/
├── Chart.yaml # Chart metadata (name, version, appVersion)
├── values.yaml # Default configuration values
├── charts/ # Dependency charts (populated by helm dependency update)
├── templates/ # Go templates rendered into Kubernetes manifests
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── ingress.yaml
│ ├── hpa.yaml
│ ├── serviceaccount.yaml
│ ├── _helpers.tpl # Named templates (not rendered directly)
│ └── NOTES.txt # Printed to stdout after install
└── .helmignore # Files to exclude when packaging
Chart.yaml
Chart.yaml is the manifest for the chart itself:
apiVersion: v2
name: myapp
description: A Helm chart for my web application
type: application
version: 0.1.0 # Chart version — bump this on every chart change
appVersion: "1.0.0" # Version of the application being deployed
version follows SemVer 2 — bump it on every chart change, since repositories and OCI registries use it to identify and resolve chart releases. appVersion is informational — it describes the version of the application (typically the Docker image tag) your chart deploys by default.
values.yaml
values.yaml holds all configurable defaults:
replicaCount: 1
image:
repository: nginx
pullPolicy: IfNotPresent
tag: "1.27"
service:
type: ClusterIP
port: 80
ingress:
enabled: false
className: ""
annotations: {}
hosts:
- host: chart-example.local
paths:
- path: /
pathType: Prefix
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
resources:
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 100m
memory: 64Mi
These are the defaults. Users override them per environment — production might set replicaCount: 3 and autoscaling.enabled: true.
Creating a Chart from Scratch
Rather than relying entirely on the scaffolded output, walk through each template to understand what Helm is doing.
Deployment template
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "myapp.fullname" . }}
labels:
{{- include "myapp.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "myapp.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "myapp.selectorLabels" . | nindent 8 }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
Service template
# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "myapp.fullname" . }}
labels:
{{- include "myapp.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "myapp.selectorLabels" . | nindent 4 }}
Ingress template
# templates/ingress.yaml
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "myapp.fullname" . }}
labels:
{{- include "myapp.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.className }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
name: {{ include "myapp.fullname" $ }}
port:
number: {{ $.Values.service.port }}
{{- end }}
{{- end }}
{{- end }}
Note {{- if .Values.ingress.enabled -}} — the entire Ingress resource is skipped when the value is false. The {{- and -}} markers strip the whitespace around the block so the rendered YAML stays clean.
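To see why the markers matter, compare a block without and with trimming (a sketch, assuming a boolean `enabled` value that is true):

```yaml
# Without trim markers, the control lines leave blank lines behind:
labels:
{{ if .Values.enabled }}
  tier: web
{{ end }}
# rendered output:
labels:

  tier: web

# With {{- ... }}, the whitespace and newline before each action are consumed:
labels:
  {{- if .Values.enabled }}
  tier: web
  {{- end }}
# rendered output:
labels:
  tier: web
```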
HPA template
# templates/hpa.yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "myapp.fullname" . }}
labels:
{{- include "myapp.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "myapp.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- end }}
Go Templating in Helm
Helm templates use Go's text/template package, extended with Sprig functions.
Accessing values
- `{{ .Values.image.repository }}` → a value from `values.yaml`
- `{{ .Chart.Name }}` → a field from `Chart.yaml`
- `{{ .Release.Name }}` → the release name passed to `helm install`
- `{{ .Release.Namespace }}` → the target namespace
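Combining these in a template (a sketch, assuming the chart was installed as release `web` into namespace `prod`):

```yaml
metadata:
  name: {{ .Release.Name }}-config        # renders as: web-config
  namespace: {{ .Release.Namespace }}     # renders as: prod
  labels:
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}   # e.g. myapp-0.1.0
```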
Conditionals
{{- if .Values.ingress.enabled }}
... render only when true ...
{{- else }}
... fallback ...
{{- end }}
Loops with range
env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
Given values.yaml:
env:
APP_ENV: production
LOG_LEVEL: info
This renders to:
env:
- name: APP_ENV
value: "production"
- name: LOG_LEVEL
value: "info"
The default function
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
If image.tag is empty or not set, it falls back to appVersion from Chart.yaml.
toYaml and nindent
resources:
{{- toYaml .Values.resources | nindent 12 }}
toYaml serialises a map back to YAML text. nindent 12 prepends 12 spaces to every line and adds a leading newline, which keeps the indentation correct inside the parent block.
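Given the `resources` block from values.yaml above, the template and its rendered output line up like this (sketch):

```yaml
# template (inside the container spec):
          resources:
            {{- toYaml .Values.resources | nindent 12 }}

# rendered output — every line of the map is indented by 12 spaces:
          resources:
            limits:
              cpu: 500m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 64Mi
```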
_helpers.tpl: Named Templates
_helpers.tpl (any file starting with _ is not rendered as a manifest) holds reusable named templates. The scaffolded version defines several:
{{/*
Expand the name of the chart.
*/}}
{{- define "myapp.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
Truncate at 63 characters — some Kubernetes name fields are limited.
*/}}
{{- define "myapp.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Common labels applied to every resource.
*/}}
{{- define "myapp.labels" -}}
helm.sh/chart: {{ include "myapp.chart" . }}
{{ include "myapp.selectorLabels" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels — used in matchLabels and pod templates.
*/}}
{{- define "myapp.selectorLabels" -}}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
Call a named template with include:
{{- include "myapp.labels" . | nindent 4 }}
The . passes the current context (including .Values, .Chart, .Release) into the template. You can also pass a custom context, which is useful when iterating:
{{- range .Values.ingress.hosts }}
{{- include "myapp.hostblock" (dict "host" . "root" $) }}
{{- end }}
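The corresponding named template would then pull its inputs back out of the dict. A sketch — the `myapp.hostblock` helper is hypothetical, not part of the scaffold:

```yaml
{{- define "myapp.hostblock" }}
{{- /* .host is one entry of .Values.ingress.hosts; .root is the top-level context */}}
- host: {{ .host.host | quote }}
  service: {{ include "myapp.fullname" .root }}
{{- end }}
```

Passing `$` as `root` matters because inside `range` the dot is rebound to the current item, so the top-level context must be handed over explicitly.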
Values Overrides
Multiple values files
# Install with production overrides
helm install myapp ./myapp -f values.yaml -f prod-values.yaml
Files listed later take precedence. A minimal prod-values.yaml:
replicaCount: 3
image:
tag: "1.27.2"
autoscaling:
enabled: true
minReplicas: 3
maxReplicas: 20
ingress:
enabled: true
className: nginx
hosts:
- host: myapp.example.com
paths:
- path: /
pathType: Prefix
Inline --set
helm install myapp ./myapp --set replicaCount=2 --set image.tag=1.27.2
--set uses dot notation for nested keys and comma separation for multiple values:
helm install myapp ./myapp \
--set image.repository=ghcr.io/myorg/myapp \
--set image.tag=sha-abc1234 \
--set ingress.enabled=true \
--set "ingress.hosts[0].host=myapp.example.com"
--set values take precedence over -f files. For complex values (lists, multi-line strings, or keys containing literal dots, which must be escaped as `\.`) prefer -f over --set — a values file is easier to review and less error-prone.
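The long --set invocation above is roughly equivalent to this values file, applied with `-f` (the file name is illustrative):

```yaml
# ci-values.yaml
image:
  repository: ghcr.io/myorg/myapp
  tag: sha-abc1234
ingress:
  enabled: true
  hosts:
    - host: myapp.example.com
```

Install with `helm install myapp ./myapp -f ci-values.yaml`. One subtlety: overriding a list in a file replaces the whole list, so the default `paths` entry is dropped here, whereas `--set "ingress.hosts[0].host=..."` patches only that one field.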
Dependencies
Real applications depend on other services — a PostgreSQL database, a Redis cache, a metrics exporter. Helm handles this through chart dependencies declared in Chart.yaml:
dependencies:
- name: postgresql
version: "15.5.35"
repository: "oci://registry-1.docker.io/bitnamicharts"
condition: postgresql.enabled
- name: redis
version: "20.6.2"
repository: "oci://registry-1.docker.io/bitnamicharts"
condition: redis.enabled
condition points to a values key — if postgresql.enabled is false, Helm skips installing the PostgreSQL sub-chart.
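For example, an environment that runs the database outside the cluster can switch the sub-chart off entirely. The `externalDatabase` block below is a hypothetical application-level value that your own templates would consume — Helm itself attaches no meaning to it:

```yaml
postgresql:
  enabled: false           # condition is false, so the sub-chart is not installed
externalDatabase:          # hypothetical key read by your app's templates
  host: mydb.example.com
  port: 5432
```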
After editing dependencies, fetch them:
helm dependency update ./myapp
This downloads the dependency charts into charts/ as .tgz files and writes a Chart.lock with the exact resolved versions. Commit both Chart.yaml and Chart.lock; whether you also commit the charts/*.tgz archives or add them to .gitignore and fetch them in CI depends on your workflow.
Configure the sub-chart through top-level values using the dependency name as a key:
postgresql:
enabled: true
auth:
username: myapp
password: secretpassword
database: myapp_production
primary:
persistence:
size: 20Gi
Testing and Validation
helm lint
helm lint checks your chart for syntax errors, missing required fields, and common mistakes:
helm lint ./myapp
# ==> Linting ./myapp
# [INFO] Chart.yaml: icon is recommended
# 1 chart(s) linted, 0 chart(s) failed
Run lint in CI before every merge.
helm template
helm template renders the templates locally without installing anything into the cluster. This is invaluable for reviewing what Kubernetes objects Helm will create:
helm template myapp ./myapp -f prod-values.yaml
Pipe through kubectl apply --dry-run=client -f - to validate the rendered YAML against the Kubernetes API schema without applying it:
helm template myapp ./myapp | kubectl apply --dry-run=client -f -
Pipe through grep to check specific fields:
helm template myapp ./myapp --set autoscaling.enabled=true | grep -A5 "kind: HorizontalPodAutoscaler"
helm unittest
helm-unittest is a Helm plugin for writing YAML-based unit tests against rendered templates. It lets you assert that specific fields have specific values for a given input, without a running cluster.
Install the plugin:
helm plugin install https://github.com/helm-unittest/helm-unittest.git
Tests live in tests/ inside the chart directory (or anywhere you configure):
# myapp/tests/deployment_test.yaml
suite: deployment tests
templates:
- deployment.yaml
tests:
- it: should set replica count from values
set:
replicaCount: 3
asserts:
- equal:
path: spec.replicas
value: 3
- it: should use image tag from values
set:
image.repository: myorg/myapp
image.tag: "2.0.0"
asserts:
- equal:
path: spec.template.spec.containers[0].image
value: "myorg/myapp:2.0.0"
- it: should not set replicas when autoscaling is enabled
set:
autoscaling.enabled: true
asserts:
- notExists:
path: spec.replicas
# myapp/tests/ingress_test.yaml
suite: ingress tests
templates:
- ingress.yaml
tests:
- it: should not render ingress when disabled
set:
ingress.enabled: false
asserts:
- hasDocuments:
count: 0
- it: should render ingress when enabled
set:
ingress.enabled: true
ingress.className: nginx
ingress.hosts:
- host: app.example.com
paths:
- path: /
pathType: Prefix
asserts:
- equal:
path: spec.ingressClassName
value: nginx
- equal:
path: spec.rules[0].host
value: app.example.com
Run the tests:
helm unittest ./myapp
# ### Chart [ myapp ] myapp
#
# PASS deployment tests myapp/tests/deployment_test.yaml
# PASS ingress tests myapp/tests/ingress_test.yaml
#
# Charts: 1 passed, 1 total
# Test Suites: 2 passed, 2 total
# Tests: 4 passed, 4 total
Add helm unittest to your CI pipeline alongside helm lint:
# .github/workflows/helm-test.yml
- name: Lint chart
run: helm lint ./myapp
- name: Unit test chart
run: helm unittest ./myapp
The Release Lifecycle
Install
helm install myapp ./myapp \
--namespace myapp \
--create-namespace \
-f prod-values.yaml
--create-namespace creates the namespace if it does not exist. Helm names this release myapp and stores its state in a Secret named sh.helm.release.v1.myapp.v1 in the myapp namespace; you can inspect it with kubectl get secret -n myapp -l owner=helm.
List all releases:
helm list -A # all namespaces
helm list -n myapp # specific namespace
Upgrade
After changing your chart or values, upgrade the existing release:
helm upgrade myapp ./myapp \
--namespace myapp \
-f prod-values.yaml \
--set image.tag=2.0.0
Helm performs a three-way merge between the previous manifests, the new manifests, and the live cluster state, and applies only what changed. If the upgrade fails (pods crash, readiness probes fail), Helm marks the release as failed but does not roll back automatically — you do that explicitly, or pass --atomic to have Helm undo a failed upgrade on its own.
Rollback
helm history myapp -n myapp
# REVISION STATUS CHART APP VERSION DESCRIPTION
# 1 superseded myapp-0.1.0 1.0.0 Install complete
# 2 deployed myapp-0.1.0 1.0.0 Upgrade complete
helm rollback myapp 1 -n myapp
helm rollback re-applies the manifests from the specified revision. Kubernetes performs a rolling update back to the previous pod template.
Uninstall
helm uninstall myapp -n myapp
This deletes all resources the chart created and removes the release history. Add --keep-history to keep the history (useful for auditing) without keeping the resources.
helm history
helm history myapp -n myapp --max 10
Each helm install or helm upgrade creates a new revision entry. Helm stores these as Kubernetes Secrets, so they survive pod restarts and cluster upgrades.
Publishing to an OCI Registry (GHCR)
OCI registry support became generally available in Helm 3.8 and is now the recommended way to distribute charts. GitHub Container Registry (GHCR) is free for public images and integrates with GitHub Actions.
Package the chart
helm package ./myapp
# Successfully packaged chart and saved it to: /path/to/myapp-0.1.0.tgz
Authenticate to GHCR
echo $GITHUB_TOKEN | helm registry login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin
Push the chart
helm push myapp-0.1.0.tgz oci://ghcr.io/YOUR_GITHUB_ORG
The chart is now available at oci://ghcr.io/YOUR_GITHUB_ORG/myapp.
Pull and install from OCI
# Pull to inspect locally
helm pull oci://ghcr.io/YOUR_GITHUB_ORG/myapp --version 0.1.0
# Install directly from the registry
helm install myapp oci://ghcr.io/YOUR_GITHUB_ORG/myapp \
--version 0.1.0 \
--namespace myapp \
--create-namespace \
-f prod-values.yaml
No helm repo add step is needed for OCI — you reference the registry URL directly.
Publishing with GitHub Actions
# .github/workflows/helm-publish.yml
name: Publish Helm Chart
on:
push:
tags:
- "v*"
jobs:
publish:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- name: Install Helm
uses: azure/setup-helm@v4
- name: Log in to GHCR
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | \
helm registry login ghcr.io \
--username ${{ github.actor }} \
--password-stdin
- name: Package chart
run: |
VERSION="${{ github.ref_name }}"
VERSION="${VERSION#v}" # strip leading 'v'
helm package ./myapp --version "$VERSION"
- name: Push chart
run: |
VERSION="${{ github.ref_name }}"
VERSION="${VERSION#v}"
helm push myapp-${VERSION}.tgz oci://ghcr.io/${{ github.repository_owner }}
Create a release tag to trigger a publish:
git tag v0.2.0
git push origin v0.2.0
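One detail of that workflow worth understanding is the POSIX parameter expansion that derives the chart version from the tag. You can try it in any shell:

```shell
# "${VAR#pattern}" removes the shortest leading match of pattern
TAG="v0.2.0"
VERSION="${TAG#v}"   # strips the leading "v"
echo "$VERSION"      # prints 0.2.0

# A tag without the prefix passes through unchanged
echo "${TAG#x}"      # prints v0.2.0 (no leading "x" to strip)
```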
Amazon ECR
AWS ECR also supports OCI artifacts. Authenticate with the AWS CLI:
aws ecr get-login-password --region us-east-1 | \
helm registry login \
--username AWS \
--password-stdin \
123456789012.dkr.ecr.us-east-1.amazonaws.com
helm push myapp-0.1.0.tgz oci://123456789012.dkr.ecr.us-east-1.amazonaws.com
ArtifactHub
ArtifactHub.io is the official CNCF marketplace for Helm charts (and also Tekton pipelines, OPA policies, and other cloud-native artifacts). When someone runs helm search hub postgresql, results come from ArtifactHub.
Publishing your chart to ArtifactHub increases discoverability significantly. The process depends on your registry type:
- Create an account at artifacthub.io
- Go to User Settings > Repositories > Add repository
- Choose the repository type (OCI, HTTP, GitHub)
- For OCI: provide `oci://ghcr.io/YOUR_GITHUB_ORG/myapp`
- ArtifactHub crawls the registry and displays the chart with README, values documentation, and install instructions
ArtifactHub reads repository-level metadata from an artifacthub-repo.yml file; add it to the root of your repository to claim ownership of the listing and earn the verified publisher badge:
# artifacthub-repo.yml
repositoryID: your-repo-id-from-artifacthub
owners:
- name: Your Name
email: [email protected]
ArtifactHub also reads annotations in Chart.yaml to enrich the listing:
annotations:
artifacthub.io/changes: |
- kind: added
description: Added HPA support
- kind: fixed
description: Fixed Ingress path rendering for nginx
artifacthub.io/license: Apache-2.0
artifacthub.io/maintainers: |
- name: Your Name
email: [email protected]
A Complete CI/CD Workflow
Putting it all together, a typical Helm chart CI/CD pipeline looks like this:
Pull Request:
1. helm lint ./myapp
2. helm unittest ./myapp
3. helm template myapp ./myapp | kubectl apply --dry-run=client -f -
Merge to main:
4. (Optional) helm upgrade myapp oci://... in a staging cluster
Tag release (v0.2.0):
5. helm package + helm push to GHCR
6. ArtifactHub crawls and updates the listing
Deploy to production:
7. helm upgrade myapp oci://ghcr.io/org/myapp --version 0.2.0
8. Monitor with helm history + kubectl rollout status
9. helm rollback if needed
Quick Reference
| Command | Purpose |
|---|---|
| `helm create myapp` | Scaffold a new chart |
| `helm lint ./myapp` | Validate chart structure |
| `helm template myapp ./myapp` | Render templates locally |
| `helm unittest ./myapp` | Run unit tests |
| `helm dependency update ./myapp` | Fetch dependency charts |
| `helm install myapp ./myapp` | Deploy to cluster |
| `helm upgrade myapp ./myapp` | Update a release |
| `helm rollback myapp 1` | Roll back to revision 1 |
| `helm uninstall myapp` | Remove release |
| `helm history myapp` | Show revision history |
| `helm package ./myapp` | Package chart as `.tgz` |
| `helm push myapp-0.1.0.tgz oci://...` | Publish to OCI registry |
| `helm pull oci://...` | Download chart from registry |
Summary
Helm turns Kubernetes YAML from a copy-paste problem into a proper software engineering problem with versioning, testing, and dependency management. The key patterns to take away from this tutorial:
- Use `_helpers.tpl` for all label sets and name computations — one definition, used consistently across every resource
- Keep `values.yaml` as the single source of truth for defaults; override per environment with `-f prod-values.yaml`
- Wrap optional resources (Ingress, HPA) in `{{- if .Values.feature.enabled }}` blocks so the chart works correctly across environments with different configurations
- Use `helm unittest` to assert specific rendering behaviour — it catches regressions before the chart reaches a cluster
- Publish to an OCI registry (GHCR is free and integrates naturally with GitHub Actions) and list on ArtifactHub for community discoverability