K3s Tutorial 2026: Run Kubernetes on a Raspberry Pi or 1GB VPS

Kubernetes is the de facto standard for container orchestration, but a standard upstream install demands several gigabytes of RAM and a multi-core machine before it will even accept a workload. That cost is tolerable in a cloud data center, but it excludes the vast middle ground of edge devices, single-board computers, home labs, and small VPS instances that need real orchestration without the resource tax.

K3s solves that problem. Rancher Labs (now part of SUSE) packaged all of Kubernetes into a single 70 MB binary, stripped non-essential components, replaced etcd with embedded SQLite for single-node installs, and got the baseline memory footprint below 512 MB. The project has grown roughly 200% year-over-year and is now a CNCF Sandbox project with production deployments running on everything from ARM-based Raspberry Pi clusters to 5G network edge nodes.

This tutorial covers every step from a blank server to a running, production-ready K3s cluster: single-node setup, Traefik ingress, persistent storage, multi-node expansion, HA configuration, Helm, Raspberry Pi–specific tuning, security hardening, and automated upgrades.

TL;DR

  • K3s is a fully conformant Kubernetes distribution that fits in 70 MB and runs on 512 MB RAM.
  • Install a single-node server: curl -sfL https://get.k3s.io | sh -
  • Join agent nodes by exporting K3S_URL and K3S_TOKEN.
  • Traefik is bundled and ready to handle ingress out of the box.
  • The local-path storage provisioner is included for automatic PVC binding on single-node clusters.
  • For HA, run three or more server nodes backed by an external PostgreSQL or MySQL datastore.
  • Raspberry Pi 4 (64-bit OS) is a supported and well-tested platform.
  • Upgrade clusters safely using the system-upgrade-controller CRD.

What Is K3s? K3s vs K8s vs Kind vs minikube

K3s is not a fork of Kubernetes. It ships the same upstream Kubernetes code, runs the same API server, and passes the same CNCF conformance tests. What makes it "lightweight" is a set of deliberate omissions and replacements:

  • Alpha and legacy features that are off by default in standard Kubernetes are removed entirely.
  • The default storage backend is SQLite rather than etcd, eliminating an entire process and its memory footprint.
  • containerd replaces Docker as the container runtime, shaving overhead while keeping full OCI compatibility.
  • Several in-tree cloud provider integrations and storage drivers are excluded; they can be added back as external plugins when needed.
  • The entire distribution ships as a single static binary with no external dependencies.

When to use K3s over the alternatives:

Tool | Best for | RAM floor | Multi-node?
K3s | Edge, IoT, homelabs, small VPS, CI, production at scale | ~512 MB | Yes
minikube | Local development on a laptop | ~2 GB | No
Kind (Kubernetes in Docker) | CI pipelines, controller development | ~1 GB | Simulated
Full kubeadm K8s | Large production clusters, full cloud-provider integration | ~2 GB per node | Yes

Choose K3s whenever you need real multi-node Kubernetes with a small resource envelope — or whenever you are running on ARM hardware where a full install is impractical. Choose minikube or Kind when you only need local development and will not promote the cluster to production.

K3s Architecture

A K3s cluster is made up of two roles.

Server node (control plane + optional worker): Runs the Kubernetes API server, the controller manager, the scheduler, and the K3s agent. By default the server node also schedules workloads, which is appropriate for single-node setups. The server exposes port 6443 (Kubernetes API) and port 6444 internally.

Agent node (worker): Runs the K3s agent process, which in turn manages containerd and kube-proxy. Agent nodes join the cluster by connecting to the server's API port with the cluster token.

Embedded SQLite vs etcd: On a single server node, K3s stores cluster state in an embedded SQLite database at /var/lib/rancher/k3s/server/db/. SQLite is synchronous, reliable, and requires zero configuration. It cannot be replicated across nodes, which is why multi-server HA setups must use an external datastore (PostgreSQL, MySQL, or an embedded etcd cluster of three or more server nodes).

Embedded components K3s ships with:

  • Traefik v2 (ingress controller)
  • CoreDNS
  • Flannel (CNI, with options for Calico or Cilium)
  • local-path-provisioner (dynamic PVC provisioning backed by host paths)
  • metrics-server
  • Helm controller (deploy Helm charts via CRDs)

All of these can be individually disabled at install time with flags, giving you a minimal base to build from.
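For example, to start from a leaner base you can pass multiple --disable flags at install time; the component names below correspond to the packaged add-ons listed above:

curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --disable metrics-server \
  --disable local-storage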

Requirements

Minimum hardware:

  • 512 MB RAM (1 GB recommended for production single-node)
  • 1 vCPU
  • 1 GB disk for the OS; additional disk for persistent volumes

Supported architectures:

  • amd64 (x86_64)
  • arm64 (aarch64) — Raspberry Pi 4/5, AWS Graviton, Apple Silicon VMs
  • arm (armhf) — Raspberry Pi 3 and similar

Operating system: Any modern Linux distribution. Ubuntu 22.04 LTS, Debian 12, Fedora 40, Alpine, and openSUSE are all well tested. The kernel must be 4.15 or later.

Ports to open (a ufw sketch follows the list):

  • 6443/tcp — Kubernetes API server (all nodes to server)
  • 8472/udp — Flannel VXLAN (all nodes, inter-node only)
  • 51820/udp — WireGuard (only if using the WireGuard flannel backend)
  • 10250/tcp — kubelet metrics (server to agents, optional)
  • 80/tcp and 443/tcp — Traefik ingress (server, public-facing)
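As a sketch, if the server uses ufw as its firewall, opening these ports could look like the following; the 10.0.0.0/8 source range is illustrative, so substitute your own node network:

# Example ufw rules on the server node (adjust source ranges to your network)
sudo ufw allow 6443/tcp                                      # Kubernetes API
sudo ufw allow from 10.0.0.0/8 to any port 8472 proto udp    # Flannel VXLAN between nodes
sudo ufw allow 80/tcp                                        # Traefik HTTP
sudo ufw allow 443/tcp                                       # Traefik HTTPS
sudo ufw enable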

Single-Node Install

The entire install is one command:

curl -sfL https://get.k3s.io | sh -

The install script downloads the k3s binary for your architecture, installs it to /usr/local/bin/k3s, creates a systemd service, and starts it. The process takes under a minute on a modern machine.

To pin a specific version (recommended for reproducible infrastructure):

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.32.3+k3s1 sh -

To disable Traefik if you plan to use a different ingress controller:

curl -sfL https://get.k3s.io | sh -s - --disable traefik

To use a non-default data directory (useful if your root partition is small and you have a separate data volume):

curl -sfL https://get.k3s.io | sh -s - --data-dir /mnt/data/k3s

After installation, check the service status:

sudo systemctl status k3s

You should see active (running). Logs are available via:

sudo journalctl -u k3s -f

Verify Installation

K3s ships its own kubectl wrapper. After install, the node is ready within 30–60 seconds:

sudo k3s kubectl get nodes

Expected output:

NAME        STATUS   ROLES                  AGE   VERSION
my-server   Ready    control-plane,master   2m    v1.32.3+k3s1

For convenience, copy the kubeconfig to your home directory so you can use kubectl directly:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config

Now standard kubectl commands work without sudo:

kubectl get nodes
kubectl get pods -A

Verify all system pods are running before proceeding:

kubectl get pods -n kube-system

All pods (coredns, traefik, local-path-provisioner, metrics-server) should reach Running status within two minutes.

Deploy Your First Application

Create an nginx Deployment with three replicas and expose it via a ClusterIP Service:

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27-alpine
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "50m"
              memory: "32Mi"
            limits:
              cpu: "200m"
              memory: "128Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP

Apply the manifest:

kubectl apply -f nginx-deployment.yaml
kubectl rollout status deployment/nginx

Verify the pods are spread across (or all on) your node:

kubectl get pods -o wide

Test connectivity from inside the cluster using a temporary pod:

kubectl run curl-test --image=curlimages/curl:8.7.1 --rm -it --restart=Never -- \
  curl -s http://nginx.default.svc.cluster.local

You should see the nginx welcome page HTML returned in your terminal.
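To reach the Service from your workstation without setting up an Ingress yet, a temporary port-forward is a quick sanity check:

kubectl port-forward svc/nginx 8080:80
# In another terminal:
curl -s http://localhost:8080 | head -n 5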

Traefik Ingress: Expose Apps to the Outside World

K3s includes Traefik v2 as the default ingress controller. It binds to ports 80 and 443 on your server node's public IP. You can use either the standard Kubernetes Ingress resource or Traefik's native IngressRoute CRD.

Using a standard Ingress resource

# nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
Apply it:

kubectl apply -f nginx-ingress.yaml

Point your DNS A record for nginx.example.com to the server's public IP. Traefik will begin serving traffic immediately.
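If the DNS record has not propagated yet, you can exercise the Traefik route directly by sending the Host header to the server's IP (placeholder IP shown):

curl -s -H "Host: nginx.example.com" http://203.0.113.10/ | head -n 5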

Using IngressRoute with automatic TLS (Let's Encrypt)

Traefik's IngressRoute CRD gives you more control, including automatic certificate provisioning. First, configure the ACME resolver in Traefik's Helm values with a HelmChartConfig manifest placed at /var/lib/rancher/k3s/server/manifests/traefik-config.yaml, a directory K3s watches and applies automatically:

# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "[email protected]"
      - "--certificatesresolvers.letsencrypt.acme.storage=/data/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
    persistence:
      enabled: true
      size: 128Mi

After saving this file, K3s's Helm controller automatically applies the configuration change. Then create the IngressRoute:

# nginx-ingressroute.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: nginx-tls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`nginx.example.com`)
      kind: Rule
      services:
        - name: nginx
          port: 80
  tls:
    certResolver: letsencrypt
Apply it:

kubectl apply -f nginx-ingressroute.yaml

Traefik obtains and renews the certificate automatically.
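Once the challenge completes, you can confirm the issued certificate from any machine with openssl; this is just a verification sketch:

openssl s_client -connect nginx.example.com:443 -servername nginx.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates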

Persistent Storage

local-path-provisioner (included)

K3s ships a local-path StorageClass that dynamically provisions PersistentVolumes backed by directories on the node's filesystem. This is ready to use without any configuration:

kubectl get storageclass
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer

Create a PersistentVolumeClaim and a Pod that uses it:

# local-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: local-pvc-test
  namespace: default
spec:
  containers:
    - name: busybox
      image: busybox:1.36
      command: ["sh", "-c", "echo 'K3s storage works!' > /data/test.txt && sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: local-storage
  volumes:
    - name: local-storage
      persistentVolumeClaim:
        claimName: local-pvc
Apply the manifest and check that the file was written:

kubectl apply -f local-pvc.yaml
kubectl exec local-pvc-test -- cat /data/test.txt

The data is stored under /var/lib/rancher/k3s/storage/ on the node. This is adequate for single-node setups but provides no replication.
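You can inspect the backing directory on the node itself; each volume gets its own subdirectory, named after the PersistentVolume, namespace, and claim:

sudo ls /var/lib/rancher/k3s/storage/
# One directory per PersistentVolume; the exact naming format may vary by provisioner version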

Longhorn for High-Availability Storage

For multi-node clusters where data must survive a node failure, install Longhorn. It provides distributed block storage with replication across nodes and a built-in web UI.

Prerequisites: open-iscsi must be installed on every node.

# Run on every node
sudo apt install -y open-iscsi
sudo systemctl enable --now iscsid

Install Longhorn via its official manifest:

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.2/deploy/longhorn.yaml

Wait for all pods to be ready (takes two to four minutes):

kubectl get pods -n longhorn-system --watch

Longhorn registers a longhorn StorageClass. To make it the default:

kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass longhorn -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Longhorn volumes replicate data across three nodes by default, so your PVCs survive a single node failure.
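A PVC that explicitly requests Longhorn-backed storage looks like any other claim, only with the storageClassName switched; a minimal sketch:

# longhorn-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi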

Multi-Node Cluster: Adding Agent Nodes

To expand your cluster, you need the server's join token and its IP address.

On the server node, retrieve the token:

sudo cat /var/lib/rancher/k3s/server/node-token

On each agent node, run:

curl -sfL https://get.k3s.io | \
  K3S_URL=https://<SERVER_IP>:6443 \
  K3S_TOKEN=<NODE_TOKEN> \
  sh -

Replace <SERVER_IP> with the server's IP address (private IP is preferred if all nodes share a LAN) and <NODE_TOKEN> with the full token string from the previous command.

Within 30 seconds, verify the new node appears:

kubectl get nodes
NAME        STATUS   ROLES                  AGE    VERSION
server-01   Ready    control-plane,master   1h     v1.32.3+k3s1
agent-01    Ready    <none>                 45s    v1.32.3+k3s1
agent-02    Ready    <none>                 30s    v1.32.3+k3s1

Labeling nodes: Assign roles or topology labels for scheduling:

kubectl label node agent-01 node-role.kubernetes.io/worker=worker
kubectl label node agent-01 topology.kubernetes.io/zone=eu-west-1a

Preventing workloads on the control plane: On multi-node clusters you may want to taint the server node so it does not run application pods:

kubectl taint node server-01 node-role.kubernetes.io/control-plane:NoSchedule
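You can confirm the taint took effect, and remove it again later if needed (the trailing dash removes a taint):

kubectl describe node server-01 | grep -i taints
# Remove the taint:
kubectl taint node server-01 node-role.kubernetes.io/control-plane:NoSchedule-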

High-Availability Setup: Multiple Server Nodes

A single server node is a single point of failure. For production, run three or more server nodes backed by a replicated datastore: either embedded etcd or an external SQL database.

Option 1: Embedded etcd (simplest)

K3s can bootstrap its own etcd cluster when you pass --cluster-init to the first server. Three server nodes are required for quorum.

On the first server:

curl -sfL https://get.k3s.io | sh -s - \
  --cluster-init \
  --tls-san <LOAD_BALANCER_IP>

Retrieve the token:

sudo cat /var/lib/rancher/k3s/server/node-token

On the second and third server nodes:

curl -sfL https://get.k3s.io | sh -s - \
  --server https://<FIRST_SERVER_IP>:6443 \
  --token <NODE_TOKEN> \
  --tls-san <LOAD_BALANCER_IP>

Place a TCP load balancer (HAProxy, nginx stream, or a cloud LB) in front of all three server nodes on port 6443. Set KUBECONFIG to point at the load balancer address.
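A minimal HAProxy sketch for this purpose might look like the following; the backend IPs are placeholders for your three server nodes:

# /etc/haproxy/haproxy.cfg (excerpt)
frontend k3s_api
    bind *:6443
    mode tcp
    default_backend k3s_servers

backend k3s_servers
    mode tcp
    balance roundrobin
    option tcp-check
    server server-01 10.0.0.11:6443 check
    server server-02 10.0.0.12:6443 check
    server server-03 10.0.0.13:6443 check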

Option 2: External PostgreSQL datastore

If you already operate a managed PostgreSQL instance, point K3s at it:

curl -sfL https://get.k3s.io | sh -s - \
  --datastore-endpoint "postgres://k3s:[email protected]:5432/k3s" \
  --tls-san <LOAD_BALANCER_IP>

Join additional server nodes with the same --datastore-endpoint and --token flags. The database holds all cluster state; the server nodes themselves are stateless and can be replaced without data loss.
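A second or third server node is joined with the same endpoint plus the shared token; a sketch, with placeholders to fill in:

curl -sfL https://get.k3s.io | sh -s - \
  --datastore-endpoint "postgres://k3s:<PASSWORD>@<DB_HOST>:5432/k3s" \
  --token <NODE_TOKEN> \
  --tls-san <LOAD_BALANCER_IP>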

Helm on K3s

K3s includes the Helm controller, which lets you deploy Helm charts using a HelmChart CRD without installing the Helm CLI. However, the standard Helm CLI is also fully compatible with K3s.

Install the Helm CLI

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Confirm the install:

helm version

Deploy a chart

Because K3s writes the kubeconfig to /etc/rancher/k3s/k3s.yaml, export it before running Helm:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Install cert-manager as an example:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

Using the built-in HelmChart CRD

Drop a manifest into /var/lib/rancher/k3s/server/manifests/ and K3s's Helm controller installs it automatically on startup or when the file changes:

# /var/lib/rancher/k3s/server/manifests/cert-manager.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  repo: https://charts.jetstack.io
  chart: cert-manager
  version: v1.16.2
  targetNamespace: cert-manager
  createNamespace: true
  set:
    crds.enabled: "true"

This approach is idempotent and works well for GitOps workflows where cluster configuration is stored as files in a repository.

K3s on Raspberry Pi 4

The Raspberry Pi 4 with 4 GB or 8 GB RAM is a capable K3s node. A few preparations are required.

1. Use a 64-bit operating system

K3s on arm64 is the supported and well-tested path. Raspberry Pi OS (64-bit), Ubuntu Server 22.04 arm64, and Debian 12 arm64 all work. The 32-bit Raspberry Pi OS image has known cgroup limitations and should be avoided for production K3s deployments.

Verify your OS architecture:

uname -m
# Expected: aarch64

2. Enable cgroups in the kernel command line

By default, the Raspberry Pi bootloader does not enable the memory and CPU cgroups that Kubernetes requires. Edit /boot/firmware/cmdline.txt (Ubuntu) or /boot/cmdline.txt (Raspberry Pi OS) and append the following parameters to the end of the single existing line — do not create a new line:

cgroup_memory=1 cgroup_enable=memory cgroup_enable=cpuset

Reboot after making this change:

sudo reboot

Verify cgroups are active after reboot:

cat /proc/cgroups | grep memory
# memory should be listed with the "enabled" column set to 1

3. Install K3s

The standard install command works unchanged on arm64:

curl -sfL https://get.k3s.io | sh -

K3s auto-detects the arm64 architecture and downloads the correct binary.

4. Optional: disable swap (recommended)

sudo dphys-swapfile swapoff
sudo systemctl disable dphys-swapfile

K3s runs without disabling swap, but Kubernetes scheduling decisions assume swap is absent. Disabling it avoids unexpected behaviour under memory pressure.
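After disabling swap, confirm nothing is still swapped in:

swapon --show
# No output means swap is fully disabled
free -h | grep -i swap
# The Swap line should read 0B across the board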

Securing K3s

TLS and API server certificates

K3s generates a self-signed CA and issues server certificates automatically. If you front the API server with a load balancer or access it by a DNS name, add that name to the certificate's SANs at install time:

curl -sfL https://get.k3s.io | sh -s - \
  --tls-san k3s.example.com \
  --tls-san 203.0.113.10

You can add additional SANs later by re-running the install script with the updated flags.

kubeconfig permissions

By default, /etc/rancher/k3s/k3s.yaml is readable only by root. Copy it to your home directory and restrict permissions:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
chmod 600 ~/.kube/config

Never share the raw kubeconfig file; it contains the cluster CA and a client certificate with cluster-admin privileges.

Network policies

Flannel itself does not implement Kubernetes NetworkPolicy, but K3s bundles an embedded network policy controller (based on kube-router) that enforces standard NetworkPolicy resources out of the box. If you need richer policy features, replace Flannel with Calico or Cilium by installing K3s with the default CNI and the embedded policy controller disabled:

curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none --disable-network-policy

Then install Calico:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml

With Calico installed, apply a default-deny policy to namespace production:

# default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
Apply it:

kubectl apply -f default-deny.yaml

RBAC

K3s ships with RBAC enabled by default. Create service accounts with the minimum required permissions rather than granting cluster-admin to application pods. Audit RBAC rules periodically:

kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name == "cluster-admin") | .subjects'
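As a sketch of least-privilege access, the manifest below creates a namespaced service account that can only read pods; the names are illustrative:

# app-reader-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-pod-reader
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-reader
    namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io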

Secrets encryption at rest

Enable encryption of Kubernetes Secrets stored in the SQLite database:

curl -sfL https://get.k3s.io | sh -s - --secrets-encryption

This creates an encryption configuration; keys can later be inspected and rotated with the k3s secrets-encrypt subcommand.
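For example, on the server node:

sudo k3s secrets-encrypt status
# On recent K3s releases, key rotation is a single command (check your version's docs first):
sudo k3s secrets-encrypt rotate-keys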

Upgrading K3s with the system-upgrade-controller

Manual upgrades (curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z sh -) work but require running the command on every node individually. The system-upgrade-controller automates this with a CRD-driven approach.

Install the controller

kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml

Create upgrade plans

A Plan resource describes which nodes to upgrade and to which version. Create one plan for server nodes and another for agent nodes, and make the agent plan depend on the server plan so control-plane nodes are upgraded first.

# k3s-upgrade-plans.yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.33.0+k3s1
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-agent
  namespace: system-upgrade
spec:
  concurrency: 2
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  prepare:
    args:
      - prepare
      - k3s-server
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.33.0+k3s1
Apply both plans:

kubectl apply -f k3s-upgrade-plans.yaml

The controller cordons each node, drains its workloads, upgrades the K3s binary, and uncordons the node before moving to the next. Watch the progress:

kubectl get pods -n system-upgrade
kubectl get plans -n system-upgrade

To upgrade to a new version, update the version field in both Plans and re-apply the manifest. No SSH access to individual nodes is required.

FAQ

Q: Can K3s run on a 512 MB Raspberry Pi Zero 2 W? Yes. The Zero 2 W has 512 MB of RAM and a quad-core arm64 CPU. K3s is usable, but you will have very little headroom for workloads. Keep deployments minimal and disable metrics-server and Traefik if you do not need them. A Pi 4 with 4 GB is a far more comfortable experience.

Q: Is K3s production-ready? Yes. SUSE, Rancher, and many enterprises run K3s in production. It passes the full CNCF Kubernetes conformance test suite. The embedded SQLite backend is production-ready for single-node or read-heavy workloads; for multi-node HA use embedded etcd or an external relational database.

Q: Can I use K3s alongside Docker? K3s uses containerd as its runtime, not Docker. Your existing Docker images are fully compatible (they use the OCI format), but docker CLI commands do not talk to the K3s runtime. Use kubectl for cluster operations and crictl (bundled with K3s) for low-level container inspection.

Q: How do I access the Traefik dashboard? Traefik's dashboard is disabled by default in K3s. Enable it by patching the Traefik HelmChartConfig:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    dashboard:
      enabled: true
    ports:
      traefik:
        expose:
          default: true

Then port-forward to access it locally:

kubectl port-forward -n kube-system deployment/traefik 9000:9000

Open http://localhost:9000/dashboard/ in your browser.

Q: How do I uninstall K3s? K3s ships an uninstall script for both server and agent nodes:

# Server node
/usr/local/bin/k3s-uninstall.sh

# Agent node
/usr/local/bin/k3s-agent-uninstall.sh

These scripts stop the service, remove the binary, and clean up iptables rules and CNI interfaces.

Q: Can K3s use GPU workloads? Yes. Install the NVIDIA device plugin as you would on standard Kubernetes. K3s has no restriction on device plugins, and containerd supports the NVIDIA Container Toolkit. Set --default-runtime in the containerd config to nvidia for nodes with a GPU.

Q: What is the difference between K3s and K0s? Both are lightweight Kubernetes distributions. K0s (from Mirantis) takes a similar single-binary approach but includes its own OpenRC and systemd integration and uses kube-router as the default CNI. K3s has wider community adoption, better ARM support documentation, and the Rancher/SUSE ecosystem backing it. Both are good choices; K3s has more third-party tutorials and a larger user base as of 2026.
