Kubernetes for Beginners: First Deployment with kind (No Cloud, Free, 2026)

According to the CNCF Annual Survey, 82% of teams using containers are running Kubernetes in production. Learning Kubernetes no longer requires a cloud account or a credit card. kind — Kubernetes IN Docker — runs a full K8s cluster using Docker containers as nodes. It is free, runs on your laptop, and is lightweight enough to spin up in under a minute.

This tutorial takes you from zero to a running multi-node cluster with a deployed application, autoscaling, secrets, and config management — entirely on your local machine.

What Kubernetes Does

Containers solve the "it works on my machine" problem. Kubernetes solves what comes next: running many containers reliably across many machines.

Kubernetes is a container orchestration platform. You tell it the desired state of your system (three replicas of this API, always running, with 500 MB of RAM each) and Kubernetes continuously works to make reality match that description. This is called the desired state model or reconciliation loop.

Key behaviors you get automatically:

  • Self-healing: if a container crashes, Kubernetes restarts it. If a node goes down, pods are rescheduled on healthy nodes.
  • Rolling updates: deploy a new version without downtime by gradually replacing old pods.
  • Horizontal scaling: add more replicas when traffic spikes.
  • Service discovery: containers find each other by name, not by IP address.

Core Concepts You Need to Know

Pod — the smallest deployable unit in Kubernetes. A pod wraps one or more containers that share a network namespace and storage. In practice, most pods contain a single container. Pods are ephemeral: they can be killed and replaced at any time.
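
For illustration, a minimal Pod manifest looks like this (in practice you rarely write one directly; Deployments, covered next, generate and manage pods for you):

```yaml
# pod.yaml: a bare pod, with no controller to replace it if it dies
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.27
      ports:
        - containerPort: 80
```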

Deployment — manages a set of identical pods. You declare how many replicas you want and which container image to use. The Deployment controller watches the actual state and creates or removes pods to match. This is what you interact with for stateless applications.

Service — gives pods a stable network endpoint. Because pods are ephemeral (and their IP addresses change), a Service provides a fixed DNS name and IP that load-balances traffic across all matching pods. Inside the cluster, a Service named nginx in the default namespace is reachable as nginx.default.svc.cluster.local, or simply nginx from within the same namespace.

Namespace — a logical partition inside the cluster. Different teams or environments (staging, production) can share a cluster while being isolated in different namespaces.

ConfigMap — stores non-sensitive configuration data (environment variables, config files) as key-value pairs that pods can consume.

Secret — stores sensitive data (passwords, tokens, certificates) in base64-encoded form. Secrets are kept separate from application code and injected at runtime.

Why kind Instead of a Cloud Cluster

Cloud-managed Kubernetes (EKS, GKE, AKS) costs money and requires account setup. Minikube also runs a local single-node cluster, but traditionally inside a VM. kind uses Docker containers as nodes — no VM, no cloud, no cost.

Additional advantages:

  • kind is the tool the Kubernetes project itself uses to test releases.
  • Multi-node clusters work: you can run a control plane plus multiple workers.
  • Cluster creation takes 30–60 seconds.
  • Delete and recreate clusters freely — nothing persists.

Prerequisites

You need:

  • Docker running (Docker Desktop on macOS/Windows, Docker Engine on Linux)
  • At least 2 CPUs and 4 GB of free RAM
  • kubectl — the Kubernetes command-line tool

Installing kind and kubectl

Install kind:

# Linux (amd64)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# macOS with Homebrew
brew install kind

Install kubectl:

# Linux
curl -LO "https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# macOS with Homebrew
brew install kubectl

Verify both are installed:

kind version
kubectl version --client

Creating Your First Cluster

A single-node cluster (control plane only) is enough to start:

kind create cluster --name myapp

Output:

Creating cluster "myapp" ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-myapp"

kind automatically updates your ~/.kube/config and sets the context. You can now talk to the cluster:

kubectl get nodes
# NAME                  STATUS   ROLES           AGE
# myapp-control-plane   Ready    control-plane   40s

Multi-Node Cluster: Control Plane + 2 Workers

For a more realistic setup, define the cluster topology in a YAML file:

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

Create it:

kind create cluster --name myapp --config kind-config.yaml
kubectl get nodes
# NAME                  STATUS   ROLES           AGE
# myapp-control-plane   Ready    control-plane   60s
# myapp-worker          Ready    <none>          45s
# myapp-worker2         Ready    <none>          45s

Now you have a cluster with a dedicated control plane and two worker nodes where your workloads will run.

kubectl Basics

kubectl is the primary tool for interacting with Kubernetes. The pattern is:

kubectl <verb> <resource> [name] [flags]

Common commands:

# List resources
kubectl get pods
kubectl get deployments
kubectl get services
kubectl get all

# Get detailed info about a resource
kubectl describe pod my-pod-xyz

# Follow logs from a pod
kubectl logs -f my-pod-xyz

# Execute a command inside a running container
kubectl exec -it my-pod-xyz -- /bin/sh

# Apply a YAML manifest
kubectl apply -f deployment.yaml

# Delete a resource
kubectl delete -f deployment.yaml

Use -n <namespace> to target a specific namespace. The default namespace is called default.

Your First Deployment: nginx

Create a file called deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "100m"
              memory: "64Mi"
            limits:
              cpu: "250m"
              memory: "128Mi"

Apply it:

kubectl apply -f deployment.yaml
# deployment.apps/nginx created

kubectl get pods
# NAME                     READY   STATUS    RESTARTS   AGE
# nginx-7c79c4bf97-xkp2d   1/1     Running   0          12s

Scaling: More Replicas in One Command

kubectl scale deployment nginx --replicas=3

kubectl get pods
# NAME                     READY   STATUS    RESTARTS   AGE
# nginx-7c79c4bf97-xkp2d   1/1     Running   0          2m
# nginx-7c79c4bf97-b9s8f   1/1     Running   0          5s
# nginx-7c79c4bf97-qr4nt   1/1     Running   0          5s

Three pods, all running, load-balanced by the Service you are about to create. Kill one to see self-healing:

kubectl delete pod nginx-7c79c4bf97-xkp2d
kubectl get pods
# Kubernetes immediately starts a replacement to maintain 3 replicas

Services: Giving Pods a Stable Endpoint

A Deployment alone is not enough — you need a Service to reach your pods. There are three common Service types:

  • ClusterIP (default): accessible only inside the cluster. Used for internal communication between services.
  • NodePort: exposes the service on a fixed port (30000–32767 by default) of each node. In kind the nodes are Docker containers, so reaching a NodePort from your host additionally requires port forwarding or extraPortMappings.
  • LoadBalancer: provisions an external load balancer (only works in cloud environments; in kind it requires an additional tool like MetalLB).

Create a NodePort Service for nginx:

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

Apply it:

kubectl apply -f service.yaml

# Forward the port to your local machine to test it
kubectl port-forward service/nginx 8080:80

Open http://localhost:8080 and you will see the nginx welcome page.
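
Port-forwarding is the quickest way to test, but kind can also publish the NodePort on your host directly. This is a sketch of the required cluster config (the port numbers are just an example, matching the 30080 used above); it only takes effect if you recreate the cluster with it:

```yaml
# kind-config.yaml: publish the node's port 30080 on the host,
# so http://localhost:30080 reaches the NodePort Service without port-forward
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # the Service's nodePort
        hostPort: 30080        # the port on your machine
        protocol: TCP
```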

Loading a Local Docker Image into kind

When working with your own application image, you do not need a registry. kind can load images directly from your local Docker daemon:

# Build your image locally
docker build -t myapp:latest .

# Load it into the kind cluster
kind load docker-image myapp:latest --name myapp

Then reference myapp:latest in your Deployment YAML. Set imagePullPolicy: Never so Kubernetes uses the locally loaded image instead of trying to pull from a registry:

      containers:
        - name: myapp
          image: myapp:latest
          imagePullPolicy: Never

This workflow eliminates the need for a registry during development and testing.

ConfigMaps: Inject Configuration Without Rebuilding Images

ConfigMaps decouple configuration from container images. Create one from a file:

# config.yaml contains your app configuration
kubectl create configmap app-config --from-file=config.yaml

# Or from literal values
kubectl create configmap app-env \
  --from-literal=LOG_LEVEL=info \
  --from-literal=PORT=8080
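
The second command above is equivalent to this declarative manifest, which you can version-control alongside the rest of your YAML:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  LOG_LEVEL: info
  PORT: "8080"   # all values are strings, so numbers must be quoted
```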

Mount it in your Deployment:

      containers:
        - name: myapp
          image: myapp:latest
          envFrom:
            - configMapRef:
                name: app-env
          volumeMounts:
            - name: config-volume
              mountPath: /app/config
      volumes:
        - name: config-volume
          configMap:
            name: app-config

Change the ConfigMap and restart the pods (for example with kubectl rollout restart deployment myapp) — no image rebuild required.

Secrets: Store Credentials Separately

Secrets work like ConfigMaps but are intended for sensitive values. Kubernetes stores them base64-encoded (note: not encrypted by default, but separate from your application code and image):

kubectl create secret generic db-secret \
  --from-literal=password=supersecret \
  --from-literal=username=myapp
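
It is worth seeing for yourself that base64 is a reversible encoding, not encryption; anyone with read access to the Secret object can decode its values:

```shell
# Encode a value the way Kubernetes stores it in a Secret
echo -n 'supersecret' | base64
# c3VwZXJzZWNyZXQ=

# Decoding it back requires no key at all
echo -n 'c3VwZXJzZWNyZXQ=' | base64 -d
# supersecret
```

For real clusters, enable encryption at rest and restrict Secret access with RBAC.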

Reference it in a pod spec:

      containers:
        - name: myapp
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: password

The secret value is available to the container as an environment variable at runtime, but never appears in your Deployment YAML or image.

Resource Limits: Prevent One Pod From Starving Others

Always specify CPU and memory requests and limits in production. Requests tell the scheduler how much capacity the pod needs to be placed on a node. Limits cap what the pod can actually consume:

          resources:
            requests:
              cpu: "250m"       # 0.25 CPU cores
              memory: "256Mi"
            limits:
              cpu: "500m"       # 0.5 CPU cores
              memory: "512Mi"

m stands for millicores: 1000m = 1 CPU core. A pod that exceeds its memory limit is OOM-killed; one that hits its CPU limit is throttled rather than killed. Without limits, a misbehaving pod can consume all available resources on a node and starve neighboring pods.

Horizontal Pod Autoscaler: Auto-Scale on CPU

The Horizontal Pod Autoscaler (HPA) watches a metric (CPU utilization by default) and adjusts the replica count automatically.

First, install the Kubernetes Metrics Server in kind:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Patch metrics-server to work in kind (disables TLS verification for local clusters)
kubectl patch deployment metrics-server -n kube-system \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'

Create an HPA for the nginx deployment:

kubectl autoscale deployment nginx \
  --cpu-percent=50 \
  --min=2 \
  --max=10

This targets 50% average CPU utilization, measured against each pod's CPU request. When average utilization falls below 50%, the HPA scales down toward the minimum of 2 replicas; when it rises above, it scales up to at most 10.
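
The kubectl autoscale command is shorthand for creating an HPA object. The equivalent declarative manifest, using the autoscaling/v2 API, looks like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:          # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # percent of the pod's CPU request
```

Note that utilization is computed against resources.requests.cpu, which is why the Deployment must declare CPU requests for the HPA to work.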

Check its status:

kubectl get hpa
# NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS
# nginx   Deployment/nginx   12%/50%   2         10        2

Cleanup

Delete the cluster when you are done:

kind delete cluster --name myapp

This removes all nodes, all data, and all resources. Your Docker environment is clean again. Recreating the cluster later takes about 60 seconds.

What to Learn Next

Once you are comfortable with the above:

  • Ingress controllers: route HTTP traffic to multiple services by hostname or path (nginx-ingress, Traefik).
  • Helm: the package manager for Kubernetes — install complex applications from versioned charts with a single command.
  • Persistent Volumes: attach storage that survives pod restarts (required for databases).
  • RBAC: control which users and service accounts can perform which actions in the cluster.
  • Network Policies: restrict traffic between pods at the network level.

Summary

Concept      What it does
Pod          Smallest unit; wraps one or more containers
Deployment   Manages replicas; handles updates and self-healing
Service      Stable DNS + IP endpoint in front of pods
ConfigMap    Inject non-sensitive configuration into pods
Secret       Inject sensitive values at runtime
HPA          Auto-scale replica count based on CPU/memory
kind         Runs a full K8s cluster inside Docker — free and local

kind removes every barrier to getting started with Kubernetes. You do not need a cloud account, a credit card, or a powerful workstation. A laptop with Docker is enough to learn the same platform that runs production workloads for thousands of companies.

Leonardo Lazzaro

Software engineer and technical writer. 10+ years experience in DevOps, Python, and Linux systems.
