Kubernetes (K8s) is the industry standard for container orchestration. Where Docker Compose targets a single server, K8s manages entire clusters, from a handful of nodes to thousands: automatic scaling, self-healing, rolling updates, service discovery — all built in. This article covers the core concepts with practical YAML examples.
Core Concepts
- Node: a physical/virtual server in the cluster
- Pod: the smallest deployable unit, containing one or more containers
- Deployment: manages pods, defines replica count, enables rolling updates
- Service: gives pods a stable IP and DNS name
- Ingress: routes HTTP(S) traffic from outside into the cluster
- ConfigMap / Secret: config and sensitive data
- Namespace: logical isolation inside the cluster
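To make the Pod concept concrete before moving on to Deployments, here is a minimal standalone Pod manifest (a sketch; in practice you rarely create bare Pods, because a Deployment creates and manages them for you):

```yaml
# pod.yaml — smallest deployable unit, one container
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
  labels: { app: webapp }
spec:
  containers:
    - name: app
      image: ghcr.io/user/webapp:v1.2.0
      ports:
        - containerPort: 3000
```

Apply it with `kubectl apply -f pod.yaml` — but note that if this Pod dies, nothing recreates it; that is exactly what a Deployment adds.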
Your First Deployment
# app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels: { app: webapp }
spec:
  replicas: 3
  selector:
    matchLabels: { app: webapp }
  template:
    metadata:
      labels: { app: webapp }
    spec:
      containers:
        - name: app
          image: ghcr.io/user/webapp:v1.2.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 512Mi }
          livenessProbe:
            httpGet: { path: /health, port: 3000 }
            initialDelaySeconds: 15
            periodSeconds: 10
          readinessProbe:
            httpGet: { path: /ready, port: 3000 }
            periodSeconds: 5
kubectl apply -f app-deployment.yaml
kubectl get pods
kubectl logs -f deployment/webapp
kubectl describe deployment webapp
Service
Pod IPs are ephemeral (they change when pods are recreated). A Service selects pods via a label selector and provides a stable access point.
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: ClusterIP  # cluster-internal; others: NodePort, LoadBalancer
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 3000
Now other pods in the same namespace can reach it at http://webapp/, and pods anywhere in the cluster can use the full DNS name webapp.<namespace>.svc.cluster.local.
Ingress
A ClusterIP Service is only reachable inside the cluster; an Ingress routes external HTTP(S) traffic to Services. It requires an ingress controller (here ingress-nginx) running in the cluster, and the annotation below assumes cert-manager is installed to issue the TLS certificate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts: [example.com]
      secretName: webapp-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp
                port: { number: 80 }
ConfigMap and Secret
ConfigMaps hold plain configuration, Secrets hold sensitive values. Note that Secrets are only base64-encoded by default, not encrypted; for production credentials, enable encryption at rest or use an external secret manager.
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  NODE_ENV: production
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: webapp-secret
type: Opaque
stringData:
  DATABASE_URL: postgres://user:pass@db/app
  JWT_SECRET: supersecret
# Usage inside a Deployment (in the pod template spec)
spec:
  containers:
    - name: app
      envFrom:
        - configMapRef: { name: webapp-config }
        - secretRef: { name: webapp-secret }
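envFrom injects each key as an environment variable. A ConfigMap can also be mounted as files, which suits apps that read config from disk (a sketch; the mount path is illustrative):

```yaml
# Alternative: mount the ConfigMap as files
spec:
  containers:
    - name: app
      volumeMounts:
        - name: config
          mountPath: /etc/webapp
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: webapp-config
```

Each key becomes a file under the mount path, e.g. /etc/webapp/LOG_LEVEL.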
Horizontal Pod Autoscaler
The HPA adjusts the replica count based on observed metrics. It requires metrics-server (or another metrics provider) in the cluster, and the target pods must declare resource requests — utilization is computed relative to them.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
Rolling Updates and Rollback
# Ship a new version
kubectl set image deployment/webapp app=ghcr.io/user/webapp:v1.3.0
# Rollout status
kubectl rollout status deployment/webapp
# History
kubectl rollout history deployment/webapp
# Roll back
kubectl rollout undo deployment/webapp
kubectl rollout undo deployment/webapp --to-revision=3
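Rollout behavior can also be tuned in the Deployment spec. A common zero-downtime pattern (a sketch using the standard strategy fields) is to allow one surge pod and no unavailable pods:

```yaml
# In app-deployment.yaml, under the Deployment's spec:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # start at most 1 extra pod during the update
      maxUnavailable: 0  # never drop below the desired replica count
```

With these settings, each old pod is terminated only after its replacement passes the readiness probe.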
Common kubectl Commands
# Context and namespace
kubectl config current-context
kubectl config use-context prod-cluster
kubectl get all -n my-namespace
# Shell into a pod
kubectl exec -it pod/webapp-abc123 -- /bin/sh
# Port forward (local debugging)
kubectl port-forward service/webapp 8080:80
# Logs (all pods)
kubectl logs -l app=webapp --all-containers --tail=100 -f
# Resource usage (requires metrics-server)
kubectl top pods
kubectl top nodes
# Export as YAML
kubectl get deployment webapp -o yaml > webapp.yaml
Helm for Package Management
As raw YAML gets more complex, Helm steps in — templates + values.yaml let you reuse the same manifests across different environments. Almost every popular service has a ready-made Helm chart (ingress-nginx, cert-manager, prometheus, grafana, redis).
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-redis bitnami/redis --set auth.password=secret
helm list
helm upgrade my-redis bitnami/redis --set replica.replicaCount=3
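The same mechanism works for your own charts: environment-specific settings live in a values file. Below is a hypothetical values file for a webapp chart like the one in this article; the key names are illustrative, not a published chart's schema.

```yaml
# values-prod.yaml (hypothetical keys for an in-house webapp chart)
replicaCount: 3
image:
  repository: ghcr.io/user/webapp
  tag: v1.3.0
ingress:
  enabled: true
  host: example.com
```

Applied with `helm upgrade --install webapp ./charts/webapp -f values-prod.yaml`, so dev, staging and prod share templates and differ only in values.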
Local Development
- minikube — single-node local K8s
- kind — Kubernetes inside Docker, great for CI too
- k3d — k3s (lightweight K8s) inside Docker
- Docker Desktop — one-click K8s
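kind, for instance, can create a multi-node cluster from a small config file (the apiVersion and node roles below are the standard kind schema):

```yaml
# kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Run `kind create cluster --name dev --config kind-cluster.yaml` to get a three-node cluster inside Docker, useful for testing scheduling and rolling updates locally.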
Managed K8s Services
Running your own cluster is complex. Prefer managed options: GKE (Google), EKS (AWS), AKS (Azure), DigitalOcean, Hetzner (not managed but cheap), Civo. The cloud runs the control plane so you only worry about worker nodes.
Conclusion
The Kubernetes learning curve is steep but the architectural payoff is enormous — managing hundreds of containers with a single command, plus autoscaling and self-healing. Still, not every project needs it; for small teams Docker Compose or a managed PaaS is often the better fit. The decision to adopt K8s is a function of traffic, team and complexity.
Reach out to KEYDAL for K8s cluster design, Helm chart authoring and migration support. Contact us