[Diagram: Kubernetes rolling update — pods are gradually replaced with new versions while maintaining zero downtime]

πŸ”„ Kubernetes Rolling Updates & Zero Downtime Deployments

This is one of the most important concepts in Kubernetes β€” deploying updates without breaking your app.

  • πŸš€ Deploy new versions safely
  • 🟒 Keep your app running during updates
  • ↩️ Instantly roll back if something fails
Big idea 🧠
Kubernetes replaces pods gradually, not all at once β€” this is how zero downtime works.

βš™οΈ 1. Create the Deployment

This deployment runs 4 replicas of nginx and uses a rolling update strategy.

πŸ“„ File: 01-rolling-updates.yaml

apiVersion: apps/v1
kind: Deployment

metadata:
  name: web-app
  labels:
    app: web-app

spec:
  replicas: 4  # Always keep 4 pods running

  strategy:
    type: RollingUpdate

    rollingUpdate:
      maxSurge: 1        # Allow 1 extra pod during update
      maxUnavailable: 1  # Allow 1 pod to go down

  selector:
    matchLabels:
      app: web-app

  template:
    metadata:
      labels:
        app: web-app

    spec:
      containers:
      - name: nginx

        # Initial version of the app
        image: nginx:1.14

        ports:
        - containerPort: 80

        # Only send traffic when pod is ready
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5

This setup ensures:

  • 🟒 Your app always has running instances
  • πŸ”„ Updates happen gradually
  • 🚦 Traffic only hits ready pods

πŸš€ Apply the Deployment

kubectl apply -f 01-rolling-updates.yaml
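Before moving on, it's worth confirming the rollout finished and all four replicas are ready. A quick check (using the deployment name and label from the YAML above):

```shell
# Block until the rollout completes (all 4 replicas ready)
kubectl rollout status deployment/web-app

# List the pods the deployment created
kubectl get pods -l app=web-app
```

`rollout status` exits once every replica passes its readiness probe, which makes it handy in scripts and CI pipelines too.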

🌐 2. Expose the App (Service)

To properly test rolling updates, we need a stable way to access the app. This is where a Service comes in.

πŸ“„ File: 02-service.yaml

apiVersion: v1
kind: Service

metadata:
  name: web-app-service

spec:
  type: NodePort

  selector:
    app: web-app  # Targets our deployment pods

  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007  # External port

This creates a stable endpoint that always routes traffic to healthy pods.

πŸš€ Apply the Service

kubectl apply -f 02-service.yaml
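To verify the Service is actually selecting the deployment's pods, you can inspect its endpoints — one pod IP should appear per ready pod:

```shell
# Each ready pod matching the selector shows up as an endpoint
kubectl get endpoints web-app-service
```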

🌍 Access the App

minikube service web-app-service

Now open the app in your browser β€” keep it open for the next step πŸ‘‡
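If you prefer the terminal to a browser, a simple polling loop makes any downtime immediately visible (this assumes the minikube NodePort setup above — every line should print 200, even mid-update):

```shell
# Poll the service once per second; anything other than 200 would indicate downtime
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" "http://$(minikube ip):30007"
  sleep 1
done
```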

πŸ”„ 3. Perform a Rolling Update (Live Test)

Now we upgrade the app while it’s running.

kubectl set image deployment/web-app nginx=nginx:1.25

(The `--record` flag shown in older tutorials is deprecated in current kubectl versions, so it's omitted here.)

πŸ‘€ Watch Pods

kubectl get pods -l app=web-app -w

While this runs:

  • πŸ” Refresh your browser
  • πŸ‘€ Notice: no downtime
What’s happening πŸ’‘
Kubernetes creates new pods first, waits for them to be ready, then removes old ones.
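If you want to freeze a rollout partway through and inspect old and new pods side by side, Kubernetes lets you pause and resume it:

```shell
# Pause the rollout mid-flight to look around
kubectl rollout pause deployment/web-app
kubectl get pods -l app=web-app

# Resume when you're satisfied
kubectl rollout resume deployment/web-app
```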

πŸ“œ 4. View Deployment History

kubectl rollout history deployment web-app

Kubernetes tracks every deployment revision β€” this is what makes rollbacks possible.
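Since the `--record` flag is deprecated, a common way to get a meaningful CHANGE-CAUSE column in the history is to set the annotation yourself after each change (the message text here is just an example):

```shell
# Attach a human-readable reason to the current revision
kubectl annotate deployment/web-app kubernetes.io/change-cause="upgrade nginx to 1.25"

# The annotation now appears in the history
kubectl rollout history deployment web-app
```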

↩️ 5. Rollback (When Things Go Wrong)

Let’s simulate a bad deployment by pointing at an image tag that doesn’t exist:

kubectl set image deployment/web-app nginx=nginx:broken-version

The new pods will sit in ImagePullBackOff, but the rolling update strategy keeps enough old pods running to serve traffic.

Now fix it instantly:

kubectl rollout undo deployment web-app

Your app recovers without downtime πŸ”₯
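By default, `rollout undo` goes back one revision. You can also jump to a specific revision — the revision numbers come from `kubectl rollout history`:

```shell
# Inspect what a particular revision contained
kubectl rollout history deployment web-app --revision=1

# Roll back to that exact revision
kubectl rollout undo deployment web-app --to-revision=1
```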

🧠 Key Concepts

| Concept | What it means |
| --- | --- |
| Rolling update | Gradual pod replacement |
| maxSurge | Extra pods allowed above the replica count during an update |
| maxUnavailable | Pods allowed to be unavailable during an update |
| Rollback | Revert to a previous revision |

🧹 Cleanup

kubectl delete -f 01-rolling-updates.yaml
kubectl delete -f 02-service.yaml

🧠 Final Thoughts

This is how real production systems deploy safely every day.

  • πŸš€ Zero downtime deployments
  • πŸ” Safe rollbacks
  • 🟒 High availability
If you understand this, you're thinking like a DevOps engineer 🧠
