[DevOps Series] Part 4: K8s in a nutshell

Lilhuy

📚 Series Table of Contents

  1. 📖 Chapter 0: Introduction and Stories
  2. 📚 Chapter 1: Some concepts and terminologies
  3. 🚀 Chapter 2: A noob guy deploys his web app
  4. 🐳 Chapter 3: Docker and the world of containerization
  5. ☸️ Chapter 4: K8s in a nutshell (You are here) 🎯
  6. 🔧 Chapter 5: K8s in details 🛠️
  7. 🏠 Chapter 6: Before going to the ground 🏡
  8. 🐧 Chapter 7: Ubuntu server and the world of Linux 🖥️
  9. Chapter 8: MicroK8s the simple and powerful K8s ⚙️
  10. ☁️ Chapter 9: Harvester HCI the native cloud 🌐
  11. 🏭 Chapter 10: More about Harvester HCI 🏢
  12. 🖥️ Chapter 11: Promox VE the best VM manager 💾
  13. 🌐 Chapter 12: Turn a server into a router with Pfsense 🔌
  14. 🛠️ Chapter 13: Some tools and services you can install for your devops pipeline 🔧
  15. 🌍 Chapter 14: Hello Internet with Cloudflare Zero Trust 🔒
  16. 🎉 Chapter 15: Maybe it's the end of the series 🏁

Whether you’re a DevOps engineer or not, you’ve probably heard about Kubernetes or K8s. In this blog, we’ll learn what it is and how it works. This is just a brief overview - in the next chapter, we’ll dive deeper into K8s.

From Docker to Kubernetes

Remember our Docker journey? We learned to containerize applications with docker run, docker-compose, and Dockerfiles. But what happens when you need to run hundreds or thousands of containers across multiple servers?

Docker Limitations:

# This works great for development
docker run -p 3000:3000 my-app
docker-compose up -d

But in production, you need:

  * Containers spread across multiple machines
  * Self-healing - automatic restarts when containers crash
  * Load balancing across replicas
  * Zero-downtime rolling updates and rollbacks
  * Scaling up and down with demand

Kubernetes to the Rescue: K8s is like having a smart manager for your container fleet. It handles all the complex orchestration so you can focus on your applications.

Think of it this way:

| Docker | Kubernetes |
| --- | --- |
| Single container | Multiple containers |
| One machine | Multiple machines |
| Manual management | Automated orchestration |
| Basic networking | Advanced networking |
| Simple scaling | Intelligent scaling |

Note: Actually, Docker has Docker Swarm to manage multiple containers across multiple machines. But it’s not as powerful as K8s, so very few people use it.

K8s Core Concepts (Overview)

YAML meme

🏗️ Cluster Architecture

A cluster has two kinds of machines: the control plane (running the API server, scheduler, controller manager, and the etcd datastore) makes the decisions, while worker nodes actually run your containers via the kubelet, kube-proxy, and a container runtime.

📦 Control the cluster

You control the cluster declaratively: you describe the desired state to the API server (usually with kubectl), and controllers continuously work to make the actual state match it.
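
This "desired state vs. actual state" idea is the heart of K8s. A toy sketch of a controller's reconciliation loop (the names here are illustrative, not the real K8s API):

```python
# Toy reconciliation loop: how a K8s controller drives actual state toward
# desired state. Pod names are illustrative, not the real K8s API.

def reconcile(desired_replicas, actual_pods):
    """Return the pod list after one reconciliation pass."""
    pods = list(actual_pods)
    while len(pods) < desired_replicas:   # too few pods -> create more
        pods.append(f"pod-{len(pods)}")
    while len(pods) > desired_replicas:   # too many pods -> delete extras
        pods.pop()
    return pods

print(reconcile(3, ["pod-0"]))                    # ['pod-0', 'pod-1', 'pod-2']
print(reconcile(2, ["pod-0", "pod-1", "pod-2"]))  # ['pod-0', 'pod-1']
```

The real controllers do the same thing against the cluster API instead of a Python list - and they never stop looping, which is why a deleted pod comes back on its own.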

Install MicroK8s

The best way to learn K8s is to try installing it on your local machine. MicroK8s is the simplest way to get Kubernetes running locally. It’s perfect for learning and development.

# Install MicroK8s
sudo snap install microk8s --classic

At this point, your machine is a K8s cluster with one master node. Note that MicroK8s ships its own kubectl as `microk8s kubectl`. In the real world, you'd install a standalone kubectl and point it at your MicroK8s cluster (MicroK8s can export its kubeconfig with `microk8s config`) - and at any other clusters you manage, too; for example, I'm currently managing about 10 clusters.

Your First K8s Deployment (Commands)

Let’s deploy a simple nginx app using kubectl commands:

# Get nodes in the cluster
sudo microk8s kubectl get nodes

# Create a deployment
sudo microk8s kubectl create deployment nginx-app --image=nginx:alpine

# Scale the deployment
sudo microk8s kubectl scale deployment nginx-app --replicas=3

# Expose the deployment
sudo microk8s kubectl expose deployment nginx-app --port=80 --type=LoadBalancer

# Check what we created
sudo microk8s kubectl get pods
sudo microk8s kubectl get services
sudo microk8s kubectl get deployments

# Get detailed info
sudo microk8s kubectl describe pod <pod-name>
sudo microk8s kubectl describe service nginx-app

# View logs
sudo microk8s kubectl logs <pod-name>
sudo microk8s kubectl logs -f <pod-name>  # Follow logs

# Execute commands in pod
sudo microk8s kubectl exec -it <pod-name> -- /bin/sh
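
One thing worth understanding here: the Service created by `kubectl expose` doesn't know pod names - it finds its backends by label selector (and `kubectl create deployment nginx-app` labels its pods `app=nginx-app` by default). A toy sketch of that matching logic, with hypothetical pod data:

```python
# Toy label-selector matching: how a Service picks its backend Pods.
# The pod list below is hypothetical example data.

def matches(selector, labels):
    """A selector matches when every selector key/value appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def select(selector, pods):
    """Return the names of pods whose labels satisfy the selector."""
    return [p["name"] for p in pods if matches(selector, p["labels"])]

pods = [
    {"name": "nginx-app-1", "labels": {"app": "nginx-app"}},
    {"name": "nginx-app-2", "labels": {"app": "nginx-app"}},
    {"name": "other-1",     "labels": {"app": "other"}},
]

print(select({"app": "nginx-app"}, pods))  # ['nginx-app-1', 'nginx-app-2']
```

This is why scaling "just works": new pods carry the same labels, so the Service picks them up automatically.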

K8s Features in Action

Rolling Updates

# Update the image
sudo microk8s kubectl set image deployment nginx-app nginx=nginx:latest

# Check rollout status
sudo microk8s kubectl rollout status deployment nginx-app

# Rollback if needed
sudo microk8s kubectl rollout undo deployment nginx-app

# Check rollout history
sudo microk8s kubectl rollout history deployment nginx-app
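
During a rolling update, K8s replaces pods gradually rather than all at once, governed by `maxSurge` (extra pods allowed above the replica count) and `maxUnavailable`. A simplified simulation, assuming maxSurge=1 and maxUnavailable=0 (the real controller's bookkeeping is more involved):

```python
# Simplified rolling-update simulation: one new pod comes up before one old
# pod is retired, so capacity never drops below the replica count.

def rolling_update(replicas, max_surge=1):
    """Return the (old, new) pod counts after each rollout step."""
    old, new = replicas, 0
    steps = []
    while old > 0:
        new += max_surge              # surge: start new pod(s) first
        old -= max_surge              # then retire the same number of old pods
        steps.append((max(old, 0), min(new, replicas)))
    return steps

print(rolling_update(3))  # [(2, 1), (1, 2), (0, 3)]
```

Because new pods must pass their checks before old ones are retired, a bad image simply stalls the rollout - which is what `rollout undo` recovers from.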

Scaling

# Scale up
sudo microk8s kubectl scale deployment nginx-app --replicas=5  # Scale to 5 pods

# Scale down
sudo microk8s kubectl scale deployment nginx-app --replicas=2

# Auto-scaling (if metrics-server is enabled)
sudo microk8s kubectl autoscale deployment nginx-app --min=2 --max=10 --cpu-percent=50
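
The autoscaler's core rule is simple arithmetic: desired replicas = ceil(current replicas × current metric / target metric), clamped between `--min` and `--max`. A sketch of that formula with the values from the command above:

```python
import math

# The HPA scaling rule: scale replicas in proportion to how far the observed
# metric (here, CPU %) is from the target, then clamp to [min, max].

def hpa_desired(current, current_cpu, target_cpu, min_r=2, max_r=10):
    """Return the desired replica count for one autoscaler evaluation."""
    desired = math.ceil(current * current_cpu / target_cpu)
    return max(min_r, min(max_r, desired))

print(hpa_desired(3, current_cpu=90, target_cpu=50))   # 6: pods are overloaded
print(hpa_desired(8, current_cpu=100, target_cpu=50))  # 10: capped by --max
```

So 3 pods averaging 90% CPU against a 50% target scale to 6; an idle deployment shrinks back toward `--min`.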

Deploy with YAML Files

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
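
A quick note on the resource quantities in that manifest: `250m` CPU means 250 millicores (0.25 cores), and `64Mi` is 64 × 2²⁰ bytes. A minimal parser covering just the suffixes used here:

```python
# Decode the K8s resource quantities used in the manifest above.
# Handles only the suffixes in this example, not the full quantity grammar.

def parse_cpu(q):
    """Return CPU in cores; the 'm' suffix means millicores."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory(q):
    """Return bytes; binary suffixes Ki/Mi/Gi are powers of 1024."""
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    for suffix, mult in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * mult
    return int(q)

print(parse_cpu("250m"))     # 0.25
print(parse_memory("64Mi"))  # 67108864
```

`requests` is what the scheduler reserves for the pod; `limits` is the hard ceiling it can't exceed.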
# Apply the configuration
sudo microk8s kubectl apply -f nginx-deployment.yaml

# Check status
sudo microk8s kubectl get pods
sudo microk8s kubectl get services

# Update the deployment
sudo microk8s kubectl apply -f nginx-deployment.yaml

# Delete resources
sudo microk8s kubectl delete -f nginx-deployment.yaml

Essential kubectl Commands

Some kubectl commands you should know:

# Cluster info
sudo microk8s kubectl cluster-info
sudo microk8s kubectl get nodes

# Pods
sudo microk8s kubectl get pods
sudo microk8s kubectl get pods -o wide
sudo microk8s kubectl describe pod <pod-name>
sudo microk8s kubectl logs <pod-name>

# Deployments
sudo microk8s kubectl get deployments
sudo microk8s kubectl describe deployment <deployment-name>
sudo microk8s kubectl rollout status deployment <deployment-name>

# Services
sudo microk8s kubectl get services
sudo microk8s kubectl describe service <service-name>

# Namespaces
sudo microk8s kubectl get namespaces
sudo microk8s kubectl create namespace my-namespace

# Delete resources
sudo microk8s kubectl delete pod <pod-name>
sudo microk8s kubectl delete deployment <deployment-name>
sudo microk8s kubectl delete service <service-name>

What’s Next?

This is just the beginning! In the next chapter, we'll dive deeper into the K8s objects and architecture we only skimmed here.

Quick Summary

What we covered:

  * Why you need container orchestration once you outgrow plain Docker
  * Installing MicroK8s as a local single-node cluster
  * Your first deployment with kubectl commands
  * Rolling updates, rollbacks, and scaling
  * Declarative deployments with YAML files
  * Essential kubectl commands

Conclusion

Kubernetes might seem complex at first, but it's just a container orchestrator that makes your life easier. Start with the basics, practice locally with MicroK8s, and gradually explore advanced features. The best way to learn K8s is by doing - deploy applications, break things, and fix them! Don't know how to fix something? Remember, these days you have AI assistants to help you.

