
[DevOps Series] Part 5: K8s in details

Lilhuy

📚 Series Table of Contents

  1. 📖 Chapter 0: Introduction and Stories
  2. 📚 Chapter 1: Some concepts and terminologies
  3. 🚀 Chapter 2: A noob guy deploy his web app
  4. 🐳 Chapter 3: Docker and the world of containerization
  5. ☸️ Chapter 4: K8s in a nutshell
  6. 🔧 Chapter 5: K8s in details (You are here) 🎯
  7. 🏠 Chapter 6: Before go to the ground 🏡
  8. 🐧 Chapter 7: Ubuntu server and the world of Linux 🖥️
  9. Chapter 8: MicroK8s the simple and powerful K8s ⚙️
  10. ☁️ Chapter 9: Harvester HCI the native cloud 🌐
  11. 🏭 Chapter 10: More about Harvester HCI 🏢
  12. 🖥️ Chapter 11: Promox VE the best VM manager 💾
  13. 🌐 Chapter 12: Turn a server into a router with Pfsense 🔌
  14. 🛠️ Chapter 13: Some tools, services that you can installed for your devops pipeline 🔧
  15. 🌍 Chapter 14: Hello Internet with Cloudflare Zero Trust 🔒
  16. 🎉 Chapter 15: Maybe it the end of the series 🏁

In the last chapter of this series, we discussed the basics of K8s and what problems it solves. We also deployed an nginx service in our local MicroK8s cluster with 3 running pods. In this post, let's go deeper so you can really use K8s to deploy a backend API built with Node.js.

This post takes a learn-by-doing approach rather than pure theory. I assume you are running Ubuntu (or any Linux distro) and have MicroK8s installed (https://canonical.com/microk8s), as we set up in the previous chapter.

1. Manage the K8s cluster with kubectl

MicroK8s bundles its own client (microk8s kubectl), but installing the standalone kubectl and pointing it at the MicroK8s kubeconfig is more convenient:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
## test and verify
kubectl version --client
mkdir -p ~/.kube
microk8s kubectl config view --raw > ~/.kube/config
chmod 600 ~/.kube/config

The chmod 600 is important — kubectl will warn about insecure file permissions if you skip it. The config file contains your K8s API endpoints and credentials (base64 encoded), so it should only be readable by your user.
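The encoding is nothing cryptographic, which is exactly why the file permissions matter: anyone who can read the file can decode the credentials. A quick illustrative decode (the string here is a made-up example, but the certificate fields in your kubeconfig decode the same way):

```shell
# base64 is an encoding, not encryption: decoding needs no key or password
printf 'dGVzdA==' | base64 -d
```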

kubectl get ns

If you see no error and the default namespace is listed, you’re ready to go. To check how many contexts exist and which is current, use: kubectl config get-contexts

2. The image registry

MicroK8s ships a built-in registry addon; enable it with microk8s enable registry and it will listen on localhost:32000. It serves plain HTTP, so Docker has to be told to trust it as an insecure registry. Add this to /etc/docker/daemon.json:

{
  "insecure-registries": ["localhost:32000"]
}

Then run sudo systemctl restart docker to apply. Now you can push to the built-in registry: docker push localhost:32000/myapp:v1.0.0

Next, the app we'll deploy: a minimal Express server. index.js:

const express = require("express");
const app = express();
const port = 3000;

app.get("/", (req, res) => {
  res.send("Hello World!");
});

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});
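One small, hedged improvement worth considering before containerizing: read the port from the environment instead of hardcoding it (the PORT variable is my own convention here, not something K8s sets for you), so the same image can be reconfigured per Deployment without a rebuild:

```javascript
// sketch: fall back to 3000 when PORT is not set in the container spec
const port = Number(process.env.PORT) || 3000;
console.log(`listening on ${port}`);
```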
.dockerignore:

node_modules/
Dockerfile

Dockerfile:

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
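The Dockerfile copies package*.json before the rest of the source on purpose: the npm install layer stays cached between builds as long as dependencies don't change. For completeness, a minimal package.json this build assumes (name and version are placeholders):

```json
{
  "name": "hello-k8s",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0"
  }
}
```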
Build the image and push it to the registry:

docker build -t localhost:32000/hello-k8s:v1.0.0 .
docker push localhost:32000/hello-k8s:v1.0.0

Tip: Always tag your images with an explicit version (v1.0.0, v1.0.1, etc.) instead of relying on the default latest tag. If you push a new build under the same tag, the Deployment spec doesn't change, so kubectl apply triggers no rollout; nodes may also keep serving a locally cached copy, since the default imagePullPolicy for a fixed tag is IfNotPresent. Bumping the version tag changes the spec, which forces a rollout and a fresh pull.
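If you really do need a moving tag during development, you can at least make the intent explicit in the pod spec. A sketch of the relevant fragment, not a recommendation over version tags:

```yaml
containers:
  - name: hello
    image: localhost:32000/hello-k8s:latest
    imagePullPolicy: Always   # pull on every pod start, even if a copy is cached
```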

3. Networking

Start simple: a Namespace and a Deployment running one replica of our image. deployment.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: hello
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: localhost:32000/hello-k8s:v1.0.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: hello
spec:
  selector:
    app: hello
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000

This is a ClusterIP Service, reachable only from inside the cluster. A NodePort Service, by contrast, also opens a fixed port on every node:

apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: hello
spec:
  selector:
    app: hello
  type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: 3000
      nodePort: 30300

At the end of the day: all pods sit behind a ClusterIP. To reach them from outside the cluster you can use NodePort or LoadBalancer. Let’s update our deployment.yaml to use NodePort so we can test:

---
apiVersion: v1
kind: Namespace
metadata:
  name: hello
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: localhost:32000/hello-k8s:v1.0.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: hello
spec:
  selector:
    app: hello
  type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: 3000
      nodePort: 30300
---

Apply it with kubectl apply -f deployment.yaml, then visit http://localhost:30300 and you will see our backend running.
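Optionally, you can teach K8s to check the app's health itself. A hedged sketch, assuming our Express app answers / with 200 (it does); the paths and timings are illustrative:

```yaml
# add under the container in the Deployment spec
livenessProbe:            # restart the container if this starts failing
  httpGet:
    path: /
    port: 3000
  periodSeconds: 10
readinessProbe:           # keep the pod out of the Service until this passes
  httpGet:
    path: /
    port: 3000
  initialDelaySeconds: 2
```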

4. Ingress for exposing K8s services

What is Ingress and how does it work?

Ingress in Kubernetes has two separate parts you must understand:

  1. Ingress Controller: This is a real workload (a pod running nginx, Traefik, HAProxy, etc.) deployed inside your cluster, usually as a DaemonSet so it runs on every node. It watches for Ingress resources in the cluster and reconfigures its internal nginx routing rules whenever you add or change an Ingress. It listens on ports 80 and 443 of the host node, so traffic entering those standard ports gets picked up.

  2. Ingress resource: This is a K8s config object (a YAML file) where you declare your routing rules — which hostname maps to which service, which path maps to which backend, and optionally TLS certificates. The Ingress Controller reads these rules and applies them to its internal nginx config automatically.

Think of the Ingress Controller as the nginx server itself, and each Ingress resource as a server {} block in nginx config — except K8s manages the config file for you.
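To make the analogy concrete, here is roughly the kind of server block the controller maintains for you. This is illustrative, not the controller's actual generated config, and the hostname is a placeholder:

```nginx
server {
    listen 80;
    server_name hello.example.com;   # from the Ingress rule's host
    location / {                     # from the rule's path
        # forward to the backing Service via cluster DNS (<service>.<namespace>.svc)
        proxy_pass http://hello.hello.svc.cluster.local:3000;
    }
}
```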

What Ingress gives you: a single entry point for the whole cluster, hostname- and path-based routing to any number of backend Services, and a natural place to terminate TLS.

To test our Ingress with a real public hostname, we can use a free Cloudflare quick tunnel, which forwards Internet traffic to port 80 on our machine. Install cloudflared:

curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb -o cloudflared.deb && sudo dpkg -i cloudflared.deb

Then:

cloudflared tunnel --url localhost:80

No account needed — it’ll give you a random *.trycloudflare.com URL instantly. Keep this terminal running.

Now update your deployment like this:

---
apiVersion: v1
kind: Namespace
metadata:
  name: hello
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: localhost:32000/hello-k8s:v1.0.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: hello
spec:
  selector:
    app: hello
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  namespace: hello
spec:
  ingressClassName: public
  rules:
    - host: "your-cloudflare-domain.trycloudflare.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 3000

Apply with kubectl apply -f deployment.yaml. Two things to check: the MicroK8s Ingress addon must be enabled (microk8s enable ingress; it registers the public ingress class used above), and the host must be replaced with the *.trycloudflare.com URL your tunnel printed. Open that URL and the request flows Cloudflare → cloudflared → Ingress controller on port 80 → hello Service → pod.
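As mentioned earlier, an Ingress resource can also declare TLS. A hedged sketch of the extra fragment, assuming you own a real domain and have a certificate stored in a Secret (hello-tls is a name I'm making up here):

```yaml
# add under the Ingress spec, alongside rules
tls:
  - hosts:
      - your-domain.example.com
    secretName: hello-tls   # a kubernetes.io/tls Secret holding cert + key
```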

Conclusion

This post is pretty long; I hope you followed it well. At this point you can:

  1. build a Node.js image and push it to the MicroK8s built-in registry,
  2. deploy it with a Deployment and expose it through ClusterIP and NodePort Services,
  3. route external traffic to it with an Ingress, tested publicly through a Cloudflare tunnel.

I recommend reading more of the K8s documentation about Services and Ingress, or using AI to go deeper on specific topics.

In the next chapter we’ll cover “Before go to the ground” — some important foundations to set up before building a real on-premise cluster, so you don’t have to redo things later.

