Max Heyer

Kubernetes is not only about scalability

I use Kubernetes whenever possible - even for this blog.


The problem with managing servers

I've been running Linux servers since I was twelve.

And it's always the same story:

Endless yak shaving before the code even runs.

Even on VMs, I had to tinker with "hardware-level" details - network interfaces, storage, mounts, dependencies. So much overhead just to get a simple service online.

Docker helped. But it also added complexity. We finally had containers, but no real consensus on monitoring, backups, or updates. Everyone solved these problems differently.

Why standards without enforcement fail

Years ago I ran dozens of Docker containers across Linux VMs on Proxmox hosts. I tried to standardize everything: Terraform, Ansible, Docker, GitOps.

It worked until deadlines hit. Then people skipped steps, changed patterns, pushed quick fixes. Standards without structure die quietly, one exception at a time.

And it wasn't just us. Talking to others in the industry - customers, competitors, partners - everyone struggled with the same thing. Most didn't even have GitOps. Everyone was building their own unique mess.

Why Kubernetes is different

Kubernetes was built for Google-scale systems. 65,000-node clusters. That scale forces discipline. You can't paper over inconsistency at that level.

So I started experimenting with K8s on some VMs and immediately saw the difference. Networking, security, deployments, load balancing, configuration management - all built in with sensible defaults, all defined as code. Kubernetes made infrastructure problems disappear behind APIs.

But here's what really matters: K8s enforces standards. You can't just configure networking however you want. You configure a Service resource. You don't invent your own deployment pattern. You write a Deployment manifest. The API itself forces you into standardized patterns.

With Ansible or Docker Compose, nothing stops you from doing it differently in every project. With K8s, the system only accepts resources that follow its specification. That's why the standards actually work.
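
To make this concrete: here's roughly what a small web service looks like - a minimal sketch with placeholder names and a placeholder image, but the shape is identical in every project:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 2
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: ghcr.io/example/blog:1.0.0  # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 8080

Whether it's my blog or a production workload, the API only accepts this shape - and that's exactly why it stays consistent.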

Helm and Operators changed everything

Helm charts are blueprints for entire environments. You define services, dependencies, and configuration in a standardized format. You can publish them, reuse them, and deploy identical stacks anywhere in minutes.

Yes, they can get complex. But it's still much simpler to read a complex Helm chart than to reverse-engineer a landscape of half-documented Linux servers.
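
As a sketch of what that standardized format buys you: a chart's Chart.yaml declares the whole stack's dependencies in one place. The chart name and versions here are made up for illustration:

apiVersion: v2
name: my-app  # hypothetical chart
version: 0.1.0
dependencies:
  - name: postgresql
    version: 16.x.x
    repository: https://charts.bitnami.com/bitnami
  - name: redis
    version: 20.x.x
    repository: https://charts.bitnami.com/bitnami

One helm install resolves those dependencies and brings the whole stack up the same way on any cluster.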

Operators

K8s Operators take this even further. They make infrastructure self-managing - a software pattern for implementing software-defined infrastructure.

Take CloudNativePG: it runs PostgreSQL from single instances to full HA clusters with autoscaling, failover, and backups - all declaratively.

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi

Backups? Just another YAML config:

apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example
spec:
  schedule: "0 0 0 * * *"  # At midnight every day
  backupOwnerReference: self
  cluster:
    name: cluster-example

That's it. Networking, HA, storage, recovery - done.
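
The wiring for your app is generated too: CNPG creates Services for the cluster (cluster-example-rw pointing at the primary) plus a credentials Secret. A sketch of the relevant part of an app's container spec, assuming CNPG's default naming - double-check against your version:

containers:
  - name: app
    image: ghcr.io/example/app:1.0.0  # placeholder
    env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: cluster-example-app  # Secret generated by CloudNativePG
            key: uri                   # full connection string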

Running everything on K8s

Today I run as much as possible on K8s - from simple homelab stuff like Home Assistant to highly available storage clusters. At enum, almost every service runs on K8s clusters. Each cluster shares the same defaults for networking, monitoring, storage, and backups. It's predictable. Boring in the best possible way.

No SSH-ing into random servers. No debugging why NGINX is configured differently "just here". Every deployment follows the same process. Every config lives in Git.
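
One way to wire that up is a GitOps controller like Argo CD pulling everything from a single repo. A sketch - the repo URL and paths are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infrastructure.git  # placeholder repo
    path: apps/blog
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: blog
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

A change lands in Git, the controller reconciles the cluster, and nobody touches a server by hand.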

Yes, K8s doesn't enforce everything, and people can still ignore the standards if they really want to. But in practice it already enforces a lot, and the community follows the rest because the benefits are obvious.

Use Managed Kubernetes

You can run Kubernetes clusters yourself. But most teams shouldn't.

Operating the control plane is its own craft - scaling, updates, disaster recovery, networking quirks. Most teams don't need to care about any of that. They need to ship products, not clusters.

We operate a large-scale platform. You probably don't, and you shouldn't have to care about that layer. I've seen how easily people lose control of their systems once they start mixing automation with improvisation. Use a managed platform. It's not less "real". It's just less painful.

Yes, K8s is not for everything

Kubernetes isn't the holy grail for every problem. But it's holy for me.

Some workloads don't belong there and that's fine. The cluster overhead isn't worth it for everything. I still run game servers and some monitoring tools on plain VMs - even a GitLab instance that breaks now and then. It's fine. Some entropy is healthy.

But Kubernetes changed how I think about infrastructure. It turned chaos into code. It made systems boring - and boring is what reliability feels like.

If you're thinking about trying Kubernetes, do it. You won't regret it.

Tags: #kubernetes #cloud #container #homelab #vm #devops #infrastructure