K3s in Place: Closing the Loop on a TypeScript-Native Infra Stack

After several iterations, late-night refactors, and more pulumi destroy runs than I care to count, the final piece has landed: automated K3s installation. With this, the entire infrastructure stack for kreativarc.com is now fully operational — provisioned, configured, and clustered from scratch using Pulumi and Hetzner Cloud.
No bash glue, no half-manual steps, no mismatched languages. Just a single, clean TypeScript codebase that turns a cloud token into a Kubernetes cluster — complete with private networking, firewalls, SSH key handling, and, now, a self-validating K3s deployment.
K3s, TypeScript, and a Budget VPS Walk Into a Datacenter...
The stack provisions a classic three-node layout: one control plane, two workers — all Hetzner CX22s, tucked into a private subnet with proper firewalling. K3s is installed on the control plane first (Traefik disabled, obviously), then the node token is fetched securely and used to join the workers. The kubeconfig is exported and made available to other repos via Pulumi stack outputs. It's the bridge between infra and app layer — and it's finally real.
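The control-plane-then-workers sequence can be sketched roughly like this. Everything here except the K3s install script, the token path, and the K3S_URL/K3S_TOKEN variables is an illustrative assumption (the `installCluster` and `runOverSsh` names are not the project's actual API):

```typescript
// Hypothetical sketch of the install sequence described above. The runner is
// injected so the orchestration logic stays testable without a real SSH session.
type RunOverSsh = (host: string, command: string) => Promise<string>;

export async function installCluster(
  run: RunOverSsh,
  controlPlaneIp: string,
  workerIps: string[],
): Promise<void> {
  // 1. Install the K3s server on the control plane, with Traefik disabled.
  await run(
    controlPlaneIp,
    "curl -sfL https://get.k3s.io | sh -s - server --disable traefik",
  );

  // 2. Fetch the node token the workers need in order to join.
  const token = (
    await run(controlPlaneIp, "cat /var/lib/rancher/k3s/server/node-token")
  ).trim();

  // 3. Join each worker against the control plane over the private network.
  for (const ip of workerIps) {
    await run(
      ip,
      `curl -sfL https://get.k3s.io | K3S_URL=https://${controlPlaneIp}:6443 K3S_TOKEN=${token} sh -`,
    );
  }
}
```

Injecting the runner is also what lets the join order (server first, token second, workers last) be asserted in unit tests with a fake SSH function.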
Under the hood, the SshCli helper now does most of the heavy lifting. It generates per-instance ED25519 keypairs, establishes secure SSH connections (with host key checks, because we’re adults), and runs the install scripts remotely. We treat SSH access like a first-class citizen, not a shell hack.
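Per-instance key generation is something Node's built-in crypto module handles natively. This is a minimal sketch, not the actual SshCli code; `makeInstanceKeypair` is a hypothetical name, and the real helper presumably also converts the public key into OpenSSH authorized_keys format before pushing it to Hetzner:

```typescript
import { generateKeyPairSync } from "crypto";

// Sketch: generate a fresh ED25519 keypair per instance, PEM-encoded.
// Node has supported "ed25519" in generateKeyPairSync since v12.
export function makeInstanceKeypair(): { publicKey: string; privateKey: string } {
  const { publicKey, privateKey } = generateKeyPairSync("ed25519", {
    publicKeyEncoding: { type: "spki", format: "pem" },
    privateKeyEncoding: { type: "pkcs8", format: "pem" },
  });
  return { publicKey, privateKey };
}
```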
Testing? Yes. Jest-based tests check not just the resource state, but actual cluster health and node readiness. The CI doesn’t consider the infra “done” unless kubectl get nodes shows green across the board. Worker node flaking? You'll find out fast.
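A readiness check along those lines could look like the following; `allNodesReady` is a hypothetical helper for illustration, not the project's actual Jest suite:

```typescript
// Sketch: parse the plain-text output of `kubectl get nodes` and report
// whether every node is in the Ready state.
export function allNodesReady(kubectlOutput: string): boolean {
  const lines = kubectlOutput.trim().split("\n").slice(1); // drop the header row
  if (lines.length === 0) return false; // no nodes reported at all
  return lines.every((line) => {
    // Columns: NAME STATUS ROLES AGE VERSION — STATUS is the second field.
    const [, status] = line.trim().split(/\s+/);
    return status === "Ready";
  });
}
```

In a Jest test this would wrap the real call, e.g. `expect(allNodesReady(stdout)).toBe(true)` after shelling out to kubectl against the freshly exported kubeconfig.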
A Refactor Worth the Effort
Bringing in K3s triggered a full project refactor. Hetzner resources were moved into scoped subfolders, config handling was centralized, and the Pulumi wrapper was split into a pulumiCli object with better test ergonomics. Output handling got saner, especially around secrets like node tokens and the exported kubeconfig.
The stack is now far more modular — a side effect of solving real-world deployment bugs that only surface when you actually try to join a cluster over a freshly provisioned network with restricted firewalls and tight SSH policies.
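A wrapper with the test ergonomics described above might separate argument building from execution, so the interesting logic is unit-testable without invoking the Pulumi binary. This is a guess at the shape, not the real pulumiCli; only the `pulumi stack output` command and its `--show-secrets` flag are actual Pulumi CLI surface:

```typescript
import { execFile } from "child_process";
import { promisify } from "util";

const execFileAsync = promisify(execFile);

// Hypothetical sketch of a pulumiCli-style wrapper: pure argument building
// (testable) separated from the process call (thin, boring).
export const pulumiCli = {
  outputArgs(name: string, showSecrets: boolean): string[] {
    const args = ["stack", "output", name];
    if (showSecrets) args.push("--show-secrets"); // needed for node tokens / kubeconfig
    return args;
  },
  async output(name: string, showSecrets = false): Promise<string> {
    const { stdout } = await execFileAsync("pulumi", this.outputArgs(name, showSecrets));
    return stdout.trim();
  },
};
```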
One Language to Rule Them All (For Now)
The entire project is TypeScript-based, which keeps context switches minimal and CI/CD straightforward. Everything — from infra definitions to helper functions and tests — is in the same language, same tooling, same mental model. It’s fast, typed, and works.
Eventually, specialized tools (e.g., Go for k8s CRD wrangling or Python for AI-heavy workflows) may make their way in. But today, TypeScript is the lingua franca of kreativarc_infra. And that clarity has helped me move faster, especially while solo.
It All Runs for Under €10 a Month
The stack runs on budget-friendly Hetzner instances with no managed services. Firewalls are locked down, tunnels handle exposure (when needed), and there’s zero persistent infra outside the VPS box. CI/CD is handled separately via GitHub Actions, pulling kubeconfig from Pulumi outputs when needed.
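A CI step pulling the kubeconfig from stack outputs might look like this workflow fragment; the stack path and secret name are placeholders, assuming the Pulumi Cloud backend rather than the project's actual configuration:

```yaml
# Hypothetical GitHub Actions step — stack name and secret are assumptions.
- name: Fetch kubeconfig from Pulumi outputs
  run: |
    pulumi stack select my-org/kreativarc_infra/prod
    pulumi stack output kubeconfig --show-secrets > kubeconfig.yaml
    echo "KUBECONFIG=$PWD/kubeconfig.yaml" >> "$GITHUB_ENV"
  env:
    PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
```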
You don’t need to spend hundreds per month for a real infrastructure backbone — just automate ruthlessly, test everything, and don’t cut corners on security.
Next stop: modular namespaces, observability stack rework, and possibly a Golang sidecar or two. But for now, it clusters, it connects, and it tests itself. That’s a solid milestone.
Written by
Arnold Lovas
Senior full-stack dev with an AI twist. I build weirdly useful things on my own infrastructure — often before coffee.