Infrastructure Testing with Confidence: Jest + Pulumi + Central Config

In the previous post, I showed how to define an entire infrastructure stack using a single, strongly-typed TypeScript configuration file. It’s clean, scalable, and easy to reason about — no more hunting through scattered YAMLs or mismatched stack files.
But here’s the real win: once your system is described by a single source of truth, that same config can power a complete infrastructure validation suite. Not just deployment, but verification. Every single time. Automatically.
Why bother?
Manually checking if hetzner_server.control_plane has the right server type, image, location, network, and firewall gets old fast, and breaks even faster. Two servers are manageable. Five are annoying. Ten? You’re guessing and hoping.
Even worse, Pulumi's state can drift. Resources may partially apply, fail silently, or get manually edited in the UI. Having a fast, reproducible sanity check is invaluable.
The method
The core idea is simple:
- Load the config.ts file (the central configuration).
- Query the actual state of the live infrastructure (via hcloud, the file system, etc.).
- Compare the two.
- If anything diverges: fail fast.
This isn’t snapshot testing. This is state validation. And since the tests derive everything from the config, they’re future-proof: add a new server to the config, and the test suite picks it up automatically. No test rewrites required.
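Stripped of Jest, the "compare and fail fast" step is nothing fancier than this. A minimal sketch; the expected/actual shapes are placeholders, and in the real suite Jest’s expect() plays this role:

// A minimal sketch of "compare the two, fail fast". In the real suite,
// Jest's expect() does this job; the shapes here are placeholders.
function assertMatches(
  name: string,
  expected: Record<string, unknown>,
  actual: Record<string, unknown>
): void {
  for (const [key, want] of Object.entries(expected)) {
    if (actual[key] !== want) {
      // The first divergence aborts immediately, with a precise message.
      throw new Error(`${name}.${key}: expected "${want}", got "${actual[key]}"`);
    }
  }
}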
The technical catch
Pulumi’s SDK is designed to run inside the Pulumi runtime. You don’t get rich access to stack data from external scripts. And Jest runs outside that bubble.
So — here comes the hack — we interrogate Pulumi (and Hetzner) using shell commands. Crude, yes. But effective.
Example: get the current stack name using pulumi stack ls:
import { execSync } from "child_process";

export function pulumiGetStack(): string {
  // `pulumi stack ls --json` lists every stack; pick the one marked as current.
  const raw = execSync(`pulumi stack ls --json`, { encoding: "utf-8" });
  return JSON.parse(raw).find((s: any) => s.current).name;
}
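The same shell-out trick covers the Hetzner side. A minimal sketch, assuming the hcloud CLI is installed and authenticated; the helper name and its module are my inventions, not part of the original setup:

import { execSync } from "child_process";

// helpers.ts (hypothetical): read a server's live state through the
// hcloud CLI's JSON output instead of the Pulumi SDK.
export function hcloudDescribeServer(name: string): any {
  const raw = execSync(`hcloud server describe ${name} -o json`, {
    encoding: "utf-8",
  });
  return JSON.parse(raw);
}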
From there, tests live in tests/*.test.ts, where they read the central config and assert against the real infrastructure.
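A server test might then look roughly like this. The config.servers shape and the serverType field are assumptions about the central config, and hcloudDescribeServer is the hypothetical helper sketched above:

// tests/server.test.ts (sketch): every assertion is derived from the config.
import { config } from "../config";
import { hcloudDescribeServer } from "../helpers"; // hypothetical helper from above

describe("Hetzner server configuration", () => {
  // One loop over the config: a server added to config.ts is
  // validated on the next run, with no test rewrites.
  for (const [name, expected] of Object.entries(config.servers)) {
    test(`Server ${name} is running`, () => {
      expect(hcloudDescribeServer(name).status).toBe("running");
    });

    test(`Server ${name} has correct server type`, () => {
      expect(hcloudDescribeServer(name).server_type.name).toBe(expected.serverType);
    });
  }
});

The point is the loop: test names and assertions are generated from the config, which is why a new server shows up in the next run for free.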
A test run in action
npm test
> test
> cd src && jest --verbose
PASS tests/networkSubnet.test.ts (6.39 s)
Hetzner network subnet creation
✓ subnet exists
✓ subnet type matches input
✓ subnet IP range matches input
✓ subnet network zone matches input
PASS tests/network.test.ts (6.574 s)
Hetzner network creation
✓ network exists
✓ network IP range matches input
PASS tests/sshKey.test.ts (6.577 s)
SSH Key Generation (homedir only)
✓ should create private key for control-plane in ~/.ssh
✓ should create public key for control-plane in ~/.ssh
✓ private key for control-plane in ~/.ssh should not be empty
✓ public key for control-plane in ~/.ssh should not be empty
✓ private key for control-plane in ~/.ssh should start with '-----BEGIN'
✓ public key for control-plane in ~/.ssh should start with 'ssh-'
✓ should create private key for worker-node-1 in ~/.ssh
✓ should create public key for worker-node-1 in ~/.ssh
✓ private key for worker-node-1 in ~/.ssh should not be empty
✓ public key for worker-node-1 in ~/.ssh should not be empty
✓ private key for worker-node-1 in ~/.ssh should start with '-----BEGIN'
✓ public key for worker-node-1 in ~/.ssh should start with 'ssh-'
PASS tests/firewall.test.ts (7.486 s)
Hetzner Cloud Firewall
✓ Firewall exists for server control-plane
✓ Firewall rules match for server control-plane
✓ Firewall exists for server worker-node-1
✓ Firewall rules match for server worker-node-1
PASS tests/firewallAttachment.test.ts (8.287 s)
Hetzner server firewall attachment
✓ Server control-plane has the correct firewall
✓ Server worker-node-1 has the correct firewall
PASS tests/server.test.ts (8.74 s)
Hetzner server configuration
✓ Server control-plane is running
✓ Server control-plane has correct server type
✓ Server control-plane has correct image
✓ Server control-plane has correct location
✓ Server control-plane is in the correct network
✓ Server worker-node-1 is running
✓ Server worker-node-1 has correct server type
✓ Server worker-node-1 has correct image
✓ Server worker-node-1 has correct location
✓ Server worker-node-1 is in the correct network
Test Suites: 6 passed, 6 total
Tests: 34 passed, 34 total
Snapshots: 0 total
Time: 9.08 s, estimated 10 s
Ran all test suites.
In less than 10 seconds, you know whether:
- The correct network and subnet exist
- SSH keys are present and valid (checked via filesystem, format, and contents; see the sketch below)
- All servers are running, correctly typed, properly located
- Firewalls are created and attached with the right rules
All of it defined from — and validated against — the config.
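The SSH key assertions, for instance, are plain filesystem checks, no cloud API involved. A minimal sketch, assuming keys are named after the servers under ~/.ssh (the naming is my assumption, not confirmed):

// tests/sshKey.test.ts (sketch): plain filesystem checks.
// Key naming (~/.ssh/<server> and <server>.pub) is an assumption.
import { existsSync, readFileSync } from "fs";
import { homedir } from "os";
import { join } from "path";
import { config } from "../config";

describe("SSH Key Generation (homedir only)", () => {
  for (const name of Object.keys(config.servers)) {
    const privPath = join(homedir(), ".ssh", name);
    const pubPath = `${privPath}.pub`;

    test(`should create private key for ${name} in ~/.ssh`, () => {
      expect(existsSync(privPath)).toBe(true);
    });

    test(`private key for ${name} in ~/.ssh should start with '-----BEGIN'`, () => {
      expect(readFileSync(privPath, "utf-8").startsWith("-----BEGIN")).toBe(true);
    });

    test(`public key for ${name} in ~/.ssh should start with 'ssh-'`, () => {
      expect(readFileSync(pubPath, "utf-8").startsWith("ssh-")).toBe(true);
    });
  }
});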
Why this matters
- Add a new server? It gets validated.
- Something drifts in Pulumi state? Detected.
- Accidentally delete something from the UI? Caught.
- Firewall rule typo? Fails test.
And most importantly: you get a sub-minute feedback loop. Infrastructure either matches config, or it doesn’t.
Final thoughts
Most IaC projects break down because there’s no feedback mechanism. You define, deploy, and hope. This approach gives you a definitive answer every time: yes, it matches; or no, it’s broken.
It’s not pure — some parts are duct-taped together with shell calls — but it works. And once your config is canonical, everything else (deployment, validation, debugging) becomes a matter of diffing reality against expectation.