🚀 From Blueprints to Execution: Breathing Life into the Test Automation Platform


Hello, my dear subscribers! I've been quite busy lately and haven't had time to report on the progress made, so this post is my attempt to bridge the gap.
In the first article, I described the motivation behind building a modern, API-first test automation platform. Then, in the second, we dived into the architectural heart: test nodes, the reusable, configurable units that define every test flow.
Now it’s time to move from theory to practice.
✅ Real Test Executions – Orchestrated and Observable
One of the core goals was clear from the beginning: tests must run server-side. That allows us to support complex execution logic, enforce access control, and prepare for future integrations (like scheduled runs or webhooks triggering tests on production events).
The server now takes full responsibility for executing test runs - initiated via API - and relays real-time updates to the frontend via WebSockets. That means as a user watches a test execute, they can see which node is running, where it failed, and how data flows through the test.
This live view enables:
Real-time debugging
Immediate feedback loops
An intuitive visual experience even for non-technical users
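To make that live view concrete, here's a minimal sketch of what the client side could look like. The endpoint, event names, and payload shape below are illustrative assumptions, not the platform's actual API:

```ts
// A sketch of the frontend subscription, assuming a hypothetical
// /ws/runs/:id endpoint and event shape - names are illustrative.
type NodeStatus = "queued" | "running" | "passed" | "failed";

interface RunUpdate {
  runId: string;
  nodeId: string;
  status: NodeStatus;
  output?: unknown; // payload produced by the node, if any
  error?: string;   // populated when status === "failed"
}

function watchRun(runId: string, onUpdate: (u: RunUpdate) => void): WebSocket {
  const ws = new WebSocket(`wss://platform.example.com/ws/runs/${runId}`);
  ws.onmessage = (event) => {
    const update: RunUpdate = JSON.parse(event.data);
    onUpdate(update); // e.g. highlight the running node on the canvas
  };
  return ws;
}

// Usage: live-log each node's status as the server executes the run.
watchRun("run-123", (u) => console.log(`${u.nodeId}: ${u.status}`));
```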
🧠 Smart Results: Stored, Shared, and Reused
Every test run produces a stream of valuable data: success/failure states, response payloads, execution times, and intermediate values. I make sure nothing is lost.
MongoDB stores complete test run results for inspection, traceability, and audit.
Redis caches runtime results of nodes - allowing subsequent nodes to reuse outputs from previous ones, similar to variable chaining in traditional automation tools such as n8n or make.com.
This design allows for dynamic, data-driven tests: for example, you can extract a token in one node and use it in the headers or body of a later HTTP call - without writing a single line of code. And yes - drag-n-drop is supported :)
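Here's a minimal sketch of how that chaining could work on the server, assuming run-scoped Redis keys and a `{{node.field}}`-style reference syntax. The key format and helper names are illustrative, not the platform's actual internals:

```ts
import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });

async function saveNodeOutput(runId: string, nodeId: string, output: unknown) {
  // Cache the node's result so later nodes in the same run can read it.
  await redis.set(`run:${runId}:node:${nodeId}`, JSON.stringify(output), {
    EX: 3600, // runtime values only need to live as long as the run
  });
}

async function resolveReference(runId: string, ref: string): Promise<unknown> {
  // Resolve a "nodeId.field"-style reference against cached outputs.
  const [nodeId, field] = ref.split(".");
  const raw = await redis.get(`run:${runId}:node:${nodeId}`);
  if (raw === null) throw new Error(`No cached output for node ${nodeId}`);
  const output = JSON.parse(raw) as Record<string, unknown>;
  return field ? output[field] : output;
}

// Example: a login node stored { token: "..." }; a later HTTP node
// interpolates it into its Authorization header.
async function demo() {
  await redis.connect();
  await saveNodeOutput("run-1", "login", { token: "abc123" });
  const token = await resolveReference("run-1", "login.token");
  const headers = { Authorization: `Bearer ${token}` };
  console.log(headers);
}
demo().catch(console.error);
```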
And it's not only node results that can be referenced this way: environment variables and randomizer functions are available as well.
And if a test fails? It’s all there - logs, inputs, outputs - ready for inspection by any user in the system.
🧬 Developer-Friendly: Containerized and CI/CD-Ready
One of the most empowering steps was embracing containerization.
Developers can now run the entire platform locally - including server, database, and frontend - using Docker Compose.
The same setup plugs seamlessly into CI/CD pipelines, allowing tests to run headless across environments and stages.
Since the platform supports Playwright, HTTP calls, and custom scripting nodes, it's equally well suited to API validation, UI testing, or hybrid test suites.
This opens the door to endless integrations: run tests before every deployment, validate staging environments, or even trigger tests on external events using webhooks or message queues.
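As a sketch of what such a pipeline gate could look like, here's a small script that starts a run over the API and fails the build if the run fails. The endpoints and status values are assumptions for illustration, not the platform's documented API:

```ts
// Hypothetical REST endpoints: POST /api/runs, GET /api/runs/:id.
const BASE = process.env.PLATFORM_URL ?? "http://localhost:3000";

async function runSuiteAndWait(suiteId: string): Promise<void> {
  // Kick off a server-side run, exactly as the UI would.
  const start = await fetch(`${BASE}/api/runs`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ suiteId }),
  });
  const { runId } = await start.json();

  // Poll until the run settles; a webhook callback would work just as well.
  while (true) {
    const res = await fetch(`${BASE}/api/runs/${runId}`);
    const { status } = await res.json();
    if (status === "passed") return;
    if (status === "failed") throw new Error(`Run ${runId} failed`);
    await new Promise((r) => setTimeout(r, 2000));
  }
}

// Fail the pipeline stage if the smoke suite fails.
runSuiteAndWait("smoke-suite").catch((err) => {
  console.error(err);
  process.exit(1);
});
```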
🔁 Reusability: Copy, Paste, Extend
To speed up test creation, we’ve introduced one deceptively simple but powerful feature: copy & paste for test nodes.
Users can now:
Clone complex test logic
Share snippets across flows
Extend tests without breaking existing ones
Combined with versioning and branching (coming soon), this feature lays the groundwork for test reusability at scale, essential for enterprise-grade testing.
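Under the hood, cloning boils down to deep-copying a node subtree while assigning fresh IDs. A simplified sketch, with an illustrative TestNode shape (the real node model carries more metadata):

```ts
import { randomUUID } from "node:crypto";

interface TestNode {
  id: string;
  type: string;                    // e.g. "http", "playwright", "script"
  config: Record<string, unknown>; // node-specific settings
  children: TestNode[];            // nested nodes in the flow
}

function cloneNode(node: TestNode): TestNode {
  // Deep-copy the subtree with fresh IDs, so the pasted copy never
  // collides with (or mutates) the original flow.
  return {
    id: randomUUID(),
    type: node.type,
    config: structuredClone(node.config),
    children: node.children.map(cloneNode),
  };
}
```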
👀 What’s Next?
With the fundamentals in place, the focus shifts to intelligence and resilience. Here’s a sneak peek at what’s coming:
Conditional logic and expressions to dynamically control flow
Visual test comparisons for regression detection
Alerting and notifications via email, Slack, Telegram, and more
🔚 Final Thoughts
What started as a bold idea - an automation platform that merges test design, execution, and observability - is now a working system. Developers can use it locally, teams can debug collaboratively, and companies can integrate it into their deployment pipelines.
Step by step, it’s turning into the platform I wished existed.