AI Beyond the Hype: How Agents and Artificial Intelligence Are Exposing a Broken, Deprecated Infrastructure


From Hype to Breakdown
Artificial Intelligence is now part of our daily lives. Like the sunrise, agents are beginning to appear more and more in every aspect of human interaction—helping us save time, work more efficiently, or patch the gaps we didn’t even know we had. As this happens, engineers rush to update their skills, and companies across the market scramble to adapt. Because from now on, if you don't work with AI, you are deprecated.
Just as human skills must be redefined—learning new capabilities and updating old ones—our technological infrastructure must be redefined as well. Most of the software, SDKs, and frameworks we use today were built on an outdated paradigm: tasks are linear, models are static, and services live peacefully inside neatly isolated containers.
But LLMs and agents have broken that entire schema.
They don’t behave like services, can’t be orchestrated like functions, and don’t fit into the classic IT Lego set we spent decades building. And instead of creating new foundations for a new era, we're just patching the old one—layering wrappers, plugins, and compatibility hacks over an architecture that was never meant to support intelligence.
In this post, I’ll explore how agents should be seen as the fundamental unit of this new era—where protocols like A2A and MCP are not just useful tools, but the first signs of a future where infrastructure itself must evolve.
The Old Paradigm - What Brought Us Here?
Microservices, Containers, and the Illusion of Control
When we started building this technological world, everyone had one goal in mind: automate tasks and simplify work. That led us to design algorithms that, on every execution, would return the same result — predictable, controlled, repeatable.
But with the introduction of AI, that paradigm has collapsed.
We are no longer just simplifying work. We’re trying to transfer context from human to machine, and communicate in ways we’ve never seen before. Agents don’t just complete tasks faster — they introduce autonomy, uncertainty, and contextual reasoning. And once you embrace that, predictability is no longer in your hands. You provide the context and intent, then wait for the best possible outcome.
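To make that contrast concrete, here is a minimal sketch in Python. Everything in it is illustrative: the invoice functions and the run_agent stand-in are made up for this post, not a real API.

```python
import random

# A classic algorithm: the same input produces the same output, every time.
def compute_invoice_total(line_items: list[float], tax_rate: float) -> float:
    return round(sum(line_items) * (1 + tax_rate), 2)

# A stand-in for any LLM or agent runtime: you hand over intent and context,
# and the answer is a judgment that may differ from one run to the next.
def run_agent(intent: str, context: dict) -> str:
    # A real system would call a model here; random.choice only mimics
    # the fact that the outcome is not deterministic.
    verdict = random.choice(["no mismatches found", "quantity mismatch on line 3"])
    return f"{intent} -> {verdict}"

print(compute_invoice_total([100.0, 50.0], 0.21))   # always 181.5
print(run_agent("Reconcile invoice INV-042 with PO-017",
                {"invoice": "...", "purchase_order": "..."}))   # varies by run
```

The first function is what our infrastructure was designed to host. The second is what we are now asking it to host.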
So the infrastructure we built to manage static algorithms is now deprecated.
Just as monoliths gave way to containers, containers themselves are now starting to break under the demands of interconnected, reasoning-driven agents. Trying to deploy agents and models inside isolated containers quickly becomes a nightmare — especially if you're aiming for speed and low cost.
Internal traffic grows rapidly. Inter-agent calls multiply. A single model download can weigh 80GB and take hours to prepare. Deploying multiple agents, each requiring guards, context loaders, and login policies, is a painful, brittle process. We're trying to fit “life” into lifeless systems.
The truth is: we're not ready for this, and the technology is only now being rewritten.
Tools like A2A and MCP are not patches — they are re-foundations. They don’t adapt the old world to fit agents. They rebuild the foundations to match what agents truly are.
We’re Patching, Not Rebuilding – And That’s a Problem
Today we have frameworks like FastAPI that can be adapted to work with protocols like MCP — but let's be clear: this is a patch, not a native design. They implement decorators or wrappers to make things work, but nothing about these frameworks was originally built to support agent-native protocols or dynamic model-serving architectures.
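To picture what that patching looks like, here is a rough sketch. FastAPI is real, but the expose_as_mcp_tool decorator and the MCP_TOOLS registry are made-up stand-ins for the wrapper layer an adapter would add; they are not part of FastAPI or any MCP SDK.

```python
from typing import Callable
from fastapi import FastAPI

app = FastAPI()

# Hypothetical registry standing in for the wrapper layer a real adapter
# would bolt on. It shows the shape of the patch: a second interface
# grafted onto routes that were designed for plain HTTP.
MCP_TOOLS: dict[str, Callable] = {}

def expose_as_mcp_tool(name: str):
    def decorator(fn: Callable) -> Callable:
        MCP_TOOLS[name] = fn   # register the same handler as a "tool"
        return fn
    return decorator

@app.get("/orders/{order_id}")          # the original, HTTP-native design
@expose_as_mcp_tool("lookup_order")     # the retrofit: a wrapper, not a redesign
def get_order(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}
```

The endpoint still thinks in requests and responses; the "tool" view is taped on afterwards.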
There is no mainstream framework or runtime that natively supports agents or LLMs as core components, rather than just treating them as external tools. No infrastructure has yet reimagined core operations like routing, context management, or memory persistence to align with how agents actually behave.
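For contrast, here is one way to imagine agent-native primitives, where context, memory, and routing are first-class runtime concepts rather than extras layered over an HTTP stack. This is purely hypothetical, not an existing framework.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical primitives: what it might look like if the runtime itself
# understood context, memory, and agent routing.
@dataclass
class AgentContext:
    goal: str
    memory: list[str] = field(default_factory=list)   # persists across turns

    def remember(self, fact: str) -> None:
        self.memory.append(fact)

@dataclass
class AgentRuntime:
    agents: dict[str, Callable[[AgentContext], str]] = field(default_factory=dict)

    def register(self, name: str, handler: Callable[[AgentContext], str]) -> None:
        self.agents[name] = handler

    def route(self, name: str, ctx: AgentContext) -> str:
        # Routing is by capability and context, not by URL path.
        return self.agents[name](ctx)

runtime = AgentRuntime()
runtime.register("researcher", lambda ctx: f"Looked into: {ctx.goal}")

ctx = AgentContext(goal="compare A2A and MCP")
ctx.remember("the user prefers short summaries")
print(runtime.route("researcher", ctx))
```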
Why? Because in a world obsessed with innovation velocity, time is everything. And building new infrastructure from scratch takes time — as the saying goes, Rome wasn’t built in a day.
So we settle for patches. We extend, wrap, decorate, and adapt legacy frameworks to mimic what the new paradigm needs.
This creates a growing layer of technical debt. Over the next few years, we’ll likely witness a slow migration toward truly native architectures built for this new agent-first world.
And with that shift, a new kind of engineer will emerge — one who doesn't just write code for APIs, but designs systems for intelligence. The future belongs to AI-native engineers.
Up next: Truth, Trust, and the Agent
In a world flooded with fake news, where LLMs and agents constantly query and remix external data, we face a new danger: the emergence of AI-powered echo chambers. If we don’t build reliable, agent-based sources of truth, we may automate misinformation at scale.
In my next post, I’ll explore why the future of epistemology in AI depends on agents designed not just to respond, but to validate — and how we can build them.