Beyond the Hype: How AI Agents Are Quietly Revolutionising Healthcare


The terms are everywhere: “AI Agents,” “Agentic AI,” “Autonomous Systems.” It's the latest deafening hype cycle in tech, where buzzwords threaten to obscure a fundamental shift. But behind the noise, a new paradigm is taking hold, and nowhere are its implications more profound than in healthcare.
So, what are we really talking about? Are these just glorified chatbots?
Not even close. The word “agent” is chosen for a very specific reason: it implies agency. As Gartner’s Chris Howard puts it, AI agents have the ability to make autonomous decisions. They don’t just follow a script; they sense their environment, understand context, formulate a plan, and act on it to achieve a goal. They learn from their interactions and get progressively better.
Think of it this way: a chatbot is like a call centre employee with a detailed script. An AI agent is like an experienced doctor who can listen to a patient, assess the situation, and decide on the best course of action in real-time.
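In code, that sense-plan-act-learn loop is surprisingly small. Here is a minimal sketch, purely illustrative: `sensors`, `tools`, and `llm_plan` are hypothetical stand-ins for environment inputs, callable actions, and a reasoning model, not any particular framework's API.

```python
def run_agent(goal, sensors, tools, llm_plan, max_steps=10):
    memory = []  # the agent is stateful: it remembers what it has tried
    for _ in range(max_steps):
        observation = sensors.read()                # sense the environment
        plan = llm_plan(goal, observation, memory)  # understand context, plan
        if plan.done:                               # goal achieved?
            return plan.result
        result = tools[plan.action](**plan.args)    # act on the plan
        memory.append((observation, plan.action, result))  # learn from the outcome
    return None  # step budget exhausted: hand the goal back to a human
```

The script-versus-agent distinction lives inside that loop: nothing is hard-coded about which action comes next. The plan is re-derived from fresh observations on every pass.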
This capability is unlocking a new frontier of medical innovation. Let’s explore how.
The Empathetic Digital Companion
One of the most powerful early examples of agentic AI in healthcare is its role in patient support. Consider the challenge of helping someone quit smoking. A simple app might send reminders or offer generic advice. An AI agent, however, can engage in a truly therapeutic conversation.
A project developed with the University of Toronto and the Centre for Addiction and Mental Health (CAMH) does exactly this. When a user says, “I started smoking again during the pandemic and I feel bad about it,” the agent doesn't just reply with, “Smoking is bad for you.” Instead, its “chain of thought” reasoning guides it to offer encouragement and ask gentle questions to build trust. It senses the user's emotional state and adapts its strategy, much like a human therapist would, to guide them toward their goal. This is a stateful, long-term interaction where the agent learns and refines its approach over time.
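To ground this, here is a toy sketch of how such a stateful companion might adapt its strategy. Everything here is invented for illustration, not drawn from the CAMH project: `classify_sentiment` and `generate_reply` stand in for model calls, and the strategy names and trust threshold are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Toy session state for a stateful support agent (illustrative only)."""
    goal: str                        # e.g. "quit smoking"
    history: list = field(default_factory=list)
    trust: float = 0.0               # built up across conversations

def respond(session, user_message, classify_sentiment, generate_reply):
    sentiment = classify_sentiment(user_message)  # sense the emotional state
    # Adapt the strategy: guilt or relapse calls for encouragement,
    # not a lecture; concrete advice waits until trust is established.
    if sentiment in ("guilt", "distress"):
        strategy = "validate_and_encourage"
    elif session.trust > 0.5:
        strategy = "gentle_action_planning"
    else:
        strategy = "open_questions_to_build_trust"
    reply = generate_reply(session.goal, session.history, strategy, user_message)
    session.history.append((user_message, reply))  # the interaction is stateful
    return reply
```

The detail that matters is the persistent `Session`: unlike a scripted chatbot, the agent carries context forward and changes tactics as the relationship develops.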
This same principle is being applied to manage chronic diseases, provide mental health support, and monitor post-operative patients, creating a new paradigm of continuous, personalised care.
The Rise of Multi-Agent Systems: Your Digital Medical Team
Perhaps the most exciting frontier is the development of multi-agent systems, where different AI agents collaborate to solve complex problems. This is where we move from a single helper to a coordinated digital team.
Imagine these scenarios, which are already in development:
The Compliance Guardian: An AI agent is tasked with generating a clinical summary. A second “guardian” agent, trained specifically on HIPAA or GDPR regulations, immediately interrogates that summary to ensure no sensitive personal health information is exposed. As regulations change, you don't need to reprogram the system; you simply retrain the guardian agent (see the sketch after these scenarios).
The Diagnostic Roundtable: A patient presents with complex symptoms. One agent analyses their lab results, another examines their MRI scans, and a third reviews their genomic data and medical history. They work in parallel to propose potential diagnoses. A final “resolver” agent then evaluates their findings, weighs the evidence, and presents the most likely diagnosis, or a set of differential diagnoses, to the human clinician.
The Self-Healing Hospital: Deep within a hospital's IT infrastructure, a swarm of agents monitors network traffic, server loads, and cybersecurity threats. When an anomaly is detected, such as a potential equipment failure or a security breach, the agents don't just send an alert. They autonomously diagnose the root cause, reroute traffic, isolate the affected system, and deploy a fix, ensuring that critical clinical systems remain online.
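Of these, the Compliance Guardian is the easiest to sketch in code. The following is an illustrative draft-and-veto pattern only; `summarise` and `guardian_check` are hypothetical model calls standing in for the drafting agent and the regulation-trained guardian.

```python
def generate_compliant_summary(record, summarise, guardian_check, max_retries=3):
    feedback = None
    for _ in range(max_retries):
        draft = summarise(record, feedback)  # drafting agent proposes a summary
        verdict = guardian_check(draft)      # guardian trained on HIPAA/GDPR vets it
        if verdict.ok:
            return draft
        feedback = verdict.flagged_spans     # tell the drafter what to redact
    raise RuntimeError("Guardian rejected every draft; escalate to a human.")
```

Note the design choice this pattern buys you: when the regulations change, only `guardian_check` needs retraining. The drafting agent and the surrounding workflow are untouched.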
From Predicting Words to Predicting Actions
What powers this evolution? It’s a move beyond Large Language Models (LLMs) to what some are calling Large Action Models (LAMs). While an LLM is a master of language, predicting the next most likely word, a LAM is a master of strategy, predicting the next most effective action. This isn't just a small step; it's the difference between an AI that can describe a medical procedure and an AI that can assemble the resources to perform one.
When an agent senses a problem, it uses a LAM to determine the optimal workflow. Should it query a database? Should it activate a robotic arm? Should it send a message to a specific specialist? This ability to compose and execute complex workflows is what gives agents their power to interact meaningfully with both the digital and physical worlds.
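Here is one way that action-selection loop might look, purely as a sketch. The `lam_next_action` call and the tool names are assumptions for illustration; a real system would wrap every action in authentication, validation, and logging.

```python
def execute_workflow(problem, lam_next_action, tools, max_steps=20):
    """Compose a workflow by repeatedly asking an action model for the
    next best step. `lam_next_action` and the `tools` registry (e.g.
    "query_database", "notify_specialist") are illustrative stand-ins."""
    trace = []
    for _ in range(max_steps):                  # hard cap: never loop forever
        step = lam_next_action(problem, trace)  # predict an action, not a word
        if step.name == "done":
            return trace
        result = tools[step.name](**step.args)  # execute the chosen action
        trace.append((step.name, result))       # feed the outcome back into planning
    raise RuntimeError("Step budget exhausted; escalate to a human.")
```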
The Regulatory Horizon: Are Our Rules Ready for AI Agents?
The innovation is exhilarating, but it raises a critical question: how do we regulate a medical device that can think and act on its own? Does our current definition of "AI as a Medical Device" (AIaMD) truly cover it?
On the surface, yes. An AI agent used for diagnosis or treatment is clearly a medical device. It falls under the "high-risk" category of the EU AI Act, which mandates human oversight, robust data governance, and post-market surveillance. But agentic AI doesn't just fit into this framework; it actively pushes against its boundaries, raising profound new questions for manufacturers and regulators alike.
The traditional model for medical device approval is based on validating a "locked," static product. You test it, you prove its safety and efficacy for a specific intended use, and you release it. Any significant change requires a new validation and approval. Agentic AI shatters this model. Its core value is its ability to learn and adapt after deployment. This creates three major regulatory challenges:
The Moving Target Problem: How do you grant approval for a device that is designed to change? If an agent refines its diagnostic algorithm based on thousands of new real-world interactions, is it still the same device that was approved? This is where new concepts, like the FDA's Predetermined Change Control Plan (PCCP), are emerging. A PCCP requires manufacturers to define, before marketing, exactly what aspects of the device can change, the methodology for implementing those changes, and how they will be validated. This is incredibly challenging. It forces developers to anticipate the learning pathways of their AI and to prove that these self-modifications will not push the device's performance outside of its safety and efficacy guardrails.
The Question of Liability: Consider the "Diagnostic Roundtable" we imagined earlier. If that multi-agent system produces a misdiagnosis, where does the liability fall? Our legal system has two clear buckets for harm: product liability for a faulty device and medical malpractice for a poor clinical decision. An autonomous agent doesn't fit in either; it straddles them, creating a legal chasm. Was the algorithm flawed from the start (product liability)? Or did the agent make a "bad judgment call" based on valid, real-time data, a scenario that looks more like malpractice, yet with no human to hold responsible? As agents become more autonomous, the lines of responsibility blur, challenging existing product liability directives and forcing a difficult conversation about whether we need an entirely new legal framework for algorithmic harm.
The Challenge of Meaningful Oversight: The EU AI Act mandates "human oversight," but what does that look like for a swarm of agents making thousands of decisions per second within a hospital's IT network? Direct supervision is impossible. The paradigm must shift from hands-on control to sophisticated governance. This means building systems that provide real-time performance dashboards, immutable audit logs of every decision an agent makes, and robust alert mechanisms for when performance drifts or an agent encounters a situation outside its training. Crucially, it requires clear protocols for human intervention, not just a "kill switch," but an intelligent system for escalating decisions to a human expert when an agent's confidence falls below a certain threshold.
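That last mechanism, confidence-based escalation backed by an audit trail, is simple to sketch. The threshold and field names below are invented for illustration; in practice they would be set and justified per intended use.

```python
import json
import time

CONFIDENCE_FLOOR = 0.85  # illustrative; calibrated per device in reality

def governed_decision(agent_id, decision, confidence, audit_log, escalate):
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "decision": decision,
        "confidence": confidence,
    }
    audit_log.write(json.dumps(entry) + "\n")  # append-only audit record
    if confidence < CONFIDENCE_FLOOR:
        return escalate(entry)                 # route to a human expert
    return decision                            # safe to act autonomously
```

The point isn't the dozen lines of Python; it's that oversight becomes a property of the system's architecture, logged and enforced on every single decision, rather than a person watching a screen.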
While the term AIaMD is our starting point, it's clear that regulating agentic systems will require a paradigm shift. It’s less about approving a finished product and more about approving a governance framework for a dynamic system. The focus must be on the robustness of the data pipelines, the transparency of the learning process, and the integrity of the ethical guardrails that ensure these powerful new partners act safely and predictably in the complex world of human health.
Turn Regulatory Hurdles into Your Launchpad
The challenges of regulating agentic AI are significant, but they are not insurmountable. The key is to meet these challenges with a proactive strategy for compliance and governance, one that transforms regulatory burden into a strategic advantage.
At Neural Vibe, we specialise in building the robust, future-ready frameworks that turn these regulatory hurdles into a foundation for trust and innovation. We help you:
Unify Your Compliance: We seamlessly weave the new AI-specific standards like ISO 42001 into your existing ISO 13485 QMS, creating a single, unified system for managing quality and risk.
Prove Meaningful Oversight: We work with you to design the governance structures and technical solutions necessary for meaningful human oversight, ensuring your agents operate safely and transparently.
Prepare for a Dynamic Future: From developing Predetermined Change Control Plans to building the infrastructure for continuous post-market monitoring, we help you create a system that is built to evolve.
The future of healthcare is agentic. Let us handle the regulatory complexity so you can focus on building it responsibly. Connect with Neural Vibe to secure your path to market.
The Future is Agentic
The key takeaway is this: AI agents aren't just an upgrade; they represent a fundamental shift from AI as a tool to AI as a collaborative partner. With the autonomy to act and the capacity to learn, they are set to redefine how we deliver care. The hype is temporary, but the innovation is real, and it's already changing medicine from the inside out.