My Perspective on Trustworthy AI in Healthcare: Possibilities, Challenges, and Ethical Practice


My Journey into AI in Healthcare
Having recently completed a course on Trustworthy AI in Healthcare at Politecnico di Milano, I’ve come to appreciate not just the technical sophistication of AI systems, but also the ethical, social, and human-centered dimensions that determine whether they truly benefit patients and clinicians. This journey gave me a deeper understanding of how AI has evolved in healthcare, where it's going, and why trustworthiness is no longer optional—it's essential.
When I first encountered the term artificial intelligence (AI), I thought of machines mimicking human cognition—learning, reasoning, decision-making. That’s not incorrect, but in healthcare, it takes on a much more nuanced meaning. AI here refers to systems trained on vast datasets—from electronic health records to diagnostic images—to support or automate tasks that historically required human expertise.
The earliest applications of AI in healthcare were rule-based expert systems. One example that stood out to me was MYCIN, developed in the 1970s to recommend antibiotics. However, these systems couldn’t adapt or learn from new data. What changed the game was the rise of machine learning—models that learn patterns from data and continuously improve.
Today, I see AI being used in a wide range of healthcare applications:
Image-based diagnostics in radiology and dermatology
Predictive algorithms for early detection of diseases
Drug discovery pipelines that cut down research time
Hospital workflow optimization
Conversational agents that guide patients through self-care
But I also now understand that deploying AI in healthcare isn't as simple as building a high-performance model. Through my training, I’ve become more aware of the challenges AI faces in healthcare settings:
Bias in data leading to unfair outcomes
Opacity in model decision-making (the “black box” problem)
Generalizability issues when a model trained in one population underperforms in another
Lack of clinician trust due to insufficient explainability
This course taught me that for AI to succeed in healthcare, it must be trustworthy—ethically grounded, transparent, and designed with humans in mind.
Understanding Trustworthy AI: What It Really Means to Me
One of the highlights of my learning was exploring the Ethics Guidelines for Trustworthy AI developed by the European Commission’s High-Level Expert Group on AI. These guidelines reshaped how I view AI implementation—not just from a technical standpoint but from a deeply human and ethical one.
Trustworthy AI, as I now understand it, is rooted in three foundational pillars:
Lawfulness – AI systems must comply with all applicable regulations.
Ethical Soundness – They must respect fundamental rights and values.
Technical and Social Robustness – They should function reliably under normal and unexpected conditions.
The four core ethical principles I studied—autonomy, prevention of harm, fairness, and explicability—help frame every AI decision in a moral context. What stood out most to me was explicability. If clinicians can’t understand or explain an AI’s decision, they can’t trust it—or safely act on it.
I also learned about the seven requirements to operationalize trustworthy AI:
Human agency and oversight
Technical robustness and safety
Privacy and data governance
Transparency
Diversity, non-discrimination, and fairness
Societal and environmental well-being
Accountability
Each of these aligns with real-world healthcare concerns—from informed consent to equitable access—and the course encouraged me to think critically about how we integrate these into AI system design.
Z-Inspection®: The Framework That Brought It All Together
Another significant takeaway for me was the Z-Inspection® process—a robust, interdisciplinary methodology designed to evaluate AI systems for trustworthiness in real-life contexts. It’s not just a checklist, but a reflective, participatory process involving stakeholders from diverse backgrounds.
The Z-Inspection® process includes:
Contextual analysis: Understanding the system’s environment and use case
Holistic assessment: Ethical, legal, technical, and organizational dimensions
Dialogic reflection: Facilitated discussions between developers, healthcare workers, patients, and ethicists
What I found powerful was how Z-Inspection® surfaces concerns that might otherwise be ignored in a purely technical audit. It centers human values, emphasizing that healthcare AI must enhance—not replace—human judgment.
Case Studies We Explored
Throughout the course, we examined several real-world case studies. These helped ground the theories in practical, often messy realities. Each case revealed both the power of AI and the importance of trustworthy design and evaluation.
1. Detecting Cardiac Arrest Symptoms in Emergency Calls
This case focused on using AI to analyze emergency calls and detect cardiac arrest from audio patterns and spoken language. The goal was to help dispatchers recognize cardiac arrest faster than human judgment alone would allow.
What impressed me was the life-saving potential of the system—every second counts in cardiac events. But Z-Inspection® helped highlight deeper challenges:
Would the system work equally well across different languages, dialects, or noisy environments?
How would dispatchers react if the AI contradicted their instincts?
Was the model trained on sufficiently diverse data?
This case reinforced for me that even high-performing models must be evaluated within the cultural, linguistic, and operational context of emergency services.
2. Classifying Skin Lesions
In this case, we looked at AI models trained to classify images of skin lesions as benign or malignant. These tools aim to support dermatologists and increase early detection of skin cancers.
Technically, the models were accurate. But the Z-Inspection® analysis uncovered ethical concerns:
The training data came mostly from light-skinned individuals. Could the system fairly assess patients with darker skin tones?
Was there sufficient explainability for clinicians to trust the AI’s classification?
How would liability be handled if a wrong diagnosis led to harm?
This case opened my eyes to the risks of data bias and unequal healthcare access—issues that are particularly relevant in diverse and global contexts like mine.
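The kind of fairness concern this case raised can be made concrete with a simple audit: compare the model's sensitivity (how often it catches malignant lesions) across patient subgroups instead of reporting one overall accuracy number. The sketch below is illustrative only; the group labels, data, and function name are my own assumptions, not part of the course's case study.

```python
from collections import defaultdict

def sensitivity_by_group(labels, preds, groups):
    """Per-group sensitivity (recall on malignant cases).

    labels: 1 = malignant, 0 = benign (ground truth)
    preds:  model predictions in the same encoding
    groups: subgroup tag per sample (here, a made-up Fitzpatrick-style skin-type bucket)
    """
    tp = defaultdict(int)  # malignant cases the model caught, per group
    fn = defaultdict(int)  # malignant cases the model missed, per group
    for y, p, g in zip(labels, preds, groups):
        if y == 1:
            if p == 1:
                tp[g] += 1
            else:
                fn[g] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Toy data: the model misses more malignant lesions in the "V-VI" group,
# even though overall accuracy might look acceptable.
labels = [1, 1, 1, 1, 1, 1, 0, 0]
preds  = [1, 1, 1, 1, 0, 0, 0, 1]
groups = ["I-II", "I-II", "I-II", "V-VI", "V-VI", "V-VI", "I-II", "V-VI"]
print(sensitivity_by_group(labels, preds, groups))
```

A single aggregate metric would hide exactly the disparity that the Z-Inspection® analysis surfaced; breaking results down by subgroup makes it visible.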
3. Assessing Lung Damage from Chest X-Rays
The third case involved AI models assessing lung damage (e.g., from COVID-19 or pneumonia) through chest X-rays. These systems aimed to support diagnosis and treatment prioritization, especially in overwhelmed healthcare settings.
The model's ability to detect subtle patterns was impressive. However, Z-Inspection® exposed critical points:
Inconsistent imaging protocols across hospitals affected performance.
Some radiologists were skeptical due to a lack of transparency.
Ethical concerns arose around automated triage—who gets treatment first if resources are limited?
This case emphasized the need for human oversight, clear communication of AI capabilities and limitations, and value alignment with public health ethics.
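One common way to keep a human in the loop, which this case's oversight concerns point toward, is to route uncertain predictions to a clinician instead of acting on them automatically. The following is a minimal sketch of that deferral pattern; the thresholds, labels, and function name are illustrative assumptions, not the actual system discussed in the course.

```python
def triage(prob_severe, review_band=(0.35, 0.65)):
    """Route a chest X-ray based on the model's severity probability.

    Predictions falling inside the uncertainty band are deferred to a
    radiologist rather than triaged automatically. The band (0.35-0.65)
    is an arbitrary illustrative choice, not a clinically validated one.
    """
    lo, hi = review_band
    if prob_severe >= hi:
        return "priority review by clinician"
    if prob_severe <= lo:
        return "routine queue"
    return "defer to radiologist"

for p in (0.9, 0.5, 0.1):
    print(p, "->", triage(p))
```

The design point is that the model never makes the final call in its zone of uncertainty; it narrows the cases that demand expert attention, which is closer to the "enhance, not replace" principle the course emphasized.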
Final Reflections: Where I Stand Now
This course did more than teach me about algorithms—it reshaped how I think about AI's role in human health. I now believe that the future of AI in healthcare depends not only on innovation but on intentional, value-driven design.
As someone preparing to contribute to the future of healthcare and technology, I see myself not just as a developer or user of AI, but as a steward of ethical responsibility. We must build AI systems that enhance care, respect patient dignity, and remain accountable to the people they serve.
The principles of trustworthy AI are not abstract ideals—they are practical tools for ensuring that AI benefits all, not just a privileged few. And as I continue in this journey, I’ll carry forward this vision: that the true promise of AI in healthcare lies not in its intelligence, but in its alignment with our deepest human values.
Interested in learning more about Trustworthy AI in Healthcare? I highly recommend the course "Trustworthy AI for Healthcare Management" offered by Politecnico di Milano on Coursera. You can find it here: https://coursera.org/learn/trustworthy-ai-for-healthcare-management
Written by Wiz IsaakAkins