AI in healthcare demands structural trust, not simulated empathy

CO-EDP, VisionRI | Updated: 10-04-2025 22:08 IST | Created: 10-04-2025 22:08 IST

Artificial intelligence is transforming medical decision-making, but the trust patients and clinicians place in machines must be earned, not assumed, according to a new study that calls for a paradigm shift in how trust is conceptualized in healthcare technology. The study, titled “Not someone, but something: Rethinking trust in the age of medical AI,” presents a comprehensive philosophical and ethical analysis of the unique challenges posed by the growing role of AI in clinical care.

Authored by Jan Beger, the paper examines trust not as a seamless transfer from human physicians to AI systems, but as a fundamentally different relationship that must be designed, governed, and maintained with moral clarity and structural transparency. Drawing from philosophical theory, clinical ethics, and health system design, the study argues that conventional models of emotional trust cannot apply to artificial intelligence. Instead, it suggests a redefinition of trust in terms of reliability, accountability, and alignment with core values of care.

Can Patients Trust an AI That Doesn’t Understand Being Trusted?

At the core of the study is the assertion that trust in healthcare is inherently relational, rooted in shared vulnerability and moral responsibility between clinician and patient. Trust allows patients to open up, delegate authority, and accept treatment under uncertainty, based not only on competence but on the emotional presence and ethical intent of the caregiver. AI, however, lacks this moral intimacy. It cannot interpret fear, recognize tone, or reciprocate uncertainty. Trust in AI, then, becomes more technical than relational.

This shift creates a one-sided trust dynamic: patients and clinicians increasingly rely on systems that cannot experience the act of being trusted. The result, according to the study, is a brittle form of trust, based not on moral judgment but on a calculated belief that the system has been adequately designed, tested, and monitored. Evidence cited in the paper shows that even seasoned professionals often defer to AI recommendations, sometimes despite obvious errors, driven by initial trust or automation bias.

The study also highlights evidence from surveys across North America and the Middle East showing widespread discomfort with medical AI, even among professionals familiar with the technology. This suggests that trust does not automatically increase with exposure. Rather, it hinges on perceived transparency, governance, and the user’s confidence that the system respects ethical boundaries, something machines themselves cannot guarantee.

Where Does Moral Responsibility Lie When AI Shapes Clinical Decisions?

A central concern raised in the study is the erosion of clear accountability as AI becomes embedded in healthcare workflows. While clinicians are still legally and ethically responsible for patient outcomes, the lines of responsibility become blurred when AI systems heavily influence diagnostic or treatment decisions. AI may not replace human judgment outright, but it can guide, constrain, or nudge it in subtle ways that diminish the clinician’s independent role.

The report warns that as AI systems update themselves and adapt over time, accountability becomes further diffused across developers, institutions, regulators, and end-users. In such environments, when something goes wrong, it becomes difficult to pinpoint who, or what, is responsible. This diffusion risks undermining trust not because of malfeasance, but because no individual or entity clearly carries the moral weight of decisions.

To address this, the study proposes a model of distributed accountability that shifts the focus from trusting the AI itself to trusting the broader sociotechnical systems in which it operates. It references frameworks like the “Trust Octagon,” which evaluates AI systems on fairness, transparency, legal compliance, and social responsibility. The author argues that building trust into healthcare AI should not involve mimicking human empathy or intention but enforcing structural integrity and governance mechanisms that ensure ethical consistency and explainability.

What Kind of Trust Should We Expect from AI in a Continuously Evolving System?

The study further explores the challenge of adaptive trust in the age of dynamic and continually learning AI systems. Unlike traditional medical procedures and protocols that evolve slowly and under strict oversight, AI models can update rapidly, sometimes silently, retraining on new data or rebalancing their internal parameters without clinician awareness. This undermines continuity, a critical factor in trust formation.

To maintain trust in such fast-changing systems, the author calls for new infrastructure: real-time monitoring dashboards, explainable uncertainty signals, and transparent communication of updates. The paper notes that many clinicians do not require full interpretability of AI outputs, but they do demand clear indicators of reliability, limitations, and risk, especially when AI recommendations influence high-stakes decisions.
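The paper stays at the level of principles rather than implementation, but a minimal sketch can make the idea of "clear indicators of reliability, limitations, and risk" more concrete. The Python example below is purely illustrative and not drawn from the study; names such as AIRecommendation, triage-assist, and the 0.8 review threshold are assumptions chosen for the sketch. It shows one way a clinical AI output could carry its model version, a calibrated confidence score, and documented limitations alongside the recommendation itself, so a clinician can see at a glance how much weight to give it.

    # Illustrative sketch only: names and thresholds are hypothetical, not from the study.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AIRecommendation:
        """An AI output packaged with the context needed to judge its reliability."""
        model_name: str
        model_version: str      # which deployed model produced the output
        output: str             # the recommendation itself
        confidence: float       # calibrated score in [0, 1]
        known_limitations: list  # documented limitations of the model
        generated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def clinician_summary(self, review_threshold: float = 0.8) -> str:
            """Render the output with explicit uncertainty, version, and limitation info."""
            flag = "" if self.confidence >= review_threshold else " [LOW CONFIDENCE - review required]"
            limits = "; ".join(self.known_limitations) or "none documented"
            return (
                f"{self.output}{flag}\n"
                f"  model: {self.model_name} v{self.model_version} "
                f"(generated {self.generated_at:%Y-%m-%d %H:%M} UTC)\n"
                f"  confidence: {self.confidence:.0%}\n"
                f"  known limitations: {limits}"
            )

    # Hypothetical usage: a low-confidence triage suggestion is flagged for manual review.
    rec = AIRecommendation(
        model_name="triage-assist",
        model_version="2.3.1",
        output="Suggest chest CT to rule out pulmonary embolism",
        confidence=0.72,
        known_limitations=["not validated for patients under 18"],
    )
    print(rec.clinician_summary())

The point of such a wrapper, in the spirit of the paper's argument, is that trust attaches to visible provenance and stated limits rather than to the bare answer.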

The findings underscore the need for “adaptive trust,” a form of trust that is continuously reassessed based on model performance, transparency, and alignment with human care values. Without mechanisms for clinicians to track changes and assess evolving risk, the paper warns, trust in AI will degrade with every unexpected system behavior or misaligned output.

Finally, the author critiques the design trend toward anthropomorphism, the practice of making AI seem more human-like to increase user comfort. While studies show that people respond more positively to human-like interfaces, the paper argues this can create false impressions of empathy or moral agency, further complicating trust. Instead, AI design should prioritize integrity, clarity, and reinforcement of human judgment rather than simulated understanding.

The study asserts that trust in medical AI must not be treated as a static attribute, nor should it be engineered through emotional design alone. It is a dynamic, context-sensitive relationship that must be earned continuously through transparency, explainability, governance, and accountability. 

First published in: Devdiscourse