Trust crisis could derail AI in healthcare, experts warn

CO-EDP, VisionRI | Updated: 01-04-2025 17:43 IST | Created: 01-04-2025 17:43 IST

Artificial intelligence (AI) holds the promise of revolutionizing healthcare with sharper diagnostics and tailored treatments, but a new perspective published in npj Health Systems reveals a towering obstacle: trust. Researchers from Johns Hopkins University argue that the technology’s potential could stall unless deep-seated skepticism is addressed among patients and providers, and unless AI systems can, in turn, rely on trustworthy human inputs.

As of late 2024, more than a thousand AI medical devices had been cleared for clinical use by the U.S. Food and Drug Administration, promising gains in diagnostics, treatment planning, and healthcare efficiency. However, the authors argue that technical innovation alone is insufficient. Without systemic trust, these tools risk being sidelined, misused, or rejected outright, particularly in communities already skeptical of the healthcare system.

Led by Tinglong Dai and a team from Johns Hopkins’ Bloomberg School of Public Health and Carey Business School, the perspective "Trust in AI-assisted health systems and AI’s trust in humans" dives into the tangled web of trust shaping AI-assisted healthcare. It’s a two-way street: patients and doctors must rely on AI’s opaque “black box” outputs, while AI depends on imperfect human data to function. With hesitancy rife, the study warns that mistrust could derail AI’s integration into routine care.

For patients, trusting AI in medicine often means trusting that their physician uses these tools wisely. This indirect trust chain is strained by historical inequities and present-day disparities. Black, Hispanic, and Native American communities, which already report lower levels of institutional trust, may view AI with heightened suspicion, especially if they fear it will replicate systemic biases embedded in healthcare data.

The paper highlights a notable case where an algorithm used to manage healthcare populations underestimated the needs of Black patients, leading to resource misallocation. This occurred because the model used healthcare spending as a proxy for patient need—a decision that encoded existing disparities into algorithmic logic. The authors warn that AI systems, trained on biased data and built without input from diverse populations, can amplify discrimination under the illusion of neutrality.
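To make that mechanism concrete, here is a minimal, hypothetical sketch in Python on synthetic data. It is not the algorithm from the cited case; the numbers (a 0.6 access factor, a top-10% cutoff) and variable names are illustrative assumptions, chosen only to show how ranking patients by spending rather than by need underselects a group with equal need but lower access to care.

```python
# Hypothetical illustration of proxy-variable bias on synthetic data.
# NOT the commercial algorithm from the cited case; the access factor
# and cutoff below are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical distributions of true medical need...
group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true need (unobserved)

# ...but unequal access to care: group B incurs less spending per unit need.
access = np.where(group == 1, 0.6, 1.0)          # assumed access gap
spending = need * access + rng.normal(0.0, 0.1, n)

# Allocation rule modeled loosely on the case: flag the top 10% by the
# proxy (spending) for extra care-management resources.
flagged = spending >= np.quantile(spending, 0.90)

print("mean true need, A vs B:",
      need[group == 0].mean().round(2), need[group == 1].mean().round(2))
print("share of group B among flagged patients:",
      round(group[flagged].mean(), 2), "(population share ~0.50)")
```

Running this shows both groups have essentially identical average need, yet the lower-access group makes up far less than half of the flagged patients, because the cutoff is applied to the proxy rather than to need itself. That is the sense in which a spending-based model "encodes existing disparities into algorithmic logic."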

Trust also plays a critical role in how and when people seek medical care. Drawing on behavioral models, the study shows that trust directly influences care-seeking behavior across demographics, shaping patients’ expectations of benefit, cost, and quality. A system that integrates AI without accounting for these trust dynamics could unintentionally deter engagement, especially among marginalized populations.
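One stylized way to read this claim (illustrative notation of ours, not a formula from the study): a patient engages with care only when the trust-weighted expected benefit outweighs the expected cost.

```latex
% Illustrative care-seeking condition (our notation, not the study's model):
% a patient with trust level \tau in [0,1] seeks care when
\tau \,\mathbb{E}[\mathrm{benefit}] \;>\; \mathbb{E}[\mathrm{cost}],
\qquad 0 \le \tau \le 1.
```

Under this reading, lower trust shrinks the perceived benefit of every encounter and raises the effective bar for seeking care at all, which is how an AI rollout that further erodes trust among already skeptical groups could deter engagement.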

From the provider’s perspective, AI adoption introduces additional concerns. Physicians face uncertainty about legal liability when following or rejecting AI recommendations. The current malpractice framework protects adherence to established standards of care, but the introduction of AI blurs those standards. If a physician follows AI guidance that leads to harm, it’s unclear whether fault lies with the provider, the developer, or the healthcare institution. This legal ambiguity fosters caution, potentially limiting AI's role in decision-making.

Some clinicians fear that AI may erode their autonomy or even displace their roles; surveys reveal that up to 38% of radiologists express concern about being replaced. However, the research suggests that familiarity with AI tends to reduce resistance. The study notes that AI’s greatest potential may lie in relieving administrative burdens, such as documentation, thereby freeing providers to engage more deeply with patients. Yet in for-profit systems, these efficiency gains may be redirected toward increased caseloads rather than improved care, potentially weakening provider–patient relationships.

Healthcare institutions themselves face a delicate balancing act. While AI tools promise operational efficiency, their deployment may be viewed by both patients and providers as financially driven rather than care-centered. A prominent example cited is UnitedHealthcare’s use of an AI system alleged to have denied, at a high error rate, medically necessary coverage to elderly patients. Lawsuits stemming from these denials have fueled public distrust in both insurers and the AI tools they deploy.

For AI to become a trusted partner in medicine, the authors argue, institutions must embrace shared accountability. This includes rigorous validation for equity, transparent communication of AI strategies, and the meaningful inclusion of providers in the development and evaluation of AI systems.

Importantly, the study introduces the idea of AI’s dependence on human trustworthiness. AI systems, while not sentient, rely on the quality of human-provided data and decisions to function effectively. Misaligned human inputs, such as biased training data or inconsistent clinical documentation, can degrade AI performance.

This bidirectional trust loop raises further accountability questions: Who bears responsibility when AI recommendations fail or when providers override accurate suggestions? Without clear governance frameworks, the human–AI relationship may be marked more by suspicion than synergy.

The clock’s ticking. As AI races ahead, trust hangs in the balance. The researchers see a future where it’s a partner, not a pariah, but only if biases fade, data improves, and governance tightens. With no new data generated, the perspective leans on theory and case studies to sound the alarm: healthcare’s AI era hinges on faith as much as circuits.

  • FIRST PUBLISHED IN: Devdiscourse