Do patients trust AI in healthcare? New study reveals widespread concerns
Artificial intelligence (AI) is revolutionizing healthcare, offering advancements in diagnostics, treatment recommendations, and patient management. However, as AI becomes more integrated into health systems, a crucial question arises: Do patients trust AI to be used responsibly in their care? The rapid adoption of AI technology has outpaced research on public perception, leaving a gap in understanding how much confidence patients have in AI-driven healthcare decisions.
A recent study, "Patients’ Trust in Health Systems to Use Artificial Intelligence," authored by Paige Nong, PhD, and Jodyn Platt, PhD, published in JAMA Network Open (2025), sheds light on this issue. The study examines whether patients trust their healthcare systems to use AI responsibly and ensure that AI tools do not cause harm. The findings, based on a national survey of U.S. adults, reveal low levels of trust in AI-driven healthcare, highlighting the urgent need for transparent communication and ethical AI governance.
Trust in AI: The survey findings
The study surveyed 2,039 U.S. adults between June and July 2023 using the AmeriSpeak Panel by the National Opinion Research Center (NORC). The researchers assessed patients’ trust in AI by asking whether they believed healthcare systems would (1) use AI responsibly and (2) ensure AI tools would not harm them. Respondents rated their trust levels on a 4-point Likert scale, with responses categorized as “not true,” “somewhat true,” “fairly true,” or “very true.”
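To make the measurement concrete, here is a minimal sketch (in Python, using hypothetical responses rather than the study's data) of how answers on such a 4-point scale are commonly collapsed into the "low trust" versus "high trust" categories reported below. The cutoff shown is an assumption about standard practice, not a detail confirmed by the paper:

```python
from collections import Counter

# Hypothetical 4-point Likert responses -- NOT data from the Nong & Platt study.
SCALE = ["not true", "somewhat true", "fairly true", "very true"]

# Assumed dichotomization: the two lower categories count as "low trust".
LOW_TRUST = {"not true", "somewhat true"}

responses = ["not true", "somewhat true", "very true", "fairly true",
             "somewhat true", "not true", "somewhat true", "fairly true"]

counts = Counter(responses)
low = sum(counts[r] for r in LOW_TRUST)
print(f"Low-trust share: {low / len(responses):.1%}")  # 62.5% for this toy sample
```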
The results showed widespread skepticism toward AI in healthcare. Nearly two-thirds of respondents (65.8%) reported low trust in their healthcare system’s ability to use AI responsibly, while 57.7% doubted that their system would protect them from AI-related harm. These numbers indicate significant public concern about AI’s role in medical decision-making.
Further analysis revealed that general trust in the healthcare system was the strongest predictor of AI trust. Patients who already had high trust in healthcare institutions were 4.29 times more likely to believe that AI would be used responsibly and 3.97 times more likely to trust that AI tools would not harm them.
However, those who had experienced discrimination in healthcare were far less likely to trust AI. Patients with a history of discrimination were 34% less likely to trust that their health system would use AI responsibly and 43% less likely to believe AI tools would not harm them. This suggests that past negative experiences shape how patients perceive new healthcare technologies.
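The "times more likely" and "percent less likely" figures above read most naturally as odds ratios from the study's regression models. Assuming that interpretation (an inference on our part, not something the article states), the arithmetic connecting an odds ratio to this everyday phrasing is simple:

```python
import math

def describe(odds_ratio: float) -> str:
    """Translate an odds ratio into 'higher/lower odds' language."""
    if odds_ratio >= 1:
        return f"OR {odds_ratio:.2f} -> {odds_ratio - 1:.0%} higher odds"
    return f"OR {odds_ratio:.2f} -> {1 - odds_ratio:.0%} lower odds"

# Figures quoted in the article, back-converted to assumed odds ratios:
print(describe(4.29))  # high general trust: 329% higher odds (4.29x the odds)
print(describe(0.66))  # discrimination history: 34% lower odds ("34% less likely")
print(describe(0.57))  # discrimination history: 43% lower odds ("43% less likely")

# In a logistic regression, OR = exp(beta) for a coefficient beta:
print(f"beta for OR 4.29: {math.log(4.29):.3f}")
```

One caveat on the shorthand: strictly speaking, odds ratios describe odds rather than probabilities, and "34% less likely" is only an approximation that grows looser as the outcome becomes more common.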
Factors influencing trust in AI healthcare
The study explored various factors that might influence trust in AI, including AI knowledge, health literacy, demographic differences, and income levels. Surprisingly, knowledge of AI did not significantly impact trust levels - whether patients understood AI’s role in healthcare or not, their trust remained largely unchanged. This finding contradicts the common assumption that more education about AI would lead to greater trust.
Gender differences also played a role. Female respondents were 23% less likely than male respondents to trust AI-powered healthcare systems, suggesting possible concerns about gender bias in AI algorithms or a broader skepticism toward technological intervention in medical decision-making.
Income correlated with AI trust as well. Higher-income individuals (earning over $75,000 annually) were 29% less likely to trust AI in healthcare compared to those earning less. This could reflect greater access to personalized healthcare options, making reliance on AI-driven care seem unnecessary or risky.
The need for ethical AI and transparency in healthcare
The study highlights an urgent need for greater transparency and ethical oversight in AI-powered healthcare. Patients’ reluctance to trust AI suggests that health systems must do more to ensure fairness, reduce biases, and openly communicate AI’s role in medical decision-making.
To improve trust, healthcare organizations should:
- Enhance AI transparency by clearly explaining how AI systems are used in diagnostics and treatment recommendations.
- Address biases in AI models, particularly those that disproportionately affect marginalized communities.
- Increase patient engagement, ensuring that AI-driven decisions align with human oversight and personalized care.
- Implement strict AI governance policies, ensuring that AI is tested rigorously for accuracy and fairness before deployment in clinical settings.
Without these measures, low trust in AI could become a significant barrier to its adoption in healthcare, potentially limiting its benefits for early disease detection, precision medicine, and treatment optimization.
Conclusion
The findings from Nong and Platt’s study serve as a wake-up call for healthcare providers and AI developers. While AI has the potential to transform healthcare, public trust remains a major hurdle. The study underscores the importance of trust-building efforts, particularly among communities with a history of discrimination in healthcare.
As AI continues to advance, healthcare institutions must prioritize transparency, accountability, and ethical AI development to ensure that patients feel safe and confident in AI-driven medical decisions. Trust in AI is not just about technology - it is about ensuring that patients feel seen, heard, and protected in an increasingly automated healthcare system.
First published in: Devdiscourse

