Clinicians want actionable AI in cancer screening, not just risk scores
A new survey study of primary care providers has revealed that artificial intelligence (AI) tools intended to assist in cancer screening are more likely to be embraced if they deliver actionable recommendations and integrate seamlessly with existing electronic health record (EHR) systems. The research, titled “Primary Care Provider Preferences Regarding Artificial Intelligence in Point-of-Care Cancer Screening,” was published this week in MDM Policy & Practice.
Conducted among clinicians across U.S. medical centers, the study aimed to define how AI-driven clinical decision-making tools should be designed and deployed to optimize cancer screening in primary care. Researchers distributed the survey to 733 providers and analyzed the 99 completed responses to assess attitudes toward AI, preferred functionalities, and workflow integration preferences across colorectal, breast, and lung cancer screening.
What do clinicians expect from AI in cancer screening?
The research highlighted a strong desire for AI tools that go beyond mere statistical outputs. When asked about ideal formats for AI recommendations, a plurality of clinicians preferred tools that provided a recommended time until the next screening rather than simply reporting risk probabilities or issuing binary screen/don’t-screen suggestions.
Fifty-two percent of respondents chose this prescriptive format for colorectal cancer screening tools, followed by 39% for breast cancer and 37% for lung cancer. By contrast, only 16% preferred receiving the 5-year probability of a cancer diagnosis, indicating that numeric risk scores, while potentially useful, are seen as less actionable and less confidence-inspiring than explicit scheduling guidance in fast-paced primary care settings.
This preference is consistent with broader efforts to ensure that AI enhances clinical decision-making by aligning with how clinicians actually deliver care. In the absence of standardized deployment protocols, the findings suggest that model developers should prioritize the creation of AI tools that explicitly recommend screening intervals tailored to individual patient risk profiles.
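As a rough illustration of that design recommendation, the sketch below shows how a tool might wrap a model's raw risk estimate in the interval-based output clinicians preferred. The function name, risk cutoffs, and intervals are invented for this example; they are not drawn from the study or from any validated clinical guideline.

```python
from dataclasses import dataclass

@dataclass
class ScreeningRecommendation:
    cancer_type: str
    risk_estimate: float           # model's 5-year cancer probability, 0.0-1.0
    months_until_next_screen: int  # the actionable output clinicians preferred

def recommend_interval(cancer_type: str, risk_estimate: float) -> ScreeningRecommendation:
    """Map a raw risk score to a recommended time until the next screening.

    Cutoffs and intervals here are placeholders, not clinical guidance.
    """
    if risk_estimate >= 0.10:      # illustrative high-risk cutoff
        months = 12
    elif risk_estimate >= 0.03:    # illustrative moderate-risk cutoff
        months = 36
    else:
        months = 60
    return ScreeningRecommendation(cancer_type, risk_estimate, months)

# Example: a 4% colorectal risk estimate yields a 36-month interval.
print(recommend_interval("colorectal", 0.04))
```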
How should AI tools be integrated into primary care workflows?
In terms of clinical implementation, survey respondents overwhelmingly favored integration within the electronic health record environment. More than half (57%) preferred that AI predictions trigger chart flags for eligible patients within the EHR, rather than being delivered via smartphone applications or delegated to care managers. This approach reflects clinicians’ preference for minimizing workflow disruptions and maintaining control over AI-generated outputs.
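A minimal sketch of that delivery mode appears below, assuming a hypothetical `flag_chart_if_eligible` helper: the AI prediction surfaces as a chart flag inside the EHR rather than in an external app. The payload fields and eligibility threshold are assumptions made for illustration; a production integration would go through the EHR vendor's API (for example, an HL7 FHIR interface), which is not shown.

```python
from typing import Optional

def flag_chart_if_eligible(patient_id: str, risk_estimate: float,
                           threshold: float = 0.03) -> Optional[dict]:
    """Return a chart-flag payload when the model marks the patient eligible.

    The threshold and payload shape are invented for illustration only.
    """
    if risk_estimate < threshold:
        return None  # below threshold: no flag, workflow is left undisturbed
    return {
        "patient_id": patient_id,
        "flag_type": "cancer_screening_due",
        "detail": f"AI-estimated risk {risk_estimate:.1%}; review screening status",
    }

# Example: a 7% risk estimate raises a flag; a 1% estimate does not.
print(flag_chart_if_eligible("pt-001", 0.07))
print(flag_chart_if_eligible("pt-002", 0.01))
```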
Fewer than 15% supported using risk calculators via external apps or assigning follow-up responsibilities to care managers, highlighting consistent skepticism about offloading tasks traditionally performed by primary care providers. The low interest in delegation also raises questions about team-based care strategies, suggesting clinicians may view AI recommendations as requiring direct physician validation.
In scenarios involving detection of suspicious findings on imaging, most providers believed that alerts should go to attending radiologists (71% for breast cancer, 69% for lung cancer) and primary care providers themselves (54% for both), rather than other members of the care team. These responses suggest a strong belief that AI tools must support, not bypass, clinician judgment in diagnostic follow-up.
Furthermore, when asked about the ideal sequencing of AI usage in radiology workflows, 44% favored radiologists reviewing AI-generated reports after assessing imaging but before finalizing their interpretation. This sequencing allows radiologists to maintain interpretive independence while benefiting from supplemental insights.
Are clinicians prepared and willing to adopt AI-based screening support?
Despite broad optimism about AI’s potential to enhance cancer screening, most respondents expressed concern about their own preparedness to use these tools. A striking 91% said their undergraduate medical education did not include adequate training on AI, and 90% said the same for graduate medical training.
Even though only 14% had prior experience with AI tools in practice, most clinicians agreed that AI would improve screening decisions for colorectal (65%), breast (67%), and lung (70%) cancers. Additionally, nearly half expected AI tools to reduce the number of unnecessary procedures—such as colonoscopies and biopsies—suggesting that adoption could reduce overtreatment and streamline diagnostics.
However, clinicians appeared divided on explainability. When asked how an AI model should justify its binary recommendation in breast cancer screening, preferences were diffuse: 33% opted for clear delineation of risk factors, 18% wanted the model to highlight suspicious areas on imaging, and 8% preferred reports of breast density. Notably, 23% said they had no preference, and 17% did not respond—indicating a lack of consensus or perhaps insufficient understanding of explainability frameworks.
Only 25% of respondents said individual providers should be responsible for the AI tools they use. A majority favored shared oversight models, citing the federal government (76%), hospital systems (49%), and physician organizations (48%) as the appropriate entities for regulation. This regulatory preference aligns with growing calls for clearer governance over AI-assisted decision-making tools, especially in settings with high patient throughput and potential liability concerns.
Implications for policy, education, and AI development
The findings of the study help shape a foundational framework for developers, educators, and policymakers aiming to support AI implementation in frontline healthcare. Specifically, the researchers recommend that future AI models focus on:
- Delivering prescriptive, interval-based recommendations tailored to individual patient data;
- Integrating predictions directly into existing EHR systems to preserve clinician workflow;
- Maintaining clinician autonomy in reviewing and acting on AI outputs;
- Developing transparent yet user-friendly methods for model explainability; and
- Clarifying regulatory accountability among institutions and governmental bodies.
The study underscores an urgent need for medical education reform to prepare the next generation of clinicians to work effectively with AI. It also calls attention to the regulatory vacuum surrounding AI in medicine, which could hinder trust and widespread adoption unless addressed proactively.
While the surveyed sample was limited to clinicians in primarily academic settings in the U.S. Northeast, the consistency of responses across cancer types points to broader relevance. Further research will be necessary to validate the findings in community and rural practices and to explore preferences among specialists like radiologists, who play key roles in cancer diagnostics.
First published in: Devdiscourse

