AI in rare disease diagnosis demands new ethical and legal guidelines

CO-EDP, VisionRI | Updated: 07-04-2025 12:49 IST | Created: 07-04-2025 12:49 IST

A comprehensive new study calls for urgent revisions to European health technology assessment (HTA) practices to better evaluate the ethical, legal, and social risks of AI-based systems used in the prevention and diagnosis of rare diseases. The study, titled “Ethical, Legal, and Social Assessment of AI-Based Technologies for Prevention and Diagnosis of Rare Diseases in Health Technology Assessment Processes” and published in Healthcare, identifies 13 critical issues not fully addressed by the current EUnetHTA Core Model® and proposes new guiding questions for future evaluations.

Conducted by researchers from Università Cattolica del Sacro Cuore in Rome and leading European HTA experts, the study uses a mixed-methods approach, including a literature review and expert focus group, to expand on the existing Core Model. The result is a novel set of ethical, legal, and social criteria tailored for AI-driven diagnostics and tools targeting ultra-rare conditions, such as childhood melanoma.

Why is the current HTA model insufficient for rare diseases and AI?

The EUnetHTA Core Model®, widely adopted across Europe for evaluating new health technologies, provides a modular structure for assessing clinical, economic, ethical, legal, and social implications. While well-established for common interventions, the model falls short when applied to rare diseases or AI-based systems. Rare diseases, by nature, lack large patient datasets and validated metrics, while AI tools introduce rapidly evolving algorithmic behaviors, black-box decision-making, and unique challenges to accountability and fairness.

The study identifies five ethical issues and one legal issue specific to rare diseases. The five ethical issues are the limited understanding of disease progression, insufficient clinical data, the absence of validated assessment tools, the risk of over- or under-diagnosis, and the unsuitability of standard economic metrics such as the incremental cost-effectiveness ratio (ICER) for rare disease technologies. From a legal perspective, the study highlights the threat of defensive medicine, where clinicians might overuse AI tools to avoid liability rather than to prioritize patient benefit.
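For context, the incremental cost-effectiveness ratio referenced above is conventionally calculated as the difference in cost between a new technology and its comparator divided by the difference in health effect, typically expressed in quality-adjusted life years. The textbook form below is shown for orientation only and is not reproduced from the study.

\[
\text{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
\]

In rare diseases, both the cost and effect differences must be estimated from very small patient populations, which is one commonly cited reason such metrics sit uneasily with these technologies.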

Each of these issues is accompanied by a structured question designed to guide HTA evaluations. For example, evaluators are encouraged to ask whether the natural history of the disease is understood, whether effective assessment instruments exist, and whether defensive practices could be exacerbated by new technologies. These targeted inquiries provide a framework for navigating uncertainty while prioritizing ethical patient care in underrepresented populations.

What unique risks do AI-based technologies introduce?

Beyond the challenges tied to rare diseases, the study delineates seven issues specific to AI applications in healthcare—three ethical, two legal, and two social. Ethical concerns include algorithmic discrimination due to biased training data, lack of explicability in AI decision-making processes, and the environmental burden of high-performance computing systems powering AI tools.
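To make the discrimination concern concrete, the minimal sketch below (using entirely hypothetical subgroups and data, not material from the study) shows one way an evaluator might compare a diagnostic model's sensitivity across patient subgroups; a gap of the kind produced here is the sort of signal the proposed guiding questions are meant to surface.

```python
# Minimal sketch with hypothetical data: auditing a diagnostic classifier's
# sensitivity per subgroup to surface possible algorithmic bias.
# A real HTA-grade audit would use validated cohorts and statistical testing.

from collections import defaultdict

# Each record: (subgroup, true_label, predicted_label); 1 = disease present
predictions = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

true_pos = defaultdict(int)    # correctly detected cases per subgroup
actual_pos = defaultdict(int)  # all true cases per subgroup

for subgroup, truth, pred in predictions:
    if truth == 1:
        actual_pos[subgroup] += 1
        if pred == 1:
            true_pos[subgroup] += 1

for subgroup in sorted(actual_pos):
    sensitivity = true_pos[subgroup] / actual_pos[subgroup]
    print(f"{subgroup}: sensitivity = {sensitivity:.2f}")

# A large gap between subgroups (here 0.67 vs 0.33) would flag the kind of
# discrimination risk the study asks HTA evaluators to probe.
```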

Legal risks center on ambiguous accountability when AI tools malfunction or issue incorrect recommendations. As these systems increasingly influence clinical decisions, it remains unclear whether developers, providers, or institutions would bear responsibility for adverse outcomes. Reimbursement challenges also loom large, as many AI systems lack clear pathways for financial integration into health systems.

From a societal perspective, the study emphasizes the need for transparent communication with patients about the involvement of AI in their diagnosis and care. Equally pressing are workforce implications, as automation may reshape clinical roles and potentially displace jobs, particularly in diagnostics, pathology, and radiology.

Each identified risk is paired with an actionable question, such as whether patients are informed about AI use, whether the technology could lead to job losses, and whether its environmental impact is justified by healthcare benefits. These additions, the authors argue, fill critical gaps in the current HTA model and support a more holistic evaluation of disruptive digital health tools.

How can these issues shape future HTA practices in Europe and beyond?

The researchers position their findings as a foundational upgrade to the EUnetHTA Core Model®, which underpins joint clinical assessments under Regulation (EU) 2021/2282. By explicitly incorporating the identified ethical, legal, and social risks into the HTA process, the framework enhances its capacity to evaluate novel technologies while safeguarding equity, transparency, and sustainability.

The study also addresses a larger methodological debate within the HTA community: whether new technologies require new frameworks or simply better use of existing tools. In response, the authors defend their proposal, noting that transparency in evaluation, especially in ethically ambiguous or legally uncertain contexts, reduces the likelihood of overlooking critical risks.

Their expanded question set is not intended to replace the Core Model but rather to augment it with forward-looking guidance. For example, the issue of explicability may intersect with existing risk-benefit assessments, but making it an explicit line of inquiry ensures evaluators confront the unique opacity of AI systems. Similarly, while fairness is a general concern in healthcare, algorithmic bias demands a distinct analytical approach.

The recommendations are drawn from both academic literature and a focused consultation with six HTA experts across five countries. Though the authors acknowledge the limited scale of this focus group and the European orientation of the findings, they argue that the framework is adaptable to broader contexts, including other regions and health systems with parallel regulatory challenges.

The study aligns with broader EU initiatives such as the MELCAYA project, which aims to improve diagnostics for childhood melanoma, and EU Regulation 2024/1689 on AI, which mandates stricter controls over algorithmic systems in healthcare. In this policy environment, HTA frameworks must adapt not only to new scientific capabilities but also to new ethical standards and legal requirements.
