Clinicians not morally obligated to reveal AI use in diagnosis, study argues
A new study questions the prevailing ethical belief that clinicians are morally obligated to tell patients when artificial intelligence systems assist in their diagnosis or treatment. The paper, "Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?" by bioethicist Joshua Hatherley, is forthcoming in the Journal of Medical Ethics and argues that the so-called “disclosure thesis” lacks convincing moral foundations and could ultimately do more harm than good.
The disclosure thesis holds that clinicians are ethically required to inform patients whenever a medical machine learning (ML) system has been used to support decision-making. This has been a dominant stance in the ethical and legal literature, with non-disclosure sometimes framed as deceit. Hatherley challenges this consensus by critically examining the four primary ethical arguments used to support it: the risk-based argument, the rights-based argument, the materiality argument, and the autonomy argument.
Do AI-assisted decisions pose unique or serious risks to patient safety that would ethically necessitate disclosure?
Proponents of the risk-based argument maintain that medical ML systems introduce significant dangers such as vulnerability to adversarial cyberattacks, poor generalizability in real clinical settings, overconfidence in predictions, and algorithmic bias. However, Hatherley contends these risks are overstated. Adversarial attacks, for instance, are more likely to be used for financial fraud than clinical sabotage and can be detected with high accuracy. Likewise, if generalizability and robustness problems are serious enough to endanger patients, they should preclude the system’s clinical use altogether; if they have been properly mitigated, they do not warrant patient-level disclosure. Algorithmic bias, while real, is not necessarily more dangerous than the implicit biases of human clinicians, which are rarely disclosed to patients. Thus, the paper argues that the ethical burden for addressing these risks lies with developers, hospitals, and regulators, not with individual clinicians disclosing every tool they use.
The study also assesses whether non-disclosure violates patients’ moral rights, particularly a proposed “right to refuse” medical interventions involving AI. The strong version of this right would entitle patients to reject any diagnostic or treatment process that involves machine learning assistance, on the grounds that they have rational concerns about such systems. However, Hatherley points out the impracticality and overreach of such a position: the criteria for what counts as a “rational concern” are so broad that they could apply to numerous aspects of healthcare, including non-AI systems such as managed care. If every healthcare practice that might raise rational concerns required disclosure and opt-out options, the healthcare system would become unmanageable. Moreover, as AI becomes further embedded in standard diagnostics such as imaging or electronic health records, upholding this right would necessitate duplicate infrastructure that few health systems can support. The rights-based argument is therefore deemed too expansive and untenable to justify mandatory disclosure.
Is knowledge of AI involvement relevant enough to affect patients’ decisions about their care, and thus necessary for informed consent?
The materiality argument relies heavily on the claim that many patients would opt out of AI-assisted care if informed. Hatherley rebuts this by distinguishing between what people prefer and what is ethically required. He notes that many influences on clinical judgment, including outdated knowledge retained from training, colleague advice, and journal articles, affect decisions without being disclosed. Furthermore, the assumption that patients exhibit “algorithm aversion” lacks empirical grounding in this context. Even if some patients would react negatively to knowing AI tools were used, this does not automatically make such information ethically material. Materiality is judged by a “reasonable patient” standard, and algorithm aversion, an irrational bias, does not meet that threshold.
The final argument reviewed concerns patient autonomy and the risk that opaque, value-laden algorithms may undermine shared decision-making between clinicians and patients. Hatherley acknowledges that some AI systems embed ethical values, such as prioritizing longevity over quality of life. He maintains, however, that as long as clinicians remain the decision-makers and use AI only as guidance, shared decision-making and autonomy are not compromised. Embedded values also exist in medical textbooks and clinical guidelines, yet there is no expectation that clinicians disclose every influence on their judgment. On the issue of opacity, Hatherley concedes that clinicians may struggle to explain AI-generated outputs. However, this again points to a system design issue: opacity should be addressed before implementation, not passed to patients as a matter of disclosure. If an AI tool is so opaque that it undermines a clinician’s ability to explain their reasoning, the ethical failing lies in the use of that tool, not in the failure to disclose its use.
More importantly, the study warns that enforcing mandatory disclosure could have unintended consequences. It may shift moral and legal responsibility away from system designers and institutions onto individual clinicians. It could also create a false sense of ethical clearance, allowing AI use to continue unchecked under the guise of transparency. Patients may be harmed not by lack of disclosure, but by premature, unsafe, or inappropriate system deployment that disclosure fails to fix.
FIRST PUBLISHED IN: Devdiscourse

