The potential applications of large language models (LLMs)—a form of generative artificial intelligence (AI)—in medicine and health care are increasingly being explored by medical practitioners and health care researchers.
This paper considers the ethical implications of LLMs for medical practitioners in their delivery of clinical care through the ethical framework of principlism.
It finds that, regarding beneficence, LLMs can improve patient outcomes both by supporting the administrative tasks that surround patient care and by directly informing clinical care. At the same time, LLMs can cause patient harm through various mechanisms, meaning non-maleficence would prevent their deployment in the absence of sufficient risk mitigation. Regarding autonomy, medical practitioners must inform patients if their medical care will be influenced by LLMs for their consent to be informed, and alternative care uninfluenced by LLMs must be available for patients who withhold such consent. Finally, regarding justice, LLMs could promote the standardisation of care delivered by individual medical practitioners by mitigating any biases those practitioners harbour and by protecting against human factors, while also up-skilling medical practitioners in low-resource settings to reduce global health disparities.
Accordingly, this paper finds a strong case for incorporating LLMs into clinical practice and, if their risk of patient harm is sufficiently mitigated, such incorporation may be ethically required, at least according to principlism.