Implications of Large Language Models for Clinical Practice: Ethical Analysis Through the Principlism Framework

IF 2.1 | CAS Quartile 4 (Medicine) | JCR Q3, HEALTH CARE SCIENCES & SERVICES
Richard C. Armitage
{"title":"Implications of Large Language Models for Clinical Practice: Ethical Analysis Through the Principlism Framework","authors":"Richard C. Armitage","doi":"10.1111/jep.14250","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Introduction</h3>\n \n <p>The potential applications of large language models (LLMs)—a form of generative artificial intelligence (AI)—in medicine and health care are being increasingly explored by medical practitioners and health care researchers.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>This paper considers the ethical implications of LLMs for medical practitioners in their delivery of clinical care through the ethical framework of principlism.</p>\n </section>\n \n <section>\n \n <h3> Findings</h3>\n \n <p>It finds that, regarding beneficence, LLMs can improve patient outcomes through supporting administrative tasks that surround patient care, and by directly informing clinical care. Simultaneously, LLMs can cause patient harm through various mechanisms, meaning non-maleficence would prevent their deployment in the absence of sufficient risk mitigation. Regarding autonomy, medical practitioners must inform patients if their medical care will be influenced by LLMs for their consent to be informed, and alternative care uninfluenced by LLMs must be available for patients who withhold such consent. Finally, regarding justice, LLMs could promote the standardisation of care within individual medical practitioners by mitigating any biases harboured by those practitioners and by protecting against human factors, while also up-skilling existing medical practitioners in low-resource settings to reduce global health disparities.</p>\n </section>\n \n <section>\n \n <h3> Discussion</h3>\n \n <p>Accordingly, this paper finds a strong case for the incorporation of LLMs into clinical practice and, if their risk of patient harm is sufficiently mitigated, this incorporation might be ethically required, at least according to principlism.</p>\n </section>\n </div>","PeriodicalId":15997,"journal":{"name":"Journal of evaluation in clinical practice","volume":"31 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jep.14250","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of evaluation in clinical practice","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/jep.14250","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Introduction

The potential applications of large language models (LLMs)—a form of generative artificial intelligence (AI)—in medicine and health care are being increasingly explored by medical practitioners and health care researchers.

Methods

This paper considers the ethical implications of LLMs for medical practitioners in their delivery of clinical care through the ethical framework of principlism.

Findings

It finds that, regarding beneficence, LLMs can improve patient outcomes by supporting the administrative tasks that surround patient care and by directly informing clinical care. Simultaneously, LLMs can cause patient harm through various mechanisms, meaning that non-maleficence would prevent their deployment in the absence of sufficient risk mitigation. Regarding autonomy, medical practitioners must inform patients if their medical care will be influenced by LLMs for their consent to be informed, and alternative care uninfluenced by LLMs must be available to patients who withhold such consent. Finally, regarding justice, LLMs could promote the standardisation of care delivered by individual medical practitioners by mitigating any biases those practitioners harbour and by protecting against human factors, while also up-skilling existing medical practitioners in low-resource settings to reduce global health disparities.

Discussion

Accordingly, this paper finds a strong case for the incorporation of LLMs into clinical practice and, if their risk of patient harm is sufficiently mitigated, this incorporation might be ethically required, at least according to principlism.

Source journal: Journal of Evaluation in Clinical Practice

CiteScore: 4.80
Self-citation rate: 4.20%
Articles per year: 143
Review time: 3-8 weeks

Journal description: The Journal of Evaluation in Clinical Practice aims to promote the evaluation and development of clinical practice across medicine, nursing and the allied health professions. All aspects of health services research and public health policy analysis and debate are of interest to the Journal, whether studied from a population-based or individual patient-centred perspective. Of particular interest to the Journal are submissions on all aspects of clinical effectiveness and efficiency, including evidence-based medicine, clinical practice guidelines, clinical decision making, clinical services organisation, implementation and delivery, health economic evaluation, health process and outcome measurement, and new or improved methods (conceptual and statistical) for systematic inquiry into clinical practice. Papers may take a classical quantitative or qualitative approach to investigation (or may utilise both techniques), or may take the form of learned essays, structured/systematic reviews and critiques.