A systematic review of ethical considerations of large language models in healthcare and medicine.

IF 3.2 | Q1 | HEALTH CARE SCIENCES & SERVICES
Frontiers in Digital Health | Pub Date: 2025-09-11 | eCollection Date: 2025-01-01 | DOI: 10.3389/fdgth.2025.1653631
Muhammad Fareed, Madeeha Fatima, Jamal Uddin, Adeel Ahmed, Muhammad Awais Sattar
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12460403/pdf/
Citations: 0

Abstract



The rapid integration of large language models (LLMs) into healthcare offers significant potential for improving diagnosis, treatment planning, and patient engagement. However, it also presents serious ethical challenges that remain incompletely addressed. In this review, we analyzed 27 peer-reviewed studies published between 2017 and 2025 across four major open-access databases, using strict eligibility criteria, robust synthesis methods, and established guidelines to explicitly examine the ethical aspects of deploying LLMs in clinical settings. We explore four key aspects: the main ethical issues arising from the use of LLMs in healthcare, the prevalent model architectures employed in ethical analyses, the healthcare application domains that are most frequently scrutinized, and the publication and bibliographic patterns characterizing this literature. Our synthesis reveals that bias and fairness (n = 7, 25.9%) are the most frequently discussed concerns, followed by safety, reliability, transparency, accountability, and privacy, and that the GPT family predominates (n = 14, 51.8%) among examined models. While privacy protection and bias mitigation received notable attention in the literature, no existing review has systematically addressed the comprehensive ethical issues surrounding LLMs. Most previous studies focus narrowly on specific clinical subdomains and lack a comprehensive methodology. As a systematic mapping of open-access literature, this synthesis identifies dominant ethical patterns, but it is not exhaustive of all ethical work on LLMs in healthcare. We also synthesize the identified challenges, outline future research directions, and propose a provisional ethical integration framework to guide clinicians, developers, and policymakers in the responsible integration of LLMs into clinical workflows.

Source journal: Frontiers in Digital Health
CiteScore: 4.20
Self-citation rate: 0.00%
Review time: 13 weeks