Generative AI/LLMs for Plain Language Medical Information for Patients, Caregivers and General Public: Opportunities, Risks and Ethics.

Impact Factor: 2.0 | JCR: Q2, Medicine, General & Internal | CAS: Tier 3, Medicine
Patient Preference and Adherence | Pub Date: 2025-07-31 | eCollection Date: 2025-01-01 | DOI: 10.2147/PPA.S527922
Avishek Pal, Tenzin Wangmo, Trishna Bharadia, Mithi Ahmed-Richards, Mayank Bhailalbhai Bhanderi, Rohitbhai Kachhadiya, Samuel S Allemann, Bernice Simone Elger
Journal: Patient Preference and Adherence, Volume 19, Pages 2227-2249 | Publication type: Journal Article | PubModel: eCollection
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12325106/pdf/
Citations: 0

Abstract

Generative artificial intelligence (gAI) tools and large language models (LLMs) are gaining popularity among non-specialist audiences (patients, caregivers, and the general public) as a source of plain language medical information. AI-based models have the potential to act as a convenient, customizable and easy-to-access source of information that can improve patients' self-care and health literacy and enable greater engagement with clinicians. However, serious negative outcomes could occur if these tools fail to provide reliable, relevant and understandable medical information. Herein, we review published findings on opportunities and risks associated with such use of gAI/LLMs. We reviewed 44 articles published between January 2023 and July 2024. From the included articles, we find a focus on readability and accuracy; however, only three studies involved actual patients. Responses were reported to be reasonably accurate and sufficiently readable and detailed. The most commonly reported risks were oversimplification, over-generalization, lower accuracy in response to complex questions, and lack of transparency regarding information sources. There are ethical concerns that overreliance/unsupervised reliance on gAI/LLMs could lead to the "humanizing" of these models and pose a risk to patient health equity, inclusiveness and data privacy. For these technologies to be truly transformative, they must become more transparent, have appropriate governance and monitoring, and incorporate feedback from healthcare professionals (HCPs), patients, and other experts. Uptake of these technologies will also need education and awareness among non-specialist audiences around their optimal use as sources of plain language medical information.


Source Journal

Patient Preference and Adherence (Medicine, General & Internal)
CiteScore: 3.60
Self-citation rate: 4.50%
Articles published per year: 354
Review turnaround: 6-12 weeks
Journal overview: Patient Preference and Adherence is an international, peer-reviewed, open access journal that focuses on the growing importance of patient preference and adherence throughout the therapeutic continuum. The journal is characterized by the rapid reporting of reviews, original research, modeling and clinical studies across all therapeutic areas. Patient satisfaction, acceptability, quality of life, compliance, persistence and their role in developing new therapeutic modalities and compounds to optimize clinical outcomes for existing disease states are major areas of interest for the journal. As of 1st April 2019, Patient Preference and Adherence will no longer consider meta-analyses for publication.