Challenges and barriers of using large language models (LLM) such as ChatGPT for diagnostic medicine with a focus on digital pathology – a recent scoping review

Impact Factor 2.4 · CAS Tier 3 (Medicine) · JCR Q2 (Pathology)
Ehsan Ullah, Anil Parwani, Mirza Mansoor Baig, Rajendra Singh
{"title":"Challenges and barriers of using large language models (LLM) such as ChatGPT for diagnostic medicine with a focus on digital pathology – a recent scoping review","authors":"Ehsan Ullah, Anil Parwani, Mirza Mansoor Baig, Rajendra Singh","doi":"10.1186/s13000-024-01464-7","DOIUrl":null,"url":null,"abstract":"The integration of large language models (LLMs) like ChatGPT in diagnostic medicine, with a focus on digital pathology, has garnered significant attention. However, understanding the challenges and barriers associated with the use of LLMs in this context is crucial for their successful implementation. A scoping review was conducted to explore the challenges and barriers of using LLMs, in diagnostic medicine with a focus on digital pathology. A comprehensive search was conducted using electronic databases, including PubMed and Google Scholar, for relevant articles published within the past four years. The selected articles were critically analyzed to identify and summarize the challenges and barriers reported in the literature. The scoping review identified several challenges and barriers associated with the use of LLMs in diagnostic medicine. These included limitations in contextual understanding and interpretability, biases in training data, ethical considerations, impact on healthcare professionals, and regulatory concerns. Contextual understanding and interpretability challenges arise due to the lack of true understanding of medical concepts and lack of these models being explicitly trained on medical records selected by trained professionals, and the black-box nature of LLMs. Biases in training data pose a risk of perpetuating disparities and inaccuracies in diagnoses. Ethical considerations include patient privacy, data security, and responsible AI use. The integration of LLMs may impact healthcare professionals’ autonomy and decision-making abilities. Regulatory concerns surround the need for guidelines and frameworks to ensure safe and ethical implementation. The scoping review highlights the challenges and barriers of using LLMs in diagnostic medicine with a focus on digital pathology. Understanding these challenges is essential for addressing the limitations and developing strategies to overcome barriers. It is critical for health professionals to be involved in the selection of data and fine tuning of the models. Further research, validation, and collaboration between AI developers, healthcare professionals, and regulatory bodies are necessary to ensure the responsible and effective integration of LLMs in diagnostic medicine.","PeriodicalId":11237,"journal":{"name":"Diagnostic Pathology","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Diagnostic Pathology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13000-024-01464-7","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PATHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

The integration of large language models (LLMs) such as ChatGPT in diagnostic medicine, with a focus on digital pathology, has garnered significant attention. However, understanding the challenges and barriers associated with the use of LLMs in this context is crucial for their successful implementation. A scoping review was conducted to explore the challenges and barriers of using LLMs in diagnostic medicine, with a focus on digital pathology. A comprehensive search was conducted using electronic databases, including PubMed and Google Scholar, for relevant articles published within the past four years. The selected articles were critically analyzed to identify and summarize the challenges and barriers reported in the literature. The scoping review identified several challenges and barriers associated with the use of LLMs in diagnostic medicine. These included limitations in contextual understanding and interpretability, biases in training data, ethical considerations, impact on healthcare professionals, and regulatory concerns. Challenges in contextual understanding and interpretability arise because these models lack a true understanding of medical concepts, are not explicitly trained on medical records curated by trained professionals, and operate as black boxes. Biases in training data pose a risk of perpetuating disparities and inaccuracies in diagnoses. Ethical considerations include patient privacy, data security, and responsible AI use. The integration of LLMs may affect healthcare professionals' autonomy and decision-making abilities. Regulatory concerns center on the need for guidelines and frameworks to ensure safe and ethical implementation. The scoping review highlights the challenges and barriers of using LLMs in diagnostic medicine, with a focus on digital pathology. Understanding these challenges is essential for addressing the limitations and developing strategies to overcome barriers. It is critical for health professionals to be involved in the selection of data and the fine-tuning of the models. Further research, validation, and collaboration between AI developers, healthcare professionals, and regulatory bodies are necessary to ensure the responsible and effective integration of LLMs in diagnostic medicine.
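The review does not publish its exact search strategy. For readers who want to run a comparable PubMed search programmatically, the sketch below uses Biopython's Entrez utilities; the query string, date window, and result cap are illustrative assumptions, not the authors' protocol.

# Minimal sketch of a PubMed search similar to the one described in the abstract,
# using Biopython's Entrez interface. Query terms and dates are assumptions.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

# Hypothetical query combining LLM and digital-pathology terms
query = (
    '("large language model" OR "ChatGPT") AND '
    '("digital pathology" OR "diagnostic medicine")'
)

# Restrict to roughly the four-year window mentioned in the abstract (assumed dates)
handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",
    mindate="2020/01/01",
    maxdate="2024/02/27",
    retmax=200,
)
record = Entrez.read(handle)
handle.close()

pmids = record["IdList"]
print(f"Found {record['Count']} records; retrieved {len(pmids)} PMIDs")

# Pull MEDLINE-formatted records for title/abstract screening
if pmids:
    fetch = Entrez.efetch(db="pubmed", id=",".join(pmids),
                          rettype="medline", retmode="text")
    print(fetch.read()[:500])  # preview of the first record
    fetch.close()

A Google Scholar search, as also mentioned in the abstract, has no comparable official API and would typically be screened manually.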
Source Journal
Diagnostic Pathology (Medicine, Pathology)
CiteScore: 4.60
Self-citation rate: 0.00%
Articles published: 93
Review turnaround: 1 month
Journal description: Diagnostic Pathology is an open access, peer-reviewed, online journal that considers research in surgical and clinical pathology, immunology, and biology, with a special focus on cutting-edge approaches in diagnostic pathology and tissue-based therapy. The journal covers all aspects of surgical pathology, including classic diagnostic pathology, prognosis-related diagnosis (tumor stages and prognosis markers such as MIB percentage and hormone receptors), and therapy-related findings. The journal also focuses on the technological aspects of pathology, including molecular biology techniques, morphometry (stereology, DNA analysis, syntactic structure analysis), communication (telecommunication, virtual microscopy, virtual pathology institutions), and electronic education and quality assurance (for example, interactive publication and online references with automated updating).