The Clinicians' Guide to Large Language Models: A General Perspective With a Focus on Hallucinations.

IF 1.9 Q3 MEDICINE, RESEARCH & EXPERIMENTAL
Dimitri Roustan, François Bastardot
{"title":"大语言模型的临床医生指南:以幻觉为焦点的一般观点。","authors":"Dimitri Roustan, François Bastardot","doi":"10.2196/59823","DOIUrl":null,"url":null,"abstract":"<p><p>Large language models (LLMs) are artificial intelligence tools that have the prospect of profoundly changing how we practice all aspects of medicine. Considering the incredible potential of LLMs in medicine and the interest of many health care stakeholders for implementation into routine practice, it is therefore essential that clinicians be aware of the basic risks associated with the use of these models. Namely, a significant risk associated with the use of LLMs is their potential to create hallucinations. Hallucinations (false information) generated by LLMs arise from a multitude of causes, including both factors related to the training dataset as well as their auto-regressive nature. The implications for clinical practice range from the generation of inaccurate diagnostic and therapeutic information to the reinforcement of flawed diagnostic reasoning pathways, as well as a lack of reliability if not used properly. To reduce this risk, we developed a general technical framework for approaching LLMs in general clinical practice, as well as for implementation on a larger institutional scale.</p>","PeriodicalId":51757,"journal":{"name":"Interactive Journal of Medical Research","volume":"14 ","pages":"e59823"},"PeriodicalIF":1.9000,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11815294/pdf/","citationCount":"0","resultStr":"{\"title\":\"The Clinicians' Guide to Large Language Models: A General Perspective With a Focus on Hallucinations.\",\"authors\":\"Dimitri Roustan, François Bastardot\",\"doi\":\"10.2196/59823\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Large language models (LLMs) are artificial intelligence tools that have the prospect of profoundly changing how we practice all aspects of medicine. Considering the incredible potential of LLMs in medicine and the interest of many health care stakeholders for implementation into routine practice, it is therefore essential that clinicians be aware of the basic risks associated with the use of these models. Namely, a significant risk associated with the use of LLMs is their potential to create hallucinations. Hallucinations (false information) generated by LLMs arise from a multitude of causes, including both factors related to the training dataset as well as their auto-regressive nature. The implications for clinical practice range from the generation of inaccurate diagnostic and therapeutic information to the reinforcement of flawed diagnostic reasoning pathways, as well as a lack of reliability if not used properly. 
To reduce this risk, we developed a general technical framework for approaching LLMs in general clinical practice, as well as for implementation on a larger institutional scale.</p>\",\"PeriodicalId\":51757,\"journal\":{\"name\":\"Interactive Journal of Medical Research\",\"volume\":\"14 \",\"pages\":\"e59823\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2025-01-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11815294/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Interactive Journal of Medical Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2196/59823\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MEDICINE, RESEARCH & EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Interactive Journal of Medical Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/59823","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MEDICINE, RESEARCH & EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract


Large language models (LLMs) are artificial intelligence tools that have the prospect of profoundly changing how we practice all aspects of medicine. Considering the incredible potential of LLMs in medicine and the interest of many health care stakeholders in implementing them into routine practice, it is essential that clinicians be aware of the basic risks associated with the use of these models. In particular, a significant risk associated with the use of LLMs is their potential to generate hallucinations. Hallucinations (false information) generated by LLMs arise from multiple causes, including factors related to the training dataset as well as the models' auto-regressive nature. The implications for clinical practice range from the generation of inaccurate diagnostic and therapeutic information to the reinforcement of flawed diagnostic reasoning pathways, along with a lack of reliability if the models are not used properly. To reduce this risk, we developed a general technical framework for approaching LLMs in general clinical practice, as well as for implementation on a larger institutional scale.
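
The abstract attributes hallucinations in part to the auto-regressive nature of LLMs. The sketch below is a deliberately tiny, hypothetical illustration of that mechanism, not code from the paper: the vocabulary, probabilities, and function names are all invented. It shows how sampling each token from a conditional distribution, with no factual grounding step, can produce fluent but false statements.

```python
# Toy, self-contained sketch (not the authors' framework) of auto-regressive text
# generation: each token is sampled from a distribution conditioned only on the
# preceding tokens, so a fluent but factually wrong continuation can be drawn
# whenever the model assigns it noticeable probability. All values are invented.
import random

# Hypothetical next-token distributions keyed by the recent context.
NEXT_TOKEN_PROBS = {
    ("the", "recommended", "dose", "is"): {
        "500": 0.55,   # the "correct" continuation in this toy example
        "5000": 0.30,  # plausible-sounding but wrong: a sampled hallucination
        "50": 0.15,
    },
    ("500",): {"mg": 1.0},
    ("5000",): {"mg": 1.0},
    ("50",): {"mg": 1.0},
}


def sample_next(context):
    """Sample one token from the toy conditional distribution for this context."""
    dist = NEXT_TOKEN_PROBS.get(tuple(context))
    if dist is None:
        return None  # no continuation defined for this context: stop generating
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]


def generate(prompt, max_new_tokens=2):
    """Auto-regressively extend the prompt, one sampled token at a time."""
    output = list(prompt)
    for _ in range(max_new_tokens):
        # Condition on the last few tokens (a crude stand-in for attention over context).
        token = sample_next(output[-4:]) or sample_next(output[-1:])
        if token is None:
            break
        output.append(token)
    return output


if __name__ == "__main__":
    for _ in range(5):
        print(" ".join(generate(["the", "recommended", "dose", "is"])))
    # The wrong continuation "5000 mg" is drawn with probability 0.30 on each run:
    # fluent, confidently produced, and false -- the failure mode the abstract
    # describes as a hallucination.
```

Real LLMs operate over vastly larger vocabularies and learned distributions, but the generation loop is conceptually the same, which is why plausible-sounding errors can surface without any warning signal.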

Source journal: Interactive Journal of Medical Research (Medicine, Research & Experimental)
Self-citation rate: 0.00%
Articles per year: 45
Review time: 12 weeks