Strategies for the Analysis and Elimination of Hallucinations in Artificial Intelligence Generated Medical Knowledge

IF 3.5 · CAS Region 2 (Medicine) · JCR Q1, MEDICINE, GENERAL & INTERNAL
Fengxian Chen, Yan Li, Yaolong Chen, Zhaoxiang Bian, La Duo, Qingguo Zhou, Lu Zhang
Citations: 0

Abstract


The application of artificial intelligence (AI) in healthcare has become increasingly widespread, showing significant potential in assisting with diagnosis and treatment. However, generative AI (GAI) models often produce "hallucinations"—plausible but factually incorrect or unsubstantiated outputs—that threaten clinical decision-making and patient safety. This article systematically analyzes the causes of hallucinations across data, training, and inference dimensions and proposes multi-dimensional strategies to mitigate them. Our findings yield three critical conclusions: technical optimization through knowledge graphs and multi-stage training significantly reduces hallucinations; clinical integration through expert feedback loops and multidisciplinary workflows enhances output reliability; and robust evaluation systems that combine adversarial testing and real-world validation substantially improve factual accuracy in clinical settings. These integrated strategies underscore the importance of harmonizing technical advancements with clinical governance to develop trustworthy, patient-centric AI systems.
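The abstract names knowledge-graph grounding as one technical strategy against hallucination. As a minimal illustrative sketch (not the paper's actual method, and using a hypothetical toy knowledge graph), generated factual claims can be checked as triples against a curated graph, with unsupported claims flagged for expert review rather than surfaced to clinicians:

```python
# Illustrative sketch only: flagging potential hallucinations by grounding
# generated (subject, relation, object) claims in a curated knowledge graph.
# The triples below are a hypothetical toy example, not real clinical guidance.

KG = {
    ("metformin", "treats", "type 2 diabetes"),
    ("aspirin", "treats", "pain"),
    ("aspirin", "interacts_with", "warfarin"),
}

def check_claims(claims):
    """Partition claim triples into (supported, unsupported).

    A claim absent from the knowledge graph is treated as a potential
    hallucination and would be routed to an expert feedback loop
    instead of being shown as fact.
    """
    supported = [c for c in claims if c in KG]
    unsupported = [c for c in claims if c not in KG]
    return supported, unsupported

# Example: one grounded claim, one fabricated claim.
generated = [
    ("metformin", "treats", "type 2 diabetes"),  # present in KG
    ("aspirin", "treats", "type 2 diabetes"),    # not in KG -> flagged
]
ok, flagged = check_claims(generated)
```

In a real system the graph lookup would be replaced by queries against a maintained medical knowledge base, and flagged outputs would feed the expert-review and multidisciplinary workflows the article describes.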

Source journal: Journal of Evidence‐Based Medicine (MEDICINE, GENERAL & INTERNAL)
CiteScore: 11.20 · Self-citation rate: 1.40% · Annual output: 42 articles
Journal introduction: The Journal of Evidence-Based Medicine (EBM) is an esteemed international healthcare and medical decision-making journal, dedicated to publishing groundbreaking research outcomes in evidence-based decision-making, research, practice, and education. Serving as the official English-language journal of the Cochrane China Centre and West China Hospital of Sichuan University, it welcomes editorials, commentaries, and systematic reviews encompassing topics such as clinical trials, policy, drug and patient safety, education, and knowledge translation.