Strategies for the Analysis and Elimination of Hallucinations in Artificial Intelligence Generated Medical Knowledge

Fengxian Chen, Yan Li, Yaolong Chen, Zhaoxiang Bian, La Duo, Qingguo Zhou, Lu Zhang

Journal of Evidence-Based Medicine, 2025. DOI: 10.1111/jebm.70075
Abstract
The application of artificial intelligence (AI) in healthcare has become increasingly widespread, showing significant potential in assisting with diagnosis and treatment. However, generative AI (GAI) models often produce "hallucinations": plausible but factually incorrect or unsubstantiated outputs that threaten clinical decision-making and patient safety. This article systematically analyzes the causes of hallucinations across the data, training, and inference dimensions and proposes multi-dimensional strategies to mitigate them. Our analysis yields three key conclusions: technical optimization through knowledge graphs and multi-stage training significantly reduces hallucinations; clinical integration through expert feedback loops and multidisciplinary workflows enhances output reliability; and robust evaluation systems that combine adversarial testing with real-world validation substantially improve factual accuracy in clinical settings. Together, these strategies underscore the importance of harmonizing technical advances with clinical governance to develop trustworthy, patient-centric AI systems.
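To make the knowledge-graph strategy concrete, the following is a minimal, hypothetical sketch, not taken from the article, of one way generated medical claims, once parsed into subject-relation-object triples, could be checked against a curated knowledge graph so that unsupported claims are flagged for expert review instead of being presented as fact. All triples, example claims, and the `verify_claims` helper are illustrative assumptions.

```python
# Illustrative sketch only: grounding model output in a knowledge graph.
# The graph is represented as a set of (subject, relation, object) triples;
# in practice this would be a curated clinical knowledge base.
KNOWLEDGE_GRAPH = {
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "contraindicated_in", "severe renal impairment"),
    ("warfarin", "interacts_with", "aspirin"),
}

def verify_claims(claims):
    """Split generated triples into supported and unsupported lists.

    Unsupported claims are candidate hallucinations: they would be routed
    to expert review or suppressed rather than shown to a clinician as fact.
    """
    supported, unsupported = [], []
    for claim in claims:
        (supported if claim in KNOWLEDGE_GRAPH else unsupported).append(claim)
    return supported, unsupported

# Hypothetical model output, already parsed into triples upstream.
generated = [
    ("metformin", "treats", "type 2 diabetes"),   # grounded in the graph
    ("metformin", "treats", "hypertension"),      # absent from the graph
]

ok, flagged = verify_claims(generated)
print("supported:", ok)
print("needs expert review (possible hallucination):", flagged)
```

In a deployed system the exact-match lookup would be replaced by entity linking and graph queries, but the design point is the same one the abstract makes: route anything the knowledge base cannot substantiate into a human feedback loop rather than into the clinical record.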
About the Journal
The Journal of Evidence-Based Medicine (JEBM) is an international healthcare and medical decision-making journal dedicated to publishing groundbreaking research on evidence-based decision-making, research, practice, and education. As the official English-language journal of the Cochrane China Centre and West China Hospital of Sichuan University, it welcomes editorials, commentaries, and systematic reviews on topics such as clinical trials, policy, drug and patient safety, education, and knowledge translation.