Artificial intelligence for clinical reasoning: the reliability challenge and path to evidence-based practice.

Impact Factor 7.3 · CAS Tier 4 (Medicine) · JCR Q1 (Medicine, General & Internal)
He Xu, Yueqing Wang, Yangqin Xun, Ruitai Shao, Yang Jiao
{"title":"临床推理的人工智能:可靠性挑战和循证实践之路。","authors":"He Xu, Yueqing Wang, Yangqin Xun, Ruitai Shao, Yang Jiao","doi":"10.1093/qjmed/hcaf114","DOIUrl":null,"url":null,"abstract":"<p><p>The integration of generative artificial intelligence (AI), particularly large language models (LLMs), into clinical reasoning heralds transformative potential for medical practice. However, their capacity to authentically replicate the complexity of human clinical decision-making remains uncertain-a challenge defined here as the reliability challenge. While studies demonstrate LLMs' ability to pass medical licensing exams and achieve diagnostic accuracy comparable to physicians, critical limitations persist. Crucially, LLMs mimic reasoning patterns rather than executing genuine logical reasoning, and their reliance on outdated or non-regional data undermines clinical relevance. To bridge this gap, we advocate for a synergistic paradigm where physicians leverage advanced clinical expertise while AI evolves toward transparency and interpretability. This requires AI systems to integrate real-time, context-specific evidence, align with local healthcare constraints, and adopt explainable architectures (e.g. multi-step reasoning frameworks or clinical knowledge graphs) to demystify decision pathways. Ultimately, reliable AI for clinical reasoning hinges on harmonizing technological innovation with human oversight, ensuring ethical adherence to beneficence and non-maleficence while advancing evidence-based, patient-centered care.</p>","PeriodicalId":20806,"journal":{"name":"QJM: An International Journal of Medicine","volume":" ","pages":""},"PeriodicalIF":7.3000,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence for clinical reasoning: the reliability challenge and path to evidence-based practice.\",\"authors\":\"He Xu, Yueqing Wang, Yangqin Xun, Ruitai Shao, Yang Jiao\",\"doi\":\"10.1093/qjmed/hcaf114\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The integration of generative artificial intelligence (AI), particularly large language models (LLMs), into clinical reasoning heralds transformative potential for medical practice. However, their capacity to authentically replicate the complexity of human clinical decision-making remains uncertain-a challenge defined here as the reliability challenge. While studies demonstrate LLMs' ability to pass medical licensing exams and achieve diagnostic accuracy comparable to physicians, critical limitations persist. Crucially, LLMs mimic reasoning patterns rather than executing genuine logical reasoning, and their reliance on outdated or non-regional data undermines clinical relevance. To bridge this gap, we advocate for a synergistic paradigm where physicians leverage advanced clinical expertise while AI evolves toward transparency and interpretability. This requires AI systems to integrate real-time, context-specific evidence, align with local healthcare constraints, and adopt explainable architectures (e.g. multi-step reasoning frameworks or clinical knowledge graphs) to demystify decision pathways. 
Ultimately, reliable AI for clinical reasoning hinges on harmonizing technological innovation with human oversight, ensuring ethical adherence to beneficence and non-maleficence while advancing evidence-based, patient-centered care.</p>\",\"PeriodicalId\":20806,\"journal\":{\"name\":\"QJM: An International Journal of Medicine\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":7.3000,\"publicationDate\":\"2025-05-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"QJM: An International Journal of Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1093/qjmed/hcaf114\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"QJM: An International Journal of Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/qjmed/hcaf114","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Citations: 0

Abstract


The integration of generative artificial intelligence (AI), particularly large language models (LLMs), into clinical reasoning heralds transformative potential for medical practice. However, their capacity to authentically replicate the complexity of human clinical decision-making remains uncertain, a challenge defined here as the reliability challenge. While studies demonstrate LLMs' ability to pass medical licensing exams and achieve diagnostic accuracy comparable to physicians, critical limitations persist. Crucially, LLMs mimic reasoning patterns rather than executing genuine logical reasoning, and their reliance on outdated or non-regional data undermines clinical relevance. To bridge this gap, we advocate for a synergistic paradigm where physicians leverage advanced clinical expertise while AI evolves toward transparency and interpretability. This requires AI systems to integrate real-time, context-specific evidence, align with local healthcare constraints, and adopt explainable architectures (e.g. multi-step reasoning frameworks or clinical knowledge graphs) to demystify decision pathways. Ultimately, reliable AI for clinical reasoning hinges on harmonizing technological innovation with human oversight, ensuring ethical adherence to beneficence and non-maleficence while advancing evidence-based, patient-centered care.
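
The abstract's call for explainable, evidence-grounded architectures can be made concrete. Below is a minimal Python sketch, not from the article itself, of one possible reading of a "multi-step reasoning framework" combined with real-time, region-specific evidence integration. The retriever (retrieve_guidelines) and model wrapper (llm_generate) are hypothetical placeholders, not any specific library's API; the point is the structure: every intermediate conclusion carries an explicit evidence trail that a physician can inspect.

    # Minimal sketch of a multi-step, evidence-grounded reasoning loop.
    # retrieve_guidelines() and llm_generate() are hypothetical stubs to
    # be replaced with a real retrieval backend and model client.
    from dataclasses import dataclass

    @dataclass
    class ReasoningStep:
        question: str        # sub-question the system poses to itself
        evidence: list[str]  # guideline passages retrieved for this step
        conclusion: str      # intermediate conclusion, traceable to evidence

    def retrieve_guidelines(query: str, region: str) -> list[str]:
        """Hypothetical retriever over a current, local guideline corpus.
        Filtering by region addresses the abstract's concern that outdated
        or non-regional data undermines clinical relevance."""
        raise NotImplementedError("plug in a real retrieval backend")

    def llm_generate(prompt: str) -> str:
        """Hypothetical wrapper around an LLM completion call."""
        raise NotImplementedError("plug in a real model client")

    def clinical_reasoning(case: str, region: str,
                           max_steps: int = 3) -> list[ReasoningStep]:
        """Decompose a case into sub-questions, grounding each step in
        retrieved evidence so the decision pathway stays inspectable."""
        steps: list[ReasoningStep] = []
        context = case
        for _ in range(max_steps):
            sub_q = llm_generate(
                f"State the next sub-question for this case:\n{context}")
            evidence = retrieve_guidelines(sub_q, region)
            conclusion = llm_generate(
                f"Answer using ONLY this evidence:\n{evidence}\n"
                f"Question: {sub_q}")
            steps.append(ReasoningStep(sub_q, evidence, conclusion))
            context += f"\n{sub_q}: {conclusion}"
        return steps  # each step is reviewable by the physician

A clinical knowledge graph could stand in for the text retriever as the evidence source; the design choice is the same either way: conclusions are tied to named evidence rather than emitted as an opaque end-to-end answer, which is what keeps human oversight practicable.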

Source journal
CiteScore: 6.90
Self-citation rate: 5.30%
Articles per year: 263
Review time: 4-8 weeks
Journal description: QJM is a long-established general medical journal and a prominent source of knowledge in internal medicine. Committed to advancing medical science and practice, it publishes rigorously reviewed articles on a monthly schedule. Article types include original papers contributing innovative research, editorials offering expert opinion, reviews providing comprehensive analyses of specific topics, and commentary papers intended to open discussion of controversial subjects, alongside a dedicated section for reader correspondence.