He Xu, Yueqing Wang, Yangqin Xun, Ruitai Shao, Yang Jiao
{"title":"临床推理的人工智能:可靠性挑战和循证实践之路。","authors":"He Xu, Yueqing Wang, Yangqin Xun, Ruitai Shao, Yang Jiao","doi":"10.1093/qjmed/hcaf114","DOIUrl":null,"url":null,"abstract":"<p><p>The integration of generative artificial intelligence (AI), particularly large language models (LLMs), into clinical reasoning heralds transformative potential for medical practice. However, their capacity to authentically replicate the complexity of human clinical decision-making remains uncertain-a challenge defined here as the reliability challenge. While studies demonstrate LLMs' ability to pass medical licensing exams and achieve diagnostic accuracy comparable to physicians, critical limitations persist. Crucially, LLMs mimic reasoning patterns rather than executing genuine logical reasoning, and their reliance on outdated or non-regional data undermines clinical relevance. To bridge this gap, we advocate for a synergistic paradigm where physicians leverage advanced clinical expertise while AI evolves toward transparency and interpretability. This requires AI systems to integrate real-time, context-specific evidence, align with local healthcare constraints, and adopt explainable architectures (e.g. multi-step reasoning frameworks or clinical knowledge graphs) to demystify decision pathways. Ultimately, reliable AI for clinical reasoning hinges on harmonizing technological innovation with human oversight, ensuring ethical adherence to beneficence and non-maleficence while advancing evidence-based, patient-centered care.</p>","PeriodicalId":20806,"journal":{"name":"QJM: An International Journal of Medicine","volume":" ","pages":""},"PeriodicalIF":7.3000,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence for clinical reasoning: the reliability challenge and path to evidence-based practice.\",\"authors\":\"He Xu, Yueqing Wang, Yangqin Xun, Ruitai Shao, Yang Jiao\",\"doi\":\"10.1093/qjmed/hcaf114\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The integration of generative artificial intelligence (AI), particularly large language models (LLMs), into clinical reasoning heralds transformative potential for medical practice. However, their capacity to authentically replicate the complexity of human clinical decision-making remains uncertain-a challenge defined here as the reliability challenge. While studies demonstrate LLMs' ability to pass medical licensing exams and achieve diagnostic accuracy comparable to physicians, critical limitations persist. Crucially, LLMs mimic reasoning patterns rather than executing genuine logical reasoning, and their reliance on outdated or non-regional data undermines clinical relevance. To bridge this gap, we advocate for a synergistic paradigm where physicians leverage advanced clinical expertise while AI evolves toward transparency and interpretability. This requires AI systems to integrate real-time, context-specific evidence, align with local healthcare constraints, and adopt explainable architectures (e.g. multi-step reasoning frameworks or clinical knowledge graphs) to demystify decision pathways. 
Ultimately, reliable AI for clinical reasoning hinges on harmonizing technological innovation with human oversight, ensuring ethical adherence to beneficence and non-maleficence while advancing evidence-based, patient-centered care.</p>\",\"PeriodicalId\":20806,\"journal\":{\"name\":\"QJM: An International Journal of Medicine\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":7.3000,\"publicationDate\":\"2025-05-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"QJM: An International Journal of Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1093/qjmed/hcaf114\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"QJM: An International Journal of Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/qjmed/hcaf114","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Artificial intelligence for clinical reasoning: the reliability challenge and path to evidence-based practice.
The integration of generative artificial intelligence (AI), particularly large language models (LLMs), into clinical reasoning heralds transformative potential for medical practice. However, their capacity to authentically replicate the complexity of human clinical decision-making remains uncertain, a challenge defined here as the reliability challenge. While studies demonstrate LLMs' ability to pass medical licensing exams and achieve diagnostic accuracy comparable to that of physicians, critical limitations persist. Crucially, LLMs mimic reasoning patterns rather than executing genuine logical reasoning, and their reliance on outdated or non-regional data undermines clinical relevance. To bridge this gap, we advocate a synergistic paradigm in which physicians leverage advanced clinical expertise while AI evolves toward transparency and interpretability. This requires AI systems to integrate real-time, context-specific evidence, align with local healthcare constraints, and adopt explainable architectures (e.g. multi-step reasoning frameworks or clinical knowledge graphs) to demystify decision pathways. Ultimately, reliable AI for clinical reasoning hinges on harmonizing technological innovation with human oversight, ensuring ethical adherence to beneficence and non-maleficence while advancing evidence-based, patient-centered care.
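To make the abstract's call for "explainable architectures" concrete, the following is a minimal sketch, not drawn from the article itself, of a rule-based traversal over a toy clinical knowledge graph that logs every inference hop. All concepts, relations, the KNOWLEDGE_GRAPH structure, and the glucose threshold are hypothetical assumptions for illustration only, not a validated clinical resource.

```python
from dataclasses import dataclass, field

# Toy edge list for illustration: (source concept, relation) -> target concepts.
# Every entry here is an invented example, not validated clinical knowledge.
KNOWLEDGE_GRAPH = {
    ("polyuria", "suggests"): ["hyperglycemia"],
    ("polydipsia", "suggests"): ["hyperglycemia"],
    ("hyperglycemia", "confirmed_by"): ["fasting plasma glucose >= 7.0 mmol/L"],
    ("hyperglycemia", "supports_diagnosis"): ["diabetes mellitus"],
}

@dataclass
class ReasoningTrace:
    """Accumulates every inference hop so the decision pathway stays auditable."""
    steps: list = field(default_factory=list)

    def infer(self, findings):
        frontier = list(findings)
        visited = set()
        conclusions = set()
        while frontier:
            concept = frontier.pop()
            if concept in visited:  # avoid re-walking shared intermediate concepts
                continue
            visited.add(concept)
            for (source, relation), targets in KNOWLEDGE_GRAPH.items():
                if source != concept:
                    continue
                for target in targets:
                    # Log the hop before acting on it: this is the explainability step.
                    self.steps.append(f"{source} --{relation}--> {target}")
                    if relation == "supports_diagnosis":
                        conclusions.add(target)
                    else:
                        frontier.append(target)
        return conclusions

trace = ReasoningTrace()
diagnoses = trace.infer(["polyuria", "polydipsia"])
print("Candidate diagnoses:", diagnoses)
for step in trace.steps:  # the audit trail a clinician could review step by step
    print("  ", step)
```

Printing trace.steps yields a chain such as "polydipsia --suggests--> hyperglycemia", the kind of inspectable decision pathway the authors argue clinical AI should expose, in contrast to an opaque end-to-end prediction.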
Journal introduction:
QJM, a renowned general medical journal, has long been a prominent source of knowledge in internal medicine. With a steadfast commitment to advancing medical science and practice, it publishes rigorously reviewed articles.
Published monthly, QJM encompasses a wide range of article types: original papers contributing innovative research, editorials offering expert opinions, and reviews providing comprehensive analyses of specific topics. The journal also publishes commentary papers aimed at opening discussion on controversial subjects and devotes a dedicated section to reader correspondence.
In summary, QJM's standing stems from its enduring presence in the medical community, its consistent publication schedule, and a diverse range of content designed to inform and engage readers.