ArgMed-Agents: Explainable Clinical Decision Reasoning with Large Language Models via Argumentation Schemes

Shengxin Hong, Liang Xiao, Xin Zhang, Jianxia Chen
{"title":"ArgMed-Agents: Explainable Clinical Decision Reasoning with Large Language Models via Argumentation Schemes","authors":"Shengxin Hong, Liang Xiao, Xin Zhang, Jianxia Chen","doi":"arxiv-2403.06294","DOIUrl":null,"url":null,"abstract":"There are two main barriers to using large language models (LLMs) in clinical\nreasoning. Firstly, while LLMs exhibit significant promise in Natural Language\nProcessing (NLP) tasks, their performance in complex reasoning and planning\nfalls short of expectations. Secondly, LLMs use uninterpretable methods to make\nclinical decisions that are fundamentally different from the clinician's\ncognitive processes. This leads to user distrust. In this paper, we present a\nmulti-agent framework called ArgMed-Agents, which aims to enable LLM-based\nagents to make explainable clinical decision reasoning through interaction.\nArgMed-Agents performs self-argumentation iterations via Argumentation Scheme\nfor Clinical Decision (a reasoning mechanism for modeling cognitive processes\nin clinical reasoning), and then constructs the argumentation process as a\ndirected graph representing conflicting relationships. Ultimately, Reasoner(a\nsymbolic solver) identify a series of rational and coherent arguments to\nsupport decision. ArgMed-Agents enables LLMs to mimic the process of clinical\nargumentative reasoning by generating explanations of reasoning in a\nself-directed manner. The setup experiments show that ArgMed-Agents not only\nimproves accuracy in complex clinical decision reasoning problems compared to\nother prompt methods, but more importantly, it provides users with decision\nexplanations that increase their confidence.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"144 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Symbolic Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2403.06294","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

There are two main barriers to using large language models (LLMs) in clinical reasoning. First, while LLMs show significant promise in Natural Language Processing (NLP) tasks, their performance in complex reasoning and planning falls short of expectations. Second, LLMs make clinical decisions through uninterpretable methods that are fundamentally different from clinicians' cognitive processes, which leads to user distrust. In this paper, we present a multi-agent framework called ArgMed-Agents, which aims to enable LLM-based agents to perform explainable clinical decision reasoning through interaction. ArgMed-Agents performs self-argumentation iterations via the Argumentation Scheme for Clinical Decision (a reasoning mechanism for modeling cognitive processes in clinical reasoning) and then constructs the argumentation process as a directed graph representing conflicting relationships. Finally, a Reasoner (a symbolic solver) identifies a series of rational and coherent arguments to support the decision. ArgMed-Agents enables LLMs to mimic the process of clinical argumentative reasoning by generating explanations of their reasoning in a self-directed manner. Experiments show that ArgMed-Agents not only improves accuracy on complex clinical decision reasoning problems compared with other prompting methods but, more importantly, provides users with decision explanations that increase their confidence.
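The abstract describes the final step as a symbolic solver (the Reasoner) selecting a rational, coherent set of arguments from a directed graph of conflicting relationships. The sketch below illustrates one way such a step could look, assuming a Dung-style abstract argumentation framework with an attack relation and grounded semantics; the argument names, the `grounded_extension` helper, and the choice of semantics are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a symbolic-solver step over an argumentation graph.
# Assumptions (not from the paper): arguments are plain strings, the
# "conflicting relationships" form a Dung-style attack relation, and the
# coherent argument set is computed as the grounded extension.

from typing import Dict, Set, Tuple


def grounded_extension(arguments: Set[str], attacks: Set[Tuple[str, str]]) -> Set[str]:
    """Iterate the characteristic function from the empty set until a fixpoint,
    collecting the arguments that the current set defends."""
    attackers: Dict[str, Set[str]] = {a: set() for a in arguments}
    for src, dst in attacks:
        attackers[dst].add(src)

    extension: Set[str] = set()
    while True:
        # An argument is acceptable if every one of its attackers is itself
        # attacked by some argument already in the extension.
        defended = {
            a for a in arguments
            if all(any((e, b) in attacks for e in extension) for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended


if __name__ == "__main__":
    # Hypothetical clinical-style example: a treatment recommendation, a
    # contraindication argument attacking it, and a rebuttal attacking the
    # contraindication.
    args = {"recommend_drug_A", "contraindication_renal", "dose_adjusted_rebuttal"}
    atts = {
        ("contraindication_renal", "recommend_drug_A"),
        ("dose_adjusted_rebuttal", "contraindication_renal"),
    }
    print(grounded_extension(args, atts))
    # -> {'dose_adjusted_rebuttal', 'recommend_drug_A'}
```

In this toy graph the rebuttal defeats the contraindication, so the solver accepts both the rebuttal and the original recommendation as the coherent set supporting the decision.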