{"title":"ArgMed-Agents: Explainable Clinical Decision Reasoning with Large Language Models via Argumentation Schemes","authors":"Shengxin Hong, Liang Xiao, Xin Zhang, Jianxia Chen","doi":"arxiv-2403.06294","DOIUrl":null,"url":null,"abstract":"There are two main barriers to using large language models (LLMs) in clinical\nreasoning. Firstly, while LLMs exhibit significant promise in Natural Language\nProcessing (NLP) tasks, their performance in complex reasoning and planning\nfalls short of expectations. Secondly, LLMs use uninterpretable methods to make\nclinical decisions that are fundamentally different from the clinician's\ncognitive processes. This leads to user distrust. In this paper, we present a\nmulti-agent framework called ArgMed-Agents, which aims to enable LLM-based\nagents to make explainable clinical decision reasoning through interaction.\nArgMed-Agents performs self-argumentation iterations via Argumentation Scheme\nfor Clinical Decision (a reasoning mechanism for modeling cognitive processes\nin clinical reasoning), and then constructs the argumentation process as a\ndirected graph representing conflicting relationships. Ultimately, Reasoner(a\nsymbolic solver) identify a series of rational and coherent arguments to\nsupport decision. ArgMed-Agents enables LLMs to mimic the process of clinical\nargumentative reasoning by generating explanations of reasoning in a\nself-directed manner. The setup experiments show that ArgMed-Agents not only\nimproves accuracy in complex clinical decision reasoning problems compared to\nother prompt methods, but more importantly, it provides users with decision\nexplanations that increase their confidence.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"144 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Symbolic Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2403.06294","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
There are two main barriers to using large language models (LLMs) in clinical reasoning. First, while LLMs show significant promise on natural language processing (NLP) tasks, their performance on complex reasoning and planning falls short of expectations. Second, LLMs make clinical decisions through uninterpretable methods that are fundamentally different from clinicians' cognitive processes, which leads to user distrust. In this paper, we present a multi-agent framework called ArgMed-Agents, which aims to enable LLM-based agents to perform explainable clinical decision reasoning through interaction. ArgMed-Agents performs self-argumentation iterations via the Argumentation Scheme for Clinical Decision (a reasoning mechanism for modeling the cognitive processes of clinical reasoning) and then represents the argumentation process as a directed graph encoding the conflicting relationships among arguments. Finally, a Reasoner (a symbolic solver) identifies a set of rational and coherent arguments that support the decision. ArgMed-Agents thereby enables LLMs to mimic clinical argumentative reasoning by generating explanations of their reasoning in a self-directed manner. Our experiments show that ArgMed-Agents not only improves accuracy on complex clinical decision reasoning problems compared with other prompting methods but, more importantly, provides users with decision explanations that increase their confidence.
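
To make the final symbolic step more concrete, here is a minimal Python sketch of what a Reasoner over the argument graph could look like. It treats the output of the self-argumentation iterations as a Dung-style argumentation framework (arguments plus directed attack edges) and computes the grounded extension, a standard maximally cautious notion of a coherent set of acceptable arguments. The choice of grounded semantics, the function name, and the example arguments are all illustrative assumptions; the abstract does not specify which semantics or solver the authors actually use.

```python
# Minimal sketch of a symbolic Reasoner over the argument graph, assuming
# (illustratively) Dung-style grounded semantics. Argument names and attack
# edges below are hypothetical; the paper's actual semantics and solver are
# not specified in the abstract.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an argumentation framework
    (arguments, attacks) as a least fixed point: repeatedly accept
    every argument all of whose attackers are already defeated."""
    attackers = {a: set() for a in arguments}
    for source, target in attacks:
        attackers[target].add(source)

    extension = set()  # arguments accepted so far
    defeated = set()   # arguments attacked by an accepted argument
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a not in extension and attackers[a] <= defeated:
                extension.add(a)
                changed = True
                # Everything attacked by `a` is now defeated.
                defeated.update(t for s, t in attacks if s == a)
    return extension

# Hypothetical argument graph from one self-argumentation run about a
# treatment decision; an edge (x, y) means "x attacks y".
arguments = {"give_drug_A", "contraindication_kidney", "dose_adjusted"}
attacks = [
    ("contraindication_kidney", "give_drug_A"),
    ("dose_adjusted", "contraindication_kidney"),
]
print(sorted(grounded_extension(arguments, attacks)))
# ['dose_adjusted', 'give_drug_A']
```

In this toy run, dose_adjusted defeats the kidney contraindication, so both it and give_drug_A end up acceptable, mirroring how a symbolic Reasoner could surface a coherent chain of arguments supporting the decision while exposing the defeated counterarguments as part of the explanation.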