Dynamic knowledge correction via abductive for domain question answering
Yulin Zhou, Ruizhang Huang, Chuan Lin, Lijuan Liu, Yongbin Qin
Information Processing & Management, Volume 63, Issue 1, Article 104306 (published 2025-07-24)
DOI: 10.1016/j.ipm.2025.104306
URL: https://www.sciencedirect.com/science/article/pii/S030645732500247X
Citations: 0
Abstract
Domain question answering with large language models (LLMs) often relies on previously learned domain knowledge. Previous methods typically use LLMs for direct reasoning to obtain results, but their reasoning ability suffers when domain knowledge is complex or time-sensitive. In this paper, we propose AKC, an abductive-based dynamic knowledge correction framework for LLM reasoning. Specifically, we first identify domain knowledge sources based on task relevance to construct a domain-specific knowledge base. We then decompose the initial results generated by the LLM into individual elements and perform minimal inconsistency reasoning against the domain knowledge base to dynamically correct erroneous reasoning outcomes. Experiments on three domain-specific datasets (law, traditional Chinese medicine, and education) demonstrate that the AKC framework significantly improves LLM accuracy in domain-specific question answering.
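The abstract describes a correction loop: decompose an LLM's initial answer into individual elements, check each element against a domain knowledge base, and replace only the elements that conflict with it. The following Python sketch illustrates that general idea under loose assumptions; the claim structure, the toy knowledge base, and all function names are hypothetical and are not taken from the paper's implementation of AKC.

```python
# Hypothetical sketch of an AKC-style correction loop.
# The Claim triple format and the toy knowledge base below are illustrative
# assumptions, not the authors' actual data structures.

from dataclasses import dataclass


@dataclass
class Claim:
    subject: str
    predicate: str
    value: str


# Toy domain knowledge base: (subject, predicate) -> authoritative value.
KNOWLEDGE_BASE = {
    ("statute_of_limitations", "civil_claim_years"): "3",
}


def decompose(answer_elements):
    """Stand-in for decomposing an LLM answer into atomic claims."""
    return [Claim(*e) for e in answer_elements]


def correct(claims):
    """Keep claims consistent with the knowledge base; replace only the
    conflicting ones (a minimal-inconsistency style of correction)."""
    corrected = []
    for c in claims:
        kb_value = KNOWLEDGE_BASE.get((c.subject, c.predicate))
        if kb_value is not None and kb_value != c.value:
            corrected.append(Claim(c.subject, c.predicate, kb_value))
        else:
            corrected.append(c)
    return corrected


if __name__ == "__main__":
    # Initial (erroneous) LLM output, already split into elements.
    initial = [("statute_of_limitations", "civil_claim_years", "2")]
    print(correct(decompose(initial)))
```

In this sketch only the contradicted element is rewritten and all consistent elements pass through unchanged, mirroring the paper's stated goal of correcting erroneous reasoning outcomes rather than regenerating the whole answer.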
Journal introduction:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.