Towards evidence-aware retrieval-augmented generation via self-corrective chain-of-thought
Yining Li, Wenjun Ke, Jiajun Liu, Peng Wang, Jianghan Liu, Yao He
Information Processing & Management, Volume 63, Issue 2, Article 104369. Published 2025-09-03. DOI: 10.1016/j.ipm.2025.104369. Available at: https://www.sciencedirect.com/science/article/pii/S0306457325003103
Citations: 0
Abstract
To address the challenge of reconciling the static internal knowledge of large language models (LLMs) with dynamic external information, without sacrificing inference efficiency, we propose SC-RAG (self-corrective retrieval-augmented generation). The framework introduces evidence extraction via a hybrid retriever, which combines semantic and unsupervised aspect-based retrieval to improve knowledge quality, together with an evidence-aware self-correction mechanism based on chain-of-thought (CoT) reasoning that activates relevant internal LLM knowledge. Experiments on the LaMP benchmark (7 datasets, 144k samples) and HotpotQA (113k samples) demonstrate that SC-RAG outperforms current state-of-the-art methods by 1.0% to 30.3% across various evaluation metrics. Furthermore, SC-RAG achieves these improvements while reducing inference time by up to 14.3%, offering a more efficient and accurate solution for retrieval-augmented generation.
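To make the two components in the abstract concrete, here is a minimal Python sketch of an SC-RAG-style pipeline: hybrid retrieval that fuses a semantic score with an aspect-based score, followed by a CoT generation step and an evidence-grounded self-correction loop. This is an illustrative assumption, not the authors' implementation; the linear score fusion, the SUPPORTED/corrected critique protocol, and all helper names (`embed`, `aspect_score`, `llm_generate`, `cosine`, `alpha`) are hypothetical.

```python
import math
from dataclasses import dataclass


def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


@dataclass
class Passage:
    text: str
    score: float = 0.0


def hybrid_retrieve(query, corpus, embed, aspect_score, k=5, alpha=0.5):
    """Fuse dense semantic similarity with an unsupervised aspect-based
    score; the linear combination with weight alpha is an assumption."""
    q_vec = embed(query)
    scored = []
    for text in corpus:
        semantic = cosine(q_vec, embed(text))
        aspect = aspect_score(query, text)  # hypothetical aspect-based scorer
        scored.append(Passage(text, alpha * semantic + (1 - alpha) * aspect))
    return sorted(scored, key=lambda p: p.score, reverse=True)[:k]


def sc_rag(query, corpus, embed, aspect_score, llm_generate, max_rounds=2):
    """Retrieve evidence, draft a CoT answer, then self-correct it against
    the retrieved evidence for up to max_rounds critique passes."""
    evidence = hybrid_retrieve(query, corpus, embed, aspect_score)
    context = "\n".join(p.text for p in evidence)
    answer = llm_generate(
        f"Evidence:\n{context}\n\nQuestion: {query}\n"
        "Think step by step, then answer."
    )
    for _ in range(max_rounds):
        critique = llm_generate(
            f"Evidence:\n{context}\n\nDraft answer: {answer}\n"
            "Does every claim follow from the evidence? "
            "Reply SUPPORTED, or give a corrected answer."
        )
        if critique.strip().startswith("SUPPORTED"):
            break
        answer = critique  # adopt the self-corrected draft
    return answer
```

Under this reading, the critique loop is what makes the self-correction "evidence-aware": each draft is re-checked against the retrieved passages rather than against the model's parametric knowledge alone, and capping the number of rounds keeps the extra inference cost bounded.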
About the journal
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.