{"title":"Enhancing long-form question answering via reflection with question decomposition","authors":"Junjie Xiao , Wei Wu , Jiaxu Zhao , Meng Fang , Jianxin Wang","doi":"10.1016/j.ipm.2025.104274","DOIUrl":null,"url":null,"abstract":"<div><div>Long-Form Question Answering (LFQA) requires multi-paragraph responses that explain, contextualize and justify an answer rather than returning a single fact. Large proprietary language models can meet this bar, but privacy, cost and hardware limits often force practitioners to rely on much smaller, locally hosted models, whose outputs are typically shallow or incomplete. We introduce Decomposition-Reflection, a training-free prompting framework that (i) decomposes a user question into complementary sub-questions, (ii) answers each one, and (iii) runs a lightweight self-reflection loop after every stage to enhance the comprehensiveness, entailment and factuality of the results before synthesizing the final response. Across three LFQA benchmarks, the proposed approach raises ROUGE and LLM-based factuality scores over strong chain-of-thought and self-refinement baselines. An ablation study confirms that removing either decomposition or reflection sharply degrades coverage and entailment, underscoring the importance of both components.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 6","pages":"Article 104274"},"PeriodicalIF":6.9000,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325002158","RegionNum":1,"RegionCategory":"Management","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
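As a rough illustration, the three-stage decompose–answer–reflect loop described in the abstract might be organized as follows. This is a minimal sketch, not the authors' implementation: `call_llm`, `decompose`, and `reflect` are assumed helper names, and `call_llm` is a trivial stub standing in for a locally hosted model so the control flow can run end to end.

```python
# Minimal sketch of a Decomposition-Reflection style pipeline (hypothetical
# helper names; NOT the paper's released code).

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send `prompt` to a local LLM here.
    return f"[model output for: {prompt[:40]}...]"

def decompose(question: str, n: int = 3) -> list[str]:
    # Stage (i): break the question into complementary sub-questions.
    return [call_llm(f"Sub-question {i + 1} of: {question}") for i in range(n)]

def reflect(text: str) -> str:
    # Lightweight self-reflection: ask the model to check comprehensiveness,
    # entailment, and factuality, then revise the text.
    return call_llm(f"Reflect on and revise: {text}")

def answer_long_form(question: str) -> str:
    subs = [reflect(s) for s in decompose(question)]             # reflect after stage (i)
    answers = [reflect(call_llm(f"Answer: {s}")) for s in subs]  # stage (ii) + reflection
    draft = call_llm("Synthesize a long-form answer from: " + " | ".join(answers))
    return reflect(draft)                                        # final reflection pass
```

In practice `call_llm` would be replaced by a call to whatever local model serves the deployment; the point of the sketch is only that reflection is applied after every stage, not just once at the end.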
About the journal:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.