Yao Zhang, Lang Qin, Zhongtian Bao, Hongru Liang, Jun Wang, Zhenglu Yang, Zhe Sun, Andrzej Cichocki
{"title":"Adaptive knowledge selection in dialogue systems: Accommodating diverse knowledge types, requirements, and generation models","authors":"Yao Zhang , Lang Qin , Zhongtian Bao , Hongru Liang , Jun Wang , Zhenglu Yang , Zhe Sun , Andrzej Cichocki","doi":"10.1016/j.neunet.2025.108133","DOIUrl":null,"url":null,"abstract":"<div><div>Effective knowledge-grounded dialogue systems rely heavily on accurate knowledge selection. This paper begins with an innovative new perspective that categorizes research on knowledge selection based on when knowledge is selected in relation to response generation: pre-, joint-, and post-selection. Among these, pre-selection is of great interest nowadays because they endeavor to provide sufficiently relevant knowledge inputs for downstream response generation models in advance. This reduces the burden of learning, adjusting, and interpreting for the subsequent response generation models, particularly for Large Language Models. Current knowledge pre-selection methods, however, still face three significant challenges: how to cope with different types of knowledge, adapt to the various knowledge requirements in different dialogue contexts, and adapt to different generation models. To resolve the above challenges, we propose ASK, an adaptive knowledge pre-selection method. It unifies various types of knowledge, scores their relevance and contribution to generating desired responses, and adapts the knowledge pool size to ensure the optimal amount is available for generation models. ASK is enhanced by leveraging rewards for selecting appropriate knowledge in both quality and quantity, through a reinforcement learning framework. We perform exhaustive experiments on two benchmarks (WoW and OpenDialKG) and get the following conclusions: 1) ASK has excellent knowledge selection capabilities on diverse knowledge types and requirements. 2) ASK significantly enhances the performance of various downstream generation models, including ChatGPT and GPT-4o. 3) The lightweight improvement of ASK saves 40 % of the computational consumption. Code is available at <span><span>https://github.com/AnonymousCode32213/ASK</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108133"},"PeriodicalIF":6.3000,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025010135","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Effective knowledge-grounded dialogue systems rely heavily on accurate knowledge selection. This paper begins with a new perspective that categorizes knowledge selection research by when knowledge is selected relative to response generation: pre-, joint-, and post-selection. Among these, pre-selection is of particular interest today because such methods supply sufficiently relevant knowledge to downstream response generation models in advance, reducing the learning, tuning, and interpretation burden on those models, particularly Large Language Models. Current knowledge pre-selection methods, however, still face three significant challenges: coping with different types of knowledge, adapting to the varying knowledge requirements of different dialogue contexts, and adapting to different generation models. To address these challenges, we propose ASK, an adaptive knowledge pre-selection method. It unifies various types of knowledge, scores their relevance and contribution to generating the desired response, and adapts the knowledge pool size so that the optimal amount is available to the generation model. ASK is trained within a reinforcement learning framework, with rewards for selecting knowledge that is appropriate in both quality and quantity. Extensive experiments on two benchmarks (WoW and OpenDialKG) support three conclusions: 1) ASK selects knowledge effectively across diverse knowledge types and requirements. 2) ASK significantly improves the performance of various downstream generation models, including ChatGPT and GPT-4o. 3) ASK's lightweight design reduces computational cost by 40%. Code is available at https://github.com/AnonymousCode32213/ASK.
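The abstract describes three mechanisms: unified scoring of heterogeneous knowledge, adaptive sizing of the knowledge pool, and a reinforcement learning reward that balances quality against quantity. Below is a minimal Python sketch of those ideas; the overlap-based scorer, the cumulative-mass pool rule, the reward shape, and all names are illustrative assumptions, not the authors' implementation (see the linked repository for that).

# Minimal sketch of adaptive knowledge pre-selection in the spirit of ASK.
# Everything here (names, scorer, pool rule, reward) is a hypothetical
# illustration, not the paper's actual method.
from dataclasses import dataclass

@dataclass
class Knowledge:
    text: str           # unified textual form (e.g., a passage or a linearized KG triple)
    score: float = 0.0  # relevance/contribution score w.r.t. the dialogue context

def score_candidates(context: str, candidates: list[Knowledge]) -> list[Knowledge]:
    # A learned scorer would go here; token overlap is only a stand-in.
    ctx = set(context.lower().split())
    for k in candidates:
        toks = set(k.text.lower().split())
        k.score = len(ctx & toks) / max(len(toks), 1)
    return sorted(candidates, key=lambda k: k.score, reverse=True)

def adaptive_pool(ranked: list[Knowledge], mass: float = 0.8) -> list[Knowledge]:
    # Adaptive pool size: keep top candidates until they cover a fraction
    # `mass` of the total score, rather than a fixed top-k.
    total = sum(k.score for k in ranked) or 1.0
    pool, covered = [], 0.0
    for k in ranked:
        pool.append(k)
        covered += k.score / total
        if covered >= mass:
            break
    return pool

def reward(response_quality: float, pool_size: int, lam: float = 0.05) -> float:
    # RL-style reward: quality of the generated response minus a cost on
    # pool size, pushing the selector toward "just enough" knowledge.
    return response_quality - lam * pool_size

The cumulative-mass rule is one simple way to realize the abstract's "optimal amount" idea: unlike a fixed top-k, the pool shrinks when relevance is concentrated in a few candidates and grows when it is spread across many.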
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.