Multi-hop Knowledge Base Q&A in Integrated Energy Services Based on Intermediate Reasoning Attention

Wenbin Zhang, Jiaju She, Yingqiu Wang, Meng Zhao, Yi Wang, Chao Liu
{"title":"Multi-hop Knowledge Base Q&A in Integrated Energy Services Based on Intermediate Reasoning Attention","authors":"Wenbin Zhang, Jiaju She, Yingqiu Wang, Meng Zhao, Yi Wang, Chao Liu","doi":"10.1109/ICSAI57119.2022.10005492","DOIUrl":null,"url":null,"abstract":"Knowledge base with multiple hops quizzing aims to discover the subject entity in a question at a distance from the knowledge base’s answer entity for multiple hops. The lack of supervised signals for the intermediate phases of multi-hop inference, which leaves a model only able to get input on the final output, is a significant difficulty for the study, where the inference instructions for the intermediate steps cannot be effectively optimized and the forward propagation of inference states is weakened. Most of the existing research approaches use global attention to motivate the model to learn the inference instructions of each hop, which has been shown to fail to achieve effective performance in weakly supervised tasks. To address this challenge, this paper proposes an intermediate inference attention mechanism to handle multi-hop knowledge base quizzing tasks. Inspired by the human execution of multi-hop quizzing where each hop question is influenced by the previous hop answer, in this approach, the model pays more attention to the inference state generated by the previous hop inference instruction when generating each hop inference instruction, prompting a close interaction between the inference state of the intermediate step and the inference instruction, and providing effective attentional feedback for the optimization of the intermediate step inference instruction. On the KBQA dataset in the integrated energy service domain, which is self-constructed in this research, we conduct comprehensive comparison experiments. 
The findings suggest that the technique we provided achieves optimum performance in this study.","PeriodicalId":339547,"journal":{"name":"2022 8th International Conference on Systems and Informatics (ICSAI)","volume":"97 1-4","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 8th International Conference on Systems and Informatics (ICSAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSAI57119.2022.10005492","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Multi-hop knowledge base question answering (KBQA) aims to locate, starting from the subject entity mentioned in a question, an answer entity that lies several hops away in the knowledge base. A central difficulty for this task is the lack of supervision signals for the intermediate steps of multi-hop reasoning: the model receives feedback only on the final output, so the reasoning instructions for intermediate steps cannot be optimized effectively and the forward propagation of reasoning states is weakened. Most existing approaches use global attention to encourage the model to learn the reasoning instruction for each hop, which has been shown to perform poorly on weakly supervised tasks. To address this challenge, this paper proposes an intermediate reasoning attention mechanism for multi-hop KBQA. Inspired by how humans answer multi-hop questions, where each hop's question is shaped by the previous hop's answer, the model attends to the reasoning state produced by the previous hop's instruction when generating the current hop's instruction. This encourages close interaction between the intermediate reasoning states and the reasoning instructions, and provides effective attention feedback for optimizing the intermediate-step instructions. On a KBQA dataset in the integrated energy service domain, constructed as part of this research, we conduct comprehensive comparison experiments. The results show that the proposed method achieves the best performance in this study.
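The hop-wise mechanism described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the projection matrix, and the state-update rule are all assumptions. The key point it demonstrates is that each hop's instruction is generated by attending over the question conditioned on the *previous* hop's reasoning state, rather than by global attention alone.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def generate_instructions(question_tokens, n_hops, rng=None):
    """Sketch of intermediate-reasoning-conditioned instruction generation.

    question_tokens: (n_tokens, d) array of question token embeddings.
    Returns a list of n_hops instruction vectors, each of shape (d,).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    d = question_tokens.shape[1]
    # Hypothetical learned projection (random here for illustration).
    W = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)
    q_summary = question_tokens.mean(axis=0)
    state = q_summary.copy()  # initial reasoning state
    instructions = []
    for _ in range(n_hops):
        # Condition this hop's query on the PREVIOUS hop's reasoning state,
        # so intermediate states feed back into instruction generation.
        query = np.concatenate([state, q_summary]) @ W
        attn = softmax(question_tokens @ query)   # attention over question tokens
        instr = attn @ question_tokens            # this hop's reasoning instruction
        instructions.append(instr)
        # Toy state update: blend the new instruction into the reasoning state.
        state = 0.5 * state + 0.5 * instr
    return instructions
```

Because `state` changes at every hop, each instruction attends to the question differently, mimicking how a later sub-question depends on the answer to the previous one.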