Explicit Graph Reasoning Fusing Knowledge and Contextual Information for Multi-hop Question Answering

Zhenyun Deng, Yonghua Zhu, Qianqian Qi, M. Witbrock, Patricia J. Riddle
{"title":"Explicit Graph Reasoning Fusing Knowledge and Contextual Information for Multi-hop Question Answering","authors":"Zhenyun Deng, Yonghua Zhu, Qianqian Qi, M. Witbrock, Patricia J. Riddle","doi":"10.18653/v1/2022.dlg4nlp-1.8","DOIUrl":null,"url":null,"abstract":"Current graph-neural-network-based (GNN-based) approaches to multi-hop questions integrate clues from scattered paragraphs in an entity graph, achieving implicit reasoning by synchronous update of graph node representations using information from neighbours; this is poorly suited for explaining how clues are passed through the graph in hops. In this paper, we describe a structured Knowledge and contextual Information Fusion GNN (KIFGraph) whose explicit multi-hop graph reasoning mimics human step by step reasoning. Specifically, we first integrate clues at multiple levels of granularity (question, paragraph, sentence, entity) as nodes in the graph, connected by edges derived using structured semantic knowledge, then use a contextual encoder to obtain the initial node representations, followed by step-by-step two-stage graph reasoning that asynchronously updates node representations. Each node can be related to its neighbour nodes through fused structured knowledge and contextual information, reliably integrating their answer clues. Moreover, a masked attention mechanism (MAM) filters out noisy or redundant nodes and edges, to avoid ineffective clue propagation in graph reasoning. Experimental results show performance competitive with published models on the HotpotQA dataset.","PeriodicalId":367475,"journal":{"name":"Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2022.dlg4nlp-1.8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Current graph-neural-network-based (GNN-based) approaches to multi-hop questions integrate clues from scattered paragraphs in an entity graph, achieving implicit reasoning by synchronous update of graph node representations using information from neighbours; this is poorly suited for explaining how clues are passed through the graph in hops. In this paper, we describe a structured Knowledge and contextual Information Fusion GNN (KIFGraph) whose explicit multi-hop graph reasoning mimics human step-by-step reasoning. Specifically, we first integrate clues at multiple levels of granularity (question, paragraph, sentence, entity) as nodes in the graph, connected by edges derived using structured semantic knowledge, then use a contextual encoder to obtain the initial node representations, followed by step-by-step two-stage graph reasoning that asynchronously updates node representations. Each node can be related to its neighbour nodes through fused structured knowledge and contextual information, reliably integrating their answer clues. Moreover, a masked attention mechanism (MAM) filters out noisy or redundant nodes and edges, to avoid ineffective clue propagation in graph reasoning. Experimental results show performance competitive with published models on the HotpotQA dataset.
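
The abstract describes the approach only at a high level. As a minimal illustrative sketch, the block below shows how a single masked-attention update over a multi-granularity clue graph could look in PyTorch: nodes at different granularities share one representation space, and attention is restricted to pairs connected by an edge. The class name, hidden size, single-head attention form, and the toy adjacency matrix are assumptions made for illustration; they are not taken from the paper or its released code.

```python
# Minimal sketch (not the authors' implementation) of a masked-attention update
# over a clue graph whose nodes mix granularities (question, paragraph,
# sentence, entity). All names, dimensions, and the single-head form are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedAttentionLayer(nn.Module):
    """One update step: each node aggregates its neighbours' representations,
    with non-adjacent (noisy or irrelevant) node pairs masked out."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (num_nodes, hidden_dim) initial representations, assumed to come
        #        from a contextual encoder such as a BERT-style model.
        # adj:   (num_nodes, num_nodes) 0/1 adjacency derived from structured
        #        semantic knowledge; zero entries are excluded from attention.
        scores = self.query(nodes) @ self.key(nodes).t() / nodes.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        weights = F.softmax(scores, dim=-1)         # attention over neighbours only
        return nodes + weights @ self.value(nodes)  # residual update


# Toy usage: 1 question node, 2 paragraph nodes, 2 sentence nodes, 2 entity nodes.
hidden_dim = 64
num_nodes = 7
nodes = torch.randn(num_nodes, hidden_dim)
adj = torch.eye(num_nodes)                # self-loops keep every row attendable
adj[0, 1] = adj[1, 0] = 1.0               # question -- paragraph edge
adj[1, 3] = adj[3, 1] = 1.0               # paragraph -- sentence edge
adj[3, 5] = adj[5, 3] = 1.0               # sentence -- entity edge
updated = MaskedAttentionLayer(hidden_dim)(nodes, adj)
print(updated.shape)                      # torch.Size([7, 64])
```

Masking non-adjacent pairs before the softmax is what confines clue propagation to edges licensed by the structured knowledge, which is the role the abstract assigns to the MAM; a multi-stage reasoner would apply such updates step by step rather than synchronously over the whole graph.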