Differentiable Reasoning on Large Knowledge Bases and Natural Language

Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel, Edward Grefenstette
DOI: 10.1609/AAAI.V34I04.5962
Journal: Knowledge Graphs for eXplainable Artificial Intelligence
Published: 2019-12-17
Citations: 79

Abstract

Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at this https URL.
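The efficiency idea described above, scoring a goal against candidate facts in embedding space and expanding only the most promising proof paths, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, toy embeddings, and exact cosine scoring are invented here, whereas GNTPs use approximate nearest-neighbour search over learned fact embeddings.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def greedy_select(goal, facts, k=2):
    # Score the goal against every fact embedding, then keep only the
    # k most promising candidates. This is the pruning step that keeps
    # the proof tree small instead of unifying the goal with every fact.
    ranked = sorted(range(len(facts)),
                    key=lambda i: cosine(goal, facts[i]),
                    reverse=True)
    return ranked[:k]

goal = [1.0, 0.0, 0.0]
facts = [
    [0.9, 0.1, 0.0],   # close to the goal
    [0.0, 1.0, 0.0],
    [0.8, 0.0, 0.2],   # also close
    [0.0, 0.0, 1.0],
]
best = greedy_select(goal, facts, k=2)  # indices of the two nearest facts
```

In the full model this selection happens at every unification step during proving, so the computation graph is built dynamically and its branching factor is bounded by k rather than by the KB size.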