A Constraint-Based Approach to Learning and Reasoning

Michelangelo Diligenti, Francesco Giannini, Marco Gori, Marco Maggini, Giuseppe Marra
{"title":"A Constraint-Based Approach to Learning and Reasoning","authors":"Michelangelo Diligenti, Francesco Giannini, M. Gori, Marco Maggini, G. Marra","doi":"10.3233/faia210355","DOIUrl":null,"url":null,"abstract":"Neural-symbolic models bridge the gap between sub-symbolic and symbolic approaches, both of which have significant limitations. Sub-symbolic approaches, like neural networks, require a large amount of labeled data to be successful, whereas symbolic approaches, like logic reasoners, require a small amount of prior domain knowledge but do not easily scale to large collections of data. This chapter presents a general approach to integrate learning and reasoning that is based on the translation of the available prior knowledge into an undirected graphical model. Potentials on the graphical model are designed to accommodate dependencies among random variables by means of a set of trainable functions, like those computed by neural networks. The resulting neural-symbolic framework can effectively leverage the training data, when available, while exploiting high-level logic reasoning in a certain domain of discourse. Although exact inference is intractable within this model, different tractable models can be derived by making different assumptions. In particular, three models are presented in this chapter: Semantic-Based Regularization, Deep Logic Models and Relational Neural Machines. Semantic-Based Regularization is a scalable neural-symbolic model, that does not adapt the parameters of the reasoner, under the assumption that the provided prior knowledge is correct and must be exactly satisfied. Deep Logic Models preserve the scalability of Semantic-Based Regularization, while providing a flexible exploitation of logic knowledge by co-training the parameters of the reasoner during the learning procedure. Finally, Relational Neural Machines provide the fundamental advantages of perfectly replicating the effectiveness of training from supervised data of standard deep architectures, and of preserving the same generality and expressive power of Markov Logic Networks, when considering pure reasoning on symbolic data. The bonding between learning and reasoning is very general as any (deep) learner can be adopted, and any output structure expressed via First-Order Logic can be integrated. However, exact inference within a Relational Neural Machine is still intractable, and different factorizations are discussed to increase the scalability of the approach.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuro-Symbolic Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/faia210355","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Neural-symbolic models bridge the gap between sub-symbolic and symbolic approaches, both of which have significant limitations. Sub-symbolic approaches, like neural networks, require large amounts of labeled data to be successful, whereas symbolic approaches, like logic reasoners, require only a small amount of prior domain knowledge but do not easily scale to large collections of data. This chapter presents a general approach to integrating learning and reasoning that is based on the translation of the available prior knowledge into an undirected graphical model. Potentials on the graphical model are designed to accommodate dependencies among random variables by means of a set of trainable functions, like those computed by neural networks. The resulting neural-symbolic framework can effectively leverage the training data, when available, while exploiting high-level logic reasoning in a given domain of discourse. Although exact inference is intractable within this model, different tractable models can be derived by making different assumptions. In particular, three models are presented in this chapter: Semantic-Based Regularization, Deep Logic Models, and Relational Neural Machines. Semantic-Based Regularization is a scalable neural-symbolic model that does not adapt the parameters of the reasoner, under the assumption that the provided prior knowledge is correct and must be exactly satisfied. Deep Logic Models preserve the scalability of Semantic-Based Regularization while allowing a more flexible exploitation of the logic knowledge, by co-training the parameters of the reasoner during the learning procedure. Finally, Relational Neural Machines combine two fundamental advantages: they exactly replicate the effectiveness of standard deep architectures trained on supervised data, and they preserve the generality and expressive power of Markov Logic Networks when pure reasoning on symbolic data is considered. The coupling between learning and reasoning is very general, as any (deep) learner can be adopted and any output structure expressible in First-Order Logic can be integrated. However, exact inference within a Relational Neural Machine is still intractable, and different factorizations are discussed to increase the scalability of the approach.
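To make the framework concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of the Semantic-Based Regularization idea: each predicate is approximated by a neural network, a First-Order Logic rule is translated into a differentiable fuzzy constraint, and the constraint is added as a penalty to the usual supervised loss. The underlying graphical model assigns probabilities of the standard undirected form p(y | x) ∝ exp(Σ_c λ_c Φ_c(x, y)), where each potential Φ_c scores the satisfaction of one logic formula; the sketch below optimizes such a satisfaction score directly. The predicates A and B, the rule ∀x: A(x) → B(x), the Łukasiewicz-style penalty, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical unary predicates A(x) and B(x), each modeled by a small
# network that outputs a truth degree in [0, 1].
def predicate(in_dim: int) -> torch.nn.Module:
    return torch.nn.Sequential(
        torch.nn.Linear(in_dim, 32), torch.nn.ReLU(),
        torch.nn.Linear(32, 1), torch.nn.Sigmoid(),
    )

net_A, net_B = predicate(10), predicate(10)
opt = torch.optim.Adam(
    list(net_A.parameters()) + list(net_B.parameters()), lr=1e-3)

# Toy data: a few labeled examples for A, plus unlabeled examples on
# which only the logic rule provides a training signal.
x_lab = torch.randn(64, 10)
y_lab = torch.randint(0, 2, (64, 1)).float()
x_unlab = torch.randn(256, 10)

lam = 0.5  # trade-off between data fitting and constraint satisfaction
for step in range(200):
    opt.zero_grad()
    # Standard supervised loss on the labeled data.
    sup_loss = F.binary_cross_entropy(net_A(x_lab), y_lab)
    # Fuzzy translation of  forall x: A(x) -> B(x).  With a
    # Lukasiewicz-style residuum the rule is violated by max(0, a - b),
    # so minimizing this term pushes B(x) up whenever A(x) is high.
    a, b = net_A(x_unlab), net_B(x_unlab)
    rule_loss = torch.relu(a - b).mean()
    (sup_loss + lam * rule_loss).backward()
    opt.step()
```

In this sketch the constraint weight lam is fixed, which mirrors the Semantic-Based Regularization assumption that the prior knowledge is correct; Deep Logic Models and Relational Neural Machines would instead treat such weights as trainable parameters of the reasoner, learned jointly with the predicate networks.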