{"title":"Logic Meets Learning: From Aristotle to Neural Networks","authors":"Vaishak Belle","doi":"10.3233/faia210350","DOIUrl":"https://doi.org/10.3233/faia210350","url":null,"abstract":"The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence. In this chapter, we survey work that provides evidence for the long-standing and deep connections between logic and learning. After a brief historical prelude, our narrative is then structured in terms of three strands of interaction: logic versus learning, machine learning for logic, and logic for machine learning, but with ample overlap.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129630806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalizable Neuro-symbolic Systems for Commonsense Question Answering","authors":"A. Oltramari, Jonathan M Francis, Filip Ilievski, Kaixin Ma, Roshanak Mirzaee","doi":"10.3233/FAIA210360","DOIUrl":"https://doi.org/10.3233/FAIA210360","url":null,"abstract":"This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks. Different methods for integrating neural language models and knowledge graphs are discussed. The situations in which this combination is most appropriate are characterized, including quantitative evaluation and qualitative error analysis on a variety of commonsense question answering benchmark datasets.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133106324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Answering Natural-Language Questions with Neuro-Symbolic Knowledge Bases","authors":"Haitian Sun, Pat Verga, William W. Cohen","doi":"10.3233/faia210352","DOIUrl":"https://doi.org/10.3233/faia210352","url":null,"abstract":"Symbolic reasoning systems based on first-order logics are computationally powerful, and feedforward neural networks are computationally efficient, so unless P=NP, neural networks cannot, in general, emulate symbolic logics. Hence bridging the gap between neural and symbolic methods requires achieving a delicate balance: one needs to incorporate just enough of symbolic reasoning to be useful for a task, but not so much as to cause computational intractability. In this chapter we first present results that make this claim precise, and then use these formal results to inform the choice of a neuro-symbolic knowledge-based reasoning system, based on a set-based dataflow query language. We then present experimental results with a number of variants of this neuro-symbolic reasoner, and also show that this neuro-symbolic reasoner can be closely integrated into modern neural language models.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130795386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Constraint-Based Approach to Learning and Reasoning","authors":"Michelangelo Diligenti, Francesco Giannini, M. Gori, Marco Maggini, G. Marra","doi":"10.3233/faia210355","DOIUrl":"https://doi.org/10.3233/faia210355","url":null,"abstract":"Neural-symbolic models bridge the gap between sub-symbolic and symbolic approaches, both of which have significant limitations. Sub-symbolic approaches, like neural networks, require a large amount of labeled data to be successful, whereas symbolic approaches, like logic reasoners, require a small amount of prior domain knowledge but do not easily scale to large collections of data. This chapter presents a general approach to integrate learning and reasoning that is based on the translation of the available prior knowledge into an undirected graphical model. Potentials on the graphical model are designed to accommodate dependencies among random variables by means of a set of trainable functions, like those computed by neural networks. The resulting neural-symbolic framework can effectively leverage the training data, when available, while exploiting high-level logic reasoning in a certain domain of discourse. Although exact inference is intractable within this model, different tractable models can be derived by making different assumptions. In particular, three models are presented in this chapter: Semantic-Based Regularization, Deep Logic Models and Relational Neural Machines. Semantic-Based Regularization is a scalable neural-symbolic model, that does not adapt the parameters of the reasoner, under the assumption that the provided prior knowledge is correct and must be exactly satisfied. Deep Logic Models preserve the scalability of Semantic-Based Regularization, while providing a flexible exploitation of logic knowledge by co-training the parameters of the reasoner during the learning procedure. Finally, Relational Neural Machines provide the fundamental advantages of perfectly replicating the effectiveness of training from supervised data of standard deep architectures, and of preserving the same generality and expressive power of Markov Logic Networks, when considering pure reasoning on symbolic data. The bonding between learning and reasoning is very general as any (deep) learner can be adopted, and any output structure expressed via First-Order Logic can be integrated. However, exact inference within a Relational Neural Machine is still intractable, and different factorizations are discussed to increase the scalability of the approach.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125237852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tractable Boolean and Arithmetic Circuits","authors":"Adnan Darwiche","doi":"10.3233/faia210353","DOIUrl":"https://doi.org/10.3233/faia210353","url":null,"abstract":"Tractable Boolean and arithmetic circuits have been studied extensively in AI for over two decades now. These circuits were initially proposed as “compiled objects,” meant to facilitate logical and probabilistic reasoning, as they permit various types of inference to be performed in linear time and a feed-forward fashion like neural networks. In more recent years, the role of tractable circuits has significantly expanded as they became a computational and semantical backbone for some approaches that aim to integrate knowledge, reasoning and learning. In this chapter, we review the foundations of tractable circuits and some associated milestones, while focusing on their core properties and techniques that make them particularly useful for the broad aims of neuro-symbolic AI.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131724263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph Reasoning Networks and Applications","authors":"Qingxing Cao, Wentao Wan, Xiaodan Liang, Liang Lin","doi":"10.3233/faia210351","DOIUrl":"https://doi.org/10.3233/faia210351","url":null,"abstract":"Despite the significant success in various domains, the data-driven deep neural networks compromise the feature interpretability, lack the global reasoning capability, and can’t incorporate external information crucial for complicated real-world tasks. Since the structured knowledge can provide rich cues to record human observations and commonsense, it is thus desirable to bridge symbolic semantics with learned local feature representations. In this chapter, we review works that incorporate different domain knowledge into the intermediate feature representation.These methods firstly construct a domain-specific graph that represents related human knowledge. Then, they characterize node representations with neural network features and perform graph convolution to enhance these symbolic nodes via the graph neural network(GNN).Lastly, they map the enhanced node feature back into the neural network for further propagation or prediction. Through integrating knowledge graphs into neural networks, one can collaborate feature learning and graph reasoning with the same supervised loss function and achieve a more effective and interpretable way to introduce structure constraints.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122001646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symbolic Reasoning in Latent Space: Classical Planning as an Example","authors":"Masataro Asai, Hiroshi Kajino, A. Fukunaga, Christian Muise","doi":"10.3233/faia210349","DOIUrl":"https://doi.org/10.3233/faia210349","url":null,"abstract":"Symbolic systems require hand-coded symbolic representation as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems. To address the gap between the two fields, one has to solve Symbol Grounding problem: The question of how a machine can generate symbols automatically. We discuss our recent work called Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when a pair of images representing the initial and the goal states (planning inputs) is given, Latplan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. We discuss several key ideas that made Latplan possible which would hopefully extend to many other symbolic paradigms outside classical planning.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134015759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Logic Tensor Networks: Theory and Applications","authors":"L. Serafini, A.S. d'Avila Garcez, Samy Badreddine, Ivan Donadello, Michael Spranger, Federico Bianchi","doi":"10.3233/faia210498","DOIUrl":"https://doi.org/10.3233/faia210498","url":null,"abstract":"The recent availability of large-scale data combining multiple data modalities has opened various research and commercial opportunities in Artificial Intelligence (AI). Machine Learning (ML) has achieved important results in this area mostly by adopting a sub-symbolic distributed representation. It is generally accepted now that such purely sub-symbolic approaches can be data inefficient and struggle at extrapolation and reasoning. By contrast, symbolic AI is based on rich, high-level representations ideally based on human-readable symbols. Despite being more explainable and having success at reasoning, symbolic AI usually struggles when faced with incomplete knowledge or inaccurate, large data sets and combinatorial knowledge. Neurosymbolic AI attempts to benefit from the strengths of both approaches combining reasoning with complex representation of knowledge and efficient learning from multiple data modalities. Hence, neurosymbolic AI seeks to ground rich knowledge into efficient sub-symbolic representations and to explain sub-symbolic representations and deep learning by offering high-level symbolic descriptions for such learning systems. Logic Tensor Networks (LTN) are a neurosymbolic AI system for querying, learning and reasoning with rich data and abstract knowledge. LTN introduces Real Logic, a fully differentiable first-order language with concrete semantics such that every symbolic expression has an interpretation that is grounded onto real numbers in the domain. In particular, LTN converts Real Logic formulas into computational graphs that enable gradient-based optimization. This chapter presents the LTN framework and illustrates its use on knowledge completion tasks to ground the relational predicates (symbols) into a concrete interpretation (vectors and tensors). It then investigates the use of LTN on semi-supervised learning, learning of embeddings and reasoning. LTN has been applied recently to many important AI tasks, including semantic image interpretation, ontology learning and reasoning, and reinforcement learning, which use LTN for supervised classification, data clustering, semi-supervised learning, embedding learning, reasoning and query answering. The chapter presents some of the main recent applications of LTN before analyzing results in the context of related work and discussing the next steps for neurosymbolic AI and LTN-based AI models.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124892886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neuro-Symbolic Artificial Intelligence: The State of the Art","authors":"P. Hitzler, Md Kamruzzaman Sarker","doi":"10.3233/faia342","DOIUrl":"https://doi.org/10.3233/faia342","url":null,"abstract":"","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122257507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}