Knowledge Graph Embeddings and Explainable AI
Federico Bianchi, Gaetano Rossiello, Luca Costabello, M. Palmonari, Pasquale Minervini
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200011. Published: 2020-04-30.
Abstract: Knowledge graph embeddings are now a widely adopted approach to knowledge representation in which entities and relationships are embedded in vector spaces. In this chapter, we introduce the reader to the concept of knowledge graph embeddings by explaining what they are, how they can be generated and how they can be evaluated. We summarize the state of the art in this field by describing the approaches that have been introduced to represent knowledge in the vector space. In relation to knowledge representation, we consider the problem of explainability, and discuss models and methods for explaining predictions obtained via knowledge graph embeddings.
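The abstract above describes embedding entities and relations in vector spaces and scoring triples for plausibility. As a minimal illustrative sketch (not the chapter's own model), the widely used TransE scoring function treats a relation as a translation between entity vectors; all names and data below are made-up toy examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary of entities and relations.
entities = {"Berlin": 0, "Germany": 1, "Paris": 2, "France": 3}
relations = {"capital_of": 0}

dim = 8
E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

def transe_score(h, r, t):
    """TransE plausibility score: negative distance between h + r and t.
    A higher (less negative) score means the triple is more plausible."""
    return -np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

score = transe_score("Berlin", "capital_of", "Germany")
```

In a real system the embeddings would be trained so that observed triples score higher than corrupted ones; here they are random, so only the scoring mechanics are shown.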
Foundations of Explainable Knowledge-Enabled Systems
Shruthi Chari, Daniel Gruen, O. Seneviratne, D. McGuinness
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200010. Published: 2020-03-17.
Abstract: Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.
Neuro-symbolic Architectures for Context Understanding
A. Oltramari, Jonathan M. Francis, C. Henson, Kaixin Ma, Ruwan Wickramarachchi
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200016. Published: 2020-03-09.
Abstract: Computational context understanding refers to an agent's ability to fuse disparate sources of information for decision-making and is, therefore, generally regarded as a prerequisite for sophisticated machine reasoning capabilities, such as in artificial intelligence (AI). Data-driven and knowledge-driven methods are two classical techniques in the pursuit of such machine sense-making capability. However, while data-driven methods seek to model the statistical regularities of events by making observations in the real world, they remain difficult to interpret and they lack mechanisms for naturally incorporating external knowledge. Conversely, knowledge-driven methods combine structured knowledge bases, perform symbolic reasoning based on axiomatic principles, and are more interpretable in their inferential processing; however, they often lack the ability to estimate the statistical salience of an inference. To combat these issues, we propose the use of hybrid AI methodology as a general framework for combining the strengths of both approaches. Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning progress of deep neural networks. We further ground our discussion in two applications of neuro-symbolism and, in both cases, show that our systems maintain interpretability while achieving comparable performance relative to the state of the art.
Differentiable Reasoning on Large Knowledge Bases and Natural Language
Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel, Edward Grefenstette
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.1609/AAAI.V34I04.5962. Published: 2019-12-17.
Abstract: Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper, we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at this https URL.
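The abstract describes pruning the NTP proof search to "only the most promising proof paths". As a hedged sketch of that idea (a toy reconstruction, not the paper's implementation), a soft-unification score between a query embedding and each stored fact embedding can be computed with a Gaussian kernel, keeping only the top-k candidates; all names and data here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

# Hypothetical KB predicates and their (random, untrained) embeddings.
kb_facts = ["locatedIn", "partOf", "capitalOf", "bornIn"]
fact_emb = rng.normal(size=(len(kb_facts), dim))

def soft_unify(query_emb, k=2):
    """GNTP-style pruning step: score every KB fact against the query
    with a Gaussian (RBF) kernel and keep only the top-k candidates,
    so the proof search expands far fewer branches."""
    dists = np.linalg.norm(fact_emb - query_emb, axis=1)
    scores = np.exp(-dists ** 2)        # soft unification score in (0, 1]
    top_k = np.argsort(-scores)[:k]     # greedy selection of proof paths
    return [(kb_facts[i], float(scores[i])) for i in top_k]

candidates = soft_unify(rng.normal(size=dim))
```

The restriction to k candidates per unification step is what turns the exhaustive NTP search into something tractable on large KBs.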
Knowledge Representation and Reasoning Methods to Explain Errors in Machine Learning
Marjan Alirezaie, Martin Längkvist, A. Loutfi
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200017.
Knowledge-Aware Interpretable Recommender Systems
V. W. Anelli, Vito Bellini, T. D. Noia, E. Sciascio
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200014.
Abstract: Recommender systems are everywhere, from e-commerce to streaming platforms. They help users lost in the maze of available information, items and services to find their way. Among them, over the years, approaches based on machine learning techniques have shown particularly good performance for top-N recommendation engines. Unfortunately, they mostly behave as black boxes and, even when they embed some form of description of the items to recommend, after the training phase they move such descriptions into a latent space, thus losing the explicit semantics of the recommended items. As a consequence, system designers struggle to provide satisfying explanations for the recommendation lists presented to end users. In this chapter, we describe two approaches to recommendation which make use of the semantics encoded in a knowledge graph to train interpretable models that keep the original semantics of the item descriptions, thus providing a powerful tool to automatically compute explainable results. The two methods rely on two completely different machine learning algorithms, namely factorization machines and autoencoder neural networks. We also show how to measure the interpretability of the model through the introduction of two metrics: semantic accuracy and robustness.
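The abstract names factorization machines as one of the two underlying algorithms. As an illustrative sketch only (not the chapter's trained model), a second-order factorization machine scores a sparse feature vector whose dimensions can correspond to explicit knowledge-graph properties, which is what keeps the model's weights attributable to named item features; all weights and features below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sparse feature vector: [user features | KG-derived item features].
n_features, k = 6, 3
w0 = 0.1
w = rng.normal(size=n_features)        # per-feature linear weights
V = rng.normal(size=(n_features, k))   # per-feature latent factors

def fm_predict(x):
    """Second-order factorization machine (Rendle's O(nk) formulation):
    y = w0 + <w, x> + 0.5 * sum_f [ (V^T x)_f^2 - ((V^2)^T x^2)_f ],
    which equals the sum of <v_i, v_j> x_i x_j over all feature pairs i < j."""
    linear = w0 + w @ x
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return linear + interactions

x = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])  # active user and item features
y = fm_predict(x)
```

Because each dimension of x maps to an explicit KG property rather than an opaque latent code, the learned pairwise weights can be read back as interactions between named item attributes.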
Explanations in Predictive Analytics: Case Studies
Jiewen Wu, Minh-Thuan Nguyen, G. Ngo, Nancy F. Chen
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200019.
Benchmarking the Lifecycle of Knowledge Graphs
Michael Röder, M. A. Sherif, Muhammad Saleem, Felix Conrads, A. N. Ngomo
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200012.
Generating Explanations in Natural Language from Knowledge Graphs
Diego Moussallem, René Speck, A. N. Ngomo
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200020.