Impact of Injecting Ground Truth Explanations on Relational Graph Convolutional Networks and their Explanation Methods for Link Prediction on Knowledge Graphs
{"title":"Impact of Injecting Ground Truth Explanations on Relational Graph Convolutional Networks and their Explanation Methods for Link Prediction on Knowledge Graphs","authors":"Nicholas F Halliwell, Fabien L. Gandon, F. Lécué","doi":"10.1109/WI-IAT55865.2022.00049","DOIUrl":null,"url":null,"abstract":"Relational Graph Convolutional Networks (RGCNs) are commonly applied to Knowledge Graphs (KGs) for black box link prediction. Several algorithms, or explanations methods, have been proposed to explain the predictions of this model. Recently, researchers have constructed datasets with ground truth explanations for quantitative and qualitative evaluation of predicted explanations. Benchmark results showed state-of-the-art explanation methods had difficulties predicting explanations. In this work, we leverage prior knowledge to further constrain the loss function of RGCNs, by penalizing node embeddings far away from the node embeddings in their associated ground truth explanation. Empirical results show improved explanation prediction performance of state-of-the-art post hoc explanations methods for RGCNs, at the cost of predictive performance. Additionally, we quantify the different types of errors made both in terms of data and semantics.","PeriodicalId":345445,"journal":{"name":"2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WI-IAT55865.2022.00049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Relational Graph Convolutional Networks (RGCNs) are commonly applied to Knowledge Graphs (KGs) for black-box link prediction. Several algorithms, or explanation methods, have been proposed to explain this model's predictions. Recently, researchers have constructed datasets with ground truth explanations for the quantitative and qualitative evaluation of predicted explanations. Benchmark results showed that state-of-the-art explanation methods had difficulty predicting these explanations. In this work, we leverage prior knowledge to further constrain the loss function of RGCNs, penalizing node embeddings that lie far from the node embeddings appearing in their associated ground truth explanation. Empirical results show that this improves the explanation prediction performance of state-of-the-art post hoc explanation methods for RGCNs, at the cost of link prediction performance. Additionally, we quantify the different types of errors made, in terms of both data and semantics.
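The loss modification described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch sketch, assuming a binary cross-entropy link-prediction objective (as in the common RGCN/DistMult setup) and a squared-Euclidean penalty pulling each node's embedding toward the embeddings of the nodes in its ground truth explanation. The exact formulation, and all names here (explanation_regularized_loss, expl_pairs, lam), are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an explanation-constrained RGCN training loss.
# The paper's exact penalty and weighting are not specified in the abstract.
import torch
import torch.nn.functional as F


def explanation_regularized_loss(
    node_emb: torch.Tensor,       # [num_nodes, dim] RGCN node embeddings
    triple_scores: torch.Tensor,  # [batch] raw scores for training triples
    triple_labels: torch.Tensor,  # [batch] 1 for true triples, 0 for corrupted
    expl_pairs: torch.Tensor,     # [num_pairs, 2] (node, explanation-node) index pairs
    lam: float = 0.1,             # assumed weight of the explanation penalty
) -> torch.Tensor:
    # Standard binary cross-entropy link-prediction loss.
    bce = F.binary_cross_entropy_with_logits(triple_scores, triple_labels.float())

    # Penalty: mean squared Euclidean distance between each node's embedding
    # and the embeddings of nodes in its associated ground truth explanation.
    src = node_emb[expl_pairs[:, 0]]
    tgt = node_emb[expl_pairs[:, 1]]
    penalty = ((src - tgt) ** 2).sum(dim=1).mean()

    return bce + lam * penalty
```

In this formulation, lam trades off the two objectives: larger values pull embeddings closer to their explanation nodes (helping post hoc explainers recover the ground truth) while weakening the pure link-prediction signal, which is consistent with the trade-off the abstract reports.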