Cooperative Explanations of Graph Neural Networks

Junfeng Fang, Xiang Wang, An Zhang, Zemin Liu, Xiangnan He, Tat-seng Chua
{"title":"Cooperative Explanations of Graph Neural Networks","authors":"Junfeng Fang, Xiang Wang, An Zhang, Zemin Liu, Xiangnan He, Tat-seng Chua","doi":"10.1145/3539597.3570378","DOIUrl":null,"url":null,"abstract":"With the growing success of graph neural networks (GNNs), the explainability of GNN is attracting considerable attention. Current explainers mostly leverage feature attribution and selection to explain a prediction. By tracing the importance of input features, they select the salient subgraph as the explanation. However, their explainability is at the granularity of input features only, and cannot reveal the usefulness of hidden neurons. This inherent limitation makes the explainers fail to scrutinize the model behavior thoroughly, resulting in unfaithful explanations. In this work, we explore the explainability of GNNs at the granularity of both input features and hidden neurons. To this end, we propose an explainer-agnostic framework, Cooperative GNN Explanation (CGE) to generate the explanatory subgraph and subnetwork simultaneously, which jointly explain how the GNN model arrived at its prediction. Specifically, it first initializes the importance scores of input features and hidden neurons with masking networks. Then it iteratively retrains the importance scores, refining the salient subgraph and subnetwork by discarding low-scored features and neurons in each iteration. Through such cooperative learning, CGE not only generates faithful and concise explanations, but also exhibits how the salient information flows by activating and deactivating neurons. We conduct extensive experiments on both synthetic and real-world datasets, validating the superiority of CGE over state-of-the-art approaches. Code is available at https://github.com/MangoKiller/CGE_demo.","PeriodicalId":227804,"journal":{"name":"Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3539597.3570378","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

With the growing success of graph neural networks (GNNs), the explainability of GNNs is attracting considerable attention. Current explainers mostly leverage feature attribution and selection to explain a prediction: by tracing the importance of input features, they select the salient subgraph as the explanation. However, they explain at the granularity of input features only and cannot reveal the usefulness of hidden neurons. This inherent limitation prevents the explainers from scrutinizing model behavior thoroughly, resulting in unfaithful explanations. In this work, we explore the explainability of GNNs at the granularity of both input features and hidden neurons. To this end, we propose an explainer-agnostic framework, Cooperative GNN Explanation (CGE), which generates an explanatory subgraph and subnetwork simultaneously; together they explain how the GNN model arrived at its prediction. Specifically, CGE first initializes the importance scores of input features and hidden neurons with masking networks. It then iteratively retrains the importance scores, refining the salient subgraph and subnetwork by discarding low-scored features and neurons in each iteration. Through such cooperative learning, CGE not only generates faithful and concise explanations, but also shows how the salient information flows by activating and deactivating neurons. We conduct extensive experiments on both synthetic and real-world datasets, validating the superiority of CGE over state-of-the-art approaches. Code is available at https://github.com/MangoKiller/CGE_demo.
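To make the iterative procedure concrete, below is a minimal PyTorch sketch of the cooperative masking loop: sigmoid masks score input edges and hidden neurons, an inner loop trains the masks to preserve the frozen GNN's prediction while staying sparse, and each outer iteration hard-prunes the lowest-scored edges and neurons. Everything here (the toy GCN layer, the mask parameterization, the pruning schedule, and names such as MaskedGCNLayer, explain, and keep_ratio) is an illustrative assumption, not the authors' implementation; see the linked repository for the real code.

```python
# Illustrative sketch only; NOT the authors' CGE implementation
# (see https://github.com/MangoKiller/CGE_demo for the real code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedGCNLayer(nn.Module):
    """Toy GCN-style layer whose hidden neurons carry learnable importance scores."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Logits defining a soft mask over this layer's hidden neurons.
        self.neuron_logits = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x, adj):
        h = F.relu(adj @ self.lin(x))
        # Scale (and, once pruned, effectively deactivate) hidden neurons.
        return h * torch.sigmoid(self.neuron_logits)


def explain(layers, readout, x, adj, target,
            n_iters=5, keep_ratio=0.8, inner_steps=100):
    """Jointly learn an edge mask (explanatory subgraph) and per-layer neuron
    masks (explanatory subnetwork), pruning the lowest-scored elements after
    every outer iteration."""
    # The GNN itself is assumed pretrained and frozen; only masks are trained.
    for module in list(layers) + [readout]:
        for name, p in module.named_parameters():
            if name != "neuron_logits":
                p.requires_grad_(False)

    edge_logits = nn.Parameter(torch.zeros_like(adj))
    alive = adj > 0  # edges still in the candidate subgraph
    opt = torch.optim.Adam([edge_logits] + [l.neuron_logits for l in layers],
                           lr=0.01)

    for _ in range(n_iters):
        for _ in range(inner_steps):
            masked_adj = adj * torch.sigmoid(edge_logits) * alive
            h = x
            for layer in layers:
                h = layer(h, masked_adj)
            logits = readout(h.mean(dim=0, keepdim=True))  # toy graph readout
            # Faithfulness term (keep the target prediction) + sparsity term.
            loss = F.cross_entropy(logits, target) \
                + 0.01 * torch.sigmoid(edge_logits[alive]).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Discard the lowest-scored edges ...
        edge_scores = torch.sigmoid(edge_logits)[alive]
        k = max(1, int(keep_ratio * edge_scores.numel()))
        alive = alive & (torch.sigmoid(edge_logits)
                         >= edge_scores.topk(k).values.min())
        # ... and deactivate each layer's lowest-scored hidden neurons.
        with torch.no_grad():
            for layer in layers:
                s = torch.sigmoid(layer.neuron_logits)
                cut = s.topk(max(1, int(keep_ratio * s.numel()))).values.min()
                layer.neuron_logits[s < cut] = -10.0  # sigmoid(-10) ~ 0

    return alive, [torch.sigmoid(l.neuron_logits).detach() for l in layers]


# Hypothetical usage on a random 6-node graph (shapes only, not real data):
torch.manual_seed(0)
layers = [MaskedGCNLayer(4, 8), MaskedGCNLayer(8, 8)]
readout = nn.Linear(8, 3)
x, adj = torch.randn(6, 4), (torch.rand(6, 6) > 0.5).float()
sub_edges, neuron_scores = explain(layers, readout, x, adj,
                                   target=torch.tensor([1]))
```

The split between soft masks (differentiable importance scores) and hard pruning (the per-iteration discarding step) mirrors the cooperative refinement described in the abstract: the surviving edges form the salient subgraph, and the surviving neurons form the salient subnetwork.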