{"title":"基于生成对抗网络的图神经网络的可解释性方法","authors":"Xinrui Kang, Dong Liang, Qinfeng Li","doi":"10.1145/3581807.3581850","DOIUrl":null,"url":null,"abstract":"In recent years, graph neural networks (GNNs) have achieved encouraging performance in the processing of graph data generated in non-Euclidean space. GNNs learn node features by aggregating and combining neighbor information, which is applied to many graphics tasks. However, the complex deep learning structure is still regarded as a black box, which is difficult to obtain the full trust of human beings. Due to the lack of interpretability, the application of graph neural network is greatly limited. Therefore, we propose an interpretable method, called GANExplainer, to explain GNNs at the model level. Our method can implicitly generate the characteristic subgraph of the graph without relying on specific input examples as the interpretation of the model to the data. GANExplainer relies on the framework of generative-adversarial method to train the generator and discriminator at the same time. More importantly, when constructing the discriminator, the corresponding graph rules are added to ensure the effectiveness of the generated characteristic subgraph. We carried out experiments on synthetic dataset and chemical molecules dataset and verified the effect of our method on model level interpreter from three aspects: accuracy, fidelity and sparsity.","PeriodicalId":292813,"journal":{"name":"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GANExplainer: Explainability Method for Graph Neural Network with Generative Adversarial Nets\",\"authors\":\"Xinrui Kang, Dong Liang, Qinfeng Li\",\"doi\":\"10.1145/3581807.3581850\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, graph neural networks (GNNs) have achieved encouraging performance in the processing of graph data generated in non-Euclidean space. GNNs learn node features by aggregating and combining neighbor information, which is applied to many graphics tasks. However, the complex deep learning structure is still regarded as a black box, which is difficult to obtain the full trust of human beings. Due to the lack of interpretability, the application of graph neural network is greatly limited. Therefore, we propose an interpretable method, called GANExplainer, to explain GNNs at the model level. Our method can implicitly generate the characteristic subgraph of the graph without relying on specific input examples as the interpretation of the model to the data. GANExplainer relies on the framework of generative-adversarial method to train the generator and discriminator at the same time. More importantly, when constructing the discriminator, the corresponding graph rules are added to ensure the effectiveness of the generated characteristic subgraph. 
We carried out experiments on synthetic dataset and chemical molecules dataset and verified the effect of our method on model level interpreter from three aspects: accuracy, fidelity and sparsity.\",\"PeriodicalId\":292813,\"journal\":{\"name\":\"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3581807.3581850\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3581807.3581850","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In recent years, graph neural networks (GNNs) have achieved encouraging performance in processing graph data generated in non-Euclidean spaces. GNNs learn node features by aggregating and combining neighbor information and are applied to many graph tasks. However, their complex deep learning structure is still regarded as a black box, making it difficult for them to gain full human trust, and this lack of interpretability greatly limits the application of graph neural networks. We therefore propose an explainability method, called GANExplainer, to explain GNNs at the model level. Our method implicitly generates the characteristic subgraph of the target model as its explanation, without relying on specific input examples. GANExplainer adopts the generative adversarial framework to train a generator and a discriminator simultaneously; moreover, corresponding graph rules are incorporated into the discriminator to ensure the validity of the generated characteristic subgraphs. We carried out experiments on a synthetic dataset and a chemical molecule dataset and evaluated our method as a model-level explainer from three aspects: accuracy, fidelity, and sparsity.
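The abstract describes the training scheme only at a high level, so the following is a minimal, hypothetical sketch of how a GAN-style model-level explainer could be trained: a generator maps noise to a soft adjacency matrix for a candidate characteristic subgraph, a discriminator scores it against reference subgraphs, and a simple rule penalty constrains the generator's output. PyTorch, the class names, the dense-adjacency setup, the placeholder "real" graphs, and the specific rule_penalty are all assumptions for illustration; they are not the authors' implementation.

# Hypothetical sketch of a GAN-based model-level explainer (not the paper's code).
import torch
import torch.nn as nn

N = 10  # number of nodes in the generated explanation subgraph (illustrative)

class Generator(nn.Module):
    """Maps a noise vector to edge probabilities of an N-node subgraph."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 64), nn.ReLU(),
            nn.Linear(64, N * N),
        )
    def forward(self, z):
        logits = self.net(z).view(-1, N, N)
        logits = (logits + logits.transpose(1, 2)) / 2  # make the graph undirected
        return torch.sigmoid(logits)                    # soft adjacency in [0, 1]

class Discriminator(nn.Module):
    """Scores an adjacency matrix as 'characteristic subgraph' vs. generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N * N, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, adj):
        return self.net(adj.view(adj.size(0), -1))      # raw logits

def rule_penalty(adj):
    # Example graph rule (assumed): discourage self-loops, encourage sparsity.
    eye = torch.eye(N, device=adj.device)
    return (adj * eye).sum(dim=(1, 2)).mean() + 0.01 * adj.sum(dim=(1, 2)).mean()

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Placeholder for class-characteristic reference subgraphs; in practice these
# would come from the dataset or the GNN being explained.
real_graphs = torch.rand(32, N, N)

for step in range(200):
    z = torch.randn(32, 16)
    fake = G(z)

    # Discriminator step: reference graphs -> 1, generated graphs -> 0.
    opt_d.zero_grad()
    loss_d = bce(D(real_graphs), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator while respecting the graph rules.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1)) + rule_penalty(fake)
    loss_g.backward()
    opt_g.step()

Under this kind of setup, the generated subgraphs could then be thresholded and evaluated along the criteria the paper names (accuracy, fidelity, and sparsity), for example by checking how the explained GNN classifies them and how few edges they retain.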