{"title":"图神经网络权值的对抗性训练","authors":"Hao Xue, Xin Wang, Ying Wang","doi":"10.1145/3579654.3579738","DOIUrl":null,"url":null,"abstract":"Despite the fact that Graph Neural Networks (GNNs) have been extensively used for graph embedding representation, it is challenging to train well-performing GNNs on graphs with good generalization due to the limitation of overfitting. Previous research in Computer Vision (CV) has shown that the lack of generalization usually corresponds to the convergence of model parameters to sharp local minima. However, there is still a lack of related research in the field of graph analysis. In this paper, we investigate the loss landscape of models from the weight change perspective and show that the vanilla training method tends to cause GNNs to fall into sharp local minima with poor generalization. To tackle this problem, we propose a method named Adversarial Training on Weights (ATW) to flatten the weight loss landscape using adversarial training, thus improving the generalization of GNNs. Extensive experiments with multiple backbones on various datasets demonstrate the effectiveness of our method.","PeriodicalId":146783,"journal":{"name":"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarial Training on Weights for Graph Neural Networks\",\"authors\":\"Hao Xue, Xin Wang, Ying Wang\",\"doi\":\"10.1145/3579654.3579738\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Despite the fact that Graph Neural Networks (GNNs) have been extensively used for graph embedding representation, it is challenging to train well-performing GNNs on graphs with good generalization due to the limitation of overfitting. Previous research in Computer Vision (CV) has shown that the lack of generalization usually corresponds to the convergence of model parameters to sharp local minima. However, there is still a lack of related research in the field of graph analysis. In this paper, we investigate the loss landscape of models from the weight change perspective and show that the vanilla training method tends to cause GNNs to fall into sharp local minima with poor generalization. To tackle this problem, we propose a method named Adversarial Training on Weights (ATW) to flatten the weight loss landscape using adversarial training, thus improving the generalization of GNNs. 
Extensive experiments with multiple backbones on various datasets demonstrate the effectiveness of our method.\",\"PeriodicalId\":146783,\"journal\":{\"name\":\"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3579654.3579738\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579654.3579738","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adversarial Training on Weights for Graph Neural Networks
Abstract: Although Graph Neural Networks (GNNs) are widely used for graph embedding representation, training GNNs that generalize well remains challenging because of overfitting. Prior research in Computer Vision (CV) has shown that poor generalization often corresponds to model parameters converging to sharp local minima, but this connection has received little attention in graph analysis. In this paper, we investigate the loss landscape of models from the weight-change perspective and show that vanilla training tends to drive GNNs into sharp local minima with poor generalization. To tackle this problem, we propose Adversarial Training on Weights (ATW), which flattens the weight loss landscape via adversarial training and thereby improves the generalization of GNNs. Extensive experiments with multiple backbones on various datasets demonstrate the effectiveness of our method.
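The abstract does not include the ATW algorithm itself, but its description (adversarially perturbing the weights to flatten the weight loss landscape) matches the sharpness-aware / adversarial-weight-perturbation family of methods. Below is a minimal, hypothetical PyTorch sketch of one such update on a small GCN; the names DenseGCNLayer, GCN, atw_step, and the radius rho are illustrative assumptions, not the authors' code, and the single-step inner maximization is likewise an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNLayer(nn.Module):
    """Minimal GCN layer on a dense, symmetrically normalized adjacency."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # Aggregate neighbor features, then apply the linear transform.
        return adj_norm @ self.lin(x)


class GCN(nn.Module):
    """Two-layer GCN, standing in for whichever backbone is trained."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.gc1 = DenseGCNLayer(in_dim, hid_dim)
        self.gc2 = DenseGCNLayer(hid_dim, n_classes)

    def forward(self, x, adj_norm):
        return self.gc2(F.relu(self.gc1(x, adj_norm)), adj_norm)


def atw_step(model, optimizer, x, adj_norm, y, mask, rho=5e-3):
    """One ATW-style update: ascend adversarially on the weights,
    then descend from the original point using the worst-case gradients.
    rho and the single ascent step are assumptions, not from the paper."""
    # 1) Clean pass: gradients give the steepest-ascent direction on weights.
    optimizer.zero_grad()
    F.cross_entropy(model(x, adj_norm)[mask], y[mask]).backward()

    # 2) Perturb each weight toward higher loss: w <- w + rho * g / ||g||.
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params))
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # 3) Gradient of the worst-case loss at the perturbed weights.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x, adj_norm)[mask], y[mask])
    adv_loss.backward()

    # 4) Restore the original weights, then step with the adversarial grads.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    return adv_loss.item()


# Toy usage on a random graph (illustrative data, not a benchmark).
n, d, c = 100, 16, 3
x = torch.randn(n, d)
adj = ((torch.rand(n, n) < 0.05).float() + torch.eye(n)).clamp(max=1.0)
adj = ((adj + adj.T) > 0).float()               # symmetric, with self-loops
deg_inv_sqrt = adj.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
y = torch.randint(0, c, (n,))
mask = torch.arange(n) < 60                     # training nodes

model = GCN(d, 32, c)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(50):
    loss = atw_step(model, opt, x, adj_norm, y, mask)
```

Restoring the weights before optimizer.step() means the descent is taken from the unperturbed point but along the gradient of the worst-case loss, which is what biases training away from sharp minima and toward flat regions of the weight loss landscape.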