{"title":"图神经网络的对抗性样本攻击与防御概述","authors":"Chuan Guo","doi":"10.1109/ICAA53760.2021.00053","DOIUrl":null,"url":null,"abstract":"Graph-structured data has been widely used. Graph neural network can be used to analyze graph-structured data well. However, the existence of adversarial samples indicates that the prediction results of graph neural networks can be deliberately manipulated. This affects the feasibility of applying deep learning methods to critical situations. Study on graph neural network adversarial sample attack methods and defense techniques can help to strengthen our understanding of graph neural network and build a more robust graph neural network model. It is of great significance to promote the feasibility and security of relevant algorithms in practical applications. This paper analyzes the current graph neural network adversarial sample attack and defense techniques, which has a guiding significance for future research work.","PeriodicalId":121879,"journal":{"name":"2021 International Conference on Intelligent Computing, Automation and Applications (ICAA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Overview of Adversarial Sample Attacks and Defenses for Graph Neural Networks\",\"authors\":\"Chuan Guo\",\"doi\":\"10.1109/ICAA53760.2021.00053\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph-structured data has been widely used. Graph neural network can be used to analyze graph-structured data well. However, the existence of adversarial samples indicates that the prediction results of graph neural networks can be deliberately manipulated. This affects the feasibility of applying deep learning methods to critical situations. Study on graph neural network adversarial sample attack methods and defense techniques can help to strengthen our understanding of graph neural network and build a more robust graph neural network model. It is of great significance to promote the feasibility and security of relevant algorithms in practical applications. 
This paper analyzes the current graph neural network adversarial sample attack and defense techniques, which has a guiding significance for future research work.\",\"PeriodicalId\":121879,\"journal\":{\"name\":\"2021 International Conference on Intelligent Computing, Automation and Applications (ICAA)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Intelligent Computing, Automation and Applications (ICAA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAA53760.2021.00053\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Intelligent Computing, Automation and Applications (ICAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAA53760.2021.00053","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Overview of Adversarial Sample Attacks and Defenses for Graph Neural Networks
Graph-structured data is widely used, and graph neural networks are well suited to analyzing it. However, the existence of adversarial samples shows that the predictions of a graph neural network can be deliberately manipulated, which limits the feasibility of applying deep learning methods in critical scenarios. Studying adversarial sample attack methods and defense techniques for graph neural networks deepens our understanding of these models, helps build more robust ones, and is of great significance for the feasibility and security of the relevant algorithms in practical applications. This paper analyzes current adversarial sample attack and defense techniques for graph neural networks and offers guidance for future research.
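To make the manipulation concrete, the minimal sketch below (not from the paper; the toy graph, NumPy two-layer GCN, and random weights are assumptions for illustration) shows the basic shape of a structure attack: flipping a single edge in the adjacency matrix and checking whether the model's prediction for a target node changes. With random weights the prediction may or may not flip; the attacks surveyed in the paper instead select perturbations deliberately, e.g. by optimizing an attack objective.

```python
# Illustrative sketch only: a tiny two-layer GCN forward pass in NumPy and a
# single-edge "attack" on its adjacency matrix. Graph, features, and weights
# are made up; this is not the paper's method, just the general idea.
import numpy as np

rng = np.random.default_rng(0)

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: softmax(A_norm @ relu(A_norm @ X @ W1) @ W2),
    where A_norm is the symmetrically normalized adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))       # D^{-1/2}
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    H = np.maximum(A_norm @ X @ W1, 0.0)                # ReLU
    Z = A_norm @ H @ W2
    return np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)  # softmax

# Toy graph: 6 nodes in two clusters {0,1,2} and {3,4,5}.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[u, v] = A[v, u] = 1.0

X = rng.normal(size=(6, 4))      # random node features (illustrative)
W1 = rng.normal(size=(4, 8))     # random weights (no training here)
W2 = rng.normal(size=(8, 2))

target = 0
clean_pred = gcn_forward(A, X, W1, W2)[target].argmax()

# Structure perturbation: add one edge connecting the target to the other
# cluster, then re-run the forward pass and compare predictions.
A_attacked = A.copy()
A_attacked[0, 4] = A_attacked[4, 0] = 1.0
attacked_pred = gcn_forward(A_attacked, X, W1, W2)[target].argmax()

print(f"prediction before edge flip: {clean_pred}, after: {attacked_pred}")
```

The point of the sketch is that the attacker never touches the model parameters: changing a single entry of the graph structure already propagates through the aggregation steps and can alter the output for the target node, which is why both attack and defense work for graph neural networks focuses so heavily on structural perturbations.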