Adaptive regulation-based Mutual Information Camouflage Poisoning Attack in Graph Neural Networks
Jihui Yin, Taorui Yang, Yifei Sun, Jianzhi Gao, Jiangbo Lu, Zhi-Hui Zhan
Journal of Automation and Intelligence, Volume 4, Issue 1, March 2025, Pages 21-28. DOI: 10.1016/j.jai.2024.12.001
Studies show that Graph Neural Networks (GNNs) are susceptible to minor perturbations, so analyzing adversarial attacks on GNNs is crucial in current research. Previous studies used Generative Adversarial Networks to generate a set of fake nodes and injected them into a clean GNN to poison the graph structure and evaluate the robustness of GNNs. In that attack process, the computation of the new nodes' connections and the attack loss are independent, which weakens the attack on the GNN. To improve this, a Fake Node Camouflage Attack based on Mutual Information (FNCAMI) algorithm is proposed. By incorporating a Mutual Information (MI) loss, the distribution of the nodes injected into the GNN becomes more similar to that of the original nodes, yielding better attack results. Since the ratio between the GNN loss and the MI loss affects performance, an adaptive weighting method is also designed: by adjusting the loss weights in real time according to their rates of change, larger loss values are obtained and local optima are avoided. The feasibility, effectiveness, and stealthiness of the algorithm are validated on four real datasets, and both global and targeted attacks are used to test its performance. Comparisons with baseline attack algorithms and ablation experiments demonstrate the efficiency of the FNCAMI algorithm.
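The abstract does not spell out the adaptive weighting rule, so the following is only a minimal illustrative sketch of the general idea: combine the GNN attack loss and the MI camouflage loss with weights that are recomputed from the relative rates of change of each loss. The function names (`adaptive_weights`, `combined_attack_loss`), the softmax-over-rates scheme, and the `temperature` parameter are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def adaptive_weights(prev_losses, curr_losses, temperature=1.0):
    """Illustrative adaptive weighting (assumed scheme, not the paper's rule):
    loss terms whose relative rate of change is larger receive more weight,
    via a softmax over the per-term change ratios."""
    prev = np.asarray(prev_losses, dtype=float)
    curr = np.asarray(curr_losses, dtype=float)
    rates = curr / np.maximum(prev, 1e-12)      # relative change of each loss term
    exp_rates = np.exp(rates / temperature)
    return exp_rates / exp_rates.sum()          # weights sum to 1

def combined_attack_loss(attack_loss, mi_loss, weights):
    """Weighted sum of the GNN attack loss and the MI camouflage loss."""
    return weights[0] * attack_loss + weights[1] * mi_loss

if __name__ == "__main__":
    # Dummy loss values standing in for two consecutive optimization steps.
    prev = [1.20, 0.80]   # [attack loss, MI loss] at the previous step
    curr = [1.05, 0.95]   # [attack loss, MI loss] at the current step
    w = adaptive_weights(prev, curr)
    print("adaptive weights:", w)
    print("combined loss:", combined_attack_loss(curr[0], curr[1], w))
```

Under this assumed scheme, a loss term that is decreasing slowly (or increasing) gets a larger weight at the next step, which pushes the optimizer toward larger combined loss values and away from local optima, in the spirit of the real-time weight adjustment described in the abstract.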