Authors: Sanfeng Zhang, Zijian Gong, Zhen Zhang, Wang Yang
Journal: Concurrency and Computation: Practice and Experience, vol. 37, issue 25-26
Published: 2025-09-30 | DOI: 10.1002/cpe.70323
APVFGL: A Robust Vertical Federated Graph Learning Framework Against Poisoning Attacks
Vertical federated graph learning (VFGL) is a distributed graph learning scheme that addresses data isolation and privacy protection in scenarios where different clients hold the same nodes with distinct feature sets. However, VFGL is vulnerable to poisoning attacks, and existing defense methods designed for horizontal and vertical federated learning are not effective in this setting. To address this, this paper proposes APVFGL (Anti-Poison Vertical Federated Graph Learning), a robust VFGL framework resilient to poisoning attacks. APVFGL utilizes dual graph encoders and graph contrastive learning during the local training phase to derive robust node representations. The loss function, based on information bottleneck theory, reduces redundant information in the data to enhance the model's robustness against poisoning attacks without the complexity of constructing negative samples. Additionally, a Shapley-based aggregation method is introduced on the server side to dynamically assign weights to each client, mitigating the impact of malicious feature manipulation. Experimental results on benchmark datasets demonstrate the superior performance of APVFGL against various poisoning attacks. Even when more than half of the clients are poisoned, APVFGL still achieves F1 scores of 81.6% and 71.5% on the Cora and Citeseer datasets, respectively, with an average reduction of 23.6% in attack success rate, highlighting its robustness and practicality in vertical federated graph learning scenarios.
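The abstract does not spell out how the Shapley-based aggregation assigns client weights; the sketch below illustrates the general idea with exact Shapley values over a small client set. The `value` function (e.g., validation F1 of a model aggregated from a coalition's embeddings) is a hypothetical placeholder, and the clip-and-normalize step is an assumption about how negative (likely poisoned) contributions would be handled, not the paper's actual procedure.

```python
from itertools import combinations
from math import factorial

def shapley_weights(clients, value):
    """Exact Shapley values for a small set of clients, normalized
    into aggregation weights.

    clients: list of client ids.
    value: callable mapping a frozenset of client ids to a utility
           score (hypothetical, e.g., validation F1 of the fused model).
    """
    n = len(clients)
    phi = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):  # coalition sizes excluding c
            for subset in combinations(others, k):
                coalition = frozenset(subset)
                # Shapley weight for a coalition of size k out of n players
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[c] += w * (value(coalition | {c}) - value(coalition))
    # Clip negative contributions (likely poisoned clients), then normalize
    clipped = {c: max(v, 0.0) for c, v in phi.items()}
    total = sum(clipped.values()) or 1.0
    return {c: v / total for c, v in clipped.items()}
```

Exact computation costs O(2^n) coalition evaluations, which is tractable only because VFGL deployments typically involve a handful of feature-holding clients; larger settings would need Monte Carlo approximation of the Shapley values.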
Journal introduction:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers and authoritative research review papers in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.