A Scheme for Robust Federated Learning with Privacy-preserving Based on Krum AGR
Xiumin Li, Mi Wen, Siying He, Rongxing Lu, Liangliang Wang
2023 IEEE/CIC International Conference on Communications in China (ICCC), published 2023-08-10. DOI: 10.1109/ICCC57788.2023.10233385
In federated learning, the sensitive information of participants can be leaked to an untrustworthy server through the uploaded gradients. Encrypted aggregation of the uploaded parameters can resolve this issue; however, while it solves the privacy problem, it complicates the defense against model poisoning attacks. To address this, a robust federated learning scheme with privacy preservation (RFLP) is proposed to eliminate the impact of model poisoning attacks while protecting the privacy of participants against untrusted servers. Specifically, an abnormal-gradient detection method is designed to achieve robust federated learning under encrypted aggregation using Paillier homomorphic encryption. It is based on the concept of the Krum aggregation rule (AGR), but operates on privacy-preserving data features, thereby ensuring privacy. To reduce the rounds of communication in robust aggregation, a multidimensional homomorphic encryption approach is constructed. In addition, an aggregate signature authentication method is constructed to ensure data integrity during transmission. Experimental results show that, with 10% malicious participants, the training accuracy of RFLP is 11.9% and 15.3% higher, respectively, than that of aggregation without the robustness mechanism.
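The scheme builds on the Krum aggregation rule (Blanchard et al., 2017), which the abstract names but does not restate. As background, a minimal plaintext sketch of standard Krum (not the paper's encrypted variant): each of the n submitted gradients is scored by the summed squared distance to its n - f - 2 nearest neighbours, and the gradient with the lowest score is selected, under the assumption n > 2f + 2 for f Byzantine participants.

```python
import numpy as np

def krum(gradients, f):
    """Standard (plaintext) Krum aggregation rule.

    Selects the gradient whose summed squared distance to its
    n - f - 2 nearest neighbours is smallest; requires n > 2f + 2.
    """
    n = len(gradients)
    assert n > 2 * f + 2, "Krum requires n > 2f + 2"
    g = np.stack(gradients)                      # shape (n, d)
    # Pairwise squared Euclidean distances between all gradients.
    diffs = g[:, None, :] - g[None, :, :]
    dist2 = np.sum(diffs ** 2, axis=-1)          # shape (n, n)
    scores = []
    for i in range(n):
        d = np.delete(dist2[i], i)               # distances to the others
        d.sort()
        scores.append(d[: n - f - 2].sum())      # closest n - f - 2 neighbours
    return g[int(np.argmin(scores))]
```

A poisoned gradient far from the honest cluster accrues large distances to every neighbour, so its score is high and it is never selected; RFLP's contribution is performing this style of outlier scoring over privacy-preserving data features rather than raw gradients.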
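The encrypted aggregation relies on the additive homomorphism of Paillier encryption: the product of two ciphertexts decrypts to the sum of their plaintexts, so the server can sum gradients without seeing them. A toy sketch of that property (tiny hardcoded primes for illustration only; a real deployment needs large primes and is not this code):

```python
import math
import random

def keygen(p=61, q=53):
    """Toy Paillier key generation with g = n + 1 (insecure prime sizes)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because g = n + 1
    return (n,), (lam, mu, n)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

def he_add(pk, c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds plaintexts."""
    (n,) = pk
    return (c1 * c2) % (n * n)
```

With this property, each participant uploads Enc(gradient) and the server computes Enc(sum) by multiplying ciphertexts, never learning individual contributions.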
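The abstract does not detail the multidimensional homomorphic encryption construction that cuts communication rounds. One common batching idea in this setting (an assumption about the general technique, not the paper's actual encoding) is fixed-base packing: several small gradient components are packed into one plaintext integer, so one ciphertext addition sums every slot at once.

```python
def pack(values, base=10**6):
    """Pack small non-negative ints into one integer.

    The base must exceed the largest possible componentwise sum,
    so that additions never carry across slot boundaries.
    """
    m = 0
    for v in values:
        m = m * base + v
    return m

def unpack(m, k, base=10**6):
    """Recover k slots from a packed integer."""
    out = []
    for _ in range(k):
        out.append(m % base)
        m //= base
    return list(reversed(out))
```

Because integer addition of two packed values adds slot-by-slot (as long as no slot overflows the base), packing composes with any additively homomorphic scheme such as Paillier: one ciphertext then carries many gradient dimensions.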