{"title":"FedDBG: Privacy-Preserving Dynamic Benchmark Gradient in Federated Learning Against Poisoning Attacks","authors":"Mengfan Xu","doi":"10.1109/NaNA56854.2022.00089","DOIUrl":null,"url":null,"abstract":"The federated learning's (FL) ability to protect local data privacy while cooperatively training powerful global models has received extensive attention. Although some researchers have carried out researches on gradient privacy disclosure under poisoning attacks, the existing works still ignore the unreliability of initial data, which makes it difficult to obtain the benign initial reference gradient, resulting in a significant decline in the accuracy of the final global model. To solve this problem, we propose a privacy-preserving gradient framework in FL based on homomorphic encryption. The framework can ensure that malicious initial users and subsequent users cannot interfere with the accuracy of the global model by uploading the poisoning gradients. In this process, key parameters such as gradients of local users won't be leaked. We then design a dynamic reference gradient aggregation algorithm to mitigate the poisoning attack in FL, dynamically dividing the sub-gradients of each round of local uploads by clustering the gradients of different local uploads. Furthermore, the malicious and benign gradients are further separated and the optimal global model is obtained by iterative updating. We proved the security of the scheme theoretically, and verified the effectiveness of the scheme through experiments. The accuracy of the proposed scheme is at least 80% higher than that of the scheme without anti-poisoning measures.","PeriodicalId":113743,"journal":{"name":"2022 International Conference on Networking and Network Applications (NaNA)","volume":"729 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Networking and Network Applications (NaNA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NaNA56854.2022.00089","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Federated learning (FL) has attracted extensive attention for its ability to protect local data privacy while cooperatively training powerful global models. Although some researchers have studied gradient privacy disclosure under poisoning attacks, existing works still ignore the unreliability of the initial data, which makes it difficult to obtain a benign initial reference gradient and leads to a significant decline in the accuracy of the final global model. To solve this problem, we propose a privacy-preserving gradient framework for FL based on homomorphic encryption. The framework ensures that neither malicious initial users nor subsequent users can degrade the accuracy of the global model by uploading poisoned gradients, and key parameters such as the local users' gradients are never leaked in the process. We then design a dynamic reference gradient aggregation algorithm to mitigate poisoning attacks in FL: in each round, it clusters the locally uploaded gradients to dynamically partition them into sub-gradients, further separates the malicious gradients from the benign ones, and obtains the optimal global model by iterative updating. We prove the security of the scheme theoretically and verify its effectiveness through experiments: the accuracy of the proposed scheme is at least 80% higher than that of a scheme without anti-poisoning measures.
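To make the two components concrete, the following sketches illustrate the underlying ideas; they are minimal, hypothetical examples, not the paper's implementation. The first shows additively homomorphic gradient aggregation with the Paillier cryptosystem (here via the `phe` Python library, which the paper does not necessarily use): the server sums ciphertexts coordinate-wise without ever seeing an individual client's gradient.

```python
# Hypothetical sketch: additively homomorphic gradient aggregation.
# Assumes the `phe` (python-paillier) library; the paper's actual
# encryption scheme and key management may differ.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its flattened gradient entry-wise with the public key.
client_grads = [[0.12, -0.40], [0.10, -0.38], [0.11, -0.41]]
enc_grads = [[pub.encrypt(g) for g in grad] for grad in client_grads]

# The aggregator adds ciphertexts coordinate-wise; it never sees plaintexts.
enc_sum = enc_grads[0]
for enc_grad in enc_grads[1:]:
    enc_sum = [a + b for a, b in zip(enc_sum, enc_grad)]

# Only the private-key holder can recover the aggregated (averaged) gradient.
avg_grad = [priv.decrypt(c) / len(client_grads) for c in enc_sum]
```

The second sketch shows the clustering intuition behind a dynamic reference gradient: each round's uploads are split into two clusters, and the cluster whose centroid better agrees with the current reference is kept as benign and becomes the next reference. The function name and scoring rule below are invented for illustration, and the sketch operates on plaintext gradients, whereas the paper performs the comparable step under encryption.

```python
# Hypothetical sketch: clustering-based separation of benign and
# poisoned gradients around a reference gradient (plaintext version).
import numpy as np
from sklearn.cluster import KMeans

def aggregate_with_dynamic_reference(client_grads, reference_grad):
    """Cluster this round's uploads into two groups and keep the group
    whose centroid is closer (in cosine similarity) to the reference."""
    grads = np.asarray(client_grads)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(grads)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    centroids = [grads[labels == k].mean(axis=0) for k in (0, 1)]
    benign = int(np.argmax([cosine(c, reference_grad) for c in centroids]))

    # The mean of the benign cluster is aggregated into the model and
    # serves as the reference gradient for the next round.
    return grads[labels == benign].mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_grad = rng.normal(size=100)
    benign = true_grad + 0.1 * rng.normal(size=(8, 100))
    poisoned = -true_grad + 0.1 * rng.normal(size=(2, 100))  # sign-flipping attackers
    new_ref = aggregate_with_dynamic_reference(np.vstack([benign, poisoned]), true_grad)
```

In this toy setup, the two sign-flipped uploads land in their own cluster and are discarded, so the new reference stays close to the true gradient; iterating this selection across rounds is the intuition behind dynamically refining the reference gradient.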