{"title":"Contract-based hierarchical security aggregation scheme for enhancing privacy in federated learning","authors":"Qianjin Wei , Gang Rao , Xuanjing Wu","doi":"10.1016/j.jisa.2024.103857","DOIUrl":null,"url":null,"abstract":"<div><p>Federated learning ensures the privacy of participant data by uploading gradients rather than private data. However, it has yet to address the issue of untrusted aggregators using gradient inference attacks to obtain user privacy data. Current research introduces encryption, blockchain, or secure multi-party computation to address these issues, but these solutions suffer from significant computational and communication overhead, often requiring a trusted third party. To address these challenges, this paper proposes a contract-based hierarchical secure aggregation scheme to enhance the privacy of federated learning. Firstly, the paper designs a general hierarchical federated learning model that distinguishes among training, aggregation, and consensus layers, replacing the need for a trusted third party with smart contracts. Secondly, to prevent untrusted aggregators from inferring the privacy data of each participant, the paper proposes a novel aggregation scheme based on Paillier and secret sharing. This scheme forces aggregators to aggregate participants’ model parameters, thereby preserving the privacy of gradients. Additionally, secret sharing ensures robustness for participants dynamically joining or exiting. Furthermore, at the consensus layer, the paper proposes an accuracy-based update algorithm to mitigate the impact of Byzantine attacks and allows for the introduction of other consensus methods to ensure scalability. Experimental results demonstrate that our scheme enhances privacy protection, maintains model accuracy without loss, and exhibits robustness against Byzantine attacks. 
The proposed scheme effectively protects participant privacy in practical federated learning scenarios.</p></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"85 ","pages":"Article 103857"},"PeriodicalIF":3.8000,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212624001595","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Federated learning preserves the privacy of participant data by uploading gradients rather than raw private data. However, it has yet to address the problem of untrusted aggregators mounting gradient inference attacks to recover private user data. Current research introduces encryption, blockchain, or secure multi-party computation to address these issues, but these solutions suffer from significant computational and communication overhead and often require a trusted third party. To address these challenges, this paper proposes a contract-based hierarchical secure aggregation scheme that enhances the privacy of federated learning. Firstly, the paper designs a general hierarchical federated learning model that distinguishes among training, aggregation, and consensus layers, replacing the trusted third party with smart contracts. Secondly, to prevent untrusted aggregators from inferring each participant's private data, the paper proposes a novel aggregation scheme based on Paillier homomorphic encryption and secret sharing. The scheme ensures that aggregators can only compute the aggregate of participants' model parameters, thereby preserving the privacy of individual gradients. Additionally, secret sharing provides robustness when participants dynamically join or exit. Furthermore, at the consensus layer, the paper proposes an accuracy-based update algorithm to mitigate the impact of Byzantine attacks, and it allows other consensus methods to be introduced to ensure scalability. Experimental results demonstrate that the scheme enhances privacy protection, incurs no loss in model accuracy, and is robust against Byzantine attacks. The proposed scheme effectively protects participant privacy in practical federated learning scenarios.
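The core privacy idea the abstract describes — an aggregator that can combine encrypted gradients without seeing any individual one — rests on the additive homomorphism of Paillier encryption: multiplying ciphertexts yields a ciphertext of the sum of the plaintexts. The sketch below is a toy illustration of that property only, not the paper's full protocol (it omits the secret sharing, smart contracts, and consensus layer); the parameters, prime sizes, and variable names are illustrative assumptions.

```python
# Toy Paillier demo of additively homomorphic aggregation (illustrative only).
# An untrusted aggregator multiplies ciphertexts and learns nothing but the
# encrypted sum; only the key holder can decrypt, and only the total.
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=1789, q=1861):
    # Toy primes for readability; real deployments use >= 2048-bit moduli.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                    # standard simplification g = n + 1
    mu = pow(lam, -1, n)         # modular inverse (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    while True:                  # pick random r coprime to n
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = lambda x: (x - 1) // n   # the Paillier L function
    return (L(pow(c, lam, n * n)) * mu) % n

pk, sk = keygen()
grads = [42, 17, 99]             # per-participant quantized gradients (toy)
cts = [encrypt(pk, g) for g in grads]

agg = 1                          # aggregator sees only ciphertexts:
for c in cts:                    # ciphertext product = encryption of the sum
    agg = (agg * c) % (pk[0] ** 2)

print(decrypt(pk, sk, agg))      # -> 158, the sum of the gradients only
```

In the paper's setting, real gradients would first be quantized to integers and the decryption capability would be distributed (via the secret-sharing component) so that no single party can decrypt an individual participant's ciphertext.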
About the journal:
Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications relevant to information security. JISA provides a common linkage between a vibrant scientific research community and industry professionals by offering a clear view of modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.