{"title":"Efficient Privacy-Preserving Data Aggregation for Lightweight Secure Model Training in Federated Learning","authors":"Cong Hu, Shuang Wang, Cuiling Liu, T. Zhang","doi":"10.1109/CSP58884.2023.00026","DOIUrl":null,"url":null,"abstract":"Federated learning has been widely adopted in every aspect of our daily life to well protect the dataset privacy, since the model parameters are trained locally and aggregated to global one, but the data themselves are not required to be sent to servers as traditional machine learning. In State Grid, different power companies tend to cooperate to train a global model to predict the risk of the grid or the trustworthiness of the customers in the future. The datasets belonging to each power company should be protected against another corporation, sector or other unauthorized entities, since they are closely related to users' privacy. On the other hand, it is widely reported even the local mode parameters can also be exploited to launch several attacks such as membership inference. Most existing work to realize privacy-preserving model aggregation relies on computationally intensive public key homomorphic encryption(HE) such as Paillier's cryptosystem, which loads intolerably high complexity on resource-constrained local users. To address this challenging issue, in this paper, a lightweight privacy-preserving data aggregation scheme is proposed without utilizing public-key homomorphic encryption. First, an efficient privacy-preserving data aggregation protocol PPDA is proposed based on any one-way trapdoor permutation in the multiple user setting. Then, based on PPDA, a lightweight secure model training scheme LSMT in federated learning is designed. Finally, security analysis and extensive simulations show that our proposed PPDA and LSMT well protect the sensitive data of power enterprises from collusion attacks, guarantees the security of aggregated results, and outperforms existing ones in terms of computational and communication overhead.","PeriodicalId":255083,"journal":{"name":"2023 7th International Conference on Cryptography, Security and Privacy (CSP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 7th International Conference on Cryptography, Security and Privacy (CSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSP58884.2023.00026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Federated learning has been widely adopted in many aspects of daily life because it protects dataset privacy well: model parameters are trained locally and aggregated into a global model, so the raw data never need to be sent to servers as in traditional machine learning. In State Grid, different power companies often cooperate to train a global model that predicts grid risk or the future trustworthiness of customers. The dataset belonging to each power company must be protected from other corporations, sectors, and unauthorized entities, since it is closely tied to users' privacy. On the other hand, it has been widely reported that even the local model parameters can be exploited to launch attacks such as membership inference. Most existing work on privacy-preserving model aggregation relies on computationally intensive public-key homomorphic encryption (HE), such as Paillier's cryptosystem, which imposes intolerably high complexity on resource-constrained local users. To address this challenging issue, this paper proposes a lightweight privacy-preserving data aggregation scheme that does not use public-key homomorphic encryption. First, an efficient privacy-preserving data aggregation protocol, PPDA, is constructed from any one-way trapdoor permutation in the multi-user setting. Then, based on PPDA, a lightweight secure model training scheme for federated learning, LSMT, is designed. Finally, security analysis and extensive simulations show that the proposed PPDA and LSMT protect the sensitive data of power enterprises from collusion attacks, guarantee the security of the aggregated results, and outperform existing schemes in computational and communication overhead.
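The abstract does not spell out the PPDA construction, so the sketch below does not reproduce the paper's trapdoor-permutation protocol. It only illustrates the general idea of HE-free secure aggregation that the paper targets: each user blinds its local update with pairwise pseudorandom masks that cancel when the server sums all contributions, so no Paillier-style public-key operations are needed. All names (`prg`, `mask_update`, the modulus `P`, the pre-shared seeds) are hypothetical choices for this toy example.

```python
# Minimal sketch of mask-based secure aggregation, assuming users have
# pre-shared pairwise seeds (e.g., via key agreement). This is NOT the
# paper's PPDA protocol, only an illustration of HE-free aggregation.
import hashlib

P = 2**61 - 1  # public modulus (assumed; any value larger than the true sum works)

def prg(seed: bytes) -> int:
    """Derive a pseudorandom mask from a shared pairwise seed."""
    return int.from_bytes(hashlib.sha256(seed).digest(), "big") % P

def mask_update(i: int, x_i: int, seeds: dict) -> int:
    """User i blinds its local value x_i; seeds[j] is the secret shared with user j.
    The sign convention (+ for i < j, - for i > j) makes masks cancel pairwise."""
    masked = x_i % P
    for j, s in seeds.items():
        m = prg(s)
        masked = (masked + m) % P if i < j else (masked - m) % P
    return masked

# Toy run with three users and pre-shared pairwise seeds.
pair = {(0, 1): b"s01", (0, 2): b"s02", (1, 2): b"s12"}

def seeds_for(i: int) -> dict:
    return {j: pair[tuple(sorted((i, j)))] for j in range(3) if j != i}

updates = [10, 20, 30]  # local model parameters, quantized to integers
masked = [mask_update(i, updates[i], seeds_for(i)) for i in range(3)]
agg = sum(masked) % P       # the aggregator only ever sees masked values
assert agg == sum(updates) % P  # pairwise masks cancel in the sum
```

The appeal of this style of scheme, and the motivation the abstract gives for PPDA, is that each user performs only cheap symmetric-key and modular-addition operations per round, rather than the modular exponentiations required by Paillier-style HE.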