Efficient Privacy-Preserving Data Aggregation for Lightweight Secure Model Training in Federated Learning

Cong Hu, Shuang Wang, Cuiling Liu, T. Zhang
{"title":"Efficient Privacy-Preserving Data Aggregation for Lightweight Secure Model Training in Federated Learning","authors":"Cong Hu, Shuang Wang, Cuiling Liu, T. Zhang","doi":"10.1109/CSP58884.2023.00026","DOIUrl":null,"url":null,"abstract":"Federated learning has been widely adopted in every aspect of our daily life to well protect the dataset privacy, since the model parameters are trained locally and aggregated to global one, but the data themselves are not required to be sent to servers as traditional machine learning. In State Grid, different power companies tend to cooperate to train a global model to predict the risk of the grid or the trustworthiness of the customers in the future. The datasets belonging to each power company should be protected against another corporation, sector or other unauthorized entities, since they are closely related to users' privacy. On the other hand, it is widely reported even the local mode parameters can also be exploited to launch several attacks such as membership inference. Most existing work to realize privacy-preserving model aggregation relies on computationally intensive public key homomorphic encryption(HE) such as Paillier's cryptosystem, which loads intolerably high complexity on resource-constrained local users. To address this challenging issue, in this paper, a lightweight privacy-preserving data aggregation scheme is proposed without utilizing public-key homomorphic encryption. First, an efficient privacy-preserving data aggregation protocol PPDA is proposed based on any one-way trapdoor permutation in the multiple user setting. Then, based on PPDA, a lightweight secure model training scheme LSMT in federated learning is designed. Finally, security analysis and extensive simulations show that our proposed PPDA and LSMT well protect the sensitive data of power enterprises from collusion attacks, guarantees the security of aggregated results, and outperforms existing ones in terms of computational and communication overhead.","PeriodicalId":255083,"journal":{"name":"2023 7th International Conference on Cryptography, Security and Privacy (CSP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 7th International Conference on Cryptography, Security and Privacy (CSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSP58884.2023.00026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Federated learning has been widely adopted in many aspects of daily life to protect dataset privacy: model parameters are trained locally and aggregated into a global model, so the raw data never need to be sent to servers as in traditional machine learning. In State Grid, different power companies often cooperate to train a global model that predicts grid risk or the future trustworthiness of customers. The datasets belonging to each power company must be protected from other corporations, sectors, and unauthorized entities, since they are closely tied to users' privacy. On the other hand, it is widely reported that even local model parameters can be exploited to launch attacks such as membership inference. Most existing work on privacy-preserving model aggregation relies on computationally intensive public-key homomorphic encryption (HE), such as Paillier's cryptosystem, which imposes intolerably high complexity on resource-constrained local users. To address this challenge, this paper proposes a lightweight privacy-preserving data aggregation scheme that does not use public-key homomorphic encryption. First, an efficient privacy-preserving data aggregation protocol, PPDA, is constructed from any one-way trapdoor permutation in the multi-user setting. Then, building on PPDA, a lightweight secure model training scheme, LSMT, is designed for federated learning. Finally, security analysis and extensive simulations show that PPDA and LSMT protect the sensitive data of power enterprises against collusion attacks, guarantee the security of the aggregated results, and outperform existing schemes in computational and communication overhead.
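To make the idea of HE-free secure aggregation concrete, the sketch below shows a generic pairwise-masking approach in which blinded client updates sum to the true aggregate while individual updates stay hidden from the server. This is an illustrative stand-in only, not the paper's trapdoor-permutation-based PPDA protocol; the mask derivation from client IDs is a hypothetical placeholder for a real pairwise key agreement.

```python
# Illustrative sketch (assumption): generic pairwise-mask aggregation,
# NOT the paper's PPDA. Each unordered client pair (i, j) shares a mask;
# the lower-indexed client adds it and the higher-indexed one subtracts it,
# so all masks cancel when the server sums the blinded updates.
import hashlib
import numpy as np

DIM = 4          # toy model-update dimension (assumption)
MODULUS = 2**32  # work in a finite ring so masked values wrap cleanly


def shared_mask(i: int, j: int, dim: int) -> np.ndarray:
    """Derive a deterministic pairwise mask; a real protocol would use key agreement."""
    pair = f"{min(i, j)}-{max(i, j)}".encode()
    seed = int.from_bytes(hashlib.sha256(pair).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.integers(0, MODULUS, size=dim, dtype=np.uint64)


def mask_update(client_id: int, update: np.ndarray, n_clients: int) -> np.ndarray:
    """Blind one client's local update with pairwise masks that cancel in the sum."""
    blinded = update.astype(np.uint64) % MODULUS
    for other in range(n_clients):
        if other == client_id:
            continue
        m = shared_mask(client_id, other, len(update))
        if client_id < other:
            blinded = (blinded + m) % MODULUS
        else:
            blinded = (blinded - m) % MODULUS  # uint64 wrap-around, then reduce
    return blinded


# Toy usage with three clients: the server only learns the elementwise sum.
updates = [np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]), np.array([9, 10, 11, 12])]
blinded_updates = [mask_update(i, u, len(updates)) for i, u in enumerate(updates)]
aggregate = sum(blinded_updates) % MODULUS
print(aggregate)  # [15 18 21 24], the true aggregate of the local updates
```

The point of the sketch is the cost profile: each client performs only hashing and modular additions per round, in contrast to the modular exponentiations required by Paillier-style HE aggregation that the paper seeks to avoid.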