An Efficient and Privacy-Preserving Federated Learning Approach Based on Homomorphic Encryption

Francesco Castro;Donato Impedovo;Giuseppe Pirlo
{"title":"一种高效且保护隐私的同态加密联邦学习方法","authors":"Francesco Castro;Donato Impedovo;Giuseppe Pirlo","doi":"10.1109/OJCS.2025.3536562","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is a decentralized and collaborative learning approach that ensures the data privacy of each participant. However, recent studies have shown that the private data of each participant can be obtained from shared parameters of local models through reversal model and membership inference attacks leading to privacy leakage. Privacy-preserving federated learning strategies based on Homomorphic Encryption (PPFL-HE) have been developed to solve this issue. PPFL-HE methods require high communication and computational overheads, which are impractical for resource-limited devices. This work proposes an efficient PPFL-HE method to reduce communication and computational overheads. The proposed method is based on an innovative quantization process that introduces a dynamic range evaluation layer-for-layer (DREL) to encode the weights of the local models into long-signed integers. Compared to standard quantization approaches, the proposed method reduces the quantization errors and the communication overhead. Moreover, it enables the encryption of local weights with the Brakerski/Fan-Vercauteren Homomorphic Encryption scheme (BFV-HE), which is highly efficient on integers, reducing encryption, aggregation, decryption time, and ciphertext size. The experiments conducted with five popular datasets and four different Machine Learning (ML) models (three CNN models and a feedforward neural network) show that the proposed method is more efficient in communication and computational overheads than other PPFL-HE methods. Specifically, the proposed method requires fewer FL rounds to achieve global model convergence and leads to an average reduction in encryption time of 99.95% and 73.79%, in decryption time of 99.90% and 55.13%, and in ciphertext size of 5.78% and 75.17% compared to PPFL-HE methods based on Paillier and CKKS schemes, respectively.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"336-347"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10858339","citationCount":"0","resultStr":"{\"title\":\"An Efficient and Privacy-Preserving Federated Learning Approach Based on Homomorphic Encryption\",\"authors\":\"Francesco Castro;Donato Impedovo;Giuseppe Pirlo\",\"doi\":\"10.1109/OJCS.2025.3536562\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Learning (FL) is a decentralized and collaborative learning approach that ensures the data privacy of each participant. However, recent studies have shown that the private data of each participant can be obtained from shared parameters of local models through reversal model and membership inference attacks leading to privacy leakage. Privacy-preserving federated learning strategies based on Homomorphic Encryption (PPFL-HE) have been developed to solve this issue. PPFL-HE methods require high communication and computational overheads, which are impractical for resource-limited devices. This work proposes an efficient PPFL-HE method to reduce communication and computational overheads. 
The proposed method is based on an innovative quantization process that introduces a dynamic range evaluation layer-for-layer (DREL) to encode the weights of the local models into long-signed integers. Compared to standard quantization approaches, the proposed method reduces the quantization errors and the communication overhead. Moreover, it enables the encryption of local weights with the Brakerski/Fan-Vercauteren Homomorphic Encryption scheme (BFV-HE), which is highly efficient on integers, reducing encryption, aggregation, decryption time, and ciphertext size. The experiments conducted with five popular datasets and four different Machine Learning (ML) models (three CNN models and a feedforward neural network) show that the proposed method is more efficient in communication and computational overheads than other PPFL-HE methods. Specifically, the proposed method requires fewer FL rounds to achieve global model convergence and leads to an average reduction in encryption time of 99.95% and 73.79%, in decryption time of 99.90% and 55.13%, and in ciphertext size of 5.78% and 75.17% compared to PPFL-HE methods based on Paillier and CKKS schemes, respectively.\",\"PeriodicalId\":13205,\"journal\":{\"name\":\"IEEE Open Journal of the Computer Society\",\"volume\":\"6 \",\"pages\":\"336-347\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-01-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10858339\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of the Computer Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10858339/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10858339/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) is a decentralized, collaborative learning approach that preserves the data privacy of each participant. However, recent studies have shown that each participant's private data can be recovered from the shared parameters of the local models through model inversion and membership inference attacks, leading to privacy leakage. Privacy-preserving federated learning strategies based on Homomorphic Encryption (PPFL-HE) have been developed to address this issue, but they incur high communication and computational overheads that are impractical for resource-limited devices. This work proposes an efficient PPFL-HE method that reduces both overheads. The proposed method is based on an innovative quantization process that introduces a dynamic range evaluation layer-for-layer (DREL) to encode the weights of the local models into long signed integers. Compared to standard quantization approaches, the proposed method reduces quantization error and communication overhead. Moreover, it enables the local weights to be encrypted with the Brakerski/Fan-Vercauteren Homomorphic Encryption scheme (BFV-HE), which is highly efficient on integers, reducing encryption, aggregation, and decryption times as well as ciphertext size. Experiments conducted on five popular datasets with four different Machine Learning (ML) models (three CNN models and a feedforward neural network) show that the proposed method incurs lower communication and computational overheads than other PPFL-HE methods. Specifically, it requires fewer FL rounds to reach global model convergence and yields average reductions in encryption time of 99.95% and 73.79%, in decryption time of 99.90% and 55.13%, and in ciphertext size of 5.78% and 75.17% compared to PPFL-HE methods based on the Paillier and CKKS schemes, respectively.
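To make the weight-encoding step concrete, the following is a minimal sketch (not the authors' code) of per-layer dynamic-range quantization: each layer's weights are scaled by a factor derived from that layer's own value range and rounded to signed 64-bit integers, so that an integer-only HE scheme such as BFV can operate on them. The helper names, the 32-bit target range, and the rounding policy are illustrative assumptions, not the paper's exact DREL procedure.

```python
import numpy as np

def quantize_layer(weights: np.ndarray, n_bits: int = 32):
    """Map one layer's float weights to signed integers using that layer's dynamic range."""
    max_abs = max(float(np.max(np.abs(weights))), 1e-12)  # per-layer dynamic range (avoid /0)
    scale = (2 ** (n_bits - 1) - 1) / max_abs              # per-layer scaling factor
    q = np.round(weights * scale).astype(np.int64)         # long signed integers
    return q, scale

def dequantize_layer(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights after the (aggregated) integers are decrypted."""
    return q.astype(np.float64) / scale
```

Similarly, the sketch below shows how such integer-encoded weights could be encrypted with BFV, summed homomorphically by the aggregation server, and decrypted by the clients, using the open-source TenSEAL library as a stand-in for the paper's BFV-HE pipeline; the encryption parameters and the toy weight vectors are illustrative, not those used in the paper.

```python
import tenseal as ts

# BFV context shared by the clients (illustrative parameters, not the paper's).
context = ts.context(ts.SCHEME_TYPE.BFV,
                     poly_modulus_degree=4096,
                     plain_modulus=1032193)

# Each client encrypts its quantized layer weights, flattened to an integer vector.
client_updates = [[12, -7, 3], [10, -5, 4], [11, -6, 2]]   # toy quantized weights
encrypted = [ts.bfv_vector(context, w) for w in client_updates]

# The server adds ciphertexts element-wise without ever seeing the plaintext weights.
aggregate = encrypted[0]
for ct in encrypted[1:]:
    aggregate = aggregate + ct

# Clients decrypt the sum and divide by the number of participants (FedAvg-style average).
summed = aggregate.decrypt()
averaged = [s / len(client_updates) for s in summed]
print(averaged)   # ≈ [11.0, -6.0, 3.0]
```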