Title: An Efficient and Privacy-Preserving Federated Learning Approach Based on Homomorphic Encryption
Authors: Francesco Castro; Donato Impedovo; Giuseppe Pirlo
DOI: 10.1109/OJCS.2025.3536562
Journal: IEEE Open Journal of the Computer Society, vol. 6, pp. 336-347
Publication date: 2025-01-30 (Journal Article)
Article page: https://ieeexplore.ieee.org/document/10858339/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10858339
Citations: 0
Abstract
Federated Learning (FL) is a decentralized and collaborative learning approach that ensures the data privacy of each participant. However, recent studies have shown that the private data of each participant can be recovered from the shared parameters of local models through model inversion and membership inference attacks, leading to privacy leakage. Privacy-preserving federated learning strategies based on Homomorphic Encryption (PPFL-HE) have been developed to address this issue. However, PPFL-HE methods require high communication and computational overheads, which are impractical for resource-limited devices. This work proposes an efficient PPFL-HE method that reduces both overheads. The proposed method is based on an innovative quantization process that introduces a layer-by-layer dynamic range evaluation (DREL) to encode the weights of the local models into signed long integers. Compared to standard quantization approaches, the proposed method reduces both the quantization error and the communication overhead. Moreover, it enables the encryption of local weights with the Brakerski/Fan-Vercauteren Homomorphic Encryption scheme (BFV-HE), which is highly efficient on integers, reducing encryption, aggregation, and decryption times as well as ciphertext size. Experiments conducted on five popular datasets with four different Machine Learning (ML) models (three CNN models and a feedforward neural network) show that the proposed method incurs lower communication and computational overheads than other PPFL-HE methods. Specifically, the proposed method requires fewer FL rounds to achieve global model convergence and yields an average reduction in encryption time of 99.95% and 73.79%, in decryption time of 99.90% and 55.13%, and in ciphertext size of 5.78% and 75.17% compared to PPFL-HE methods based on the Paillier and CKKS schemes, respectively.
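The core idea behind the quantization step, encoding each layer's floating-point weights into signed integers using that layer's own dynamic range, can be sketched as below. This is a minimal illustration under the assumption of simple per-layer max-absolute-value scaling; the function names and the 16-bit demo width are hypothetical and not the paper's actual DREL implementation.

```python
import numpy as np

def quantize_layer(weights, bits=16):
    """Map one layer's float weights to signed integers using a
    per-layer scale derived from the layer's dynamic range."""
    max_abs = np.max(np.abs(weights))          # dynamic range of this layer
    # Largest representable magnitude for a signed `bits`-bit integer.
    int_max = 2 ** (bits - 1) - 1
    scale = int_max / max_abs if max_abs > 0 else 1.0
    q = np.round(weights * scale).astype(np.int64)
    return q, scale

def dequantize_layer(q, scale):
    """Recover approximate float weights from the integer encoding."""
    return q.astype(np.float64) / scale

# Toy layer: per-layer scaling keeps small weights from collapsing to zero.
layer = np.array([0.031, -0.27, 0.0045, 0.19])
q, scale = quantize_layer(layer, bits=16)
recovered = dequantize_layer(q, scale)
```

Because BFV operates natively on integers, integer-encoded weights of this kind can be encrypted and summed homomorphically by the aggregator without decryption; evaluating the range per layer (rather than globally) keeps the scale tight for each layer, which is what reduces the quantization error relative to a single global scale.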