Rong Wang , Ling Xiong , Jiazhou Geng , Chun Xie , Ruidong Li
{"title":"一种有效且可验证的保护隐私的安全聚合方案","authors":"Rong Wang , Ling Xiong , Jiazhou Geng , Chun Xie , Ruidong Li","doi":"10.1016/j.sysarc.2025.103364","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning has gained significant attention for enabling collaborative model training on distributed devices while maintaining data privacy. However, sharing gradients poses risks to local data privacy. This paper presents a secure aggregation scheme that addresses privacy protection and verifiability in federated learning. Firstly, a new homomorphic signature algorithm has been used to verify the aggregation results. For efficient verification, this algorithm can be divided into an offline phase and an online phase, where results are pre-computed during the offline phase and reused. Secondly, we use the symmetric homomorphic encryption lightweight algorithm to generate public keys, greatly accelerating the key generation process, making both encryption and decryption particularly efficient. Under this architecture, the aggregation server is unable to peek into the specific content of each gradient. The task management center cannot access the client’s individual gradient and can only process the aggregated information. This design ensures that the aggregation server and task management center can only access information within their permissions, effectively preventing information leakage. Finally, the security assessment indicates that our method satisfies the essential security standards for privacy-preserving federated learning. Comprehensive experimental evaluations conducted on real-world datasets reveal that the proposed solution demonstrates impressive efficiency in practical applications.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"161 ","pages":"Article 103364"},"PeriodicalIF":3.7000,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An effective and verifiable secure aggregation scheme with privacy-preserving for federated learning\",\"authors\":\"Rong Wang , Ling Xiong , Jiazhou Geng , Chun Xie , Ruidong Li\",\"doi\":\"10.1016/j.sysarc.2025.103364\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated learning has gained significant attention for enabling collaborative model training on distributed devices while maintaining data privacy. However, sharing gradients poses risks to local data privacy. This paper presents a secure aggregation scheme that addresses privacy protection and verifiability in federated learning. Firstly, a new homomorphic signature algorithm has been used to verify the aggregation results. For efficient verification, this algorithm can be divided into an offline phase and an online phase, where results are pre-computed during the offline phase and reused. Secondly, we use the symmetric homomorphic encryption lightweight algorithm to generate public keys, greatly accelerating the key generation process, making both encryption and decryption particularly efficient. Under this architecture, the aggregation server is unable to peek into the specific content of each gradient. The task management center cannot access the client’s individual gradient and can only process the aggregated information. This design ensures that the aggregation server and task management center can only access information within their permissions, effectively preventing information leakage. 
Finally, the security assessment indicates that our method satisfies the essential security standards for privacy-preserving federated learning. Comprehensive experimental evaluations conducted on real-world datasets reveal that the proposed solution demonstrates impressive efficiency in practical applications.</div></div>\",\"PeriodicalId\":50027,\"journal\":{\"name\":\"Journal of Systems Architecture\",\"volume\":\"161 \",\"pages\":\"Article 103364\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-02-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Systems Architecture\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1383762125000360\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Systems Architecture","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1383762125000360","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
An effective and verifiable secure aggregation scheme with privacy-preserving for federated learning
Federated learning has gained significant attention for enabling collaborative model training on distributed devices while maintaining data privacy. However, sharing gradients still puts local data privacy at risk. This paper presents a secure aggregation scheme that addresses both privacy protection and verifiability in federated learning. First, a new homomorphic signature algorithm is used to verify the aggregation results. To make verification efficient, the algorithm is split into an offline phase and an online phase: results are pre-computed during the offline phase and then reused online. Second, a lightweight symmetric homomorphic encryption algorithm is used to generate public keys, greatly accelerating key generation and making both encryption and decryption particularly efficient. Under this architecture, the aggregation server cannot inspect the content of any individual gradient, and the task management center cannot access a client's individual gradient, only the aggregated information. This design ensures that the aggregation server and the task management center can access only the information within their permissions, effectively preventing information leakage. Finally, the security analysis indicates that our method satisfies the essential security requirements for privacy-preserving federated learning, and comprehensive experiments on real-world datasets show that the proposed solution is efficient in practical applications.
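As a rough illustration of the secure-aggregation idea described in the abstract (not the paper's actual construction), the sketch below shows how an aggregation server can sum masked, fixed-point-encoded gradients without learning any individual client's update, while a key-holding party removes the combined mask from the aggregate. All names here (MODULUS, SCALE, encrypt, aggregate, decrypt_sum) are illustrative assumptions; the paper's scheme additionally relies on a symmetric homomorphic encryption algorithm and an offline/online homomorphic signature for verification, which are not reproduced in this toy example.

```python
# Toy sketch of masked (additively homomorphic) gradient aggregation.
# This is NOT the paper's algorithm; it only illustrates that a server can
# sum masked gradients without seeing any individual client's update.
import secrets

MODULUS = 2**61 - 1   # working modulus for masked integers (assumption)
SCALE = 10**6         # fixed-point scaling for float gradients (assumption)

def encrypt(gradient, key):
    """Client side: fixed-point encode the gradient and add a secret mask."""
    return [(int(round(g * SCALE)) + k) % MODULUS for g, k in zip(gradient, key)]

def aggregate(ciphertexts):
    """Aggregation server: sums ciphertexts; plaintext gradients stay hidden."""
    dim = len(ciphertexts[0])
    return [sum(c[j] for c in ciphertexts) % MODULUS for j in range(dim)]

def decrypt_sum(agg, keys, n_clients):
    """Key-holding party: removes the combined mask and recovers the mean gradient."""
    key_sum = [sum(k[j] for k in keys) % MODULUS for j in range(len(agg))]
    result = []
    for a, ks in zip(agg, key_sum):
        v = (a - ks) % MODULUS
        if v > MODULUS // 2:     # map back to the signed range
            v -= MODULUS
        result.append(v / SCALE / n_clients)
    return result

if __name__ == "__main__":
    gradients = [[0.5, -1.2], [0.3, 0.7], [-0.1, 0.4]]   # three clients
    keys = [[secrets.randbelow(MODULUS) for _ in g] for g in gradients]
    ciphers = [encrypt(g, k) for g, k in zip(gradients, keys)]
    print(decrypt_sum(aggregate(ciphers), keys, len(gradients)))  # ~[0.2333, -0.0333]
```

In this sketch the masks play the role of the encryption keys: the server sees only masked values, and only the party holding all keys can unmask the aggregate, mirroring the permission separation between the aggregation server and the task management center described above.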
Journal introduction:
The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, and parallel and distributed architectures, as well as additional subjects in the computer and system architecture area, fall within the scope of this journal. Technology is not a main focus, but its use and relevance to particular designs are. Case studies are welcome but must contribute more than just a design for a particular piece of software.
Design automation of such systems, including methodologies, techniques, and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.