{"title":"An Efficient Replication-Based Aggregation Verification and Correctness Assurance Scheme for Federated Learning","authors":"Shihong Wu;Yuchuan Luo;Shaojing Fu;Yingwen Chen;Ming Xu","doi":"10.1109/TSC.2024.3520833","DOIUrl":null,"url":null,"abstract":"Federated learning(FL), enabling multiple clients collaboratively to train a model via a parameter server, is an effective approach to address the issue of data silos. However, due to the self-interest and laziness of servers, they may not correctly aggregate the global model parameters, which will cause the final model trained to deviate from the training goal. In the existing proposals, the cryptography-based verification scheme involves heavy computation overheads. On the other hand, the replication-based verification method, relying on a dual-server architecture, can ensure the correctness of aggregation and reduce computation overheads, but incur at least twice the communication cost as that of the task itself. To address these issues, we propose a novel replication-based aggregation scheme for FL, which enables efficient verification and stronger correctness assurance. The scheme employs a main-secondary server architecture, which allows the secondary servers to partakes in aggregation tasks at a predetermined probability, consequently mitigating the validation overhead. Moreover, we resort to the game theory and design a Learning Contract to impose penalties on dishonest servers, enforcing rational servers to correctly compute global model parameters. Under the use of Betrayal Contract to prevent collusion among servers, we further design a training game to efficiently verify global model parameters and ensure their correctness. Finally, we analyze the correctness of the proposed scheme and demonstrate that the computational overhead of our scheme is <inline-formula><tex-math>$\\frac{{n + 1}}{{2n}}$</tex-math></inline-formula> of the previous replication-based validation scheme, obtaining a significant reduction in communication cost, where <inline-formula><tex-math>$n$</tex-math></inline-formula> means the training rounds. Experimental results further validate our deduction.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"633-646"},"PeriodicalIF":5.5000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Services Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10963996/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Cited by: 0
Abstract
Federated learning (FL), which enables multiple clients to collaboratively train a model via a parameter server, is an effective approach to addressing the issue of data silos. However, due to self-interest and laziness, servers may not correctly aggregate the global model parameters, which causes the final trained model to deviate from the training goal. Among existing proposals, cryptography-based verification schemes involve heavy computation overhead. On the other hand, replication-based verification methods, relying on a dual-server architecture, can ensure the correctness of aggregation and reduce computation overhead, but incur at least twice the communication cost of the task itself. To address these issues, we propose a novel replication-based aggregation scheme for FL, which enables efficient verification and stronger correctness assurance. The scheme employs a main-secondary server architecture, in which the secondary servers partake in aggregation tasks with a predetermined probability, thereby mitigating the verification overhead. Moreover, we resort to game theory and design a Learning Contract that imposes penalties on dishonest servers, forcing rational servers to correctly compute the global model parameters. Using a Betrayal Contract to prevent collusion among servers, we further design a training game to efficiently verify the global model parameters and ensure their correctness. Finally, we analyze the correctness of the proposed scheme and demonstrate that the computational overhead of our scheme is $\frac{n + 1}{2n}$ that of the previous replication-based verification scheme, where $n$ denotes the number of training rounds, while also obtaining a significant reduction in communication cost. Experimental results further validate our analysis.
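To make the quoted cost ratio concrete, below is a minimal simulation sketch of probabilistic replication checking, under a simplified reading of the abstract: a main server aggregates every round, and a secondary server re-aggregates and compares with some probability per round. With a per-round check probability of roughly $1/n$, the secondary server performs about one aggregation in expectation over $n$ rounds, so the total expected work is about $n + 1$ aggregations versus $2n$ for full dual-server replication, matching the $\frac{n+1}{2n}$ ratio. The function names, the choice of check probability, and the comparison logic are illustrative assumptions, not the paper's actual protocol or contract mechanism.

```python
import random

import numpy as np


def aggregate(updates):
    """Plain federated averaging over client updates (illustrative stand-in
    for the paper's aggregation step)."""
    return np.mean(updates, axis=0)


def train_with_probabilistic_check(client_updates_per_round, p, rng=None):
    """Hypothetical sketch: the main server aggregates every round; a secondary
    server re-aggregates and compares with probability p per round.

    Returns (global_models, aggregation_count) so the compute cost can be
    compared against full dual-server replication, which costs 2 * n.
    """
    rng = rng or random.Random(0)
    aggregations = 0
    models = []
    for updates in client_updates_per_round:
        main_result = aggregate(updates)      # main server: always computes
        aggregations += 1
        if rng.random() < p:                  # secondary server: spot check
            check_result = aggregate(updates)
            aggregations += 1
            assert np.allclose(main_result, check_result), "aggregation mismatch"
        models.append(main_result)
    return models, aggregations


if __name__ == "__main__":
    n = 100  # training rounds
    rounds = [np.random.rand(5, 10) for _ in range(n)]  # 5 clients, 10-dim updates
    # p = 1/n makes the secondary server re-aggregate about once in expectation,
    # so expected total work is roughly n + 1 versus 2n for full replication.
    _, cost = train_with_probabilistic_check(rounds, p=1.0 / n)
    print(f"aggregations: {cost}, full-replication baseline: {2 * n}")
```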
About the Journal
IEEE Transactions on Services Computing encompasses the computing and software aspects of the science and technology of services innovation research and development. It places emphasis on algorithmic, mathematical, statistical, and computational methods central to services computing. Topics covered include Service Oriented Architecture, Web Services, Business Process Integration, Solution Performance Management, and Services Operations and Management. The transactions address mathematical foundations, security, privacy, agreement, contract, discovery, negotiation, collaboration, and quality of service for web services. It also covers areas like composite web service creation, business and scientific applications, standards, utility models, business process modeling, integration, collaboration, and more in the realm of Services Computing.