{"title":"平衡隐私和公平:基于差异隐私的联邦学习中的客户端选择","authors":"Xu Zhao , Gang Li , Yuan Yao , Bo Cui","doi":"10.1016/j.sysarc.2025.103576","DOIUrl":null,"url":null,"abstract":"<div><div>Federated Learning (FL) tackles the data island problem by enabling collaborative model training across clients while safeguarding privacy. However, most existing work only cared about the parameter privacy protection, while ignoring the model training efficay. To this end, in this paper, we propose AdaDPCS-FL, an Adaptive Budget Allocation and Client Selection method tailored for Federated Learning. This approach consists of two steps, where in the step, an adaptive privacy budget allocation strategy based on model similarity and a reversion mechanism are designed to speed up training convergence while still keeping privacy preservation in FL. In the second step, addressing the fairness of client selection in the FL process, it proposes a contribution-based online client selection mechanism is further proposed with the consideration of the fairness of client selection, in which, a multi-armed bandit scheme is tailored to optimize the client selection. Theoretically, the proposed method satisfies the properties of differential privacy, convergence guarantee, and a constant upper bound on cumulative regret <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>K</mi><msqrt><mrow><mo>ln</mo><mi>R</mi><mo>ln</mo><mi>R</mi></mrow></msqrt><mo>)</mo></mrow></mrow></math></span>. Experiments on real datasets demonstrate superior performance over baselines like FedProx and FedAvg. Moreover, with the privacy guaranteed, the test accuracy by our proposed method can be improved approximately 4% compared to DP-FedAvg and FedBDP in heterogeneous settings.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"168 ","pages":"Article 103576"},"PeriodicalIF":4.1000,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Balancing privacy and fairness: Client selection in differential privacy-based federated learning\",\"authors\":\"Xu Zhao , Gang Li , Yuan Yao , Bo Cui\",\"doi\":\"10.1016/j.sysarc.2025.103576\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated Learning (FL) tackles the data island problem by enabling collaborative model training across clients while safeguarding privacy. However, most existing work only cared about the parameter privacy protection, while ignoring the model training efficay. To this end, in this paper, we propose AdaDPCS-FL, an Adaptive Budget Allocation and Client Selection method tailored for Federated Learning. This approach consists of two steps, where in the step, an adaptive privacy budget allocation strategy based on model similarity and a reversion mechanism are designed to speed up training convergence while still keeping privacy preservation in FL. In the second step, addressing the fairness of client selection in the FL process, it proposes a contribution-based online client selection mechanism is further proposed with the consideration of the fairness of client selection, in which, a multi-armed bandit scheme is tailored to optimize the client selection. 
Theoretically, the proposed method satisfies the properties of differential privacy, convergence guarantee, and a constant upper bound on cumulative regret <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>K</mi><msqrt><mrow><mo>ln</mo><mi>R</mi><mo>ln</mo><mi>R</mi></mrow></msqrt><mo>)</mo></mrow></mrow></math></span>. Experiments on real datasets demonstrate superior performance over baselines like FedProx and FedAvg. Moreover, with the privacy guaranteed, the test accuracy by our proposed method can be improved approximately 4% compared to DP-FedAvg and FedBDP in heterogeneous settings.</div></div>\",\"PeriodicalId\":50027,\"journal\":{\"name\":\"Journal of Systems Architecture\",\"volume\":\"168 \",\"pages\":\"Article 103576\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2025-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Systems Architecture\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1383762125002486\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Systems Architecture","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1383762125002486","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Balancing privacy and fairness: Client selection in differential privacy-based federated learning
Federated Learning (FL) tackles the data island problem by enabling collaborative model training across clients while safeguarding privacy. However, most existing work focuses only on protecting parameter privacy while ignoring model training efficacy. To this end, in this paper we propose AdaDPCS-FL, an Adaptive Budget Allocation and Client Selection method tailored for Federated Learning. The approach consists of two steps. In the first step, an adaptive privacy budget allocation strategy based on model similarity and a reversion mechanism are designed to speed up training convergence while still preserving privacy in FL. In the second step, to address the fairness of client selection in the FL process, a contribution-based online client selection mechanism is proposed, in which a multi-armed bandit scheme is tailored to optimize client selection. Theoretically, the proposed method satisfies differential privacy, provides a convergence guarantee, and admits a constant upper bound O(K√(ln R ln R)) on the cumulative regret. Experiments on real datasets demonstrate superior performance over baselines such as FedProx and FedAvg. Moreover, with privacy guaranteed, the test accuracy of our proposed method improves by approximately 4% over DP-FedAvg and FedBDP in heterogeneous settings.
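To make the two mechanisms described in the abstract more concrete, the following is a minimal Python sketch, assuming cosine similarity between local and global updates for the similarity-based budget rule, a standard Gaussian mechanism for perturbation, and a UCB1-style bandit for contribution-based client selection. All function names, the budget weighting rule, and the reward definition are illustrative assumptions, not the paper's AdaDPCS-FL implementation.

```python
import numpy as np

# Hypothetical sketch: the formulas and names below are assumptions made for
# illustration, not the authors' actual AdaDPCS-FL algorithm.

def allocate_privacy_budget(local_update, global_update, total_budget, rounds):
    """Assign a per-round budget that grows with model similarity (assumed rule)."""
    sim = np.dot(local_update, global_update) / (
        np.linalg.norm(local_update) * np.linalg.norm(global_update) + 1e-12)
    base = total_budget / rounds
    # Higher similarity -> larger budget -> less noise on this client's update.
    return base * (1.0 + max(sim, 0.0))

def gaussian_perturb(update, epsilon, delta=1e-5, sensitivity=1.0):
    """Add Gaussian-mechanism noise calibrated to the allocated budget."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return update + np.random.normal(0.0, sigma, size=update.shape)

class UCBClientSelector:
    """UCB1-style bandit over clients; reward is an observed contribution score."""
    def __init__(self, num_clients):
        self.counts = np.zeros(num_clients)
        self.values = np.zeros(num_clients)

    def select(self, k, t):
        # Exploration bonus shrinks as a client is selected more often.
        ucb = self.values + np.sqrt(2.0 * np.log(max(t, 2)) / np.maximum(self.counts, 1))
        ucb[self.counts == 0] = np.inf   # try every client at least once
        return np.argsort(-ucb)[:k]

    def update(self, client, reward):
        self.counts[client] += 1
        self.values[client] += (reward - self.values[client]) / self.counts[client]

# Example round with hypothetical sizes:
selector = UCBClientSelector(num_clients=100)
chosen = selector.select(k=10, t=1)
```

A bandit-style selector of this kind trades off exploring rarely chosen clients against exploiting those with high observed contribution, which is one plausible way to balance selection fairness against convergence speed under the stated assumptions.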
Journal introduction:
The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, parallel and distributed architectures as well as additional subjects in the computer and system architecture area will fall within the scope of this journal. Technology will not be a main focus, but its use and relevance to particular designs will be. Case studies are welcome but must contribute more than just a design for a particular piece of software.
Design automation of such systems, including methodologies, techniques, and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.