Accelerating personalized federated learning via dynamic gradient substitution and client selection

IF 4.6 · CAS Region 2 (Computer Science) · JCR Q1 (Computer Science, Hardware & Architecture)
Ziwei Zhan, Weijie Liu, Xiaoxi Zhang, Chee Wei Tan, Lei Xue, Haisheng Tan, Xu Chen
DOI: 10.1016/j.comnet.2025.111428
Journal: Computer Networks, Volume 270, Article 111428
Published: 2025-07-09 (Journal Article)
Citations: 0

Abstract

Personalized federated learning (PFL) has gained widespread attention for its ability to preserve privacy and adapt to user-specific characteristics. Among the leading PFL methods, meta-learning-based algorithms such as Per-FedAvg offer a unified framework of gradient updates for all clients, eliminating the need for the personalized model architectures common in other PFL approaches. However, their computational inefficiency and difficulty accommodating system heterogeneity remain under-explored. This work proposes pFedSara, a novel PFL framework that accelerates the training of a target PFL method, Per-FedAvg, by exploiting the lightweight, vanilla FL algorithm FedAvg. Rather than creating yet another marginally altered approach, pFedSara is the first framework to strategically reuse and blend existing techniques for PFL training, navigating the runtime-accuracy trade-off, and it offers a comprehensive theoretical analysis. Specifically, it performs dynamic gradient substitution and client selection by assessing the runtime, loss, and gradient similarity of FedAvg and Per-FedAvg, the two candidate local update methods for each client. Additionally, it incorporates gradient scaling to accommodate incomplete Per-FedAvg computations that cannot be replaced by FedAvg, eliminating additional biases. A novel convergence analysis quantifies the biases introduced both by heterogeneous data and by the hybrid update methods employed for computational speed-up. Extensive experiments demonstrate that pFedSara achieves superior training efficiency compared with state-of-the-art PFL methods.
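The core idea in the abstract, substituting a cheap FedAvg step for the more expensive meta-learning (Per-FedAvg) step when the two update directions agree, can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' code: the first-order Per-FedAvg step, the cosine-similarity threshold, and the step sizes `alpha` and `beta` are all illustrative assumptions, and the loss is a toy quadratic.

```python
import numpy as np

def fedavg_step(w, grad_fn, lr=0.1):
    """Vanilla FedAvg local update: one SGD step on the local loss."""
    return w - lr * grad_fn(w)

def per_fedavg_step(w, grad_fn, alpha=0.05, beta=0.1):
    """First-order Per-FedAvg-style update: adapt with one inner step,
    then update using the gradient at the adapted point (MAML-style)."""
    w_adapted = w - alpha * grad_fn(w)   # inner personalization step
    return w - beta * grad_fn(w_adapted) # outer meta-update

def choose_update(w, grad_fn, sim_threshold=0.9):
    """Illustrative substitution rule: if the plain gradient and the
    meta-gradient point in similar directions, use the cheaper FedAvg
    step; otherwise pay for the full personalized update."""
    g_plain = grad_fn(w)
    g_meta = grad_fn(w - 0.05 * g_plain)
    cos = g_plain @ g_meta / (
        np.linalg.norm(g_plain) * np.linalg.norm(g_meta) + 1e-12)
    if cos >= sim_threshold:
        return fedavg_step(w, grad_fn)   # cheap substitute
    return per_fedavg_step(w, grad_fn)   # full personalized update

# Toy local loss L(w) = 0.5 * ||w - target||^2, whose gradient is w - target.
target = np.array([1.0, -2.0])
grad = lambda w: w - target
w = np.zeros(2)
for _ in range(50):
    w = choose_update(w, grad)
```

On this quadratic the two gradients are parallel, so the rule always picks the cheap step and `w` converges to `target`; in the paper's setting the similarity check is what arbitrates the runtime-accuracy trade-off per client.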
Source journal: Computer Networks (Engineering & Technology – Telecommunications)
CiteScore: 10.80
Self-citation rate: 3.60%
Articles published per year: 434
Review time: 8.6 months
Journal description: Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.