FedBH: Efficient federated learning through shared base and adaptive hyperparameters in heterogeneous systems

IF 4.5 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Zediao Liu, Yayu Luo, Tongzhijun Zhu, Ziyi Chen, Tenglong Mao, Huan Pi, Ying Lin
{"title":"FedBH:在异构系统中通过共享基础和自适应超参数进行有效的联邦学习","authors":"Zediao Liu ,&nbsp;Yayu Luo ,&nbsp;Tongzhijun Zhu ,&nbsp;Ziyi Chen ,&nbsp;Tenglong Mao ,&nbsp;Huan Pi ,&nbsp;Ying Lin","doi":"10.1016/j.comcom.2025.108190","DOIUrl":null,"url":null,"abstract":"<div><div>Federated learning is a distributed learning framework that protects client data privacy. However, in real-world applications, due to the significant differences in data distribution among different clients, a single global model often fails to meet personalized needs. Moreover, existing personalized federated learning methods often struggle to balance computational efficiency and model personalization while inadequately addressing the heterogeneous computational capabilities of clients. To address these challenges, we propose FedBH—an innovative personalized federated learning method that combines the feature extraction capabilities of deep learning with an adaptive dynamic training mechanism. FedBH decomposes each client’s model into a global base layer and client-specific local heads, enhancing both computational efficiency and personalized training. To further optimize learning, FedBH incorporates a dynamic adjustment mechanism that adapts training parameters based on each client’s specific conditions. In each local training round, the algorithm samples a subset of the dataset and explores different configurations of local head and global base training rounds. The optimal configuration is determined based on validation loss and then applied for full training. This adaptive mechanism enables FedBH to dynamically adjust to varying client demands, ensuring efficient utilization of computational resources while maintaining robust personalization. Experimental results across multiple benchmark datasets and diverse data distribution scenarios show that FedBH achieves faster convergence and higher accuracy in most cases. These findings demonstrate that FedBH not only effectively adapts to heterogeneous data environments but also significantly improves training efficiency, showing great potential in addressing issues related to diverse and non-IID data distributions.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"239 ","pages":"Article 108190"},"PeriodicalIF":4.5000,"publicationDate":"2025-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FedBH: Efficient federated learning through shared base and adaptive hyperparameters in heterogeneous systems\",\"authors\":\"Zediao Liu ,&nbsp;Yayu Luo ,&nbsp;Tongzhijun Zhu ,&nbsp;Ziyi Chen ,&nbsp;Tenglong Mao ,&nbsp;Huan Pi ,&nbsp;Ying Lin\",\"doi\":\"10.1016/j.comcom.2025.108190\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated learning is a distributed learning framework that protects client data privacy. However, in real-world applications, due to the significant differences in data distribution among different clients, a single global model often fails to meet personalized needs. Moreover, existing personalized federated learning methods often struggle to balance computational efficiency and model personalization while inadequately addressing the heterogeneous computational capabilities of clients. To address these challenges, we propose FedBH—an innovative personalized federated learning method that combines the feature extraction capabilities of deep learning with an adaptive dynamic training mechanism. 
FedBH decomposes each client’s model into a global base layer and client-specific local heads, enhancing both computational efficiency and personalized training. To further optimize learning, FedBH incorporates a dynamic adjustment mechanism that adapts training parameters based on each client’s specific conditions. In each local training round, the algorithm samples a subset of the dataset and explores different configurations of local head and global base training rounds. The optimal configuration is determined based on validation loss and then applied for full training. This adaptive mechanism enables FedBH to dynamically adjust to varying client demands, ensuring efficient utilization of computational resources while maintaining robust personalization. Experimental results across multiple benchmark datasets and diverse data distribution scenarios show that FedBH achieves faster convergence and higher accuracy in most cases. These findings demonstrate that FedBH not only effectively adapts to heterogeneous data environments but also significantly improves training efficiency, showing great potential in addressing issues related to diverse and non-IID data distributions.</div></div>\",\"PeriodicalId\":55224,\"journal\":{\"name\":\"Computer Communications\",\"volume\":\"239 \",\"pages\":\"Article 108190\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2025-05-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Communications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0140366425001471\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Communications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0140366425001471","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning is a distributed learning framework that protects client data privacy. However, in real-world applications, due to the significant differences in data distribution among different clients, a single global model often fails to meet personalized needs. Moreover, existing personalized federated learning methods often struggle to balance computational efficiency and model personalization while inadequately addressing the heterogeneous computational capabilities of clients. To address these challenges, we propose FedBH—an innovative personalized federated learning method that combines the feature extraction capabilities of deep learning with an adaptive dynamic training mechanism. FedBH decomposes each client’s model into a global base layer and client-specific local heads, enhancing both computational efficiency and personalized training. To further optimize learning, FedBH incorporates a dynamic adjustment mechanism that adapts training parameters based on each client’s specific conditions. In each local training round, the algorithm samples a subset of the dataset and explores different configurations of local head and global base training rounds. The optimal configuration is determined based on validation loss and then applied for full training. This adaptive mechanism enables FedBH to dynamically adjust to varying client demands, ensuring efficient utilization of computational resources while maintaining robust personalization. Experimental results across multiple benchmark datasets and diverse data distribution scenarios show that FedBH achieves faster convergence and higher accuracy in most cases. These findings demonstrate that FedBH not only effectively adapts to heterogeneous data environments but also significantly improves training efficiency, showing great potential in addressing issues related to diverse and non-IID data distributions.
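The abstract describes the local procedure concretely enough to sketch: each client keeps a private head on top of a shared base, probes a few (head-epochs, base-epochs) configurations on a data subset, picks the one with the lowest validation loss, and then runs full training with it, returning only the base for server aggregation. The sketch below is a minimal illustration of that idea, not the authors' implementation; the MLP architecture, the `candidate_configs` list, the `probe_fraction` value, and all function names are illustrative assumptions.

```python
# Minimal FedBH-style client update (illustrative sketch, not the paper's code).
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

class SplitModel(nn.Module):
    """Model split into a shared global base and a client-specific local head."""
    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.base = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # aggregated by the server
        self.head = nn.Linear(hidden, num_classes)                       # kept local to the client

    def forward(self, x):
        return self.head(self.base(x))

def train_part(model, loader, epochs, train_head, lr=0.01):
    """Train either the head (base frozen) or the base (head frozen)."""
    params = model.head.parameters() if train_head else model.base.parameters()
    opt = torch.optim.SGD(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

@torch.no_grad()
def val_loss(model, loader):
    """Average cross-entropy loss on a validation loader."""
    loss_fn = nn.CrossEntropyLoss(reduction="sum")
    model.eval()
    total, n = 0.0, 0
    for x, y in loader:
        total += loss_fn(model(x), y).item()
        n += len(y)
    return total / max(n, 1)

def client_update(global_base_state, model, train_set, val_loader,
                  candidate_configs=((1, 1), (1, 2), (2, 1)), probe_fraction=0.2):
    """One local round: probe (head_epochs, base_epochs) configs on a data subset,
    pick the one with the lowest validation loss, then run full training."""
    model.base.load_state_dict(global_base_state)        # receive the shared base
    probe_size = max(1, int(len(train_set) * probe_fraction))
    probe_loader = DataLoader(Subset(train_set, range(probe_size)),
                              batch_size=32, shuffle=True)

    best_cfg, best_loss = None, float("inf")
    for head_ep, base_ep in candidate_configs:           # explore configurations cheaply
        trial = copy.deepcopy(model)
        train_part(trial, probe_loader, head_ep, train_head=True)
        train_part(trial, probe_loader, base_ep, train_head=False)
        loss = val_loss(trial, val_loader)
        if loss < best_loss:
            best_cfg, best_loss = (head_ep, base_ep), loss

    head_ep, base_ep = best_cfg                          # full training with the chosen config
    full_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    train_part(model, full_loader, head_ep, train_head=True)
    train_part(model, full_loader, base_ep, train_head=False)
    return model.base.state_dict()                       # only the base is sent back for averaging
```

Under these assumptions, the server would average the returned base state dicts (FedAvg-style) and broadcast the result, while each client's head never leaves the device, which is consistent with the base/head split described in the abstract.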
Source journal: Computer Communications (Engineering & Technology – Telecommunications)
CiteScore: 14.10
Self-citation rate: 5.00%
Articles per year: 397
Review time: 66 days
Journal description: Computer and communications networks are key infrastructures of the information society with high socio-economic value, as they contribute to the correct operation of many critical services (from healthcare to finance and transportation). The Internet is the core of today's computer-communication infrastructures. It has been transformed from a robust network for data transfer between computers into a global, content-rich communication and information system in which content is increasingly generated by users and distributed according to human social relations. Next-generation network technologies, architectures and protocols are therefore required to overcome the limitations of the legacy Internet and add new capabilities and services. The future Internet should be ubiquitous, secure, resilient, and closer to human communication paradigms. Computer Communications is a peer-reviewed international journal that publishes high-quality scientific articles (both theory and practice) and survey papers covering all aspects of future computer communication networks (on all layers, except the physical layer), with special attention to the evolution of the Internet architecture, protocols, services, and applications.