Energy-Efficient Model Decoupling for Personalized Federated Learning on Cloud-Edge Computing Networks

IF 2.5 | CAS Tier 4 (Computer Science) | JCR Q3 (Telecommunications)
Chutong Jin, Tian Du, Xingyan Chen
{"title":"基于云边缘计算网络的个性化联邦学习节能模型解耦","authors":"Chutong Jin,&nbsp;Tian Du,&nbsp;Xingyan Chen","doi":"10.1002/ett.70203","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Federated Learning (FL) has emerged as a key distributed learning approach for privacy-preserving data scenarios. However, with the demonstrated effectiveness of scaling laws by large language models, the increasing parameter size of neural networks has led to substantial communication overhead, posing significant challenges for distributed learning systems. To address these issues, we propose a novel energy-efficient personalized federated learning framework called FedEMD, which utilizes model decoupling to divide deep neural networks into a body, consisting of the early layers of the network, and a personalized head, comprising the layers beyond the body. During training, the personalized head does not need to be uploaded to the central server for aggregation, thereby saving significant bandwidth resources. Additionally, we propose a performance-resource balancing mechanism that adaptively adjusts the number of body layers uploaded based on the available resource of the client. Finally, we conducted experiments on six datasets, comparing our method with five state-of-the-art model decoupling approaches. Our method was able to save about 10.7% in bandwidth consumption while providing comparable performance.</p>\n </div>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"36 7","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Energy-Efficient Model Decoupling for Personalized Federated Learning on Cloud-Edge Computing Networks\",\"authors\":\"Chutong Jin,&nbsp;Tian Du,&nbsp;Xingyan Chen\",\"doi\":\"10.1002/ett.70203\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Federated Learning (FL) has emerged as a key distributed learning approach for privacy-preserving data scenarios. However, with the demonstrated effectiveness of scaling laws by large language models, the increasing parameter size of neural networks has led to substantial communication overhead, posing significant challenges for distributed learning systems. To address these issues, we propose a novel energy-efficient personalized federated learning framework called FedEMD, which utilizes model decoupling to divide deep neural networks into a body, consisting of the early layers of the network, and a personalized head, comprising the layers beyond the body. During training, the personalized head does not need to be uploaded to the central server for aggregation, thereby saving significant bandwidth resources. Additionally, we propose a performance-resource balancing mechanism that adaptively adjusts the number of body layers uploaded based on the available resource of the client. Finally, we conducted experiments on six datasets, comparing our method with five state-of-the-art model decoupling approaches. 
Our method was able to save about 10.7% in bandwidth consumption while providing comparable performance.</p>\\n </div>\",\"PeriodicalId\":23282,\"journal\":{\"name\":\"Transactions on Emerging Telecommunications Technologies\",\"volume\":\"36 7\",\"pages\":\"\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2025-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transactions on Emerging Telecommunications Technologies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ett.70203\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions on Emerging Telecommunications Technologies","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ett.70203","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract


Federated Learning (FL) has emerged as a key distributed learning approach for privacy-preserving data scenarios. However, with the effectiveness of scaling laws demonstrated by large language models, the growing parameter counts of neural networks incur substantial communication overhead, posing significant challenges for distributed learning systems. To address these issues, we propose a novel energy-efficient personalized federated learning framework called FedEMD, which uses model decoupling to divide a deep neural network into a body, consisting of the early layers of the network, and a personalized head, comprising the remaining layers. During training, the personalized head does not need to be uploaded to the central server for aggregation, saving significant bandwidth. Additionally, we propose a performance-resource balancing mechanism that adaptively adjusts the number of body layers uploaded based on the client's available resources. Finally, we conducted experiments on six datasets, comparing our method with five state-of-the-art model decoupling approaches. Our method saved about 10.7% in bandwidth consumption while delivering comparable performance.
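As a concrete illustration of the body/head decoupling the abstract describes, the sketch below splits a small network into a shared body and a local head, and chooses how many body tensors to upload from a resource budget. This is a minimal hypothetical sketch in PyTorch: the names (`DecoupledNet`, `body_update_for_upload`, `resource_budget`) and the particular layer split are illustrative assumptions, not the paper's actual FedEMD implementation.

```python
# Minimal sketch of body/head model decoupling (assumed PyTorch; names are
# illustrative, not the paper's actual FedEMD code).
import torch
import torch.nn as nn


class DecoupledNet(nn.Module):
    """A network split into a shared 'body' (early layers) and a
    personalized 'head' (the remaining layers), per the abstract."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Body: early feature-extraction layers, aggregated on the server.
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head: personalized classifier layers that never leave the client.
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.body(x))


def body_update_for_upload(model: DecoupledNet, resource_budget: float) -> dict:
    """Pick the body parameters a client uploads this round.

    `resource_budget` in (0, 1] is a hypothetical stand-in for the client's
    available bandwidth/energy; scarcer resources mean fewer body tensors
    are uploaded, mirroring the performance-resource balancing idea.
    """
    body_params = list(model.body.named_parameters())
    n_upload = max(1, round(resource_budget * len(body_params)))
    # Only the earliest `n_upload` body tensors are sent; the head stays local.
    return {name: p.detach().cpu() for name, p in body_params[:n_upload]}


# Example: a resource-constrained client uploads roughly half of its body.
model = DecoupledNet()
update = body_update_for_upload(model, resource_budget=0.5)
print(sorted(update))  # e.g. ['0.bias', '0.weight'] -- early conv tensors only
```

Keeping the head local serves two goals at once: it preserves per-client personalization and it removes the head's parameters from every upload, which is where the abstract's bandwidth savings come from. The budget knob here is only a stand-in for the paper's adaptive performance-resource balancing mechanism.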

Source journal
CiteScore: 8.90
Self-citation rate: 13.90%
Articles per year: 249
Journal introduction: Transactions on Emerging Telecommunications Technologies (ETT), formerly known as European Transactions on Telecommunications (ETT), has the following aims:
- to attract cutting-edge publications from leading researchers and research groups around the world
- to become a highly cited source of timely research findings in emerging fields of telecommunications
- to limit revision and publication cycles to a few months and thus significantly increase attractiveness to publish
- to become the leading journal for publishing the latest developments in telecommunications