{"title":"基于云边缘计算网络的个性化联邦学习节能模型解耦","authors":"Chutong Jin, Tian Du, Xingyan Chen","doi":"10.1002/ett.70203","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Federated Learning (FL) has emerged as a key distributed learning approach for privacy-preserving data scenarios. However, with the demonstrated effectiveness of scaling laws by large language models, the increasing parameter size of neural networks has led to substantial communication overhead, posing significant challenges for distributed learning systems. To address these issues, we propose a novel energy-efficient personalized federated learning framework called FedEMD, which utilizes model decoupling to divide deep neural networks into a body, consisting of the early layers of the network, and a personalized head, comprising the layers beyond the body. During training, the personalized head does not need to be uploaded to the central server for aggregation, thereby saving significant bandwidth resources. Additionally, we propose a performance-resource balancing mechanism that adaptively adjusts the number of body layers uploaded based on the available resource of the client. Finally, we conducted experiments on six datasets, comparing our method with five state-of-the-art model decoupling approaches. Our method was able to save about 10.7% in bandwidth consumption while providing comparable performance.</p>\n </div>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"36 7","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Energy-Efficient Model Decoupling for Personalized Federated Learning on Cloud-Edge Computing Networks\",\"authors\":\"Chutong Jin, Tian Du, Xingyan Chen\",\"doi\":\"10.1002/ett.70203\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Federated Learning (FL) has emerged as a key distributed learning approach for privacy-preserving data scenarios. However, with the demonstrated effectiveness of scaling laws by large language models, the increasing parameter size of neural networks has led to substantial communication overhead, posing significant challenges for distributed learning systems. To address these issues, we propose a novel energy-efficient personalized federated learning framework called FedEMD, which utilizes model decoupling to divide deep neural networks into a body, consisting of the early layers of the network, and a personalized head, comprising the layers beyond the body. During training, the personalized head does not need to be uploaded to the central server for aggregation, thereby saving significant bandwidth resources. Additionally, we propose a performance-resource balancing mechanism that adaptively adjusts the number of body layers uploaded based on the available resource of the client. Finally, we conducted experiments on six datasets, comparing our method with five state-of-the-art model decoupling approaches. 
Our method was able to save about 10.7% in bandwidth consumption while providing comparable performance.</p>\\n </div>\",\"PeriodicalId\":23282,\"journal\":{\"name\":\"Transactions on Emerging Telecommunications Technologies\",\"volume\":\"36 7\",\"pages\":\"\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2025-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transactions on Emerging Telecommunications Technologies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ett.70203\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions on Emerging Telecommunications Technologies","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ett.70203","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Energy-Efficient Model Decoupling for Personalized Federated Learning on Cloud-Edge Computing Networks
Federated Learning (FL) has emerged as a key distributed learning approach for privacy-preserving data scenarios. However, as large language models have demonstrated the effectiveness of scaling laws, the growing parameter counts of neural networks have led to substantial communication overhead, posing significant challenges for distributed learning systems. To address these issues, we propose a novel energy-efficient personalized federated learning framework called FedEMD, which uses model decoupling to divide a deep neural network into a body, consisting of the network's early layers, and a personalized head, comprising the remaining layers. During training, the personalized head does not need to be uploaded to the central server for aggregation, saving significant bandwidth. Additionally, we propose a performance-resource balancing mechanism that adaptively adjusts the number of body layers uploaded based on the client's available resources. Finally, we conducted experiments on six datasets, comparing our method with five state-of-the-art model decoupling approaches. Our method saves about 10.7% in bandwidth consumption while providing comparable performance.
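The body/head decoupling described in the abstract lends itself to a short illustration. The sketch below is a minimal, hypothetical PyTorch rendition of the general idea, not FedEMD's actual implementation: the names (make_model, client_upload, server_aggregate, client_download, choose_num_body_layers) are all assumed, and the linear bandwidth-scaling rule is a stand-in for the paper's unspecified performance-resource balancing mechanism.

```python
# Minimal sketch of body/head model decoupling for personalized FL.
# All function names and the balancing rule are illustrative; the
# abstract does not specify FedEMD's actual split or policy.
from collections import OrderedDict

import torch
import torch.nn as nn


def make_model() -> nn.Sequential:
    # Toy network: early modules serve as the shared "body",
    # the final linear layer as the personalized "head".
    return nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 10),  # personalized classifier head
    )


def client_upload(model: nn.Module, num_body_tensors: int) -> OrderedDict:
    # Upload only the first `num_body_tensors` parameter tensors
    # (the body); the head never leaves the device.
    keys = set(list(model.state_dict().keys())[:num_body_tensors])
    return OrderedDict(
        (k, v.clone()) for k, v in model.state_dict().items() if k in keys
    )


def server_aggregate(uploads: list) -> OrderedDict:
    # FedAvg-style mean over the body tensors every client uploaded.
    shared = set.intersection(*(set(u.keys()) for u in uploads))
    return OrderedDict(
        (k, torch.stack([u[k] for u in uploads]).mean(dim=0))
        for k in sorted(shared)
    )


def client_download(model: nn.Module, global_body: OrderedDict) -> None:
    # Overwrite the body with the aggregate; keep the local head intact.
    model.load_state_dict(global_body, strict=False)


def choose_num_body_tensors(bandwidth_budget: float, max_tensors: int) -> int:
    # Hypothetical balancing rule: scale the uploaded portion of the
    # body with the client's available bandwidth (budget in [0, 1]).
    return max(1, round(bandwidth_budget * max_tensors))


# One toy round with two clients; each model has 6 parameter tensors,
# and a 0.7 bandwidth budget means 4 of them are uploaded.
clients = [make_model(), make_model()]
n = choose_num_body_tensors(bandwidth_budget=0.7, max_tensors=6)
global_body = server_aggregate([client_upload(m, n) for m in clients])
for m in clients:
    client_download(m, global_body)
```

In this reading, the bandwidth savings come from the head parameters never leaving the device, while uploading fewer body tensors trades further savings against aggregation quality, which is presumably what the paper's balancing mechanism negotiates.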
Journal Introduction:
Transactions on Emerging Telecommunications Technologies (ETT), formerly known as European Transactions on Telecommunications (ETT), has the following aims:
- to attract cutting-edge publications from leading researchers and research groups around the world
- to become a highly cited source of timely research findings in emerging fields of telecommunications
- to limit revision and publication cycles to a few months, making the journal significantly more attractive to publish in
- to become the leading journal for publishing the latest developments in telecommunications