Kaiwei Mo;Wei Lin;Jiaxun Lu;Chun Jason Xue;Yunfeng Shao;Hong Xu
{"title":"GHPFL:通过优化带宽利用率推进个性化边缘学习","authors":"Kaiwei Mo;Wei Lin;Jiaxun Lu;Chun Jason Xue;Yunfeng Shao;Hong Xu","doi":"10.1109/TCC.2025.3540023","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) is increasingly adopted to combine knowledge from clients in training without revealing their private data. In order to improve the performance of different participants, personalized FL has recently been proposed. However, considering the non-independent and identically distributed (non-IID) data and limited bandwidth at clients, the model performance could be compromised. In reality, clients near each other often tend to have similar data distributions. In this work, we train the personalized edge-based model in the client-edge-server FL. While considering the differences in data distribution, we fully utilize the limited bandwidth resources. To make training efficient and accurate at the same time, An intuitive idea is to learn as much useful knowledge as possible from other edges and reduce the accuracy loss incurred by non-IID data. Therefore, we devise Grouping Hierarchical Personalized Federated Learning (GHPFL). In this framework, each edge establishes physical connections with multiple clients, while the server physically connects with edges. It clusters edges into groups and establishes client-edge logical connections for synchronization. This is based on data similarities that the nodes actively identify, as well as the underlying physical topology. We perform a large-scale evaluation to demonstrate GHPFL’s benefits over other schemes.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"473-484"},"PeriodicalIF":5.3000,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GHPFL: Advancing Personalized Edge-Based Learning Through Optimized Bandwidth Utilization\",\"authors\":\"Kaiwei Mo;Wei Lin;Jiaxun Lu;Chun Jason Xue;Yunfeng Shao;Hong Xu\",\"doi\":\"10.1109/TCC.2025.3540023\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) is increasingly adopted to combine knowledge from clients in training without revealing their private data. In order to improve the performance of different participants, personalized FL has recently been proposed. However, considering the non-independent and identically distributed (non-IID) data and limited bandwidth at clients, the model performance could be compromised. In reality, clients near each other often tend to have similar data distributions. In this work, we train the personalized edge-based model in the client-edge-server FL. While considering the differences in data distribution, we fully utilize the limited bandwidth resources. To make training efficient and accurate at the same time, An intuitive idea is to learn as much useful knowledge as possible from other edges and reduce the accuracy loss incurred by non-IID data. Therefore, we devise Grouping Hierarchical Personalized Federated Learning (GHPFL). In this framework, each edge establishes physical connections with multiple clients, while the server physically connects with edges. It clusters edges into groups and establishes client-edge logical connections for synchronization. This is based on data similarities that the nodes actively identify, as well as the underlying physical topology. 
We perform a large-scale evaluation to demonstrate GHPFL’s benefits over other schemes.\",\"PeriodicalId\":13202,\"journal\":{\"name\":\"IEEE Transactions on Cloud Computing\",\"volume\":\"13 2\",\"pages\":\"473-484\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2025-02-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Cloud Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10879314/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cloud Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10879314/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
GHPFL: Advancing Personalized Edge-Based Learning Through Optimized Bandwidth Utilization
Federated learning (FL) is increasingly adopted to combine knowledge from clients during training without revealing their private data. To improve the performance of individual participants, personalized FL has recently been proposed. However, given the non-independent and identically distributed (non-IID) data and limited bandwidth at clients, model performance can be compromised. In practice, clients near each other often have similar data distributions. In this work, we train personalized edge-based models in a client-edge-server FL setting, fully utilizing the limited bandwidth resources while accounting for differences in data distribution. To make training both efficient and accurate, an intuitive idea is to learn as much useful knowledge as possible from other edges while reducing the accuracy loss incurred by non-IID data. To this end, we devise Grouping Hierarchical Personalized Federated Learning (GHPFL). In this framework, each edge establishes physical connections with multiple clients, while the server physically connects with the edges. GHPFL clusters edges into groups and establishes client-edge logical connections for synchronization, based on data similarities that the nodes actively identify as well as the underlying physical topology. We perform a large-scale evaluation to demonstrate GHPFL's benefits over other schemes.
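To make the high-level architecture concrete, the sketch below illustrates one plausible reading of the grouping-and-aggregation flow described in the abstract: clients aggregate at their edge, edges with similar data distributions form groups, and each group keeps its own personalized model. This is not the authors' implementation; the helper names (label_histogram, group_edges_by_similarity, weighted_average), the greedy L1-distance clustering, and the toy non-IID data are assumptions introduced purely for illustration.

```python
# Minimal, illustrative sketch of grouping-based hierarchical personalized FL.
# All names and the clustering rule are hypothetical; only the high-level idea
# (client -> edge -> group aggregation driven by data similarity) follows the abstract.
import numpy as np

rng = np.random.default_rng(0)

def label_histogram(labels, num_classes=10):
    """Normalized label distribution, used as a proxy for data similarity."""
    hist = np.bincount(labels, minlength=num_classes).astype(float)
    return hist / hist.sum()

def weighted_average(models, weights):
    """FedAvg-style aggregation of flat parameter vectors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return np.sum([w * m for w, m in zip(weights, models)], axis=0)

def group_edges_by_similarity(edge_hists, threshold=0.2):
    """Greedy clustering: an edge joins the first group whose representative
    label histogram is within `threshold` in L1 distance; otherwise it starts
    a new group."""
    groups, reps = [], []
    for idx, hist in enumerate(edge_hists):
        for g, rep in zip(groups, reps):
            if np.abs(hist - rep).sum() < threshold:
                g.append(idx)
                break
        else:
            groups.append([idx])
            reps.append(hist)
    return groups

# Toy setup: 6 edges, each with a few clients holding non-IID labels.
num_edges, clients_per_edge, dim = 6, 3, 16
edge_client_labels = [
    [rng.choice(10, size=100, p=np.roll([0.5, 0.3, 0.1, 0.1, 0, 0, 0, 0, 0, 0], e % 3))
     for _ in range(clients_per_edge)]
    for e in range(num_edges)
]
client_models = [[rng.normal(size=dim) for _ in range(clients_per_edge)]
                 for _ in range(num_edges)]

# 1) Edge-level aggregation: each edge averages its clients, weighted by data size.
edge_models, edge_hists = [], []
for e in range(num_edges):
    sizes = [len(lbls) for lbls in edge_client_labels[e]]
    edge_models.append(weighted_average(client_models[e], sizes))
    edge_hists.append(label_histogram(np.concatenate(edge_client_labels[e])))

# 2) Server clusters edges into groups by distribution similarity.
groups = group_edges_by_similarity(edge_hists)

# 3) Group-level aggregation yields one personalized model per group.
group_models = {tuple(g): weighted_average([edge_models[e] for e in g], np.ones(len(g)))
                for g in groups}
print("edge groups:", groups)
```

The design point mirrored here is that personalization happens at the group level: edges with similar label distributions share an aggregated model and thus exchange more knowledge, while dissimilar edges keep separate models to limit the accuracy loss from non-IID data.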
Journal Introduction:
The IEEE Transactions on Cloud Computing (TCC) is dedicated to the multidisciplinary field of cloud computing. It is committed to the publication of articles that present innovative research ideas, application results, and case studies in cloud computing, focusing on key technical issues related to theory, algorithms, systems, applications, and performance.