{"title":"Federated learning for heterogeneous neural networks with layer similarity relations in Cloud–Edge–End scenarios","authors":"Rao Fu, Yongqiang Gao, Zijian Qiao","doi":"10.1016/j.future.2025.107856","DOIUrl":null,"url":null,"abstract":"<div><div>Federated Learning (FL) aims to allow numerous clients to participate in collaborative training in an efficient communication manner without exchanging private data. Traditional FL assumes that all clients have sufficient local resources to train models with the same architecture, and does not consider the reality that clients may struggle to deploy the same model across devices with varying computational resources. To address this, we propose a heterogeneous FL method, HNN-LSFL, in which the edge server first aggregates the clients of the homogeneous model, and then the cloud server selectively aligns and aggregates the knowledge between the heterogeneous models according to the layer similarity. This Cloud–Edge–End tiered architecture effectively utilizes the powerful computing power of cloud servers, reduces the computational cost of multiple alignment and aggregation of heterogeneous models, and reduces the communication cost with the cloud, which is more suitable for large-scale client scenarios. By identifying layer similarities, the method finds commonalities between different models, enabling more valuable aggregations and reducing the transmission of unnecessary parameters. We also evaluated HNN-LSFL on heterogeneous datasets, demonstrating that it not only improves the utilization of local client resources but also optimizes FL performance. By transmitting fewer model parameters, it reduces the risk of privacy leaks and proves to be superior in FL tasks with heterogeneous models compared to current state-of-the-art heterogeneous FL algorithms.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"171 ","pages":"Article 107856"},"PeriodicalIF":6.2000,"publicationDate":"2025-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25001517","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Abstract
Federated Learning (FL) allows numerous clients to participate in collaborative training in a communication-efficient manner without exchanging private data. Traditional FL assumes that all clients have sufficient local resources to train models with the same architecture, ignoring the reality that devices with varying computational resources may be unable to deploy an identical model. To address this, we propose a heterogeneous FL method, HNN-LSFL, in which edge servers first aggregate clients that train homogeneous (identically structured) models, and the cloud server then selectively aligns and aggregates knowledge across heterogeneous models according to layer similarity. This Cloud–Edge–End tiered architecture leverages the computational capacity of cloud servers, lowers the cost of repeatedly aligning and aggregating heterogeneous models, and cuts communication with the cloud, making it well suited to large-scale client scenarios. By identifying layer similarities, the method finds commonalities between different models, enabling more valuable aggregations and reducing the transmission of unnecessary parameters. We evaluate HNN-LSFL on heterogeneous datasets, demonstrating that it not only improves the utilization of local client resources but also optimizes FL performance. By transmitting fewer model parameters, it reduces the risk of privacy leakage and outperforms current state-of-the-art heterogeneous FL algorithms on FL tasks with heterogeneous models.
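The abstract does not give implementation details, so the following is a minimal NumPy sketch of the two-tier idea under stated assumptions: edge servers run plain FedAvg over clients that share one architecture, and the cloud averages only those layers of heterogeneous models whose weights are cosine-similar above a threshold. The function names, the cosine measure, the 0.8 threshold, and the restriction to shape-matched layers are all illustrative assumptions, not the paper's actual alignment rule.

    import numpy as np

    def edge_aggregate(client_models):
        # Edge tier: FedAvg-style mean over clients with an identical
        # architecture (each model is a dict of layer-name -> weight array).
        return {name: np.mean([m[name] for m in client_models], axis=0)
                for name in client_models[0]}

    def cosine(a, b):
        # Cosine similarity of two flattened weight tensors (assumed measure).
        a, b = a.ravel(), b.ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def cloud_aggregate(edge_models, threshold=0.8):
        # Cloud tier: for each layer of a reference model, average in the most
        # similar shape-compatible layer from each other architecture; layers
        # with no sufficiently similar counterpart are kept as-is.
        fused = {}
        for name, w in edge_models[0].items():
            matches = [w]
            for other in edge_models[1:]:
                candidates = [ow for ow in other.values() if ow.shape == w.shape]
                if not candidates:
                    continue
                best = max(candidates, key=lambda ow: cosine(w, ow))
                if cosine(w, best) >= threshold:
                    matches.append(best)
            fused[name] = np.mean(matches, axis=0)
        return fused

    # Hypothetical usage: two client groups with different architectures.
    rng = np.random.default_rng(0)
    clients_a = [{"conv": rng.normal(size=(8, 3)), "head": rng.normal(size=(4,))}
                 for _ in range(3)]
    clients_b = [{"block": rng.normal(size=(8, 3)), "out": rng.normal(size=(2,))}
                 for _ in range(3)]
    edge_a, edge_b = edge_aggregate(clients_a), edge_aggregate(clients_b)
    fused_a = cloud_aggregate([edge_a, edge_b])

Note that in this sketch dissimilar layers are simply left untouched, which mirrors the abstract's claim that only valuable, similarity-matched knowledge is aggregated and unnecessary parameters are not transmitted.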
Journal Introduction
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.