Federated learning for heterogeneous neural networks with layer similarity relations in Cloud–Edge–End scenarios

IF 6.2 · CAS Zone 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, THEORY & METHODS)
Rao Fu, Yongqiang Gao, Zijian Qiao
DOI: 10.1016/j.future.2025.107856
Journal: Future Generation Computer Systems — The International Journal of eScience, Volume 171, Article 107856
Published: 2025-04-19 (Journal Article)
Citations: 0

Abstract


Federated learning for heterogeneous neural networks with layer similarity relations in Cloud–Edge–End scenarios
Federated Learning (FL) enables numerous clients to participate in collaborative training in a communication-efficient manner without exchanging private data. Traditional FL assumes that all clients have sufficient local resources to train models with the same architecture, ignoring the reality that clients with varying computational resources may struggle to deploy the same model. To address this, we propose a heterogeneous FL method, HNN-LSFL, in which edge servers first aggregate clients that share the same model architecture, and the cloud server then selectively aligns and aggregates knowledge across heterogeneous models according to layer similarity. This Cloud–Edge–End tiered architecture exploits the powerful computing capacity of cloud servers, lowers the computational cost of repeatedly aligning and aggregating heterogeneous models, and reduces communication with the cloud, making it well suited to large-scale client scenarios. By identifying layer similarities, the method finds commonalities between different models, enabling more valuable aggregations and reducing the transmission of unnecessary parameters. We evaluated HNN-LSFL on heterogeneous datasets, demonstrating that it not only improves the utilization of local client resources but also optimizes FL performance. By transmitting fewer model parameters, it reduces the risk of privacy leaks and outperforms current state-of-the-art heterogeneous FL algorithms on FL tasks with heterogeneous models.
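The abstract does not specify how layer similarity is measured or how the two aggregation tiers are implemented. The sketch below is an illustrative interpretation only: it uses plain FedAvg for the edge-level step and cosine similarity between flattened layer weights with a 0.9 threshold for the cloud-level step. All function names, the similarity metric, and the threshold are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def fedavg(client_states):
    """Edge-level step: plain FedAvg over clients that share one architecture.
    Each state is a dict mapping layer name -> weight array."""
    keys = client_states[0].keys()
    return {k: np.mean([s[k] for s in client_states], axis=0) for k in keys}

def cosine(a, b):
    """Cosine similarity between two flattened weight tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def align_and_aggregate(model_a, model_b, threshold=0.9):
    """Cloud-level step (illustrative): average only those layer pairs that
    are shape-compatible and whose weights are sufficiently similar; all
    other layers are left untouched, so nothing unnecessary is exchanged."""
    for ka, wa in model_a.items():
        for kb, wb in model_b.items():
            if wa.shape == wb.shape and cosine(wa, wb) >= threshold:
                avg = (wa + wb) / 2
                model_a[ka], model_b[kb] = avg, avg.copy()
    return model_a, model_b

if __name__ == "__main__":
    # Two homogeneous clients are merged at the edge first...
    edge_model = fedavg([{"fc": np.ones((2, 2))}, {"fc": 3 * np.ones((2, 2))}])
    # ...then the cloud aligns two heterogeneous models layer by layer.
    a = {"conv1": np.ones((2, 2)), "fc": np.ones(4)}
    b = {"conv1": 2 * np.ones((2, 2)), "head": np.ones(3)}
    a, b = align_and_aggregate(a, b)
```

Here the `conv1` layers are averaged because their shapes match and their cosine similarity is 1.0, while `fc` and `head` have incompatible shapes and are skipped, mimicking the selective aggregation described in the abstract.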
Source journal: Future Generation Computer Systems
CiteScore: 19.90
Self-citation rate: 2.70%
Articles per year: 376
Review time: 10.6 months
Journal description: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.