Federated learning across the compute continuum: A hierarchical approach with splitNNs and personalized layers

Impact Factor: 6.2 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, THEORY & METHODS)
Harshit Gupta, Arya Krishnan, O.P. Vyas, Giovanni Merlino, Francesco Longo, Antonio Puliafito
DOI: 10.1016/j.future.2025.107878
Journal: Future Generation Computer Systems: The International Journal of eScience, Volume 173, Article 107878
Published: 2025-05-13 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0167739X25001736
Citations: 0

Abstract

Federated Learning (FL) allows a Machine Learning (ML) model to be trained collaboratively across distributed devices while preserving the privacy of the training data. Hierarchical Federated Learning (HFL) extends this architecture with additional edge servers that perform partial aggregation. Although FL is highly useful for privacy-preserving machine learning, it suffers from several drawbacks: statistical heterogeneity, multiple expensive global iterations, performance degradation due to insufficient data, and slow convergence. To address these drawbacks, this work proposes three HFL-based approaches. The first combines Transfer Learning with HFL; the second introduces personalized layers into HFL through 2-tier and 3-tier architectures; and the third combines Split Learning (SL) with HFL through an extended 3-tier architecture. The proposed work performs computation at multiple levels, i.e., on the client, the edge, and the cloud, exploiting the hybrid IoT-Edge-Cloud infrastructure known as the compute continuum. The results show that the proposed work increases the accuracy of complex models from 18.10% to 76.91% with faster convergence, and outperforms state-of-the-art models. A significant performance improvement was achieved in the presence of personalized layers in an HFL-SplitNN architecture, and the proposed 3-tier architecture especially shines when the data per client is less homogeneous. SL played a vital role alongside HFL in enhancing performance, providing a maximum accuracy of 82.38% with Independent & Identically Distributed (IID) data and 52.16% with non-IID data.
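The hierarchical aggregation with personalized layers that the abstract describes can be sketched as follows. This is an illustrative NumPy toy, not the paper's implementation: all names (`fedavg`, `BASE_KEYS`, `hierarchical_round`, `apply_global`) are our own. The idea is that each client keeps its final "personalized" layer local, each edge server averages the shared base layers of its clients (partial aggregation), and the cloud averages the edge models (global aggregation).

```python
import numpy as np

BASE_KEYS = ["w1", "w2"]    # shared base layers (aggregated across tiers)
PERSONAL_KEYS = ["w_out"]   # personalized head (never leaves the client)

def fedavg(models, keys):
    """FedAvg: elementwise mean of the listed parameter tensors across models."""
    return {k: np.mean([m[k] for m in models], axis=0) for k in keys}

def hierarchical_round(edges):
    """edges: list (one per edge server) of lists of client models.
    Returns the new global base parameters after edge + cloud aggregation."""
    edge_models = [fedavg(clients, BASE_KEYS) for clients in edges]  # partial aggregation at each edge
    return fedavg(edge_models, BASE_KEYS)                            # global aggregation at the cloud

def apply_global(client, global_base):
    """Clients overwrite their base layers but keep their personalized head."""
    client.update({k: global_base[k] for k in BASE_KEYS})
    return client

# Toy example: 2 edge servers, 2 clients each, with 2-element "layers".
rng = np.random.default_rng(0)
def make_client():
    return {"w1": rng.normal(size=2), "w2": rng.normal(size=2),
            "w_out": rng.normal(size=2)}

edges = [[make_client() for _ in range(2)] for _ in range(2)]
global_base = hierarchical_round(edges)
updated = apply_global(edges[0][0], global_base)  # w_out stays client-specific
```

With equally sized edge groups, averaging edge means is equivalent to averaging over all clients; a real system would weight each mean by the number of samples behind it.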
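The Split Learning (SplitNN) component can likewise be sketched in miniature: the client computes the first layers up to a "cut layer" and sends the activations; the server finishes the forward pass and returns the gradient at the cut so the client can update its own layers. This is a hypothetical single-client toy under our own assumptions (one client layer, one server layer, plain gradient descent on MSE); the function and variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
Wc = rng.normal(size=(4, 8)) * 0.1   # client-side layer (input -> cut)
Ws = rng.normal(size=(8, 1)) * 0.1   # server-side layer (cut -> output)

def client_forward(x):
    """Client computes activations up to the cut layer (ReLU here)."""
    return np.maximum(x @ Wc, 0.0)

def server_step(h, y, lr=0.1):
    """Server forward + backward on 0.5*MSE; updates Ws and returns
    (loss, gradient at the cut) -- the only things crossing the split."""
    global Ws
    pred = h @ Ws
    err = pred - y                      # dL/dpred
    grad_Ws = h.T @ err / len(y)
    grad_h = err @ Ws.T                 # gradient sent back to the client
    Ws -= lr * grad_Ws
    return float(0.5 * np.mean(err ** 2)), grad_h

def client_backward(x, h, grad_h, lr=0.1):
    """Client continues backprop through the ReLU and updates its layer."""
    global Wc
    grad_pre = grad_h * (h > 0)
    Wc -= lr * (x.T @ grad_pre) / len(x)

# Toy training loop on a linear target.
x = rng.normal(size=(32, 4))
y = x @ rng.normal(size=(4, 1))
losses = []
for _ in range(200):
    h = client_forward(x)
    loss, grad_h = server_step(h, y)
    client_backward(x, h, grad_h)
    losses.append(loss)
```

The privacy argument of SL rests on only the cut-layer activations and their gradients being exchanged, never the raw inputs `x`; in the paper's extended 3-tier architecture this exchange happens between client, edge, and cloud rather than a single client-server pair.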
Source journal statistics:
CiteScore: 19.90
Self-citation rate: 2.70%
Articles per year: 376
Review time: 10.6 months
Journal description: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.