FLaTEC: An efficient federated learning scheme across the Thing-Edge-Cloud environment

IF 6.2 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS
Van An Le, Jason Haga, Yusuke Tanimura, Truong Thao Nguyen
{"title":"FLaTEC:跨物联网边缘云环境的高效联邦学习方案","authors":"Van An Le,&nbsp;Jason Haga,&nbsp;Yusuke Tanimura,&nbsp;Truong Thao Nguyen","doi":"10.1016/j.future.2025.108073","DOIUrl":null,"url":null,"abstract":"<div><div>Federated Learning (FL) has become a cornerstone for enabling decentralized model training in mobile edge computing and Internet of Things (IoT) environments and maintaining data privacy by keeping data local to devices. However, the exponential growth in the number of participating devices and the increasing size and complexity of Machine Learning (ML) models amplify FL’s challenges, including high communication overhead, significant computational and energy constraints on edge devices, and the issue of heterogeneous data distribution, i.e., non-Independent and Identically Distributed (non-IID) data across clients. To address these challenges, we propose FLaTEC, a novel FL system tailored for the Thing-Edge-Cloud (TEC) architecture. First, FLaTEC introduces a split-training architecture that divides the global model into three components: a lightweight base model trained on resource-constrained edge devices, a computationally intensive core model trained on edge servers, and a simplified core model designed for on-device training and inference. Second, FLaTEC adopts a separate training strategy in which feature data is uploaded periodically from devices to edge servers to train the core model, reducing frequent data exchanges and mitigating the non-IID problem in FL. Third, to enhance the performance of the simplified model used for on-device training, FLaTEC applies knowledge distillation from the core model trained at the edge. A cloud server orchestrates the entire system by aggregating the base, core, and simplified core models using a federated averaging algorithm, ensuring consistency and coordination across devices and edge servers. Extensive experiments conducted across multiple datasets and diverse ML tasks validate FLaTEC’s superior performance, demonstrating its ability to achieve high accuracy, reduced communication overhead, and resilience to data heterogeneity compared to state-of-the-art methods.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"175 ","pages":"Article 108073"},"PeriodicalIF":6.2000,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FLaTEC: An efficient federated learning scheme across the Thing-Edge-Cloud environment\",\"authors\":\"Van An Le,&nbsp;Jason Haga,&nbsp;Yusuke Tanimura,&nbsp;Truong Thao Nguyen\",\"doi\":\"10.1016/j.future.2025.108073\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Federated Learning (FL) has become a cornerstone for enabling decentralized model training in mobile edge computing and Internet of Things (IoT) environments and maintaining data privacy by keeping data local to devices. However, the exponential growth in the number of participating devices and the increasing size and complexity of Machine Learning (ML) models amplify FL’s challenges, including high communication overhead, significant computational and energy constraints on edge devices, and the issue of heterogeneous data distribution, i.e., non-Independent and Identically Distributed (non-IID) data across clients. To address these challenges, we propose FLaTEC, a novel FL system tailored for the Thing-Edge-Cloud (TEC) architecture. 
First, FLaTEC introduces a split-training architecture that divides the global model into three components: a lightweight base model trained on resource-constrained edge devices, a computationally intensive core model trained on edge servers, and a simplified core model designed for on-device training and inference. Second, FLaTEC adopts a separate training strategy in which feature data is uploaded periodically from devices to edge servers to train the core model, reducing frequent data exchanges and mitigating the non-IID problem in FL. Third, to enhance the performance of the simplified model used for on-device training, FLaTEC applies knowledge distillation from the core model trained at the edge. A cloud server orchestrates the entire system by aggregating the base, core, and simplified core models using a federated averaging algorithm, ensuring consistency and coordination across devices and edge servers. Extensive experiments conducted across multiple datasets and diverse ML tasks validate FLaTEC’s superior performance, demonstrating its ability to achieve high accuracy, reduced communication overhead, and resilience to data heterogeneity compared to state-of-the-art methods.</div></div>\",\"PeriodicalId\":55132,\"journal\":{\"name\":\"Future Generation Computer Systems-The International Journal of Escience\",\"volume\":\"175 \",\"pages\":\"Article 108073\"},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2025-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Future Generation Computer Systems-The International Journal of Escience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167739X2500367X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X2500367X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) has become a cornerstone for enabling decentralized model training in mobile edge computing and Internet of Things (IoT) environments and maintaining data privacy by keeping data local to devices. However, the exponential growth in the number of participating devices and the increasing size and complexity of Machine Learning (ML) models amplify FL’s challenges, including high communication overhead, significant computational and energy constraints on edge devices, and the issue of heterogeneous data distribution, i.e., non-Independent and Identically Distributed (non-IID) data across clients. To address these challenges, we propose FLaTEC, a novel FL system tailored for the Thing-Edge-Cloud (TEC) architecture. First, FLaTEC introduces a split-training architecture that divides the global model into three components: a lightweight base model trained on resource-constrained edge devices, a computationally intensive core model trained on edge servers, and a simplified core model designed for on-device training and inference. Second, FLaTEC adopts a separate training strategy in which feature data is uploaded periodically from devices to edge servers to train the core model, reducing frequent data exchanges and mitigating the non-IID problem in FL. Third, to enhance the performance of the simplified model used for on-device training, FLaTEC applies knowledge distillation from the core model trained at the edge. A cloud server orchestrates the entire system by aggregating the base, core, and simplified core models using a federated averaging algorithm, ensuring consistency and coordination across devices and edge servers. Extensive experiments conducted across multiple datasets and diverse ML tasks validate FLaTEC’s superior performance, demonstrating its ability to achieve high accuracy, reduced communication overhead, and resilience to data heterogeneity compared to state-of-the-art methods.
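To make the split-training design concrete, the sketch below shows one plausible PyTorch realization of the three components, together with the knowledge-distillation loss that transfers the edge-trained core model's knowledge to the on-device simplified core. The class names, layer sizes, temperature `T`, and mixing weight `alpha` are illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseModel(nn.Module):
    """Lightweight feature extractor trained on the IoT device (assumed CNN)."""
    def __init__(self, in_ch=3, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)  # -> (N, feat_dim) feature vectors

class CoreModel(nn.Module):
    """Computationally intensive head trained on the edge server."""
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, f):
        return self.net(f)

class SimplifiedCore(nn.Module):
    """Small head kept on-device for local training and inference."""
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Linear(feat_dim, num_classes)

    def forward(self, f):
        return self.net(f)

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style distillation: soft targets from the edge-trained core
    model (teacher) plus cross-entropy on local labels. T and alpha are
    illustrative hyperparameters, not values from the paper."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```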
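The "separate training" step, in which devices periodically upload feature data to edge servers, might look like the following sketch. The buffering scheme, the `upload_every` cadence, and the `send_to_edge` transport callback are assumptions for illustration, not details from the paper.

```python
import torch

class FeatureUploader:
    """Runs the on-device base model and uploads buffered features to the
    edge server every `upload_every` local steps (cadence is an assumption)."""

    def __init__(self, base_model, upload_every=10):
        self.base = base_model
        self.upload_every = upload_every
        self.buffer = []  # (features, labels) batches awaiting upload
        self.step_count = 0

    @torch.no_grad()
    def step(self, x, y, send_to_edge):
        """Extract features locally; `send_to_edge` is a transport callback
        (e.g., gRPC or MQTT in a real deployment) supplied by the caller."""
        self.buffer.append((self.base(x).cpu(), y.cpu()))
        self.step_count += 1
        if self.step_count % self.upload_every == 0:
            send_to_edge(self.buffer)
            self.buffer = []
```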
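Finally, the abstract states that a cloud server aggregates the base, core, and simplified core models with a federated averaging algorithm. A minimal FedAvg-style aggregation sketch follows; weighting by client sample counts is a common FedAvg convention and an assumption here.

```python
import torch

def fedavg(state_dicts, num_samples):
    """Sample-count-weighted average of per-client model state dicts."""
    total = float(sum(num_samples))
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack(
            [sd[key].float() * (n / total)
             for sd, n in zip(state_dicts, num_samples)]
        ).sum(dim=0)
    return avg

# The cloud would aggregate each of the three components independently,
# e.g. (names hypothetical):
#   global_base = fedavg([d.base.state_dict() for d in devices], counts)
#   global_core = fedavg([e.core.state_dict() for e in edges], counts)
#   global_head = fedavg([d.head.state_dict() for d in devices], counts)
```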
Source journal: Future Generation Computer Systems: The International Journal of eScience
CiteScore: 19.90
Self-citation rate: 2.70%
Annual publications: 376
Review time: 10.6 months
Journal description: Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications. Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration. Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.