PopFL: A scalable Federated Learning model in serverless edge computing integrating with dynamic pop-up network

IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Neha Singh , Mainak Adhikari
DOI: 10.1016/j.adhoc.2024.103728
Journal: Ad Hoc Networks, Volume 169, Article 103728
Publication date: 2024-12-02 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S1570870524003391
Citations: 0

Abstract

With the rapid increase in the number of Internet-of-Things (IoT) devices, the massive volume of data creates significant challenges for traditional cloud-based solutions. These solutions often lead to high latency, increased operational costs, and limited scalability, making them unsuitable for real-time applications and resource-constrained environments. As a result, edge and fog computing have emerged as viable alternatives, reducing latency and costs by processing data closer to its source. However, managing the flow of such vast and distributed data streams requires well-structured data pipelines to control the complete lifecycle—from data acquisition at the source to processing at the edge and fog layers, and finally storage and analytics in the cloud. Data analytics must therefore be handled dynamically at varying distances from the source, often on heterogeneous hardware devices, using collaborative learning techniques such as Federated Learning (FL). FL enables decentralized model training by leveraging the local data on Edge Devices (EDs), thereby preserving data privacy and reducing communication overhead with the cloud. However, FL faces critical challenges, including data heterogeneity, where the non-independent and identically distributed (non-IID) nature of data degrades model performance, and resource limitations on EDs, which lead to inefficiencies in training and biases in the aggregated models.
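The decentralized training loop that FL relies on can be illustrated with a minimal FedAvg-style sketch. The linear model, learning rate, and the shifted-Gaussian data partition used to mimic non-IID clients below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One ED's local training step: plain gradient descent for
    linear regression on its private data (which never leaves the device)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(global_w, clients):
    """Server-side aggregation: average the locally trained models,
    weighted by each client's sample count."""
    total = sum(len(y) for _, y in clients)
    new_w = np.zeros_like(global_w)
    for X, y in clients:
        new_w += (len(y) / total) * local_update(global_w, X, y)
    return new_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Non-IID partition: each client only sees a narrow slice of the input space.
clients = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(shift, 0.5, size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(100):
    w = fedavg(w, clients)
# Despite skewed local data, the shared model approaches true_w.
```

Because every client's data is noiseless here, `true_w` is a common minimizer and FedAvg converges to it; with genuinely conflicting non-IID objectives, the plain average drifts, which is the degradation the abstract refers to.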
To address these issues, we propose a novel FL solution, called Pop-Up Federated Learning (PopFL) in edge networks. This solution introduces hierarchical aggregation to reduce network congestion by distributing the aggregation tasks across multiple Fog Servers (FSs), rather than relying solely on centralized cloud aggregation. To further enhance participation and resource utilization at the edge, we incorporate the Stackelberg game model, which incentivizes EDs based on their contribution and resource availability. Additionally, PopFL employs a pop-up ad-hoc network for scalable and efficient communication between EDs and FSs, ensuring robust data transmission in dynamic network conditions. Extensive experiments conducted on three diverse datasets highlight the superior performance of PopFL compared to state-of-the-art FL techniques. The results show significant improvements in model accuracy, robustness, and fairness across various scenarios, effectively addressing the challenges of data heterogeneity and resource limitations. Through these innovations, PopFL paves the way for more reliable and efficient distributed learning systems, unlocking the full potential of FL in real-world applications where low latency and scalable solutions are critical.
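The two mechanisms the abstract names — hierarchical aggregation across Fog Servers and a Stackelberg incentive for EDs — can be sketched as follows. The two-level topology, the weighting scheme, and the quadratic-cost follower utility are illustrative assumptions, not the paper's actual protocol:

```python
import numpy as np

def weighted_avg(models, sizes):
    """Sample-count-weighted average of model vectors."""
    total = sum(sizes)
    return sum((s / total) * m for m, s in zip(models, sizes))

def hierarchical_aggregate(fog_groups):
    """Two-level aggregation: each Fog Server (FS) first averages the
    models of its own Edge Devices (EDs); the cloud then averages only
    the few FS-level models, weighted by the data each FS represents."""
    fs_models, fs_sizes = [], []
    for eds in fog_groups:                     # one FS per group
        models = [m for m, _ in eds]
        sizes = [s for _, s in eds]
        fs_models.append(weighted_avg(models, sizes))
        fs_sizes.append(sum(sizes))
    return weighted_avg(fs_models, fs_sizes)   # cloud-level step

def best_response(reward_rate, cost):
    """Toy Stackelberg follower step: an ED with utility
    u(x) = r*x - c*x**2 contributes x* = r / (2c), so the leader's
    reward rate r steers participation per device cost c."""
    return reward_rate / (2 * cost)

rng = np.random.default_rng(1)
# 6 EDs spread over 2 fog servers; each ED holds (model, sample_count).
eds = [(rng.normal(size=3), int(n)) for n in rng.integers(20, 80, size=6)]
flat = weighted_avg([m for m, _ in eds], [s for _, s in eds])
hier = hierarchical_aggregate([eds[:3], eds[3:]])
```

With consistent weights the two-level result equals the flat average, but only two FS models (instead of six ED models) ever cross the wide-area link to the cloud, which is where the congestion saving comes from.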
Source journal
Ad Hoc Networks (Engineering & Technology – Telecommunications)
CiteScore: 10.20
Self-citation rate: 4.20%
Articles per year: 131
Review time: 4.8 months
Aims and scope: The Ad Hoc Networks is an international and archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in ad hoc and sensor networking areas. The Ad Hoc Networks considers original, high quality and unpublished contributions addressing all aspects of ad hoc and sensor networks. Specific areas of interest include, but are not limited to: Mobile and Wireless Ad Hoc Networks; Sensor Networks; Wireless Local and Personal Area Networks; Home Networks; Ad Hoc Networks of Autonomous Intelligent Systems; Novel Architectures for Ad Hoc and Sensor Networks; Self-organizing Network Architectures and Protocols; Transport Layer Protocols; Routing Protocols (unicast, multicast, geocast, etc.); Media Access Control Techniques; Error Control Schemes; Power-Aware, Low-Power and Energy-Efficient Designs; Synchronization and Scheduling Issues; Mobility Management; Mobility-Tolerant Communication Protocols; Location Tracking and Location-based Services; Resource and Information Management; Security and Fault-Tolerance Issues; Hardware and Software Platforms, Systems, and Testbeds; Experimental and Prototype Results; Quality-of-Service Issues; Cross-Layer Interactions; Scalability Issues; Performance Analysis and Simulation of Protocols.