Integrating Data Collection, Communication, and Computation for Importance-Aware Online Edge Learning Tasks

IF 10.7 · CAS Tier 1 (Computer Science) · JCR Q1 (Engineering, Electrical & Electronic)
Nan Wang;Yinglei Teng;Kaibin Huang
DOI: 10.1109/TWC.2024.3522956
Journal: IEEE Transactions on Wireless Communications, vol. 24, no. 3, pp. 2606-2619
Published: 2025-01-27 (Journal Article)
Full text: https://ieeexplore.ieee.org/document/10855354/
Cited by: 0

Abstract

With the prevalence of real-time intelligence applications, online edge learning (OEL) has gained increasing attention due to its ability to rapidly access environmental data and improve artificial intelligence models through edge computing. However, the performance of OEL is intricately tied to the dynamic nature of incoming data in ever-changing environments, which does not conform to a stationary distribution. In this work, we develop a data importance-aware collection, communication, and computation integration framework that boosts training efficiency by leveraging the varying usefulness of data under dynamic network resources. A model convergence metric (MCM) is first derived to quantify data importance in mini-batch gradient descent (MGD)-based online learning tasks. To expedite model learning at the edge, we optimize the training batch configuration and fine-tune the acquisition of important data through coordinated scheduling, encompassing data sampling, transmission, and computational resource allocation. To cope with the time discrepancy and complex coupling of the decision variables, we design a two-timescale hierarchical reinforcement learning (TTHRL) algorithm that decomposes the original problem into two layers of subproblems and optimizes them separately in a mixed-timescale pattern. Experiments show that the proposed integration framework effectively improves online learning efficiency while stabilizing the caching queues in the system.
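The abstract's notion of importance-aware data acquisition for MGD can be illustrated with a toy sketch. The paper's actual MCM is not reproduced here; as a rough stand-in, the sketch below uses per-sample loss as a hypothetical importance score and samples mini-batches with probability proportional to it, so high-loss (more informative) samples are drawn more often.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression stream: y = 2x + noise, standing in for collected edge data.
X = rng.normal(size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)

w = np.zeros(1)          # model parameter
lr, batch = 0.1, 16

def per_sample_loss(w):
    # Squared error of each sample; used here as a proxy importance score.
    return (X @ w - y) ** 2

loss_before = per_sample_loss(w).mean()

for _ in range(50):
    # Importance-aware sampling: draw a mini-batch with probability
    # proportional to each sample's current loss.
    imp = per_sample_loss(w)
    p = imp / imp.sum()
    idx = rng.choice(len(X), size=batch, replace=False, p=p)

    # Standard MGD step on the importance-sampled mini-batch.
    err = X[idx] @ w - y[idx]
    grad = 2.0 * X[idx].T @ err / batch
    w -= lr * grad

loss_after = per_sample_loss(w).mean()
```

This is only one common proxy for data importance; the paper derives its own convergence-based metric and additionally co-schedules sampling with transmission and compute resources.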
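The two-timescale decomposition described for TTHRL can also be sketched structurally. The paper's actual algorithm is not given in the abstract; the skeleton below uses hypothetical action sets (a slow-timescale batch configuration and a fast-timescale power choice), simple running-average value estimates, and epsilon-greedy selection, purely to show how a slow outer decision is credited with the reward accumulated over many fast inner decisions.

```python
import random

random.seed(0)

SLOW_PERIOD = 10                 # outer decision every 10 slots (assumed)
BATCH_OPTIONS = [8, 16, 32]      # hypothetical slow-timescale actions
POWER_OPTIONS = [0.5, 1.0]       # hypothetical fast-timescale actions

q_slow = {b: 0.0 for b in BATCH_OPTIONS}
q_fast = {p: 0.0 for p in POWER_OPTIONS}
n_slow = {b: 0 for b in BATCH_OPTIONS}
n_fast = {p: 0 for p in POWER_OPTIONS}

def env_reward(batch, power):
    # Hypothetical stand-in for learning progress minus resource cost.
    return batch / 32 * power - 0.3 * power + 0.05 * random.random()

def eps_greedy(q, eps=0.2):
    if random.random() < eps:
        return random.choice(list(q))
    return max(q, key=q.get)

batch = BATCH_OPTIONS[0]
slow_acc = 0.0
for t in range(2000):
    if t % SLOW_PERIOD == 0:
        if t > 0:
            # Credit the outer action with the reward its fast-scale
            # block accumulated, via an incremental average.
            n_slow[batch] += 1
            q_slow[batch] += (slow_acc / SLOW_PERIOD - q_slow[batch]) / n_slow[batch]
            slow_acc = 0.0
        batch = eps_greedy(q_slow)       # slow-timescale decision
    power = eps_greedy(q_fast)           # fast-timescale decision
    r = env_reward(batch, power)
    slow_acc += r
    n_fast[power] += 1
    q_fast[power] += (r - q_fast[power]) / n_fast[power]
```

The mixed-timescale pattern appears in how the outer layer only observes block-averaged feedback, while the inner layer adapts every slot; the paper's TTHRL replaces these toy value tables with learned policies over the actual scheduling variables.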
Source journal
CiteScore: 18.60
Self-citation rate: 10.60%
Articles per year: 708
Review time: 5.6 months
About the journal: The IEEE Transactions on Wireless Communications is a prestigious publication that showcases cutting-edge advancements in wireless communications. It welcomes both theoretical and practical contributions in various areas. The scope of the Transactions encompasses a wide range of topics, including modulation and coding, detection and estimation, propagation and channel characterization, and diversity techniques. The journal also emphasizes the physical and link layer communication aspects of network architectures and protocols. The journal is open to papers on specific topics or non-traditional topics related to specific application areas. This includes simulation tools and methodologies, orthogonal frequency division multiplexing, MIMO systems, and wireless over optical technologies. Overall, the IEEE Transactions on Wireless Communications serves as a platform for high-quality manuscripts that push the boundaries of wireless communications and contribute to advancements in the field.