DNN distributed inference offloading scheme based on transfer reinforcement learning in metro optical networks

IF 4.0 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Shan Yin; Lihao Liu; Mengru Cai; Yutong Chai; Yurong Jiao; Zheng Duan; Yian Li; Shanguo Huang
{"title":"城域光网络中基于转移强化学习的 DNN 分布式推理卸载方案","authors":"Shan Yin;Lihao Liu;Mengru Cai;Yutong Chai;Yurong Jiao;Zheng Duan;Yian Li;Shanguo Huang","doi":"10.1364/JOCN.533206","DOIUrl":null,"url":null,"abstract":"With the development of 5G and mobile edge computing, deep neural network (DNN) inference can be distributed at the edge to reduce communication overhead and inference time, namely, DNN distributed inference. DNN distributed inference will pose challenges to the resource allocation problem in metro optical networks (MONs). Efficient cooperative allocation of optical communication and computational resources can facilitate high-bandwidth and low-latency applications. However, it also introduces greater complexity to the resource allocation problem. In this study, we propose a joint resource allocation method using high-performance transfer deep reinforcement learning (T-DRL) to maximize network throughput. When the topologies or characteristics of MONs change, T-DRL requires only a small amount of transfer training to re-converge. Considering that the generalizability of conventional methods is inversely related to optimization performance, we develop two deployment schemes (i.e., single-agent and multi-agent) based on the T-DRL method to explore the performance of T-DRL. Simulation results demonstrate that T-DRL greatly reduces the blocking probability and average inference time of DNN inference requests. Besides, the multi-agent scheme can maintain a lower blocking probability of requests in MONs, while the single-agent has a shorter convergence time after network changes.","PeriodicalId":50103,"journal":{"name":"Journal of Optical Communications and Networking","volume":"16 9","pages":"852-867"},"PeriodicalIF":4.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DNN distributed inference offloading scheme based on transfer reinforcement learning in metro optical networks\",\"authors\":\"Shan Yin;Lihao Liu;Mengru Cai;Yutong Chai;Yurong Jiao;Zheng Duan;Yian Li;Shanguo Huang\",\"doi\":\"10.1364/JOCN.533206\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the development of 5G and mobile edge computing, deep neural network (DNN) inference can be distributed at the edge to reduce communication overhead and inference time, namely, DNN distributed inference. DNN distributed inference will pose challenges to the resource allocation problem in metro optical networks (MONs). Efficient cooperative allocation of optical communication and computational resources can facilitate high-bandwidth and low-latency applications. However, it also introduces greater complexity to the resource allocation problem. In this study, we propose a joint resource allocation method using high-performance transfer deep reinforcement learning (T-DRL) to maximize network throughput. When the topologies or characteristics of MONs change, T-DRL requires only a small amount of transfer training to re-converge. Considering that the generalizability of conventional methods is inversely related to optimization performance, we develop two deployment schemes (i.e., single-agent and multi-agent) based on the T-DRL method to explore the performance of T-DRL. Simulation results demonstrate that T-DRL greatly reduces the blocking probability and average inference time of DNN inference requests. 
Besides, the multi-agent scheme can maintain a lower blocking probability of requests in MONs, while the single-agent has a shorter convergence time after network changes.\",\"PeriodicalId\":50103,\"journal\":{\"name\":\"Journal of Optical Communications and Networking\",\"volume\":\"16 9\",\"pages\":\"852-867\"},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2024-08-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Optical Communications and Networking\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10633210/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Optical Communications and Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10633210/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

With the development of 5G and mobile edge computing, deep neural network (DNN) inference can be distributed at the edge to reduce communication overhead and inference time, namely, DNN distributed inference. DNN distributed inference will pose challenges to the resource allocation problem in metro optical networks (MONs). Efficient cooperative allocation of optical communication and computational resources can facilitate high-bandwidth and low-latency applications. However, it also introduces greater complexity to the resource allocation problem. In this study, we propose a joint resource allocation method using high-performance transfer deep reinforcement learning (T-DRL) to maximize network throughput. When the topologies or characteristics of MONs change, T-DRL requires only a small amount of transfer training to re-converge. Considering that the generalizability of conventional methods is inversely related to optimization performance, we develop two deployment schemes (i.e., single-agent and multi-agent) based on the T-DRL method to explore the performance of T-DRL. Simulation results demonstrate that T-DRL greatly reduces the blocking probability and average inference time of DNN inference requests. Besides, the multi-agent scheme can maintain a lower blocking probability of requests in MONs, while the single-agent has a shorter convergence time after network changes.
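To make the transfer idea concrete, below is a minimal, hedged sketch (in PyTorch, not the paper's implementation) of how a DRL offloading policy pretrained on one metro optical network might be transfer-trained after the topology or traffic characteristics change: the feature layers learned on the source network are reused (frozen here), and only the decision head is fine-tuned on a small amount of new experience. The state/action dimensions, layer sizes, and the random training data are illustrative assumptions only.

```python
import torch
import torch.nn as nn


class OffloadingPolicy(nn.Module):
    """Maps a state vector (request + network load features) to a distribution
    over candidate offloading/placement actions."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor: the part worth transferring between topologies.
        self.features = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Decision head: the part most sensitive to the concrete topology.
        self.head = nn.Linear(hidden, action_dim)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.head(self.features(state)), dim=-1)


STATE_DIM, ACTION_DIM = 32, 8  # hypothetical sizes, chosen only for this sketch
policy = OffloadingPolicy(STATE_DIM, ACTION_DIM)

# --- Pretraining on the source topology would happen here (omitted). ---

# Transfer step after the topology or traffic characteristics change:
# reuse the pretrained feature layers (frozen here) and fine-tune the head.
for p in policy.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(policy.head.parameters(), lr=1e-3)

# Placeholder fine-tuning loop; random tensors stand in for experience collected
# on the changed network. A real agent would derive advantages from rewards such
# as throughput, blocking probability, and inference time.
for step in range(100):
    states = torch.randn(64, STATE_DIM)
    actions = torch.randint(0, ACTION_DIM, (64,))
    advantages = torch.randn(64)
    log_probs = torch.log(policy(states)[torch.arange(64), actions] + 1e-8)
    loss = -(log_probs * advantages).mean()  # vanilla policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Loosely speaking, a single-agent deployment would run one such policy for the whole network, while a multi-agent deployment would replicate it per node or domain; in either case, re-convergence after a network change comes from a small transfer-training step of this kind rather than from training from scratch.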
Source journal: Journal of Optical Communications and Networking
CiteScore: 9.40
Self-citation rate: 16.00%
Articles published per year: 104
Review time: 4 months
Journal description: The scope of the Journal includes advances in the state-of-the-art of optical networking science, technology, and engineering. Both theoretical contributions (including new techniques, concepts, analyses, and economic studies) and practical contributions (including optical networking experiments, prototypes, and new applications) are encouraged. Subareas of interest include the architecture and design of optical networks, optical network survivability and security, software-defined optical networking, elastic optical networks, data and control plane advances, network management related innovation, and optical access networks. Enabling technologies and their applications are suitable topics only if the results are shown to directly impact optical networking beyond simple point-to-point networks.