A DRL-Based Algorithm for DNN Partition, Subtask Offloading and Resource Allocation in Multi-Hop Computing Nodes with Cloud

IF 1.5 | CAS Tier 4 (Computer Science) | JCR Q3 (Engineering, Electrical & Electronic)
Ruiyu Yang, Zhili Wang, Yang Yang, Sining Wang
{"title":"A DRL-Based Algorithm for DNN Partition, Subtask Offloading and Resource Allocation in Multi-Hop Computing Nodes with Cloud","authors":"Ruiyu Yang,&nbsp;Zhili Wang,&nbsp;Yang Yang,&nbsp;Sining Wang","doi":"10.1049/cmu2.70048","DOIUrl":null,"url":null,"abstract":"<p>Nowadays, deep neural network (DNN) partition is an effective strategy to accelerate deep learning (DL) tasks. A pioneering technology, computing and network convergence (CNC), integrates dispersed computing resources and bandwidth via the network control plane to utilize them efficiently. This paper presents a novel network-cloud (NC) architecture designed for DL task inference in CNC scenario, where network devices directly participate in computation, thereby reducing extra transmission costs. Considering multi-hop computing-capable network nodes and one cloud node in a chain path, leveraging deep reinforcement learning (DRL), we develop a joint-optimization algorithm for DNN partition, subtask offloading and computing resource allocation based on deep Q network (DQN), referred to as POADQ, which invokes a subtask offloading and computing resource allocation (SORA) algorithm with low complexity, to minimize delay. DQN searches the optimal DNN partition point, and SORA identifies the next optimal offloading node for next subtask through our proposed NONPRA (next optimal node prediction with resource allocation) method, which selects the node that exhibits the smallest predicted increase in cost. We conduct some experiments and compare POADQ with other schemes. The results show that our proposed algorithm is superior to other algorithms in reducing the average delay of subtasks.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70048","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Communications","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/cmu2.70048","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Deep neural network (DNN) partition is an effective strategy for accelerating deep learning (DL) tasks. Computing and network convergence (CNC), a pioneering technology, integrates dispersed computing resources and bandwidth through the network control plane so that they can be used efficiently. This paper presents a novel network-cloud (NC) architecture for DL task inference in the CNC scenario, in which network devices participate directly in computation and thereby reduce extra transmission costs. Considering multi-hop computing-capable network nodes and one cloud node on a chain path, and leveraging deep reinforcement learning (DRL), we develop a joint optimization algorithm for DNN partition, subtask offloading and computing resource allocation based on a deep Q-network (DQN), referred to as POADQ, which invokes a low-complexity subtask offloading and computing resource allocation (SORA) algorithm to minimize delay. The DQN searches for the optimal DNN partition point, and SORA identifies the next optimal offloading node for each subtask through our proposed NONPRA (next optimal node prediction with resource allocation) method, which selects the node with the smallest predicted increase in cost. We conduct experiments comparing POADQ with other schemes; the results show that the proposed algorithm outperforms the alternatives in reducing the average delay of subtasks.
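The abstract describes NONPRA only at a high level: for each subtask, pick the candidate node whose predicted increase in cost is smallest. The sketch below is a minimal, hypothetical illustration of that greedy rule, not the paper's implementation; the delay model (computation time plus transmission time), the node and link parameters, and the helper names are assumptions made for illustration only. The DQN-based search for the DNN partition point described in the abstract is not shown here.

```python
# Minimal sketch (assumed, not the authors' code) of the greedy rule behind NONPRA:
# offload the next subtask to the node with the smallest predicted increase in cost.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    name: str
    free_cycles: float  # available computing capacity in cycles/s (assumed parameter)
    link_rate: float    # rate of the link towards this node in bits/s (assumed parameter)


def predicted_cost_increase(node: Node, workload: float, data_size: float) -> float:
    """Illustrative delay estimate: computation time plus transmission time."""
    if node.free_cycles <= 0 or node.link_rate <= 0:
        return float("inf")
    return workload / node.free_cycles + data_size / node.link_rate


def next_optimal_node(candidates: List[Node], workload: float, data_size: float) -> Optional[Node]:
    """Greedy choice of the next offloading node (the NONPRA idea from the abstract)."""
    return min(
        candidates,
        key=lambda n: predicted_cost_increase(n, workload, data_size),
        default=None,
    )


if __name__ == "__main__":
    # Hypothetical chain path: multi-hop computing-capable network nodes plus one cloud node.
    path = [
        Node("edge-1", free_cycles=2e9, link_rate=1e8),
        Node("edge-2", free_cycles=5e9, link_rate=5e7),
        Node("cloud", free_cycles=5e10, link_rate=2e7),
    ]
    chosen = next_optimal_node(path, workload=4e9, data_size=8e6)
    print("next offloading node:", chosen.name if chosen else None)
```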

Source journal: IET Communications (Engineering Technology / Engineering: Electrical & Electronic)
CiteScore: 4.30
Self-citation rate: 6.20%
Articles per year: 220
Review time: 5.9 months
Journal description: IET Communications covers fundamental and generic research for a better understanding of communication technologies, harnessing signals for better-performing communication systems over various wired and/or wireless media. The journal is particularly interested in research papers reporting novel solutions to the dominating problems of noise, interference, timing and errors, and to the reduction of system deficiencies such as the waste of scarce resources (spectra, energy and bandwidth). Topics include, but are not limited to: Coding and Communication Theory; Modulation and Signal Design; Wired, Wireless and Optical Communication; Communication System Special Issues.
Current calls for papers:
Cognitive and AI-enabled Wireless and Mobile - https://digital-library.theiet.org/files/IET_COM_CFP_CAWM.pdf
UAV-Enabled Mobile Edge Computing - https://digital-library.theiet.org/files/IET_COM_CFP_UAV.pdf