Deep Reinforcement Learning-Based Mining Task Offloading Scheme for Intelligent Connected Vehicles in UAV-Aided MEC

IF 2.2 | CAS Zone 4, Computer Science | JCR Q3, Computer Science, Hardware & Architecture
Chunlin Li, Kun Jiang, Yong Zhang, Lincheng Jiang, Youlong Luo, Shaohua Wan
{"title":"Deep Reinforcement Learning-Based Mining Task Offloading Scheme for Intelligent Connected Vehicles in UAV-Aided MEC","authors":"Chunlin Li, Kun Jiang, Yong Zhang, Lincheng Jiang, Youlong Luo, Shaohua Wan","doi":"10.1145/3653451","DOIUrl":null,"url":null,"abstract":"<p>The convergence of unmanned aerial vehicle (UAV)-aided mobile edge computing (MEC) networks and blockchain transforms the existing mobile networking paradigm. However, in the temporary hotspot scenario for intelligent connected vehicles (ICVs) in UAV-aided MEC networks, deploying blockchain-based services and applications in vehicles is generally impossible due to its high computational resource and storage requirements. One possible solution is to offload part of all the computational tasks to MEC servers wherever possible. Unfortunately, due to the limited availability and high mobility of the vehicles, there is still lacking simple solutions that can support low-latency and higher reliability networking services for ICVs. In this paper, we study the task offloading problem of minimizing the total system latency and the optimal task offloading scheme, subject to constraints on the hover position coordinates of the UAV, the fixed bonuses, flexible transaction fees, transaction rates, mining difficulty, costs and battery energy consumption of the UAV. The problem is confirmed to be a challenging linear integer planning problem, we formulate the problem as a constrained Markov decision process (CMDP). Deep Reinforcement Learning (DRL) has excellently solved sequential decision-making problems in dynamic ICVs environment, therefore, we propose a novel distributed DRL-based P-D3QN approach by using Prioritized Experience Replay (PER) strategy and the dueling double deep Q-network (D3QN) algorithm to solve the optimal task offloading policy effectively. Finally, experiment results show that compared with the benchmark scheme, the P-D3QN algorithm can bring about 26.24% latency improvement and increase about 42.26% offloading utility.</p>","PeriodicalId":50944,"journal":{"name":"ACM Transactions on Design Automation of Electronic Systems","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Design Automation of Electronic Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3653451","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

The convergence of unmanned aerial vehicle (UAV)-aided mobile edge computing (MEC) networks and blockchain is transforming the existing mobile networking paradigm. However, in the temporary hotspot scenario for intelligent connected vehicles (ICVs) in UAV-aided MEC networks, deploying blockchain-based services and applications directly in vehicles is generally infeasible because of their high computational and storage requirements. One possible solution is to offload part or all of the computational tasks to MEC servers whenever possible. Unfortunately, owing to the limited availability and high mobility of vehicles, simple solutions that can provide low-latency, high-reliability networking services for ICVs are still lacking. In this paper, we study the task offloading problem of minimizing the total system latency and finding the optimal task offloading scheme, subject to constraints on the UAV's hover position coordinates, the fixed bonuses, flexible transaction fees, transaction rates, mining difficulty, and the UAV's costs and battery energy consumption. The problem is shown to be a challenging integer linear programming problem, so we formulate it as a constrained Markov decision process (CMDP). Because deep reinforcement learning (DRL) has proven effective for sequential decision-making in dynamic ICV environments, we propose a novel distributed DRL-based approach, P-D3QN, which combines a prioritized experience replay (PER) strategy with the dueling double deep Q-network (D3QN) algorithm to learn the optimal task offloading policy effectively. Finally, experimental results show that, compared with the benchmark scheme, the P-D3QN algorithm reduces latency by about 26.24% and increases offloading utility by about 42.26%.
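The abstract names the main ingredients of the proposed P-D3QN method: a dueling double deep Q-network (D3QN) trained with prioritized experience replay (PER) over a CMDP formulation of the offloading problem. As a rough illustration of how those pieces fit together, the PyTorch sketch below shows a dueling Q-network, a double-DQN target, and proportional PER. The network sizes, hyperparameters, and replay interface are illustrative assumptions, and the UAV/MEC state-action encoding from the paper is omitted, so this is not the authors' implementation.

```python
# Minimal sketch of the D3QN + PER ingredients described in the abstract.
# All dimensions and hyperparameters are illustrative assumptions.
from collections import namedtuple

import numpy as np
import torch
import torch.nn as nn

Transition = namedtuple("Transition", "state action reward next_state done")


class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.feature(x)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)


class PrioritizedReplay:
    """Proportional PER: sample transition i with probability p_i^alpha / sum_j p_j^alpha."""

    def __init__(self, capacity: int = 10000, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def push(self, tr: Transition):
        max_p = max(self.priorities, default=1.0)  # new samples get max priority
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(tr)
        self.priorities.append(max_p)

    def sample(self, batch_size: int, beta: float = 0.4):
        probs = np.asarray(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()  # normalized importance-sampling weights
        return [self.buffer[i] for i in idx], idx, torch.tensor(weights, dtype=torch.float32)

    def update_priorities(self, idx, td_errors, eps: float = 1e-5):
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(float(err)) + eps


def train_step(online: DuelingQNet, target: DuelingQNet, replay: PrioritizedReplay,
               optimizer: torch.optim.Optimizer, batch_size: int = 32, gamma: float = 0.99):
    batch, idx, w = replay.sample(batch_size)
    s = torch.stack([b.state for b in batch])
    a = torch.tensor([b.action for b in batch]).unsqueeze(1)
    r = torch.tensor([b.reward for b in batch], dtype=torch.float32)
    s2 = torch.stack([b.next_state for b in batch])
    done = torch.tensor([b.done for b in batch], dtype=torch.float32)

    q = online(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        # Double DQN: the online net selects the action, the target net evaluates it.
        best_a = online(s2).argmax(dim=1, keepdim=True)
        q_next = target(s2).gather(1, best_a).squeeze(1)
        y = r + gamma * (1.0 - done) * q_next

    td_error = y - q
    loss = (w * td_error.pow(2)).mean()  # PER importance weights scale the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    replay.update_priorities(idx, td_error.detach().numpy())
    return loss.item()
```

In a setting like the one described above, the state vector would encode quantities from the stated constraints (e.g., the UAV's hover position, transaction and mining parameters, and remaining battery energy), and each discrete action would decide where a mining task is executed; those encodings are paper-specific and are not reproduced here.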

Source journal
ACM Transactions on Design Automation of Electronic Systems (Engineering Technology – Computer Science: Software Engineering)
CiteScore: 3.20
Self-citation rate: 7.10%
Articles per year: 105
Review time: 3 months
Journal description: TODAES is a premier ACM journal in design and automation of electronic systems. It publishes innovative work documenting significant research and development advances on the specification, design, analysis, simulation, testing, and evaluation of electronic systems, emphasizing a computer science/engineering orientation. Both theoretical analysis and practical solutions are welcome.