Partial Offloading Strategy Based on Deep Reinforcement Learning in the Internet of Vehicles

Shujuan Tian; Xinjie Zhu; Bochao Feng; Zhirun Zheng; Haolin Liu; Zhetao Li

IEEE Transactions on Mobile Computing, vol. 24, no. 7, pp. 6517-6531. DOI: 10.1109/TMC.2025.3543976. Published online: 25 February 2025.
Impact Factor: 7.7 | JCR Q1 (Computer Science, Information Systems) | CAS Region 2 (Computer Science)
Citations: 0

Abstract

Driven by the increasing demands of vehicular tasks, edge offloading has emerged as a promising paradigm to enhance quality of experience (QoE) in Internet of Vehicles (IoV) networks. This approach enables vehicles to offload computation-intensive tasks to edge servers, resulting in reduced computation delays and lower energy consumption. However, traditional binary offloading limits the efficiency of edge offloading. To address this gap, we propose a partial offloading strategy that jointly optimizes the offloading ratio, computation, and communication resources in IoV. Recognizing the varying priorities of vehicular tasks regarding task delay and energy consumption, we formulate two distinct scenarios: one focused on minimizing delay and the other on minimizing energy consumption. Furthermore, we employ a reinforcement learning approach to establish a multi-dimensional joint optimization function by setting different objectives for each scenario. Based on this framework, we introduce a multi-state iteration deep deterministic policy gradient algorithm (SIDDPG), which effectively determines task partitioning and resource allocation. Simulation results demonstrate that the proposed algorithm outperforms benchmark schemes in terms of task delay and energy consumption.
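The abstract does not give the system model or the algorithm details, but the delay/energy trade-off of partial offloading and the shape of a DDPG-style policy can be illustrated with a small sketch. The cost function below (parameter names such as cycles, f_local, f_edge, rate, kappa, p_tx, the additive weighting, and the assumption that local computing runs in parallel with uploading) is an illustrative assumption, not the paper's formulation; the Actor network is likewise a generic continuous-action policy, not the SIDDPG algorithm itself.

```python
# Minimal sketch of a partial-offloading cost model and a DDPG-style actor.
# All names and constants are hypothetical; they are not taken from the paper.
import torch
import torch.nn as nn


def task_cost(ratio, cycles, data_bits, f_local, f_edge, rate,
              kappa=1e-27, p_tx=0.5, w_delay=1.0, w_energy=0.0):
    """Cost of splitting one task: `ratio` of the workload is offloaded.

    w_delay / w_energy select the scenario (delay-minimizing vs.
    energy-minimizing); by default only delay is counted.
    """
    # Local portion, processed at f_local cycles/s.
    t_local = (1 - ratio) * cycles / f_local
    e_local = kappa * (1 - ratio) * cycles * f_local ** 2  # dynamic CPU energy
    # Offloaded portion: upload over the V2I link, then execute at the edge.
    t_up = ratio * data_bits / rate
    t_edge = ratio * cycles / f_edge
    e_up = p_tx * t_up
    # Assumption: local computing and offloading proceed in parallel.
    delay = max(t_local, t_up + t_edge)
    energy = e_local + e_up
    return w_delay * delay + w_energy * energy


class Actor(nn.Module):
    """Maps an observed state (task size, channel rate, queue info, ...) to a
    continuous action: offloading ratio plus normalized compute/bandwidth shares."""

    def __init__(self, state_dim, action_dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Sigmoid(),  # actions in (0, 1)
        )

    def forward(self, state):
        return self.net(state)


if __name__ == "__main__":
    actor = Actor(state_dim=6)
    state = torch.rand(1, 6)  # toy observation
    ratio, f_share, bw_share = actor(state).squeeze(0).tolist()
    cost = task_cost(ratio, cycles=1e9, data_bits=2e6,
                     f_local=1e9, f_edge=f_share * 10e9 + 1e-9,
                     rate=bw_share * 20e6 + 1e-9)
    print(f"offload ratio={ratio:.2f}, cost={cost:.4f}")
```

The Sigmoid output keeps all actions in (0, 1), matching a continuous offloading ratio and normalized resource shares; a full DDPG-style training loop would pair this actor with a critic, target networks, and a replay buffer, which are omitted here.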
Source Journal
IEEE Transactions on Mobile Computing
Category: Engineering & Technology - Telecommunications
CiteScore: 12.90
Self-citation rate: 2.50%
Articles per year: 403
Review time: 6.6 months
Journal description: IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.