Joint deep reinforcement learning strategy in MEC for smart internet of vehicles edge computing networks

IF 3.8 · CAS Tier 3, Computer Science · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Jiabin Luo , Qinyu Song , Fusen Guo , Haoyuan Wu , Hafizan Mat Som , Saad Alahmari , Azadeh Noori Hoshyar
Journal: Sustainable Computing: Informatics and Systems, Volume 46, Article 101121
DOI: 10.1016/j.suscom.2025.101121
Published: 2025-03-29 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2210537925000411
Citations: 0

Abstract

The Internet of Vehicles (IoV) has limited computing capacity, which makes processing computation tasks challenging. Vehicular services are delivered and updated through communication and computing platforms, and edge computing is deployed close to the terminals to extend cloud computing facilities. However, given the limited resources of vehicular edge nodes, satisfying the Quality of Experience (QoE) remains a challenge. This paper develops an IoV scenario supported by mobile edge computing (MEC) by constructing collaborative processes, such as task offloading decisions and resource allocation, across roadside unit (RSU) environments covering multiple vehicles. Deep Reinforcement Learning (DRL) is then employed to solve the joint optimisation problem. Based on this joint optimisation model, offloading decisions and resource allocations are obtained that reduce the overall cost in terms of end-to-end delay and computation expense. The problem is formulated as a Markov Decision Process (MDP) with designed state, action, and reward functions. Performance evaluations and numerical results show that the proposed model achieves lower average delay in simulations with 30 vehicle nodes.
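The abstract describes an MDP formulation in which offloading decisions and resource allocation minimise a joint delay/cost objective. The paper's actual state, action, and reward definitions are not given here, so the following is a minimal illustrative sketch under assumed dynamics: a vehicle either processes a task locally or offloads it to one of several RSU-attached MEC servers, and the reward penalises a weighted sum of end-to-end delay and edge computation cost. All names, rates, and prices are hypothetical.

```python
import random

class OffloadingMDP:
    """Toy vehicle-offloading MDP (illustrative only, not the paper's model)."""

    def __init__(self, num_rsus=3, seed=0):
        self.rng = random.Random(seed)
        self.num_rsus = num_rsus
        self.state = self._new_state()

    def _new_state(self):
        # State: (task size in Mb, local CPU share, per-RSU free CPU shares).
        task = self.rng.uniform(0.5, 5.0)
        local_cpu = self.rng.uniform(0.1, 1.0)
        rsu_cpu = tuple(self.rng.uniform(0.2, 1.0) for _ in range(self.num_rsus))
        return (task, local_cpu, rsu_cpu)

    def step(self, action):
        # Action 0: compute locally; action 1..num_rsus: offload to RSU (action-1).
        task, local_cpu, rsu_cpu = self.state
        if action == 0:
            delay = task / local_cpu        # local processing only
            cost = 0.0                      # no edge resources used
        else:
            tx_delay = task / 10.0          # assumed fixed uplink rate of 10 Mb/s
            proc_delay = task / rsu_cpu[action - 1]
            delay = tx_delay + proc_delay
            cost = 0.2 * task               # assumed per-Mb edge computation price
        reward = -(delay + 0.5 * cost)      # joint delay/cost objective
        self.state = self._new_state()      # tasks arrive independently
        return self.state, reward


def greedy_action(state, num_rsus):
    """One-step lookahead baseline: pick the action with least immediate penalty."""
    task, local_cpu, rsu_cpu = state
    penalties = [task / local_cpu]          # index 0: local execution
    for cpu in rsu_cpu:                     # indices 1..num_rsus: offloading
        penalties.append(task / 10.0 + task / cpu + 0.5 * 0.2 * task)
    return min(range(num_rsus + 1), key=lambda a: penalties[a])


env = OffloadingMDP()
total = 0.0
for _ in range(100):
    action = greedy_action(env.state, env.num_rsus)
    _, reward = env.step(action)
    total += reward
print(f"average reward over 100 steps: {total / 100:.2f}")
```

A DRL agent as used in the paper would replace `greedy_action` with a learned policy (e.g. a Q-network over the state vector), trading the myopic one-step choice for long-horizon optimisation of the same delay/cost reward.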
Source journal
Sustainable Computing-Informatics & Systems
Categories: COMPUTER SCIENCE, HARDWARE & ARCHITECTURE; COMPUTER SCIENCE, INFORMATION SYSTEMS
CiteScore: 10.70
Self-citation rate: 4.40%
Articles per year: 142
Journal description: Sustainable computing is a rapidly expanding research area spanning the fields of computer science and engineering, electrical engineering, and other engineering disciplines. The aim of Sustainable Computing: Informatics and Systems (SUSCOM) is to publish the myriad research findings related to energy-aware and thermal-aware management of computing resources. Equally important is a spectrum of related research issues, such as applications of computing that can have ecological and societal impacts. SUSCOM publishes original and timely research papers and survey articles in current areas of power, energy, temperature, and environment-related research of importance to readers. SUSCOM has an editorial board comprising prominent researchers from around the world and selects competitively evaluated peer-reviewed papers.