PDRL-CM: An efficient cooperative caching management method for vehicular networks based on deep reinforcement learning

IF 4.8 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Pingjie Ou , Ningjiang Chen , Long Yang
Ad Hoc Networks, Volume 176, Article 103888. Published 2025-05-06. DOI: 10.1016/j.adhoc.2025.103888. Available at https://www.sciencedirect.com/science/article/pii/S1570870525001362
Citations: 0

Abstract

In vehicular networks, onboard devices face limited storage and computational resources, which constrain their processing and caching capabilities. This limitation is particularly significant for applications that require complex computation and real-time responses. Limited storage capacity also narrows the range of cacheable data, which can affect the immediate availability of data and the continuity of services. Improving cache utilization while meeting vehicles' real-time data demands therefore poses a significant challenge. Deep reinforcement learning can handle the continuously changing state and action spaces that agents face as request demands grow; however, training may suffer from instability and slow convergence in dynamic, complex environments or under sparse rewards. To address these issues, this paper proposes a Priority-based Deep Reinforcement Learning Collaborative Cache Management method (PDRL-CM). PDRL-CM first designs a lightweight cache admission strategy that leverages data's inherent and combined attributes, making admission decisions using Monte Carlo sampling and a max-value search strategy combined with a feedforward neural network. Second, it formulates minimizing system latency and reducing vehicle energy consumption as a joint optimization problem, which an improved deep reinforcement learning algorithm solves while making cache-sharding decisions. A prioritized experience replay mechanism is incorporated to adjust the network prediction model quickly and accelerate convergence. Experimental results indicate that, compared to existing DRL-based caching methods, PDRL-CM achieves faster data processing and higher cache hit rates under varying vehicle density, storage capacity, and content volume.
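The cache-admission step described above (Monte Carlo sampling plus a max-value search) can be sketched roughly as follows. Note this is a minimal illustration of the general idea, not the paper's method: `sample_request`, `score_fn`, and the dict-based cache are hypothetical stand-ins for the request statistics, feedforward network, and data attributes the abstract mentions but does not specify.

```python
def mc_admission_value(item, sample_request, score_fn, n_samples=200):
    """Estimate an item's caching value by Monte Carlo sampling future requests.

    sample_request() draws one hypothetical future request; score_fn scores
    the item's worth on a hit (stand-in for a learned value network).
    """
    hit_rate = sum(sample_request() == item for _ in range(n_samples)) / n_samples
    return hit_rate * score_fn(item)

def admit(item, cache, capacity, sample_request, score_fn):
    """Max-value search: admit `item` only if it outscores the weakest cached entry."""
    value = mc_admission_value(item, sample_request, score_fn)
    if len(cache) < capacity:
        cache[item] = value
        return True
    worst = min(cache, key=cache.get)  # lowest-valued entry is the eviction candidate
    if value > cache[worst]:
        del cache[worst]
        cache[item] = value
        return True
    return False
```

With a deterministic request model the decision is easy to trace: an item that is never requested gets a Monte Carlo value of zero and is rejected once the cache is full.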
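The prioritized experience replay mechanism mentioned in the abstract is, in its standard proportional form, a replay buffer that samples transitions with probability proportional to their TD error, so informative experiences are revisited more often. A minimal sketch of that general technique (the list-based storage and field names are illustrative, not the paper's implementation):

```python
import random

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay: P(i) ∝ priority_i ** alpha."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha    # how strongly priority skews sampling (0 = uniform)
        self.eps = eps        # keeps every priority strictly positive
        self.buffer, self.priorities, self.pos = [], [], 0

    def push(self, transition):
        # New transitions get the current max priority so they are seen at least once.
        prio = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(prio)
        else:  # overwrite the oldest entry in ring-buffer fashion
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        weights = [p ** self.alpha for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)), weights=weights, k=batch_size)
        return idxs, [self.buffer[i] for i in idxs]

    def update_priorities(self, idxs, td_errors):
        # After a training step, refresh priorities with the new |TD error|.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

Production implementations typically back this with a sum-tree for O(log n) sampling and add importance-sampling weights to correct the induced bias; the sketch omits both for brevity.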
Source journal
Ad Hoc Networks (Engineering & Technology, Telecommunications)
CiteScore: 10.20
Self-citation rate: 4.20%
Annual article count: 131
Review time: 4.8 months
Journal description: Ad Hoc Networks is an international and archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in ad hoc and sensor networking areas. The journal considers original, high-quality, unpublished contributions addressing all aspects of ad hoc and sensor networks. Specific areas of interest include, but are not limited to:
- Mobile and Wireless Ad Hoc Networks
- Sensor Networks
- Wireless Local and Personal Area Networks
- Home Networks
- Ad Hoc Networks of Autonomous Intelligent Systems
- Novel Architectures for Ad Hoc and Sensor Networks
- Self-organizing Network Architectures and Protocols
- Transport Layer Protocols
- Routing Protocols (unicast, multicast, geocast, etc.)
- Media Access Control Techniques
- Error Control Schemes
- Power-Aware, Low-Power and Energy-Efficient Designs
- Synchronization and Scheduling Issues
- Mobility Management
- Mobility-Tolerant Communication Protocols
- Location Tracking and Location-based Services
- Resource and Information Management
- Security and Fault-Tolerance Issues
- Hardware and Software Platforms, Systems, and Testbeds
- Experimental and Prototype Results
- Quality-of-Service Issues
- Cross-Layer Interactions
- Scalability Issues
- Performance Analysis and Simulation of Protocols