{"title":"基于深度强化学习的车辆网络高效协同缓存管理方法PDRL-CM","authors":"Pingjie Ou , Ningjiang Chen , Long Yang","doi":"10.1016/j.adhoc.2025.103888","DOIUrl":null,"url":null,"abstract":"<div><div>In vehicular networks, onboard devices face the challenge of limited storage, and computational resources constrain their processing and storage capabilities. This limitation is particularly significant for applications that require complex computations and real-time responses. Additionally, limited storage capacity reduces the range of cacheable data, which can impact the immediate availability of data and the continuity of services. Therefore, improving cache utilization and meeting vehicles’ real-time data demands pose significant challenges. Deep reinforcement learning can optimize the issues arising from agents’ continuously changing state and action spaces due to increasing request demands. However, training the network may encounter instability and convergence difficulties in dynamic and complex environments or situations with sparse rewards. In response to these issues, this paper proposes a Priority-based Deep Reinforcement Learning Collaborative Cache Management method (PDRL-CM). PDRL-CM first designs a lightweight cache admission strategy that leverages data’s inherent and combined attributes. It then makes cache admission decisions Using Monte Carlo sampling and a max-value search strategy combined with a feedforward neural network. Secondly, the method considers minimizing system latency and reducing vehicle energy consumption as joint optimization problems. An improved deep reinforcement learning algorithm solves this problem and makes cache-sharding decisions. A prioritized experience replay mechanism is incorporated to adjust the network prediction model quickly and accelerate the convergence process. Experimental results indicate that, compared to existing DRL-based caching methods, PDRL-CM offers faster data processing efficiency and higher cache hit rates under varying vehicle density, storage capacity, and content volume conditions.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"176 ","pages":"Article 103888"},"PeriodicalIF":4.8000,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PDRL-CM: An efficient cooperative caching management method for vehicular networks based on deep reinforcement learning\",\"authors\":\"Pingjie Ou , Ningjiang Chen , Long Yang\",\"doi\":\"10.1016/j.adhoc.2025.103888\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In vehicular networks, onboard devices face the challenge of limited storage, and computational resources constrain their processing and storage capabilities. This limitation is particularly significant for applications that require complex computations and real-time responses. Additionally, limited storage capacity reduces the range of cacheable data, which can impact the immediate availability of data and the continuity of services. Therefore, improving cache utilization and meeting vehicles’ real-time data demands pose significant challenges. Deep reinforcement learning can optimize the issues arising from agents’ continuously changing state and action spaces due to increasing request demands. However, training the network may encounter instability and convergence difficulties in dynamic and complex environments or situations with sparse rewards. 
In response to these issues, this paper proposes a Priority-based Deep Reinforcement Learning Collaborative Cache Management method (PDRL-CM). PDRL-CM first designs a lightweight cache admission strategy that leverages data’s inherent and combined attributes. It then makes cache admission decisions Using Monte Carlo sampling and a max-value search strategy combined with a feedforward neural network. Secondly, the method considers minimizing system latency and reducing vehicle energy consumption as joint optimization problems. An improved deep reinforcement learning algorithm solves this problem and makes cache-sharding decisions. A prioritized experience replay mechanism is incorporated to adjust the network prediction model quickly and accelerate the convergence process. Experimental results indicate that, compared to existing DRL-based caching methods, PDRL-CM offers faster data processing efficiency and higher cache hit rates under varying vehicle density, storage capacity, and content volume conditions.</div></div>\",\"PeriodicalId\":55555,\"journal\":{\"name\":\"Ad Hoc Networks\",\"volume\":\"176 \",\"pages\":\"Article 103888\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2025-05-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ad Hoc Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1570870525001362\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ad Hoc Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1570870525001362","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
PDRL-CM: An efficient cooperative caching management method for vehicular networks based on deep reinforcement learning
In vehicular networks, onboard devices have limited storage and computational resources, which constrain their processing and caching capabilities. This limitation is particularly significant for applications that require complex computations and real-time responses. Limited storage capacity also narrows the range of cacheable data, which can degrade the immediate availability of data and the continuity of services. Improving cache utilization while meeting vehicles' real-time data demands therefore poses significant challenges. Deep reinforcement learning can cope with the continuously changing state and action spaces that agents face as request demands grow. However, training the network may suffer from instability and slow convergence in dynamic, complex environments or under sparse rewards. To address these issues, this paper proposes a Priority-based Deep Reinforcement Learning Collaborative Cache Management method (PDRL-CM). PDRL-CM first designs a lightweight cache admission strategy that leverages data's inherent and combined attributes, and makes admission decisions using Monte Carlo sampling and a max-value search strategy combined with a feedforward neural network. Second, the method formulates minimizing system latency and reducing vehicle energy consumption as a joint optimization problem, which an improved deep reinforcement learning algorithm solves to make cache-sharding decisions. A prioritized experience replay mechanism is incorporated to adjust the network prediction model quickly and accelerate convergence. Experimental results indicate that, compared to existing DRL-based caching methods, PDRL-CM achieves faster data processing and higher cache hit rates under varying vehicle density, storage capacity, and content volume conditions.
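The admission step summarized in the abstract (attribute-based scoring, Monte Carlo sampling, and a max-value search through a feedforward network) can be pictured with the short sketch below. It is an illustrative reading only, not the paper's implementation: the feature layout, network shape, and the names `AdmissionNet` and `admit` are assumptions.

```python
# Hypothetical sketch of a lightweight cache admission decision.
# The real PDRL-CM attribute set and network architecture may differ.
import numpy as np
import torch
import torch.nn as nn

class AdmissionNet(nn.Module):
    """Small feedforward scorer over a content item's attribute vector."""
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one admission value per item

def admit(features: np.ndarray, model: AdmissionNet,
          num_samples: int = 64, top_k: int = 1) -> list[int]:
    """Monte Carlo sample candidates, score them, keep the max-value ones.

    features: (N, F) matrix of per-item attributes (e.g. size, recency,
    popularity, plus combined attributes as the abstract describes).
    Returns indices of items to admit into the cache.
    """
    n = features.shape[0]
    # Monte Carlo sampling: evaluate only a random subset of candidates
    # so the admission decision stays lightweight when N is large.
    idx = np.random.choice(n, size=min(num_samples, n), replace=False)
    with torch.no_grad():
        scores = model(torch.as_tensor(features[idx], dtype=torch.float32))
    # Max-value search: admit the highest-scoring sampled candidates.
    best = scores.argsort(descending=True)[:top_k]
    return [int(idx[int(i)]) for i in best]
```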
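The prioritized experience replay the abstract mentions is, in its general form, the proportional scheme of Schaul et al.: transitions with larger temporal-difference error are replayed more often, and importance-sampling weights correct the resulting bias. A minimal sketch follows; PDRL-CM's exact priority definition and hyperparameters are not given in the abstract, so the values below (alpha, beta, eps) are placeholders.

```python
# Minimal proportional prioritized experience replay buffer (sketch).
import numpy as np

class PrioritizedReplayBuffer:
    def __init__(self, capacity: int, alpha: float = 0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def push(self, transition):
        """Store (state, action, reward, next_state, done) at max priority."""
        max_p = self.priorities[:len(self.data)].max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size: int, beta: float = 0.4):
        p = self.priorities[:len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors, eps: float = 1e-6):
        """Larger TD error -> higher chance of being replayed again."""
        self.priorities[idx] = np.abs(td_errors) + eps
```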
Journal introduction:
Ad Hoc Networks is an international archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in ad hoc and sensor networking. The journal considers original, high-quality, and unpublished contributions addressing all aspects of ad hoc and sensor networks. Specific areas of interest include, but are not limited to:
Mobile and Wireless Ad Hoc Networks
Sensor Networks
Wireless Local and Personal Area Networks
Home Networks
Ad Hoc Networks of Autonomous Intelligent Systems
Novel Architectures for Ad Hoc and Sensor Networks
Self-organizing Network Architectures and Protocols
Transport Layer Protocols
Routing Protocols (unicast, multicast, geocast, etc.)
Media Access Control Techniques
Error Control Schemes
Power-Aware, Low-Power and Energy-Efficient Designs
Synchronization and Scheduling Issues
Mobility Management
Mobility-Tolerant Communication Protocols
Location Tracking and Location-based Services
Resource and Information Management
Security and Fault-Tolerance Issues
Hardware and Software Platforms, Systems, and Testbeds
Experimental and Prototype Results
Quality-of-Service Issues
Cross-Layer Interactions
Scalability Issues
Performance Analysis and Simulation of Protocols.