Zhu Sifeng;Tian Xiaohua;Zhang Zonghui;Qiao Rui;Zhu Hai
{"title":"基于深度强化学习的车联网内容放置与边缘协同缓存方案","authors":"Zhu Sifeng;Tian Xiaohua;Zhang Zonghui;Qiao Rui;Zhu Hai","doi":"10.1109/TITS.2025.3558898","DOIUrl":null,"url":null,"abstract":"With the rapid development of Internet of Vehicles technology, communication and data exchange between vehicles have become an important part of modern traffic management. A content placement and edge collaborative caching solution based on deep reinforcement learning is proposed in this paper, aiming to address the data processing and storage challenges faced by Internet of Vehicles systems. Utilizing the collaborative caching between smart vehicles and roadside units employs deep reinforcement learning methods to find and design a collaborative caching solution for the Internet of Vehicles edge. It uses content segmentation technology to divide and cache content fragments in advance to reduce the central server load and network pressure, thereby adapting to the randomness of vehicle mobility and communication duration. The experimental results show that the proposed scheme can effectively reduce the load on the central server, reduce network latency, and improve cache hit rate, providing a flexible and efficient solution for real-time communication and data exchange in the Internet of Vehicles system.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 6","pages":"8050-8064"},"PeriodicalIF":8.4000,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Content Placement and Edge Collaborative Caching Scheme Based on Deep Reinforcement Learning for Internet of Vehicles\",\"authors\":\"Zhu Sifeng;Tian Xiaohua;Zhang Zonghui;Qiao Rui;Zhu Hai\",\"doi\":\"10.1109/TITS.2025.3558898\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the rapid development of Internet of Vehicles technology, communication and data exchange between vehicles have become an important part of modern traffic management. A content placement and edge collaborative caching solution based on deep reinforcement learning is proposed in this paper, aiming to address the data processing and storage challenges faced by Internet of Vehicles systems. Utilizing the collaborative caching between smart vehicles and roadside units employs deep reinforcement learning methods to find and design a collaborative caching solution for the Internet of Vehicles edge. It uses content segmentation technology to divide and cache content fragments in advance to reduce the central server load and network pressure, thereby adapting to the randomness of vehicle mobility and communication duration. 
The experimental results show that the proposed scheme can effectively reduce the load on the central server, reduce network latency, and improve cache hit rate, providing a flexible and efficient solution for real-time communication and data exchange in the Internet of Vehicles system.\",\"PeriodicalId\":13416,\"journal\":{\"name\":\"IEEE Transactions on Intelligent Transportation Systems\",\"volume\":\"26 6\",\"pages\":\"8050-8064\"},\"PeriodicalIF\":8.4000,\"publicationDate\":\"2025-04-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Intelligent Transportation Systems\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10972165/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, CIVIL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Intelligent Transportation Systems","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10972165/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, CIVIL","Score":null,"Total":0}
Content Placement and Edge Collaborative Caching Scheme Based on Deep Reinforcement Learning for Internet of Vehicles
With the rapid development of Internet of Vehicles (IoV) technology, communication and data exchange between vehicles have become an important part of modern traffic management. This paper proposes a content placement and edge collaborative caching scheme based on deep reinforcement learning, aiming to address the data processing and storage challenges faced by IoV systems. The scheme exploits collaborative caching between smart vehicles and roadside units, and employs deep reinforcement learning to design a collaborative caching policy for the IoV edge. Content segmentation is used to divide content into fragments and cache them in advance, reducing the central server load and network pressure and adapting to the randomness of vehicle mobility and communication duration. Experimental results show that the proposed scheme effectively reduces the load on the central server, lowers network latency, and improves the cache hit rate, providing a flexible and efficient solution for real-time communication and data exchange in IoV systems.
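The abstract describes a reinforcement learning agent that decides which content fragments to place at the roadside-unit (RSU) edge. The paper's actual model, state/action design, and reward function are not reproduced here; the following is a minimal, self-contained sketch that uses tabular Q-learning as a simplified stand-in for the deep RL policy. The Zipf-like request pattern, LRU eviction rule, fragment count, RSU capacity, reward values (+1 for an edge hit, -0.2 for a central-server fetch), and all hyperparameters are illustrative assumptions, not values from the paper.

```python
import random
from collections import defaultdict

# Toy parameters (assumptions for illustration, not values from the paper).
NUM_FRAGMENTS = 20      # content is split into fragments ("content segmentation")
RSU_CAPACITY = 5        # fragments one roadside unit (RSU) can hold
STEPS = 5000
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Assumed Zipf-like popularity: lower-index fragments are requested more often.
weights = [1.0 / (rank + 1) for rank in range(NUM_FRAGMENTS)]

def sample_request():
    """A passing vehicle requests one content fragment."""
    return random.choices(range(NUM_FRAGMENTS), weights=weights)[0]

# Q[(state, action)]: state = (requested fragment, is it cached?); action 0 = skip, 1 = cache.
Q = defaultdict(float)

def choose_action(state):
    """Epsilon-greedy policy over the two placement actions."""
    if random.random() < EPSILON:
        return random.choice((0, 1))
    return max((0, 1), key=lambda a: Q[(state, a)])

cache = []    # RSU cache, kept in LRU order (front = least recently used)
hits = 0
frag = sample_request()
for _ in range(STEPS):
    state = (frag, frag in cache)
    action = choose_action(state)

    if frag in cache:                       # served from the edge: cache hit
        cache.remove(frag)
        cache.append(frag)
        reward = 1.0
        hits += 1
    else:                                   # miss: fragment fetched from the central server
        reward = -0.2
        if action == 1:                     # agent decides to place the fragment at the RSU
            if len(cache) >= RSU_CAPACITY:
                cache.pop(0)                # evict the least recently used fragment
            cache.append(frag)

    # One-step Q-learning update toward the reward plus the discounted best next value.
    next_frag = sample_request()
    next_state = (next_frag, next_frag in cache)
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    frag = next_frag

print(f"Edge cache hit rate over {STEPS} requests: {hits / STEPS:.2f}")
```

In the scheme the paper describes, the tabular value function would be replaced by a deep network over a richer state (e.g., vehicle mobility, communication duration, and the contents of neighboring RSUs), but the placement decision loop sketched above has the same general shape.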
Journal Introduction:
The journal covers the theoretical, experimental, and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation, and coordination of ITS technical activities among IEEE entities, and providing a focus for cooperative activities, both internally and externally.