Imed Ghnaya, H. Aniss, T. Ahmed, M. Mosbah
2023 IEEE Wireless Communications and Networking Conference (WCNC), March 2023
DOI: 10.1109/WCNC55385.2023.10118857
A Distributed Double Deep Q-Learning Method for Object Redundancy Mitigation in Vehicular Networks
Cooperative Perception (CP) enables Connected and Autonomous Vehicles (CAVs) to share objects perceived by their onboard sensors (e.g., radars, lidars, and cameras) with other CAVs via CP messages (CPMs) over Vehicle-to-Vehicle (V2V) communication technologies. However, the same objects in the driving environment may appear simultaneously in the line of sight of multiple CAVs, so much irrelevant and redundant information is exchanged in the V2V network. This overloads the communication channel and reduces CPM delivery to CAVs, thereby decreasing CP awareness. To address this issue, we mathematically formulate CP information usefulness as a maximization problem in a multi-CAV environment and introduce a distributed multi-agent deep reinforcement learning approach, based on the double deep Q-learning algorithm, to solve it. This approach allows each CAV to learn a CPM content selection policy that maximizes the usefulness of its CPMs to surrounding CAVs, reducing redundancy in the V2V network. Simulation results show that the proposed approach effectively mitigates object redundancy and improves network reliability, increasing awareness at short and medium distances (less than 200 m) compared to state-of-the-art approaches.
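The paper's learning machinery rests on the double deep Q-learning update, in which the online network selects the next action and a separate target network evaluates it, reducing the overestimation bias of vanilla Q-learning. The paper itself does not provide code; the following is a minimal sketch of that target computation using toy linear Q-functions in place of the authors' deep networks (the state/action sizes and the `double_dqn_target` helper are illustrative assumptions, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear Q-functions standing in for deep networks: Q(s) = s @ W,
# with one column of W per action (e.g., per candidate CPM content choice).
N_STATES, N_ACTIONS, GAMMA = 4, 3, 0.95
W_online = rng.normal(size=(N_STATES, N_ACTIONS))  # updated every step
W_target = rng.normal(size=(N_STATES, N_ACTIONS))  # periodically synced copy

def double_dqn_target(reward, next_state, done):
    """Double DQN bootstrap target: the ONLINE network picks the greedy
    next action, but the TARGET network scores it. Decoupling selection
    from evaluation is what mitigates Q-value overestimation."""
    if done:
        return reward
    a_star = int(np.argmax(next_state @ W_online))   # action selection
    return reward + GAMMA * (next_state @ W_target)[a_star]  # evaluation

# Example: one transition with reward 1.0.
s_next = rng.normal(size=N_STATES)
y = double_dqn_target(reward=1.0, next_state=s_next, done=False)
```

In the paper's distributed setting, each CAV would run its own copy of such a learner and regress its online Q-values toward targets like `y`; the reward would encode the usefulness of the transmitted CPM content to neighboring CAVs.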