Shubhangi Bhadauria, S. Vasan, Moustafa Roshdi, Elke Roth-Mandutz, Georg Fischer
{"title":"A Deep Reinforcement Learning: Location-based Resource Allocation for Congested C-V2X Scenario","authors":"Shubhangi Bhadauria, S. Vasan, Moustafa Roshdi, Elke Roth-Mandutz, Georg Fischer","doi":"10.1109/iemcon53756.2021.9623094","DOIUrl":null,"url":null,"abstract":"Cellular- Vehicle-to-Everything (C- V2X) communication as standardized in the 3rd generation partnership project (3GPP) plays an essential role in enabling fully autonomous driving. C- V2X envisions supporting various use-cases, e.g., platooning and remote driving, with varying quality of service (QoS) requirements regarding latency, reliability, data rate, and positioning. In order to ensure meeting these stringent QoS requirements in realistic mobility scenarios, an intelligent and efficient resource allocation scheme is required. This paper addresses channel congestion in location-based resource allocation based on Deep Reinforcement Learning (DRL) for vehicle user equipment (V-UE) in dynamic groupcast communication, i.e., without a V-UE acting as a group head. Using DRL base station acts as a centralized agent. It adapts the channel congestion due to vehicle density in resource pools segregated based on location in a TAPASCologne scenario in the Simulation of Urban Mobility (SUMO) platform. 
A system-level simulation shows that a DRL-based congestion approach can achieve a better packet reception ratio (PRR) than a legacy congestion control scheme when resource pools are segregated based on location.","PeriodicalId":272590,"journal":{"name":"2021 IEEE 12th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)","volume":"40 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 12th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iemcon53756.2021.9623094","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Cellular Vehicle-to-Everything (C-V2X) communication, as standardized by the 3rd Generation Partnership Project (3GPP), plays an essential role in enabling fully autonomous driving. C-V2X envisions supporting various use cases, e.g., platooning and remote driving, with varying quality-of-service (QoS) requirements regarding latency, reliability, data rate, and positioning. To meet these stringent QoS requirements in realistic mobility scenarios, an intelligent and efficient resource allocation scheme is required. This paper addresses channel congestion in location-based resource allocation using Deep Reinforcement Learning (DRL) for vehicle user equipment (V-UE) in dynamic groupcast communication, i.e., without a V-UE acting as a group head. Using DRL, the base station acts as a centralized agent: it adapts to the channel congestion caused by vehicle density in resource pools segregated by location, evaluated on the TAPASCologne scenario in the Simulation of Urban Mobility (SUMO) platform. A system-level simulation shows that the DRL-based congestion approach can achieve a better packet reception ratio (PRR) than a legacy congestion control scheme when resource pools are segregated by location.
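To make the centralized-agent idea concrete, the following is a minimal toy sketch, not the paper's implementation: a base-station agent learns, per location-based resource pool, how many sub-channels to allocate given the observed congestion level. The paper uses deep RL and a SUMO-driven system-level simulator; this sketch substitutes self-contained tabular Q-learning, and the state space, action set, and PRR-style reward proxy are all illustrative assumptions.

```python
import random

CONGESTION_LEVELS = 3   # hypothetical: low / medium / high vehicle density in a pool
ACTIONS = [2, 4, 8]     # hypothetical sub-channel allocation sizes per pool

def reward(congestion, n_subchannels):
    # Hypothetical PRR proxy: larger allocations help under high congestion,
    # but over-allocating under low congestion wastes spectrum.
    demand = congestion + 1                       # 1..3 units of traffic demand
    served = min(demand, n_subchannels / 2)       # capacity grows with allocation
    waste = max(0.0, n_subchannels / 2 - demand) * 0.2
    return served - waste

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning over (congestion state, allocation action) pairs."""
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(CONGESTION_LEVELS)]
    for _ in range(episodes):
        s = rng.randrange(CONGESTION_LEVELS)      # observed congestion state
        if rng.random() < eps:                    # epsilon-greedy exploration
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
        r = reward(s, ACTIONS[a])
        q[s][a] += alpha * (r - q[s][a])          # one-step (bandit-style) update
    return q

if __name__ == "__main__":
    q = train()
    for s in range(CONGESTION_LEVELS):
        best = ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
        print(f"congestion level {s}: allocate {best} sub-channels")
```

Under this toy reward, the agent learns to allocate more sub-channels as congestion rises, mirroring the paper's goal of adapting pool resources to local vehicle density rather than using a fixed legacy allocation.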