A Trajectory Planning and Tracking Method Based on Deep Hierarchical Reinforcement Learning
Jiajie Zhang; Bao-Lin Ye; Xin Wang; Lingxi Li; Bo Song
Journal of Intelligent and Connected Vehicles, vol. 8, no. 2, pp. 9210056-1–9210056-9, June 2025. DOI: 10.26599/JICV.2025.9210056
Abstract
To improve the driving efficiency of unmanned vehicles in complex urban traffic flow, as well as vehicle safety and passenger comfort during lane changes, we propose a hierarchical reinforcement learning (HRL)-based vehicle trajectory planning and tracking method. First, we present a hierarchical control framework for vehicle trajectory tracking based on deep reinforcement learning (DRL) and model predictive control (MPC). We design an upper-level decision model based on the trust region policy optimization (TRPO) algorithm integrated with long short-term memory (LSTM) to obtain more accurate strategies. Second, to improve stability and passenger comfort, we construct a lower-level controller that combines a Bézier curve fitting method with an MPC controller. Finally, we evaluate the proposed method in the Car Learning to Act (CARLA) simulator, which is built on Unreal Engine, using random urban traffic-flow test scenarios to emulate a real urban road-traffic environment. The simulation results show that the proposed method completes the vehicle trajectory planning and tracking task well. Compared with existing RL methods, the proposed method achieves the lowest collision rate (1.5%) and an average speed improvement of 7.04%, and it delivers better ride comfort and lower fuel consumption during driving.
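The abstract describes a two-layer architecture: an upper-level TRPO+LSTM policy that makes lane-change decisions, and a lower-level controller that fits a Bézier reference path and tracks it with MPC. The paper's implementation details are not given here, so the following is only a minimal Python sketch of that structure; every class, function, and parameter name (e.g., UpperLevelPolicy, plan_lane_change, mpc_track, the 3.5 m lateral offset) is a hypothetical illustration, not the authors' code.

```python
# Minimal sketch of the hierarchical planning/tracking loop suggested by the
# abstract. All names and numbers are illustrative assumptions; the paper's
# actual TRPO+LSTM policy and MPC formulation are not specified here.
import numpy as np


def bezier_curve(control_points, n_samples=50):
    """Sample a cubic Bezier curve defined by four (x, y) control points."""
    control_points = np.asarray(control_points, dtype=float)  # shape (4, 2)
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Cubic Bernstein basis functions.
    basis = np.hstack([
        (1 - t) ** 3,
        3 * t * (1 - t) ** 2,
        3 * t ** 2 * (1 - t),
        t ** 3,
    ])
    return basis @ control_points  # shape (n_samples, 2)


class UpperLevelPolicy:
    """Placeholder for the upper-level TRPO+LSTM decision model."""

    def decide(self, observation):
        # A real implementation would run the recurrent policy network here;
        # this stub just triggers a lane change when the front gap is small.
        return "change_left" if observation.get("front_gap", 100.0) < 20.0 else "keep"


def plan_lane_change(state, lateral_offset=3.5, horizon_m=30.0):
    """Fit a smooth lane-change reference path with a cubic Bezier curve."""
    x0, y0 = state["x"], state["y"]
    control_points = [
        (x0, y0),
        (x0 + horizon_m / 3.0, y0),
        (x0 + 2.0 * horizon_m / 3.0, y0 + lateral_offset),
        (x0 + horizon_m, y0 + lateral_offset),
    ]
    return bezier_curve(control_points)


def mpc_track(reference, state):
    """Placeholder for the lower-level MPC tracking controller."""
    # A real MPC would solve a constrained optimization over a receding
    # horizon; this stub only steers toward the next reference point.
    dx, dy = reference[1] - np.array([state["x"], state["y"]])
    return {"steer": float(np.arctan2(dy, dx)), "throttle": 0.5}


if __name__ == "__main__":
    state = {"x": 0.0, "y": 0.0}
    decision = UpperLevelPolicy().decide({"front_gap": 15.0})
    if decision == "change_left":
        reference = plan_lane_change(state)
        print(decision, mpc_track(reference, state))
```

In the reported setup this kind of loop would run inside CARLA, with the upper layer issuing discrete maneuver decisions at a low rate and the Bézier+MPC layer generating and tracking a smooth trajectory at a higher rate.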