Energy Minimization in Overtaking for Autonomous Vehicles in a Bidirectional Environment

Joshua Chio, Daniel Geng, Lydia Han, Meghna Jain, Mi Zhou, E. Verriest

2023 IEEE International Opportunity Research Scholars Symposium (ORSS), 23 April 2023. DOI: 10.1109/ORSS58323.2023.10161975 (https://doi.org/10.1109/ORSS58323.2023.10161975)
Abstract
In this article, we formulate an overtaking problem in a bidirectional dynamic highway environment. A Deep Deterministic Policy Gradient (DDPG)-based reinforcement learning method is used to learn an optimal policy for the ego car in a customized environment. In addition, a classical optimal control method is applied to solve a similar optimal control problem with two time-varying constraints. Simulations are provided to verify the performance of DDPG, and the optimal policy obtained by the classical optimal control method is then used as a comparison benchmark for the learning-based method. Model predictive path integral control is finally employed to handle a more dynamic environment and possibly different driving modes of the surrounding cars.
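To make the DDPG formulation concrete, the following is a minimal sketch of how such a policy could be trained with stable-baselines3 against a toy custom environment. The environment name `OvertakingEnv`, its state and action definitions, the energy-style reward, and all numeric constants are hypothetical stand-ins; the abstract does not specify the paper's actual environment design, so this is an illustrative assumption rather than the authors' implementation.

```python
# Minimal DDPG sketch for a hypothetical overtaking environment.
# Assumptions: state = [ego position, ego velocity, gap to lead car, gap to
# oncoming car]; action = longitudinal acceleration (continuous, as DDPG
# requires); reward trades progress against an energy-like |a|*v penalty.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import DDPG
from stable_baselines3.common.noise import NormalActionNoise


class OvertakingEnv(gym.Env):
    """Hypothetical bidirectional-highway overtaking environment (stand-in)."""

    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(-3.0, 3.0, shape=(1,), dtype=np.float32)
        self.dt = 0.1  # integration step [s]

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Ego starts behind a slower lead car; an oncoming car is 200 m away.
        self.state = np.array([0.0, 20.0, 30.0, 200.0], dtype=np.float32)
        return self.state, {}

    def step(self, action):
        x, v, gap_lead, gap_onc = self.state
        a = float(np.clip(action[0], -3.0, 3.0))
        v = max(v + a * self.dt, 0.0)
        x += v * self.dt
        gap_lead -= (v - 15.0) * self.dt   # lead car assumed at 15 m/s
        gap_onc -= (v + 25.0) * self.dt    # oncoming car assumed at 25 m/s
        self.state = np.array([x, v, gap_lead, gap_onc], dtype=np.float32)
        crashed = gap_lead <= 0.0 or gap_onc <= 0.0
        # Progress reward minus energy-like penalty; large penalty on collision.
        reward = v * self.dt - 0.05 * abs(a) * v * self.dt - (100.0 if crashed else 0.0)
        terminated = crashed or x >= 500.0
        return self.state, reward, terminated, False, {}


if __name__ == "__main__":
    env = OvertakingEnv()
    noise = NormalActionNoise(mean=np.zeros(1), sigma=0.3 * np.ones(1))
    model = DDPG("MlpPolicy", env, action_noise=noise, verbose=1)
    model.learn(total_timesteps=20_000)  # small training budget for a sketch
```

The continuous acceleration action is what motivates DDPG here: it is an off-policy actor-critic method designed for continuous action spaces, with exploration injected through additive action noise as in the sketch above.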