Continuous advantage learning for minimum-time trajectory planning of autonomous vehicles

Zhuo Li, Weiran Wu, Jialin Wang, Gang Wang, Jian Sun

Journal: Science China Information Sciences (JCR Q1, Computer Science, Information Systems; Impact Factor 7.3)
DOI: 10.1007/s11432-023-4059-6
Published: 2024-06-25 (Journal Article)
Citations: 0

Abstract

This paper investigates the minimum-time trajectory planning problem of an autonomous vehicle. To handle the vehicle's unknown and uncertain dynamics, the trajectory planning problem is modeled as a Markov decision process with a continuous action space. To solve it, we propose a continuous advantage learning (CAL) algorithm based on the advantage-value equation, and adopt a stochastic policy in the form of a multivariate Gaussian distribution to encourage exploration. A shared actor-critic architecture is designed to simultaneously approximate the stochastic policy and the value function, which greatly reduces the computational burden compared with general actor-critic methods. Moreover, the shared actor-critic is updated with a loss function built as the mean-square consistency error of the advantage-value equation, and the update step is performed several times at each time step to improve data efficiency. Simulations validate the effectiveness of the proposed CAL algorithm and its superior performance over the soft actor-critic algorithm.
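To make the abstract's main ideas concrete, the sketch below illustrates a shared actor-critic in plain NumPy: one network trunk feeds both a Gaussian policy head (mean plus a state-independent log-std, sampled for exploration) and a value head, and the loss is the mean-square consistency error of a one-step advantage relation A(s, a) ≈ r + γV(s′) − V(s), evaluated several times per environment step. The network sizes, the specific advantage-value form, and the omission of the actual gradient update are all assumptions for illustration — the paper's CAL algorithm is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedActorCritic:
    """One shared trunk with three heads: policy mean, log-std, and value.

    The architecture is a hypothetical minimal stand-in for the paper's
    shared actor-critic, not the authors' implementation.
    """
    def __init__(self, state_dim, action_dim, hidden=32):
        self.W1 = rng.normal(0.0, 0.1, (hidden, state_dim))   # shared trunk weights
        self.mean_head = rng.normal(0.0, 0.1, (action_dim, hidden))
        self.log_std = np.full(action_dim, -0.5)              # exploration scale
        self.value_head = rng.normal(0.0, 0.1, hidden)

    def forward(self, s):
        h = np.tanh(self.W1 @ s)                  # one trunk evaluation serves both heads
        return self.mean_head @ h, self.value_head @ h

    def sample_action(self, s):
        # Stochastic Gaussian policy: perturb the mean to encourage exploration.
        mean, _ = self.forward(s)
        return mean + np.exp(self.log_std) * rng.standard_normal(mean.shape)

def consistency_loss(net, batch, gamma=0.99):
    """Mean-square consistency error of an assumed one-step advantage relation
    A(s, a) ~ r + gamma * V(s') - V(s)."""
    errs = []
    for s, a, r, s_next in batch:
        _, v = net.forward(s)
        _, v_next = net.forward(s_next)
        delta = r + gamma * v_next - v            # one-step advantage estimate
        errs.append(delta ** 2)
    return float(np.mean(errs))

# Usage sketch: at each environment time step, collect a transition and
# evaluate the consistency loss several times (the gradient step that would
# follow each evaluation is omitted in this sketch).
net = SharedActorCritic(state_dim=4, action_dim=2)
s = rng.standard_normal(4)
a = net.sample_action(s)
batch = [(s, a, 1.0, rng.standard_normal(4))]
for _ in range(4):                                # several updates per time step
    loss = consistency_loss(net, batch)
```

Sharing the trunk means a single forward pass yields both the policy statistics and the value estimate, which is the source of the computational savings the abstract claims over running separate actor and critic networks.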
Journal description:
Science China Information Sciences is a dedicated journal that showcases high-quality, original research across various domains of information sciences. It encompasses Computer Science & Technologies, Control Science & Engineering, Information & Communication Engineering, Microelectronics & Solid-State Electronics, and Quantum Information, providing a platform for the dissemination of significant contributions in these fields.