Dongkun Zhang; Jiaming Liang; Sha Lu; Ke Guo; Qi Wang; Rong Xiong; Zhenwei Miao; Yue Wang
IEEE Robotics and Automation Letters, vol. 9, no. 12, pp. 11361-11368, November 2024. DOI: 10.1109/LRA.2024.3490377
https://ieeexplore.ieee.org/document/10740797/
Journal impact factor: 4.6 · JCR quartile: Q2 (Robotics)
Citation count: 0
PEP: Policy-Embedded Trajectory Planning for Autonomous Driving
Autonomous driving demands proficient trajectory planning to ensure safety and comfort. This letter introduces the Policy-Embedded Planner (PEP), a novel framework that improves the closed-loop performance of imitation learning (IL) based planners by embedding a neural policy for sequential ego-pose generation that leverages the predicted trajectories of traffic agents. PEP addresses the challenges of distribution shift and causal confusion by decomposing multi-step planning into single-step policy rollouts, applying a coordinate-transformation technique to simplify training. PEP generates multi-modal candidate trajectories in parallel and incorporates both neural and rule-based scoring functions for trajectory selection. To mitigate the negative effects of prediction error on closed-loop performance, we propose an information-mixing mechanism that alternates between traffic agents' predicted and ground-truth information during training. Experimental validation on the nuPlan benchmark highlights PEP's superiority over state-of-the-art IL- and rule-based methods.
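To make the three mechanisms in the abstract concrete, here is a minimal sketch of (a) single-step policy rollout with an ego-frame coordinate transformation at each step and (b) the training-time information-mixing idea. This is an illustration only, assuming simple 2D poses `(x, y, yaw)`; the function names, the delta-pose action format, and the probabilistic mixing schedule are hypothetical and are not taken from the paper.

```python
import numpy as np

def to_ego_frame(points, ego_pose):
    """Transform world-frame (N, 2) points into the frame of ego_pose = (x, y, yaw)."""
    x, y, yaw = ego_pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, s], [-s, c]])              # rotates world axes into ego axes
    return (np.asarray(points) - [x, y]) @ R.T

def rollout(policy, ego_pose, agent_traj, horizon):
    """Decompose multi-step planning into single-step policy calls.

    agent_traj[t] holds the (N, 2) agent positions predicted for step t.
    Re-expressing the scene in the current ego frame at every step keeps the
    policy's input distribution normalized, which is the stated role of the
    coordinate transformation.
    """
    poses = [np.asarray(ego_pose, dtype=float)]
    for t in range(horizon):
        obs = to_ego_frame(agent_traj[t], poses[-1])
        dx, dy, dyaw = policy(obs)               # single-step action in the ego frame
        x, y, yaw = poses[-1]
        c, s = np.cos(yaw), np.sin(yaw)
        poses.append(np.array([x + c * dx - s * dy,
                               y + s * dx + c * dy,
                               yaw + dyaw]))     # compose delta pose back into world frame
    return np.stack(poses)

def mix_agent_info(pred, gt, p_gt, rng=np.random):
    """Information mixing: during training, substitute ground-truth agent
    futures for predicted ones with probability p_gt (e.g. annealed toward 0
    so the policy gradually adapts to imperfect predictions)."""
    return gt if rng.random() < p_gt else pred
```

With a toy policy that always steps one meter forward, `rollout` produces a straight trajectory of `horizon + 1` poses; swapping `mix_agent_info`'s output into the rollout's `agent_traj` argument is how the alternation between predicted and ground-truth agent information would enter training.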
Journal overview:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.