Learning Human-like Driving Policies from Real Interactive Driving Scenes

Yann Koeberle, S. Sabatini, D. Tsishkou, C. Sabourin
{"title":"Learning Human-like Driving Policies from Real Interactive Driving Scenes","authors":"Yann Koeberle, S. Sabatini, D. Tsishkou, C. Sabourin","doi":"10.5220/0011268400003271","DOIUrl":null,"url":null,"abstract":": Traffic simulation has gained a lot of interest for autonomous driving companies for qualitative safety evaluation of self driving vehicles. In order to improve self driving systems from synthetic simulated experiences, traffic agents need to adapt to various situations while behaving as a human driver would do. However, simulating realistic traffic agents is still challenging because human driving style cannot easily be encoded in a driving policy. Adversarial Imitation learning (AIL) already proved that realistic driving policies could be learnt from demonstration but mainly on highways (NGSIM Dataset). Nevertheless, traffic interactions are very restricted on straight lanes and practical use cases of traffic simulation requires driving agents that can handle more various road topologies like roundabouts, complex intersections or merging. In this work, we analyse how to learn realistic driving policies on real and highly interactive driving scenes of Interaction Dataset based on AIL algorithms. We introduce a new driving policy architecture built upon the Lanelet2 map format which combines a path planner and an action space in curvilinear coordinates to reduce exploration complexity during learning. We leverage benefits of reward engineering and variational information bottleneck to propose an algorithm that outperforms all AIL baselines. 
We show that our learning agent is not only able to imitate humane like drivers but can also adapts safely to situations unseen during training.","PeriodicalId":6436,"journal":{"name":"2010 2nd International Asia Conference on Informatics in Control, Automation and Robotics (CAR 2010)","volume":"75 1","pages":"419-426"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 2nd International Asia Conference on Informatics in Control, Automation and Robotics (CAR 2010)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0011268400003271","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 1

Abstract

Traffic simulation has gained significant interest from autonomous driving companies for the qualitative safety evaluation of self-driving vehicles. To improve self-driving systems from synthetic simulated experience, traffic agents need to adapt to various situations while behaving as a human driver would. However, simulating realistic traffic agents remains challenging because human driving style cannot easily be encoded in a driving policy. Adversarial Imitation Learning (AIL) has already shown that realistic driving policies can be learnt from demonstrations, but mainly on highways (NGSIM dataset). Nevertheless, traffic interactions are very restricted on straight lanes, and practical use cases of traffic simulation require driving agents that can handle more varied road topologies such as roundabouts, complex intersections, or merging. In this work, we analyse how to learn realistic driving policies from the real, highly interactive driving scenes of the Interaction Dataset using AIL algorithms. We introduce a new driving policy architecture built upon the Lanelet2 map format, which combines a path planner with an action space in curvilinear coordinates to reduce exploration complexity during learning. We leverage the benefits of reward engineering and a variational information bottleneck to propose an algorithm that outperforms all AIL baselines. We show that our learnt agent is not only able to imitate human-like drivers but can also adapt safely to situations unseen during training.
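The abstract does not detail the curvilinear action space itself; as a minimal sketch under assumptions (the paper's actual planner and Lanelet2 integration are not reproduced here), the core operation is the standard Frenet-style projection of a Cartesian position onto a reference path, yielding an arc length `s` and a signed lateral offset `d`. The function name `to_curvilinear` and the polyline path representation are illustrative, not from the paper:

```python
import numpy as np

def to_curvilinear(path, point):
    """Project a Cartesian point onto a polyline reference path.

    Returns (s, d): arc length along the path to the closest point,
    and signed lateral offset (positive = left of path direction).
    """
    path = np.asarray(path, dtype=float)
    point = np.asarray(point, dtype=float)
    best = (np.inf, 0.0, 0.0)  # (squared distance, s, d)
    s_start = 0.0               # arc length accumulated before current segment
    for a, b in zip(path[:-1], path[1:]):
        seg = b - a
        seg_len = np.linalg.norm(seg)
        if seg_len == 0.0:
            continue  # skip degenerate segments
        # parameter of the orthogonal foot point, clamped to the segment
        t = np.clip(np.dot(point - a, seg) / seg_len**2, 0.0, 1.0)
        foot = a + t * seg
        d2 = np.sum((point - foot) ** 2)
        if d2 < best[0]:
            # 2-D cross product gives the side of the path the point lies on
            cross = seg[0] * (point[1] - a[1]) - seg[1] * (point[0] - a[0])
            d = np.sign(cross) * np.sqrt(d2)
            best = (d2, s_start + t * seg_len, d)
        s_start += seg_len
    return best[1], best[2]
```

Acting in `(s, d)` along a planned path, rather than directly in Cartesian space, is what lets a learning agent explore only kinematically sensible motions on curved topologies such as roundabouts.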