Scenario-level knowledge transfer for motion planning of autonomous driving via successor representation

IF 7.6 | CAS Zone 1 (Engineering & Technology) | JCR Q1, Transportation Science & Technology
Hongliang Lu, Chao Lu, Haoyang Wang, Jianwei Gong, Meixin Zhu, Hai Yang

Journal: Transportation Research Part C: Emerging Technologies, Volume 169, Article 104899
DOI: 10.1016/j.trc.2024.104899
Published: 2024-11-05
Citations: 0

Abstract

For autonomous vehicles, transfer learning holds great promise for motion planning, as it allows previously learned knowledge to be reused in newly encountered scenarios. However, previous transfer-learning practices operate at the data level, mainly by introducing extra data and expanding experience. Such data-level transfer depends heavily on the quality and quantity of data and fails to account for the scenario-level features shared by similar scenarios. In this paper, we propose a scenario-level knowledge transfer framework for motion planning of autonomous driving, named SceTL. By capitalizing on the successor representation, general scenario-level knowledge shared among similar scenarios can be captured and reused across different traffic scenarios to empower motion planning. To verify the efficacy of the framework, a method that combines SceTL with the classic artificial potential field (APF), named SceTL-APF, is proposed for global planning in static scenarios. Meanwhile, a local planning method combining SceTL with motion primitives (MP), SceTL-MP, is developed for dynamic scenarios. Both simulated and real-world data are used for verification. Experimental results demonstrate that SceTL facilitates scenario-level knowledge transfer for both SceTL-APF and SceTL-MP, which show better adaptivity and faster computation than existing motion planning methods.
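The abstract rests on the successor representation (SR). As a rough illustration of why the SR supports this kind of knowledge reuse, the sketch below uses plain NumPy, a toy discrete state chain, and the closed-form SR under a fixed policy; these are all assumptions for illustration, not the authors' SceTL implementation or its coupling to APF or motion primitives. It shows how a single SR matrix, learned once from the dynamics, turns each new scenario's reward vector into state values without relearning the dynamics.

```python
import numpy as np

def successor_representation(P, gamma=0.95):
    """Closed-form SR for a fixed policy with transition matrix P:
    M = (I - gamma * P)^(-1), where M[s, s'] is the discounted expected
    number of future visits to s' when starting from s."""
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

# Example: 4-state chain with a fixed "drive forward" policy (hypothetical toy setup).
P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],  # terminal state loops to itself
])
M = successor_representation(P)

# Scenario-level reuse: the SR (dynamics knowledge) stays fixed, while each
# new scenario only supplies its own one-step reward vector r. Values follow
# immediately as V = M @ r, with no re-learning of the dynamics.
r_scenario_a = np.array([0.0, 0.0, 0.0, 1.0])   # goal reward at state 3
r_scenario_b = np.array([0.0, -1.0, 0.0, 1.0])  # same goal, penalty at state 1
print("V (scenario A):", M @ r_scenario_a)
print("V (scenario B):", M @ r_scenario_b)
```

In this toy form, the SR matrix plays the role of the transferable, scenario-level knowledge: swapping the reward vector is all that is needed to re-evaluate a structurally similar scenario.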
Source journal: Transportation Research Part C: Emerging Technologies
CiteScore: 15.80 · Self-citation rate: 12.00% · Articles published per year: 332 · Review time: 64 days
Journal description: Transportation Research: Part C (TR_C) is dedicated to showcasing high-quality, scholarly research that delves into the development, applications, and implications of transportation systems and emerging technologies. Our focus lies not solely on individual technologies, but rather on their broader implications for the planning, design, operation, control, maintenance, and rehabilitation of transportation systems, services, and components. In essence, the intellectual core of the journal revolves around the transportation aspect rather than the technology itself. We actively encourage the integration of quantitative methods from diverse fields such as operations research, control systems, complex networks, computer science, and artificial intelligence. Join us in exploring the intersection of transportation systems and emerging technologies to drive innovation and progress in the field.