Using Scene-Flow to Improve Predictions of Road Users in Motion With Respect to an Ego-Vehicle

IF 2.5 · CAS Region 4 (Engineering & Technology) · JCR Q2 (Engineering, Electrical & Electronic)
Nilusha Jayawickrama, Risto Ojala, Kari Tammi
DOI: 10.1049/itr2.70010
Journal: IET Intelligent Transport Systems, vol. 19, issue 1
Published: 19 May 2025 (Journal Article)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/itr2.70010
Citations: 0

Abstract



We addressed the challenge of accurately determining the motion status of vehicles neighbouring an ego-vehicle across various driving scenarios. The aim was to enhance prediction accuracy in identifying moving vehicles by integrating scene-flow analysis into tracking. The research was motivated by the importance, in autonomous driving, of analysing the state of moving vehicles exclusively. We implemented a novel, synergistic, vision-based, offline approach, named MoVe, which combines spatial analysis of predicted scene flows with temporal tracking, operating on sensor-fused input data. Regions of moving vehicles (after background refinement) were obtained via instance segmentation, and each instance was mapped to the corresponding (original) scene flows. Our method achieved an F1 score of 0.953 and an accuracy of 0.959 for binary motion classification (stationary vs. moving). The proposed fusion segmentation model produced an mIoU of 82.29% for cars, outperforming YOLOv7, which relies solely on visual features. Notably, we observed a complementary dynamic between scene-flow analysis and tracking: scene-flow analysis was generally effective at identifying fast-moving vehicles, even under occlusions or truncations caused by other vehicles or infrastructure elements, while tracking usually excelled at identifying comparatively slow-moving vehicles. Thus, the study demonstrated the viability of our proposed architecture for improving the detection of moving vehicles around an ego-vehicle. The outcomes further suggested the potential of our work for training future deep learning models based on machine vision and attention, such as object-centric learning, paving the way for enhanced perception, intent estimation, control strategies, and safety in autonomous driving.
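The abstract's core ideas — deciding per-instance motion status from scene flow complemented by tracking, and scoring the binary classification with F1 and accuracy — can be sketched as follows. This is an illustrative reconstruction, not the paper's MoVe implementation: the function names, thresholds, and input shapes are assumptions.

```python
import numpy as np

def classify_motion(flow_vectors, track_displacement,
                    flow_thresh=0.5, track_thresh=0.2):
    """Label one segmented vehicle instance as moving (True) or stationary.

    flow_vectors: (N, 3) ego-motion-compensated scene-flow vectors (m/frame)
                  for the points inside the instance mask.
    track_displacement: per-frame displacement (m) of the instance's
                        tracked centroid in a fixed world frame.
    Thresholds are illustrative, not taken from the paper.
    """
    # Median flow magnitude is robust to outlier points near occluded edges.
    flow_speed = np.median(np.linalg.norm(flow_vectors, axis=1))
    # Complementary cues: scene flow tends to catch fast movers,
    # while tracking tends to catch comparatively slow movers.
    return bool(flow_speed > flow_thresh or track_displacement > track_thresh)

def f1_and_accuracy(y_true, y_pred):
    """Binary F1 and accuracy, the metrics reported in the abstract."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = float(np.mean(y_true == y_pred))
    return f1, accuracy
```

The OR-combination reflects the complementary dynamic the authors report: either cue alone suffices to flag an instance as moving, so occluded fast movers (caught by flow) and slow movers (caught by tracking) are both covered.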

Source journal: IET Intelligent Transport Systems (Engineering & Technology – Transportation Science & Technology)
CiteScore: 6.50
Self-citation rate: 7.40%
Articles per year: 159
Review time: 3 months
Journal description: IET Intelligent Transport Systems is an interdisciplinary journal devoted to research into the practical applications of ITS and infrastructures. The scope of the journal includes the following:
- Sustainable traffic solutions
- Deployments with enabling technologies
- Pervasive monitoring
- Applications; demonstrations and evaluation
- Economic and behavioural analyses of ITS services and scenarios
- Data integration and analytics
- Information collection and processing; image processing applications in ITS
- ITS aspects of electric vehicles
- Autonomous vehicles; connected vehicle systems; in-vehicle ITS, safety and vulnerable road user aspects
- Mobility as a service systems
- Traffic management and control
- Public transport systems technologies
- Fleet and public transport logistics
- Emergency and incident management
- Demand management and electronic payment systems
- Traffic-related air pollution management
- Policy and institutional issues
- Interoperability, standards and architectures
- Funding scenarios
- Enforcement
- Human machine interaction
- Education, training and outreach
Current special issue calls for papers:
- Intelligent Transportation Systems in Smart Cities for Sustainable Environment - https://digital-library.theiet.org/files/IET_ITS_CFP_ITSSCSE.pdf
- Sustainably Intelligent Mobility (SIM) - https://digital-library.theiet.org/files/IET_ITS_CFP_SIM.pdf
- Traffic Theory and Modelling in the Era of Artificial Intelligence and Big Data (in collaboration with World Congress for Transport Research, WCTR 2019) - https://digital-library.theiet.org/files/IET_ITS_CFP_WCTR.pdf