Scene Spatio-Temporal Graph Convolutional Network for Pedestrian Intention Estimation

Abhilash Y. Naik, Ariyan Bighashdel, P. Jancura, Gijs Dubbelman
DOI: 10.1109/iv51971.2022.9827231 (https://doi.org/10.1109/iv51971.2022.9827231)
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV)
Publication date: 2022-06-05
Cited by: 4

Abstract

For safe and comfortable navigation of autonomous vehicles, it is crucial to know the pedestrian's intention of crossing the street. Generally, human drivers are aware of the traffic objects (e.g., crosswalks and traffic lights) in the environment while driving; likewise, these objects would play a crucial role for autonomous vehicles. In this research, we propose a novel pedestrian intention estimation method that not only takes into account the influence of traffic objects but also learns their contribution levels to the intention of the pedestrian. Our proposed method, referred to as Scene Spatio-Temporal Graph Convolutional Network (Scene-STGCN), benefits from the strength of Graph Convolutional Networks and efficiently encodes the relationships between the pedestrian and the scene objects both spatially and temporally. We conduct several experiments on the Pedestrian Intention Estimation (PIE) dataset and illustrate the importance of scene objects and their contribution levels in the task of pedestrian intention estimation. Furthermore, we perform statistical analysis on the relevance of different traffic objects in the PIE dataset and carry out an ablation study on the effect of various information sources in the scene. Finally, we demonstrate the significance of the proposed Scene-STGCN through experimental comparisons with several baselines. The results indicate that our proposed Scene-STGCN outperforms the current state-of-the-art method by 0.03 in terms of the ROC-AUC metric.
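The abstract describes encoding pedestrian-scene relationships with graph convolutions applied over time. The paper's actual Scene-STGCN architecture is not detailed here, so the following is only a minimal, generic sketch of the underlying idea: a standard symmetrically-normalized GCN layer applied per frame to a toy scene graph (pedestrian plus traffic objects), with a simple mean over time as a stand-in temporal aggregation. The graph, feature sizes, and pooling are illustrative assumptions, not the authors' design.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy scene graph: node 0 = pedestrian, nodes 1-2 = traffic objects
# (e.g., a crosswalk and a traffic light), each connected to the pedestrian.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])

T, N, F_in, F_out = 4, 3, 8, 16                         # frames, nodes, feature dims
rng = np.random.default_rng(0)
W = rng.normal(size=(F_in, F_out))                      # shared layer weights

# Apply the same spatial convolution at every time step, then pool over time.
frames = [gcn_layer(A, rng.normal(size=(N, F_in)), W) for _ in range(T)]
temporal = np.stack(frames).mean(axis=0)                # (N, F_out) embedding
print(temporal.shape)                                   # (3, 16)
```

Per-node embeddings like `temporal[0]` (the pedestrian node) would then feed a classifier head for the crossing/not-crossing decision; the paper additionally learns per-object contribution levels, which this sketch does not model.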