Scene Spatio-Temporal Graph Convolutional Network for Pedestrian Intention Estimation
Abhilash Y. Naik, Ariyan Bighashdel, P. Jancura, Gijs Dubbelman
2022 IEEE Intelligent Vehicles Symposium (IV), 2022-06-05. DOI: 10.1109/iv51971.2022.9827231
Citations: 4
Abstract
For safe and comfortable navigation of autonomous vehicles, it is crucial to know the pedestrian’s intention to cross the street. Human drivers are generally aware of the traffic objects (e.g., crosswalks and traffic lights) in the environment while driving; likewise, these objects would play a crucial role for autonomous vehicles. In this research, we propose a novel pedestrian intention estimation method that not only takes into account the influence of traffic objects but also learns their levels of contribution to the pedestrian’s intention. Our proposed method, referred to as Scene Spatio-Temporal Graph Convolutional Network (Scene-STGCN), benefits from the strength of Graph Convolutional Networks and efficiently encodes the relationships between the pedestrian and the scene objects both spatially and temporally. We conduct several experiments on the Pedestrian Intention Estimation (PIE) dataset and illustrate the importance of scene objects and their contribution levels in the task of pedestrian intention estimation. Furthermore, we perform a statistical analysis of the relevance of different traffic objects in the PIE dataset and carry out an ablation study on the effect of various information sources in the scene. Finally, we demonstrate the significance of the proposed Scene-STGCN through experimental comparisons with several baselines. The results indicate that our proposed Scene-STGCN outperforms the current state-of-the-art method by 0.03 in terms of the ROC-AUC metric.
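To make the architectural idea concrete, the sketch below shows one possible spatio-temporal graph convolution over a pedestrian node and several scene-object nodes, implemented in PyTorch. It is a minimal illustration, not the authors' implementation: the module names (SceneSTGCNBlock, SceneSTGCN), tensor shapes, hidden sizes, and the learnable adjacency used to stand in for the "contribution levels" of traffic objects are all assumptions made for this example.

```python
# Minimal sketch of a scene spatio-temporal graph convolution (assumed PyTorch
# realisation; all names, shapes, and hyperparameters are illustrative).
import torch
import torch.nn as nn


class SceneSTGCNBlock(nn.Module):
    """One spatio-temporal block over pedestrian + scene-object nodes.

    Input x: (batch, channels, time, nodes), where node 0 is the pedestrian
    and the remaining nodes are traffic objects (crosswalks, lights, ...).
    """

    def __init__(self, in_channels: int, out_channels: int, num_nodes: int,
                 temporal_kernel: int = 3):
        super().__init__()
        # Learnable adjacency: a hypothetical way to let the model learn how
        # strongly each scene object contributes to the pedestrian's intention.
        self.adj = nn.Parameter(
            torch.eye(num_nodes) + 0.01 * torch.randn(num_nodes, num_nodes))
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(temporal_kernel, 1),
                                  padding=(temporal_kernel // 2, 0))
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Spatial step: mix node features with the row-normalised learned
        # adjacency, then a 1x1 convolution over channels.
        adj = torch.softmax(self.adj, dim=-1)
        x = torch.einsum("bctv,vw->bctw", x, adj)
        x = self.relu(self.spatial(x))
        # Temporal step: convolve along the time axis, per node.
        return self.relu(self.temporal(x))


class SceneSTGCN(nn.Module):
    """Stacked blocks followed by a binary crossing/not-crossing head."""

    def __init__(self, in_channels: int, num_nodes: int, hidden: int = 64):
        super().__init__()
        self.blocks = nn.Sequential(
            SceneSTGCNBlock(in_channels, hidden, num_nodes),
            SceneSTGCNBlock(hidden, hidden, num_nodes),
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.blocks(x)                  # (batch, hidden, time, nodes)
        h = h.mean(dim=(2, 3))              # pool over time and nodes
        return torch.sigmoid(self.head(h))  # crossing probability


if __name__ == "__main__":
    # Example: 8 clips, 4-dim node features, 15 frames, pedestrian + 5 objects.
    x = torch.randn(8, 4, 15, 6)
    print(SceneSTGCN(in_channels=4, num_nodes=6)(x).shape)  # torch.Size([8, 1])
```

In this sketch the softmax-normalised adjacency plays the role of learned per-object weights, so inspecting its rows would indicate which scene objects the model treats as most relevant; the paper's actual graph construction and weighting scheme may differ.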