GSTGM: Graph, spatial–temporal attention and generative based model for pedestrian multi-path prediction

Muhammad Haris Kaka Khel, Paul Greaney, Marion McAfee, Sandra Moffett, Kevin Meehan

Image and Vision Computing, Volume 151, Article 105245 (published 2024-08-31). DOI: 10.1016/j.imavis.2024.105245. Available at: https://www.sciencedirect.com/science/article/pii/S0262885624003500
Citations: 0
Abstract
Pedestrian trajectory prediction in urban environments has emerged as a critical research area with extensive applications across various domains. Accurate prediction of pedestrian trajectories is essential for the safe navigation of autonomous vehicles and robots in pedestrian-populated environments. Effective prediction models must capture both the spatial interactions among pedestrians and the temporal dependencies governing their movements. Existing research primarily focuses on forecasting a single trajectory per pedestrian, limiting its applicability in real-world scenarios characterised by diverse and unpredictable pedestrian behaviours. To address these challenges, this paper introduces the Graph Convolutional Network, Spatial–Temporal Attention, and Generative Model (GSTGM) for pedestrian trajectory prediction. GSTGM employs a spatiotemporal graph convolutional network to effectively capture complex interactions between pedestrians and their environment. Additionally, it integrates a spatial–temporal attention mechanism to prioritise relevant information during the prediction process. By incorporating a time-dependent prior within the latent space and utilising a computationally efficient generative model, GSTGM facilitates the generation of diverse and realistic future trajectories. The effectiveness of GSTGM is validated through experiments on real-world scenario datasets. Compared to the state-of-the-art models on benchmark datasets such as ETH/UCY, GSTGM demonstrates superior performance in accurately predicting multiple potential trajectories for individual pedestrians. This superiority is measured using metrics such as Final Displacement Error (FDE) and Average Displacement Error (ADE). Moreover, GSTGM achieves these results with significantly faster processing speeds.
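For readers unfamiliar with the evaluation metrics named above, the sketch below shows how best-of-K Average Displacement Error (ADE) and Final Displacement Error (FDE) are typically computed for a multi-path predictor. This is a minimal illustrative sketch, not code from the paper: the function name, array shapes, and the K = 20, 12-step setup are assumptions chosen to match common ETH/UCY evaluation protocols.

```python
# Minimal sketch (not the paper's code): best-of-K ADE/FDE for one pedestrian.
# `samples` holds K predicted trajectories of T timesteps in 2-D coordinates;
# `gt` holds the ground-truth trajectory over the same horizon.
import numpy as np

def min_ade_fde(samples: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """samples: (K, T, 2) predicted trajectories; gt: (T, 2) ground truth."""
    # Per-sample, per-timestep Euclidean displacement errors, shape (K, T).
    errors = np.linalg.norm(samples - gt[None, :, :], axis=-1)
    ade = errors.mean(axis=1).min()   # best average displacement over the horizon
    fde = errors[:, -1].min()         # best final-step displacement
    return float(ade), float(fde)

# Example: 20 sampled 12-step trajectories, as in common ETH/UCY protocols.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(12, 2)) * 0.1, axis=0)
samples = gt[None] + rng.normal(scale=0.05, size=(20, 12, 2))
print(min_ade_fde(samples, gt))
```

Taking the minimum over the K samples rewards models, such as generative multi-path predictors, that place at least one plausible hypothesis close to the observed future rather than forcing a single deterministic forecast.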
About the journal
The primary aim of Image and Vision Computing is to provide an effective medium of interchange for the results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to foster a deeper understanding of the discipline by encouraging the quantitative comparison and performance evaluation of proposed methodology. Coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behaviour understanding, data fusion from multiple sensor inputs, and image databases.