Jiawei Sun, Jiahui Li, Tingchen Liu, Chengran Yuan, Shuo Sun, Zefan Huang, Anthony Wong, Keng Peng Tee, Marcelo H. Ang Jr
{"title":"RMP-YOLO: A Robust Motion Predictor for Partially Observable Scenarios even if You Only Look Once","authors":"Jiawei Sun, Jiahui Li, Tingchen Liu, Chengran Yuan, Shuo Sun, Zefan Huang, Anthony Wong, Keng Peng Tee, Marcelo H. Ang Jr","doi":"arxiv-2409.11696","DOIUrl":null,"url":null,"abstract":"We introduce RMP-YOLO, a unified framework designed to provide robust motion\npredictions even with incomplete input data. Our key insight stems from the\nobservation that complete and reliable historical trajectory data plays a\npivotal role in ensuring accurate motion prediction. Therefore, we propose a\nnew paradigm that prioritizes the reconstruction of intact historical\ntrajectories before feeding them into the prediction modules. Our approach\nintroduces a novel scene tokenization module to enhance the extraction and\nfusion of spatial and temporal features. Following this, our proposed recovery\nmodule reconstructs agents' incomplete historical trajectories by leveraging\nlocal map topology and interactions with nearby agents. The reconstructed,\nclean historical data is then integrated into the downstream prediction\nmodules. Our framework is able to effectively handle missing data of varying\nlengths and remains robust against observation noise, while maintaining high\nprediction accuracy. Furthermore, our recovery module is compatible with\nexisting prediction models, ensuring seamless integration. Extensive\nexperiments validate the effectiveness of our approach, and deployment in\nreal-world autonomous vehicles confirms its practical utility. In the 2024\nWaymo Motion Prediction Competition, our method, RMP-YOLO, achieves\nstate-of-the-art performance, securing third place.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":"51 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11696","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We introduce RMP-YOLO, a unified framework designed to provide robust motion predictions even with incomplete input data. Our key insight stems from the observation that complete and reliable historical trajectory data plays a pivotal role in ensuring accurate motion prediction. We therefore propose a new paradigm that prioritizes the reconstruction of intact historical trajectories before feeding them into the prediction modules. Our approach introduces a novel scene tokenization module to enhance the extraction and fusion of spatial and temporal features. Our proposed recovery module then reconstructs agents' incomplete historical trajectories by leveraging local map topology and interactions with nearby agents, and the reconstructed, clean historical data is integrated into the downstream prediction modules. The framework effectively handles missing data of varying lengths and remains robust to observation noise while maintaining high prediction accuracy. Furthermore, the recovery module is compatible with existing prediction models, enabling seamless integration. Extensive experiments validate the effectiveness of our approach, and deployment on real-world autonomous vehicles confirms its practical utility. In the 2024 Waymo Motion Prediction Competition, RMP-YOLO achieves state-of-the-art performance, securing third place.
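To make the recover-then-predict paradigm concrete, the sketch below shows one way such a pipeline could be wired up. It is an illustrative assumption, not the paper's implementation: the module names (`TrajectoryRecovery`, `RecoverThenPredict`), the GRU-based recovery backbone, the tensor shapes, and the `valid` mask convention are placeholders standing in for the paper's scene tokenization module, its map- and interaction-aware recovery module, and whatever downstream predictor is attached.

```python
# Minimal sketch (not the authors' code) of a "recover, then predict" pipeline.
# All module names, shapes, and the mask convention are assumptions for illustration.
import torch
import torch.nn as nn


class TrajectoryRecovery(nn.Module):
    """Reconstructs missing steps of an agent's history from the observed steps."""

    def __init__(self, feat_dim: int = 64, num_layers: int = 2):
        super().__init__()
        self.encoder = nn.GRU(input_size=feat_dim, hidden_size=feat_dim,
                              num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(feat_dim, 2)  # regress (x, y) for every timestep

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: [batch, time, feat_dim], with missing steps zero-filled upstream
        hidden, _ = self.encoder(tokens)
        return self.head(hidden)            # [batch, time, 2] reconstructed positions


class RecoverThenPredict(nn.Module):
    """Recover intact histories first, then run any downstream motion predictor."""

    def __init__(self, recovery: nn.Module, predictor: nn.Module, embed: nn.Module):
        super().__init__()
        self.embed = embed          # maps raw (x, y, valid) inputs to tokens
        self.recovery = recovery
        self.predictor = predictor  # stand-in for an existing prediction model

    def forward(self, xy: torch.Tensor, valid: torch.Tensor) -> torch.Tensor:
        # xy:    [batch, time, 2] observed positions, zeros where missing
        # valid: [batch, time]    1 for observed steps, 0 for missing steps
        tokens = self.embed(torch.cat([xy, valid.unsqueeze(-1)], dim=-1))
        recon = self.recovery(tokens)
        # Keep observed steps as-is; fill in only the missing ones.
        clean_xy = torch.where(valid.unsqueeze(-1).bool(), xy, recon)
        return self.predictor(clean_xy)


# Example wiring with toy placeholder modules:
# model = RecoverThenPredict(
#     recovery=TrajectoryRecovery(feat_dim=64),
#     predictor=nn.Linear(2, 2),   # stand-in for a real motion predictor
#     embed=nn.Linear(3, 64),      # stand-in for the scene tokenization module
# )
```

In practice a recovery stage like this would typically be trained with a masked-reconstruction loss on the hidden steps; and, as the abstract notes, because the recovery output is simply a cleaned history, it can be fed to an existing prediction model without modifying that model.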