Annealed Winner-Takes-All for Motion Forecasting
Yihong Xu, Victor Letzelter, Mickaël Chen, Éloi Zablocki, Matthieu Cord
arXiv:2409.11172 (arXiv - CS - Robotics), published 2024-09-17
Abstract
In autonomous driving, motion prediction aims to forecast the future trajectories of nearby agents, helping the ego vehicle anticipate their behaviors and drive safely. A key challenge is generating a diverse set of future predictions, commonly addressed using data-driven models with Multiple Choice Learning (MCL) architectures and Winner-Takes-All (WTA) training objectives. However, these methods suffer from initialization sensitivity and training instabilities. Additionally, to compensate for limited performance, some approaches rely on training with a large set of hypotheses, requiring a post-selection step during inference to significantly reduce the number of predictions. To tackle these issues, we take inspiration from annealed MCL, a recently introduced technique that improves the convergence properties of MCL methods through an annealed Winner-Takes-All loss (aWTA). In this paper, we demonstrate how the aWTA loss can be integrated with state-of-the-art motion forecasting models to enhance their performance using only a minimal set of hypotheses, eliminating the need for the cumbersome post-selection step. Our approach can be easily incorporated into any trajectory prediction model normally trained using WTA and yields significant improvements. To facilitate the application of our approach to future motion forecasting models, the code will be made publicly available upon acceptance: https://github.com/valeoai/MF_aWTA.
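To connect the abstract's description to code, below is a minimal PyTorch sketch of a WTA objective and an annealed variant applied to trajectory hypotheses. This is an illustrative sketch, not the authors' implementation (see the linked repository for that): the function names, the per-hypothesis L2 error, the stop-gradient on the soft weights, and the exponential temperature schedule are assumptions made here for clarity.

```python
import torch


def awta_loss(pred_trajs, gt_traj, temperature):
    """Annealed Winner-Takes-All loss (illustrative sketch).

    pred_trajs:  (B, K, T, 2) -- K hypothesis trajectories per agent
    gt_traj:     (B, T, 2)    -- ground-truth future trajectory
    temperature: float, annealed toward 0 during training
    """
    # Per-hypothesis L2 error, averaged over time steps: shape (B, K).
    errors = torch.linalg.norm(
        pred_trajs - gt_traj.unsqueeze(1), dim=-1
    ).mean(dim=-1)

    if temperature <= 0:
        # Limit case: standard WTA, only the best hypothesis receives gradient.
        return errors.min(dim=-1).values.mean()

    # Soft assignment: lower-error hypotheses receive larger weights.
    # Detaching the weights (EM-style soft assignment) is an assumed design
    # choice here, not necessarily what the paper does.
    weights = torch.softmax(-errors.detach() / temperature, dim=-1)
    return (weights * errors).sum(dim=-1).mean()


def temperature_at(epoch, t_init=10.0, decay=0.95):
    """Simple exponential annealing schedule (assumed, not from the paper)."""
    return t_init * (decay ** epoch)
```

At high temperature, every hypothesis receives gradient and the objective behaves like an average loss over the set; as the temperature decays toward zero, the weighting concentrates on the best hypothesis and the loss recovers standard WTA. This gradual transition is the mechanism the abstract credits with improved convergence using only a small number of hypotheses, without a post-selection step.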