{"title":"Gate-Calibrated Double Disentangled Distribution Matching Network for Cross-Domain Pedestrian Trajectory Prediction","authors":"Zhengfa Liu;Ya Wu;Dequan Zeng;Shihang Du;Boyang Peng","doi":"10.1109/LSP.2024.3521786","DOIUrl":null,"url":null,"abstract":"In cross-domain pedestrian trajectory prediction, most existing methods usually focus on learning entangled spatial-temporal domain-invariant features, while ignoring the different contributions of spatial and temporal shifts to the prediction model. To address this issue, we propose a novel gate-Calibrated Double Disentangled Distribution Matching Network (CD<inline-formula><tex-math>$^{3}$</tex-math></inline-formula>MN) that can effectively eliminate cross-trajectory domain shifts at both the spatial and temporal levels while learning robust prediction using a calibrated gated-fusion. The key idea of CD<inline-formula><tex-math>$^{3}$</tex-math></inline-formula>MN is to model domain-invariant features across trajectories as a calibrated gated-fusion of disentangled domain-invariant features at the temporal and spatial levels. We first introduce a spatial-temporal disentanglement module to disentangle the spatial-temporal properties of pedestrian trajectories from the spatial-level and temporal-level. Secondly, we design a domain-invariant disentanglement module for learning domain-invariant sample-level transferable feature representations at the spatial and temporal levels. Finally, to effectively fuse these disentangled temporal and spatial features, we design a calibrated gated-fusion module where both inter-level and intra-level knowledge are introduced to calibrate the fusion gate. Extensive experiments on real datasets demonstrate the effectiveness of CD<inline-formula><tex-math>$^{3}$</tex-math></inline-formula>MN.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"656-660"},"PeriodicalIF":3.2000,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10812777/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
In cross-domain pedestrian trajectory prediction, most existing methods focus on learning entangled spatial-temporal domain-invariant features, ignoring the different contributions that spatial and temporal shifts make to the prediction model. To address this issue, we propose a novel gate-Calibrated Double Disentangled Distribution Matching Network (CD$^{3}$MN) that effectively eliminates cross-trajectory domain shifts at both the spatial and temporal levels while learning robust predictions through calibrated gated fusion. The key idea of CD$^{3}$MN is to model domain-invariant features across trajectories as a calibrated gated fusion of disentangled domain-invariant features at the temporal and spatial levels. First, we introduce a spatial-temporal disentanglement module that separates the spatial and temporal properties of pedestrian trajectories. Second, we design a domain-invariant disentanglement module that learns sample-level transferable, domain-invariant feature representations at the spatial and temporal levels. Finally, to effectively fuse these disentangled temporal and spatial features, we design a calibrated gated-fusion module in which both inter-level and intra-level knowledge are introduced to calibrate the fusion gate. Extensive experiments on real datasets demonstrate the effectiveness of CD$^{3}$MN.
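The abstract describes the calibrated gated-fusion module only at a high level. As a rough illustration of the underlying gating idea, below is a minimal PyTorch sketch that fuses disentangled spatial- and temporal-level features through a learned sigmoid gate. The class name, feature shapes, and the plain gate (which omits the paper's inter-level and intra-level calibration knowledge) are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Hypothetical sketch: fuse spatial- and temporal-level features via a learned gate."""

    def __init__(self, dim: int):
        super().__init__()
        # The gate sees both feature levels jointly, loosely mirroring the idea
        # of using cross-level knowledge to decide how much each level contributes.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, spatial_feat: torch.Tensor, temporal_feat: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) weights each feature dimension between the two levels.
        g = self.gate(torch.cat([spatial_feat, temporal_feat], dim=-1))
        return g * spatial_feat + (1.0 - g) * temporal_feat

# Usage: fuse features for a batch of 8 pedestrians with width-64 embeddings.
fusion = GatedFusion(dim=64)
fused = fusion(torch.randn(8, 64), torch.randn(8, 64))
```

A convex per-dimension combination like this lets the network lean on whichever level (spatial or temporal) transfers better for a given sample, which is the role the paper assigns to its fusion gate; the paper's calibration of that gate goes beyond this sketch.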
Journal Introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.