DSAF: Dual Space Alignment Framework for Visible-Infrared Person Re-Identification

Yan Jiang, Xu Cheng, Hao Yu, Xingyu Liu, Haoyu Chen, Guoying Zhao

IEEE Transactions on Multimedia, vol. 27, pp. 5591-5603. Published 2025-02-21. DOI: 10.1109/TMM.2025.3542988
Abstract
Visible-infrared person re-identification (VI-ReID) is a cross-modality retrieval task that aims to match visible and infrared pedestrian images across non-overlapping cameras. However, we observe that three crucial challenges remain inadequately addressed by existing methods: (i) limited discriminative capacity of modality-shared representations, (ii) modality misalignment, and (iii) neglect of identity-consistency knowledge. To address these issues, we propose a novel dual space alignment framework (DSAF) that constrains the modalities in two specific spaces. Specifically, for (i), we design a lightweight, plug-and-play modality invariant enhancement (MIE) module that captures fine-grained semantic information and enhances identity discriminability. This facilitates the establishment of correlations between the visible and infrared modalities, enabling the model to learn robust modality-shared features. To tackle (ii), a dual space alignment (DSA) is introduced to conduct pixel-level alignment in both Euclidean space and Hilbert space. DSA establishes an elastic relationship between these two spaces, retaining invariant knowledge across both. To solve (iii), we propose an adaptive identity-consistent learning (AIL) scheme that discovers identity-consistent knowledge between the visible and infrared modalities in a dynamic manner. Extensive experiments on mainstream VI-ReID benchmarks demonstrate the superiority and flexibility of the proposed method, which achieves competitive performance across these datasets.
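The abstract's dual-space alignment pairs a Euclidean-space criterion with a Hilbert-space one. As an illustrative sketch only (the paper's exact losses are not given here), a common way to realize such a pairing is an L2 distance between modality centroids in Euclidean space plus a maximum mean discrepancy (MMD) between the two feature distributions, computed with an RBF kernel in a reproducing kernel Hilbert space. All function names and the loss combination below are assumptions, not the authors' implementation:

```python
import numpy as np

def euclidean_alignment(vis, ir):
    # Squared L2 distance between the modality centroids (Euclidean space).
    return float(np.sum((vis.mean(axis=0) - ir.mean(axis=0)) ** 2))

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd_alignment(vis, ir, sigma=1.0):
    # Squared MMD: distance between the kernel mean embeddings of the two
    # modalities in the RKHS induced by the RBF kernel (Hilbert space).
    return float(rbf_kernel(vis, vis, sigma).mean()
                 + rbf_kernel(ir, ir, sigma).mean()
                 - 2.0 * rbf_kernel(vis, ir, sigma).mean())

# Toy features: the infrared batch is mean-shifted to mimic a modality gap.
rng = np.random.default_rng(0)
vis = rng.normal(0.0, 1.0, size=(64, 16))
ir = rng.normal(0.5, 1.0, size=(64, 16))

# A hypothetical combined alignment objective over the two spaces.
total_loss = euclidean_alignment(vis, ir) + mmd_alignment(vis, ir)
```

Both terms vanish when the two modalities share a distribution, so minimizing their sum pulls the modalities together at both the centroid level and the full-distribution level; the relative weighting of the two terms would be a tunable hyperparameter.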
About the journal:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.