{"title":"Progressive Cross-Modal Association Learning for Unsupervised Visible-Infrared Person Re-Identification","authors":"Yiming Yang;Weipeng Hu;Haifeng Hu","doi":"10.1109/TIFS.2025.3527356","DOIUrl":null,"url":null,"abstract":"Unsupervised visible-infrared person re-identification (USL-VI-ReID) aims to explore the cross-modal associations and learn modality-invariant representations without manual labels. The field provides flexible and economical methods for person re-identification across light and dark scenes. Existing approaches utilize cluster-level strong association methods, such as graph matching and optimal transport, to correlate modal differences, which may result in mis-linking between clusters and introduce noise. To overcome this limitation and gradually acquire reliable cross-modal associations, we propose a Progressive Cross-modal Association Learning (PCAL) method for USL-VI-ReID. Specifically, our PCAL naturally integrates Triple-modal Adversarial Learning (TAL), Cross-modal Neighbor Expansion (CNE) and Modality-invariant Contrastive Learning (MCL) into a unified framework. TAL fully utilizes the advantage of Channel Augmented (CA) technique to reduce modal differences, which facilitates subsequent mining of cross-modal associations. Furthermore, we identify the modal bias problem in existing clustering methods, which hinders the effective establishment of cross-modal associations. To address this problem, CNE is proposed to balance the contribution of cross-modal neighbor information, linking potential cross-modal neighbors as much as possible. Finally, MCL is then introduced to refine the cross-modal associations and learn modality-invariant representations. Extensive experiments on SYSU-MM01 and RegDB datasets demonstrate the competitive performance of PCAL method. Code is available at <uri>https://github.com/YimingYang23/PCA_USLVIReID</uri>.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"1290-1304"},"PeriodicalIF":6.3000,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10833701/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Abstract
Unsupervised visible-infrared person re-identification (USL-VI-ReID) aims to explore cross-modal associations and learn modality-invariant representations without manual labels, providing a flexible and economical route to person re-identification across well-lit and dark scenes. Existing approaches rely on strong cluster-level association methods, such as graph matching and optimal transport, to bridge modality differences, which may mis-link clusters and introduce noise. To overcome this limitation and gradually acquire reliable cross-modal associations, we propose a Progressive Cross-modal Association Learning (PCAL) method for USL-VI-ReID. Specifically, PCAL integrates Triple-modal Adversarial Learning (TAL), Cross-modal Neighbor Expansion (CNE), and Modality-invariant Contrastive Learning (MCL) into a unified framework. TAL fully exploits the Channel Augmented (CA) technique to reduce modality differences, facilitating the subsequent mining of cross-modal associations. Furthermore, we identify a modality bias problem in existing clustering methods that hinders the effective establishment of cross-modal associations. To address this problem, CNE balances the contribution of cross-modal neighbor information, linking as many potential cross-modal neighbors as possible. Finally, MCL refines the cross-modal associations and learns modality-invariant representations. Extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate the competitive performance of PCAL. Code is available at https://github.com/YimingYang23/PCA_USLVIReID.
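The abstract's three components can be illustrated with short sketches. First, the Channel Augmented (CA) technique used by TAL: a minimal sketch assuming CA means randomly replicating one RGB channel across all three to mimic the single-channel appearance of infrared images. The function names and augmentation policy here are illustrative assumptions, not the authors' implementation (see the released code for the actual method).

```python
import random
import torch

def channel_augment(rgb: torch.Tensor) -> torch.Tensor:
    """Replace all three channels of a CxHxW RGB image with one randomly
    chosen channel, mimicking single-channel infrared imagery."""
    c = random.randrange(3)                       # pick R, G, or B
    return rgb[c:c + 1].expand(3, -1, -1).clone()

def maybe_augment(rgb: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """With probability p, return the channel-augmented view; augmented
    images can act as a third 'modality' bridging visible and infrared."""
    return channel_augment(rgb) if random.random() < p else rgb
```

For CNE, one plausible reading is a mutual (k-reciprocal) neighbor criterion: link a visible sample and an infrared sample only when each appears in the other's top-k cross-modal neighbor list. The sketch below is an assumption about how such linking could work, not the paper's exact rule.

```python
import torch
import torch.nn.functional as F

def mutual_cross_modal_neighbors(vis: torch.Tensor, ir: torch.Tensor, k: int = 10):
    """Return (visible_idx, infrared_idx) pairs that are mutual top-k
    cross-modal neighbors under cosine similarity."""
    vis = F.normalize(vis, dim=1)                 # Nv x D visible features
    ir = F.normalize(ir, dim=1)                   # Ni x D infrared features
    sim = vis @ ir.t()                            # Nv x Ni cosine similarities
    v2i = sim.topk(k, dim=1).indices              # IR neighbors of each visible sample
    i2v = sim.topk(k, dim=0).indices.t()          # visible neighbors of each IR sample
    links = []
    for v in range(vis.size(0)):
        for i in v2i[v].tolist():
            if v in i2v[i].tolist():              # keep only mutual agreements
                links.append((v, i))
    return links
```

Finally, MCL presumably optimizes a contrastive objective over the linked cross-modal clusters. A generic cluster-memory InfoNCE loss, standard in unsupervised ReID, is shown here only as a stand-in for the paper's loss.

```python
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(feats: torch.Tensor, pseudo_labels: torch.Tensor,
                             centroids: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Pull each feature toward its pseudo-cluster centroid and push it away
    from all other centroids (InfoNCE over a K x D centroid memory)."""
    logits = F.normalize(feats, dim=1) @ F.normalize(centroids, dim=1).t() / tau
    return F.cross_entropy(logits, pseudo_labels)
```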
Journal Description
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.