{"title":"基于随机灰度替代的强弱协同学习跨模态人物再识别","authors":"Zexin Zhang","doi":"10.1002/cpe.70101","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Visible-infrared person re-identification (VI-ReID) is a rapidly emerging cross-modality matching problem that aims to identify the same individual across daytime visible modality and nighttime thermal modality. Existing state-of-the-art methods predominantly focus on leveraging image generation techniques to create cross-modality images or on designing diverse feature-level constraints to align feature distributions between heterogeneous data. However, challenges arising from color variations caused by differences in the imaging processes of spectrum cameras remain unresolved, leading to suboptimal feature representations. In this paper, we propose a simple yet highly effective data augmentation technique called Random Grayscale Region Substitution (RGRS) for the cross-modality matching task. RGRS operates by randomly selecting a rectangular region within a training sample and converting it to grayscale. This process generates training images that integrate varying levels of visible and channel-independent information, thereby mitigating overfitting and enhancing the model's robustness to color variations. In addition, we design a weighted regularized triplet loss function for cross-modality metric learning and a weak–strong synergy learning strategy to improve the performance of cross-modal matching. We validate the effectiveness of our approach through extensive experiments conducted on publicly available cross-modality Re-ID datasets, including SYSU-MM01 and RegDB. The experimental results demonstrate that our proposed method significantly improves accuracy, making it a valuable training trick for advancing VT-ReID research.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 12-14","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Weak–Strong Synergy Learning With Random Grayscale Substitution for Cross-Modality Person Re-Identification\",\"authors\":\"Zexin Zhang\",\"doi\":\"10.1002/cpe.70101\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Visible-infrared person re-identification (VI-ReID) is a rapidly emerging cross-modality matching problem that aims to identify the same individual across daytime visible modality and nighttime thermal modality. Existing state-of-the-art methods predominantly focus on leveraging image generation techniques to create cross-modality images or on designing diverse feature-level constraints to align feature distributions between heterogeneous data. However, challenges arising from color variations caused by differences in the imaging processes of spectrum cameras remain unresolved, leading to suboptimal feature representations. In this paper, we propose a simple yet highly effective data augmentation technique called Random Grayscale Region Substitution (RGRS) for the cross-modality matching task. RGRS operates by randomly selecting a rectangular region within a training sample and converting it to grayscale. This process generates training images that integrate varying levels of visible and channel-independent information, thereby mitigating overfitting and enhancing the model's robustness to color variations. 
In addition, we design a weighted regularized triplet loss function for cross-modality metric learning and a weak–strong synergy learning strategy to improve the performance of cross-modal matching. We validate the effectiveness of our approach through extensive experiments conducted on publicly available cross-modality Re-ID datasets, including SYSU-MM01 and RegDB. The experimental results demonstrate that our proposed method significantly improves accuracy, making it a valuable training trick for advancing VT-ReID research.</p>\\n </div>\",\"PeriodicalId\":55214,\"journal\":{\"name\":\"Concurrency and Computation-Practice & Experience\",\"volume\":\"37 12-14\",\"pages\":\"\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2025-04-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Concurrency and Computation-Practice & Experience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70101\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70101","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Weak–Strong Synergy Learning With Random Grayscale Substitution for Cross-Modality Person Re-Identification
Visible-infrared person re-identification (VI-ReID) is a rapidly emerging cross-modality matching problem that aims to identify the same individual across the daytime visible modality and the nighttime thermal modality. Existing state-of-the-art methods predominantly focus on leveraging image generation techniques to create cross-modality images, or on designing diverse feature-level constraints to align feature distributions between heterogeneous data. However, challenges arising from color variations caused by differences in the imaging processes of the two camera spectra remain unresolved, leading to suboptimal feature representations. In this paper, we propose a simple yet highly effective data augmentation technique called Random Grayscale Region Substitution (RGRS) for the cross-modality matching task. RGRS randomly selects a rectangular region within a training sample and converts it to grayscale. This generates training images that mix varying levels of visible-spectrum and channel-independent information, thereby mitigating overfitting and enhancing the model's robustness to color variations. In addition, we design a weighted regularized triplet loss for cross-modality metric learning and a weak–strong synergy learning strategy to improve cross-modality matching performance. We validate the effectiveness of our approach through extensive experiments on the publicly available cross-modality Re-ID datasets SYSU-MM01 and RegDB. The experimental results demonstrate that the proposed method significantly improves accuracy, making it a valuable training technique for advancing VI-ReID research.
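The abstract describes RGRS only at a high level: pick a random rectangle in a training image and replace it with its grayscale version. A minimal sketch of that idea is shown below; the function name and the sampling hyperparameters (probability p, area and aspect-ratio ranges, borrowed from Random Erasing-style augmentation) are illustrative assumptions, not the paper's published settings.

```python
import random
import numpy as np
from PIL import Image

def random_grayscale_region_substitution(img, p=0.5,
                                         area_ratio=(0.02, 0.4),
                                         aspect_ratio=(0.3, 3.3)):
    """Sketch of RGRS: replace a random rectangle of an RGB image
    with its grayscale version, replicated across all three channels."""
    if random.random() > p:
        return img                        # apply with probability p
    arr = np.asarray(img).copy()          # H x W x 3, uint8, assumes RGB input
    h, w = arr.shape[:2]
    for _ in range(100):                  # retry until a valid rectangle fits
        area = random.uniform(*area_ratio) * h * w
        ratio = random.uniform(*aspect_ratio)
        rh = int(round((area * ratio) ** 0.5))
        rw = int(round((area / ratio) ** 0.5))
        if 0 < rh < h and 0 < rw < w:
            y = random.randint(0, h - rh)
            x = random.randint(0, w - rw)
            patch = arr[y:y + rh, x:x + rw].astype(np.float32)
            # ITU-R BT.601 luma, broadcast back onto the three channels
            gray = patch @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
            arr[y:y + rh, x:x + rw] = gray[..., None].astype(np.uint8)
            break
    return Image.fromarray(arr)
```

The grayscale patch keeps structure and texture but discards hue, so the network sees training samples whose color cues are only partially reliable, which is the stated mechanism for robustness to cross-spectrum color variation.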
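The abstract likewise does not give the exact form of the weighted regularized triplet loss. A common formulation in the Re-ID literature (the weighted regularization triplet of Ye et al.) replaces hard mining with a softmax weighting over all positive distances and a softmin weighting over all negative distances. The PyTorch sketch below follows that formulation under stated assumptions: it presumes a PK-sampled batch (every anchor has at least one positive), and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def weighted_regularized_triplet(features: torch.Tensor,
                                 labels: torch.Tensor) -> torch.Tensor:
    """Soft-weighted triplet loss over a batch of embeddings.

    features: (N, D) embeddings; labels: (N,) identity labels.
    Assumes a PK-sampled batch so every anchor has >= 1 positive.
    """
    dist = torch.cdist(features, features)            # (N, N) Euclidean distances
    n = labels.size(0)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(n, dtype=torch.bool, device=labels.device)
    pos = same & ~eye          # positives: same identity, different sample
    neg = ~same                # negatives: different identity
    # Soft weights: far positives and near negatives dominate.
    w_pos = dist.masked_fill(~pos, float('-inf')).softmax(dim=1)
    w_neg = (-dist).masked_fill(~neg, float('-inf')).softmax(dim=1)
    pos_term = (w_pos * dist).sum(dim=1)   # weighted mean positive distance
    neg_term = (w_neg * dist).sum(dim=1)   # weighted mean negative distance
    # softplus(x) = log(1 + e^x): soft-margin version of the triplet hinge.
    return F.softplus(pos_term - neg_term).mean()
```

Because the weighting is a softmax over distances rather than a hard argmax, the loss remains differentiable with respect to every pair in the batch, which is the usual motivation for this relaxation over hard-mined triplets.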
Journal Introduction:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality original research papers and authoritative research review papers in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.