{"title":"基于模态发散的无监督可见-红外跨模态人再识别进化学习","authors":"Yuxuan Liu , Hongwei Ge , Yong Luo , Chunguo Wu","doi":"10.1016/j.inffus.2025.103706","DOIUrl":null,"url":null,"abstract":"<div><div>Unsupervised visible-infrared cross-modality person re-identification aims to learn cross-modality invariant features between visible and infrared modalities without relying on labeled data. Currently, the state-of-the-art methods optimize cross-modality differences by reducing intra-class gaps while expanding inter-class gaps as the underlying paradigm. However, since the cross-modality intra-class gaps are huge, there must be a large number of inter-class instances between the gaps, and such inter-class instances make cross-modality intra-class instances difficult to get closer to each other in the feature space. To this end, we propose a modality divergence based evolutionary learning framework to optimize the cross-modality intra- and inter-class instance distribution. Specifically, on the one hand, we explore the optimization directions of each cluster in two modalities and make the explored attack and defense clusters perform mutual adversarial evolutionary learning through selection, crossover, and mutation, which produces the optimal inter-class distribution. On the other hand, we explore the intra-class instances with maximum and minimum similarity and perform mutual evolutionary optimization between the maximum and minimum instances, which retains only the modality changes in the intra-class instances to learn cross-modality invariant features. Extensive experiments conducted on datasets for visible-infrared person re-identification demonstrate that the proposed approach outperforms current state-of-the-art methods.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"126 ","pages":"Article 103706"},"PeriodicalIF":15.5000,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Modality divergence based evolutionary learning for unsupervised visible-infrared cross-modality person re-identification\",\"authors\":\"Yuxuan Liu , Hongwei Ge , Yong Luo , Chunguo Wu\",\"doi\":\"10.1016/j.inffus.2025.103706\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Unsupervised visible-infrared cross-modality person re-identification aims to learn cross-modality invariant features between visible and infrared modalities without relying on labeled data. Currently, the state-of-the-art methods optimize cross-modality differences by reducing intra-class gaps while expanding inter-class gaps as the underlying paradigm. However, since the cross-modality intra-class gaps are huge, there must be a large number of inter-class instances between the gaps, and such inter-class instances make cross-modality intra-class instances difficult to get closer to each other in the feature space. To this end, we propose a modality divergence based evolutionary learning framework to optimize the cross-modality intra- and inter-class instance distribution. Specifically, on the one hand, we explore the optimization directions of each cluster in two modalities and make the explored attack and defense clusters perform mutual adversarial evolutionary learning through selection, crossover, and mutation, which produces the optimal inter-class distribution. 
On the other hand, we explore the intra-class instances with maximum and minimum similarity and perform mutual evolutionary optimization between the maximum and minimum instances, which retains only the modality changes in the intra-class instances to learn cross-modality invariant features. Extensive experiments conducted on datasets for visible-infrared person re-identification demonstrate that the proposed approach outperforms current state-of-the-art methods.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"126 \",\"pages\":\"Article 103706\"},\"PeriodicalIF\":15.5000,\"publicationDate\":\"2025-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S156625352500778X\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S156625352500778X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Unsupervised visible-infrared cross-modality person re-identification aims to learn features that are invariant across the visible and infrared modalities without relying on labeled data. Current state-of-the-art methods treat reducing intra-class gaps while enlarging inter-class gaps as the underlying paradigm for optimizing cross-modality differences. However, because the cross-modality intra-class gaps are large, many inter-class instances inevitably lie within them, and these instances prevent cross-modality intra-class instances from drawing closer to one another in the feature space. To address this, we propose a modality-divergence-based evolutionary learning framework that optimizes the cross-modality intra- and inter-class instance distributions. On the one hand, we explore the optimization direction of each cluster in the two modalities and let the resulting attack and defense clusters perform mutual adversarial evolutionary learning through selection, crossover, and mutation, which yields the optimal inter-class distribution. On the other hand, we identify the intra-class instances with maximum and minimum similarity and perform mutual evolutionary optimization between them, so that only the modality changes within intra-class instances remain and cross-modality invariant features can be learned. Extensive experiments on visible-infrared person re-identification datasets demonstrate that the proposed approach outperforms current state-of-the-art methods.
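The two components described above can be pictured with a minimal sketch. The code below is written from the abstract alone: the function names (`evolve_clusters`, `intra_class_pairs`), the centroid representation, and every hyper-parameter are illustrative assumptions, not the authors' implementation. The first function runs one selection/crossover/mutation step between two sets of cluster centroids treated as attack and defense populations; the second locates, for each pseudo-class, the most- and least-similar instance pair that a mutual evolutionary optimization step would act on.

```python
# Hypothetical sketch only: nothing below is taken from the paper's code; the
# function names, centroid representation, and hyper-parameters are assumptions
# made to illustrate the two ideas stated in the abstract.
import numpy as np


def evolve_clusters(attack, defense, rng, mutation_scale=0.05, keep=0.5):
    """One adversarial evolutionary step between two centroid populations.

    attack, defense: (K, D) arrays of L2-normalised cluster centroids, one per
    modality. Selection keeps the attack centroids farthest from their nearest
    defense centroid, crossover blends random parent pairs, and mutation adds
    small Gaussian noise before renormalising.
    """
    # Selection: distance of each attack centroid to its closest defense centroid.
    dists = np.linalg.norm(attack[:, None, :] - defense[None, :, :], axis=-1).min(axis=1)
    parents = attack[np.argsort(-dists)[: max(2, int(keep * len(attack)))]]

    # Crossover: convex combinations of randomly chosen parent pairs.
    ia = rng.integers(0, len(parents), size=len(attack))
    ib = rng.integers(0, len(parents), size=len(attack))
    alpha = rng.random((len(attack), 1))
    children = alpha * parents[ia] + (1.0 - alpha) * parents[ib]

    # Mutation: small Gaussian perturbation, then project back to the unit sphere.
    children += mutation_scale * rng.standard_normal(children.shape)
    return children / np.linalg.norm(children, axis=1, keepdims=True)


def intra_class_pairs(feats, labels):
    """Locate, per pseudo-class, the instance pairs with maximum and minimum
    cosine similarity, i.e. the pairs a mutual optimization step would act on."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    pairs = {}
    for c in np.unique(labels):
        f = feats[labels == c]
        if len(f) < 2:
            continue
        sim = f @ f.T
        np.fill_diagonal(sim, np.nan)  # ignore self-similarity
        pairs[c] = {
            "max_pair": np.unravel_index(np.nanargmax(sim), sim.shape),
            "min_pair": np.unravel_index(np.nanargmin(sim), sim.shape),
        }
    return pairs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis = rng.standard_normal((8, 16))
    ir = rng.standard_normal((8, 16))
    vis /= np.linalg.norm(vis, axis=1, keepdims=True)
    ir /= np.linalg.norm(ir, axis=1, keepdims=True)
    print(evolve_clusters(vis, ir, rng).shape)  # (8, 16): evolved visible centroids
```

In the actual method the evolved centroids and the located pairs would presumably feed back into the feature-learning objective; that coupling is not specified in the abstract, so it is omitted here.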
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.