{"title":"Decoupling feature-driven and multimodal fusion attention for clothing-changing person re-identification","authors":"Yongkang Ding, Xiaoyin Wang, Hao Yuan, Meina Qu, Xiangzhou Jian","doi":"10.1007/s10462-025-11250-6","DOIUrl":null,"url":null,"abstract":"<div><p>Person Re-Identification (ReID) plays a crucial role in intelligent surveillance, public safety, and intelligent transportation systems. However, clothing variation remains a significant challenge in this field. To address this issue, this paper introduces a method named Decoupling Feature-Driven and Multimodal Fusion Attention for Clothing-Changing Person Re-Identification (DM-ReID). The proposed approach employs a dual-stream feature extraction framework, consisting of a global RGB image feature stream and a clothing-irrelevant feature enhancement stream. These streams respectively capture comprehensive appearance information and identity features independent of clothing. Additionally, two feature fusion strategies are proposed: firstly, an initial fusion of RGB features and clothing-irrelevant features is achieved through the Hadamard product in the mid-network stage to enhance feature complementarity; secondly, a multimodal fusion attention mechanism is integrated at the network’s end to dynamically adjust feature weights, further improving feature representation capabilities. To optimize model performance, a composite loss function combining identity loss and triplet loss is utilized, effectively enhancing the model’s discriminative ability and feature distinctiveness. Experimental results on multiple public datasets, including PRCC, LTCC, and VC-Clothes, demonstrate that DM-ReID surpasses most existing mainstream methods in Rank-1 accuracy and mean Average Precision (mAP) metrics under clothing-changing scenarios. 
These findings validate the method’s effectiveness and robustness in handling complex clothing variations, highlighting its promising prospects for practical applications.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 8","pages":""},"PeriodicalIF":10.7000,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11250-6.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11250-6","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Person Re-Identification (ReID) plays a crucial role in intelligent surveillance, public safety, and intelligent transportation systems. However, clothing variation remains a significant challenge in this field. To address this issue, this paper introduces a method named Decoupling Feature-Driven and Multimodal Fusion Attention for Clothing-Changing Person Re-Identification (DM-ReID). The proposed approach employs a dual-stream feature extraction framework, consisting of a global RGB image feature stream and a clothing-irrelevant feature enhancement stream. These streams respectively capture comprehensive appearance information and identity features independent of clothing. Additionally, two feature fusion strategies are proposed: firstly, an initial fusion of RGB features and clothing-irrelevant features is achieved through the Hadamard product in the mid-network stage to enhance feature complementarity; secondly, a multimodal fusion attention mechanism is integrated at the network’s end to dynamically adjust feature weights, further improving feature representation capabilities. To optimize model performance, a composite loss function combining identity loss and triplet loss is utilized, effectively enhancing the model’s discriminative ability and feature distinctiveness. Experimental results on multiple public datasets, including PRCC, LTCC, and VC-Clothes, demonstrate that DM-ReID surpasses most existing mainstream methods in Rank-1 accuracy and mean Average Precision (mAP) metrics under clothing-changing scenarios. These findings validate the method’s effectiveness and robustness in handling complex clothing variations, highlighting its promising prospects for practical applications.
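The fusion and loss components described in the abstract can be sketched in plain Python. This is an illustrative simplification under stated assumptions, not the paper's implementation: DM-ReID fuses high-dimensional CNN feature maps and learns its attention weights, whereas here the feature vectors and attention scores are small hand-supplied lists.

```python
import math

def hadamard_fuse(rgb_feat, cloth_irrelevant_feat):
    # Element-wise (Hadamard) product of the two feature streams,
    # corresponding to the mid-network fusion described in the abstract.
    return [a * b for a, b in zip(rgb_feat, cloth_irrelevant_feat)]

def softmax(scores):
    # Numerically stable softmax for normalising attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(features, scores):
    # Weighted sum of feature vectors using softmax-normalised weights:
    # a simplified stand-in for the multimodal fusion attention at the
    # network's end (in the paper, the scores are learned).
    weights = softmax(scores)
    dim = len(features[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, features):
        for i in range(dim):
            fused[i] += w * feat[i]
    return fused

def triplet_loss(anchor, positive, negative, margin=0.3):
    # Standard triplet loss with Euclidean distance; the paper combines
    # this with an identity (classification) loss into a composite loss.
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```

For example, `hadamard_fuse([1.0, 2.0], [3.0, 4.0])` yields `[3.0, 8.0]`, and the triplet loss is zero whenever the positive sample is already closer to the anchor than the negative by more than the margin.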
Journal information
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.