Decoupling feature-driven and multimodal fusion attention for clothing-changing person re-identification

IF 10.7 · CAS Zone 2, Computer Science · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yongkang Ding, Xiaoyin Wang, Hao Yuan, Meina Qu, Xiangzhou Jian
{"title":"Decoupling feature-driven and multimodal fusion attention for clothing-changing person re-identification","authors":"Yongkang Ding,&nbsp;Xiaoyin Wang,&nbsp;Hao Yuan,&nbsp;Meina Qu,&nbsp;Xiangzhou Jian","doi":"10.1007/s10462-025-11250-6","DOIUrl":null,"url":null,"abstract":"<div><p>Person Re-Identification (ReID) plays a crucial role in intelligent surveillance, public safety, and intelligent transportation systems. However, clothing variation remains a significant challenge in this field. To address this issue, this paper introduces a method named Decoupling Feature-Driven and Multimodal Fusion Attention for Clothing-Changing Person Re-Identification (DM-ReID). The proposed approach employs a dual-stream feature extraction framework, consisting of a global RGB image feature stream and a clothing-irrelevant feature enhancement stream. These streams respectively capture comprehensive appearance information and identity features independent of clothing. Additionally, two feature fusion strategies are proposed: firstly, an initial fusion of RGB features and clothing-irrelevant features is achieved through the Hadamard product in the mid-network stage to enhance feature complementarity; secondly, a multimodal fusion attention mechanism is integrated at the network’s end to dynamically adjust feature weights, further improving feature representation capabilities. To optimize model performance, a composite loss function combining identity loss and triplet loss is utilized, effectively enhancing the model’s discriminative ability and feature distinctiveness. Experimental results on multiple public datasets, including PRCC, LTCC, and VC-Clothes, demonstrate that DM-ReID surpasses most existing mainstream methods in Rank-1 accuracy and mean Average Precision (mAP) metrics under clothing-changing scenarios. These findings validate the method’s effectiveness and robustness in handling complex clothing variations, highlighting its promising prospects for practical applications.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 8","pages":""},"PeriodicalIF":10.7000,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11250-6.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11250-6","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Person Re-Identification (ReID) plays a crucial role in intelligent surveillance, public safety, and intelligent transportation systems. However, clothing variation remains a significant challenge in this field. To address this issue, this paper introduces a method named Decoupling Feature-Driven and Multimodal Fusion Attention for Clothing-Changing Person Re-Identification (DM-ReID). The proposed approach employs a dual-stream feature extraction framework, consisting of a global RGB image feature stream and a clothing-irrelevant feature enhancement stream. These streams respectively capture comprehensive appearance information and identity features independent of clothing. Additionally, two feature fusion strategies are proposed: firstly, an initial fusion of RGB features and clothing-irrelevant features is achieved through the Hadamard product in the mid-network stage to enhance feature complementarity; secondly, a multimodal fusion attention mechanism is integrated at the network’s end to dynamically adjust feature weights, further improving feature representation capabilities. To optimize model performance, a composite loss function combining identity loss and triplet loss is utilized, effectively enhancing the model’s discriminative ability and feature distinctiveness. Experimental results on multiple public datasets, including PRCC, LTCC, and VC-Clothes, demonstrate that DM-ReID surpasses most existing mainstream methods in Rank-1 accuracy and mean Average Precision (mAP) metrics under clothing-changing scenarios. These findings validate the method’s effectiveness and robustness in handling complex clothing variations, highlighting its promising prospects for practical applications.
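To make the fusion pipeline concrete, below is a minimal PyTorch sketch of the two strategies the abstract names: a mid-network fusion of the two streams via the Hadamard (element-wise) product, and an end-stage attention block that dynamically re-weights the streams. The encoder stand-ins, layer sizes, and the exact form of the attention are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch of the dual-stream design described in the abstract. All module
# names, backbones, and shapes are assumptions for illustration only.
import torch
import torch.nn as nn


class MultimodalFusionAttention(nn.Module):
    """Hypothetical attention block that re-weights the two feature streams."""

    def __init__(self, dim: int):
        super().__init__()
        # Score both streams from their concatenation; softmax over the two
        # streams so the fusion weights sum to one per sample.
        self.score = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 2)
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.score(torch.cat([a, b], dim=1)), dim=1)
        return w[:, 0:1] * a + w[:, 1:2] * b


class DualStreamReID(nn.Module):
    """Two encoders with mid-network Hadamard fusion and end-stage attention."""

    def __init__(self, dim: int = 512, num_ids: int = 1000):
        super().__init__()
        # Stand-ins for the global RGB encoder and the clothing-irrelevant
        # encoder (the real model would use CNN/Transformer backbones).
        self.rgb_stream = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        self.ci_stream = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        self.mid_proj = nn.Linear(dim, dim)                # continues the RGB branch
        self.fusion_attn = MultimodalFusionAttention(dim)  # end of the network
        self.classifier = nn.Linear(dim, num_ids)          # head for the identity loss

    def forward(self, rgb: torch.Tensor, ci: torch.Tensor):
        f_rgb = self.rgb_stream(rgb)   # global appearance features
        f_ci = self.ci_stream(ci)      # clothing-irrelevant features
        # Mid-network fusion: the Hadamard (element-wise) product enhances
        # complementarity between the two streams.
        f_rgb = self.mid_proj(f_rgb * f_ci)
        # End-stage attention dynamically re-weights the two streams.
        embedding = self.fusion_attn(f_rgb, f_ci)
        return embedding, self.classifier(embedding)


# Example usage with random tensors standing in for the two image inputs:
model = DualStreamReID()
emb, logits = model(torch.randn(4, 3, 64, 32), torch.randn(4, 3, 64, 32))
```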
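The composite objective can be sketched in the same spirit: a cross-entropy identity loss over the classifier logits plus a triplet loss over the embeddings. The weighting factor `lam` and the triplet margin are assumptions; the abstract does not give the exact combination.

```python
import torch.nn as nn

# Standard PyTorch losses stand in for the paper's identity and triplet terms.
id_criterion = nn.CrossEntropyLoss()
tri_criterion = nn.TripletMarginLoss(margin=0.3)  # margin value is illustrative


def composite_loss(logits, labels, anchor, positive, negative, lam=1.0):
    """Identity loss sharpens ID predictions; triplet loss pulls same-ID
    embeddings together and pushes different-ID embeddings apart."""
    return id_criterion(logits, labels) + lam * tri_criterion(anchor, positive, negative)
```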

Source journal

Artificial Intelligence Review (Engineering & Technology – Computer Science: Artificial Intelligence)

CiteScore: 22.00
Self-citation rate: 3.30%
Articles published per year: 194
Review time: 5.3 months

Journal description: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.