Cross-Modality Person Re-Identification via Modality Confusion and Center Aggregation

Xin Hao, Sanyuan Zhao, Mang Ye, Jianbing Shen
{"title":"Cross-Modality Person Re-Identification via Modality Confusion and Center Aggregation","authors":"Xin Hao, Sanyuan Zhao, Mang Ye, Jianbing Shen","doi":"10.1109/ICCV48922.2021.01609","DOIUrl":null,"url":null,"abstract":"Cross-modality person re-identification is a challenging task due to large cross-modality discrepancy and intramodality variations. Currently, most existing methods focus on learning modality-specific or modality-shareable features by using the identity supervision or modality label. Different from existing methods, this paper presents a novel Modality Confusion Learning Network (MCLNet). Its basic idea is to confuse two modalities, ensuring that the optimization is explicitly concentrated on the modality-irrelevant perspective. Specifically, MCLNet is designed to learn modality-invariant features by simultaneously minimizing inter-modality discrepancy while maximizing cross-modality similarity among instances in a single framework. Furthermore, an identity-aware marginal center aggregation strategy is introduced to extract the centralization features, while keeping diversity with a marginal constraint. Finally, we design a camera-aware learning scheme to enrich the discriminability. Extensive experiments on SYSU-MM01 and RegDB datasets show that MCLNet outperforms the state-of-the-art by a large margin. On the large-scale SYSU-MM01 dataset, our model can achieve 65.40 % and 61.98 % in terms of Rank-1 accuracy and mAP value.","PeriodicalId":6820,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision (ICCV)","volume":"35 1","pages":"16383-16392"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"67","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/CVF International Conference on Computer Vision (ICCV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCV48922.2021.01609","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 67

Abstract

Cross-modality person re-identification is a challenging task due to the large cross-modality discrepancy and intra-modality variations. Most existing methods focus on learning modality-specific or modality-shareable features using identity supervision or modality labels. Different from these methods, this paper presents a novel Modality Confusion Learning Network (MCLNet). Its basic idea is to confuse the two modalities, ensuring that optimization explicitly concentrates on the modality-irrelevant perspective. Specifically, MCLNet is designed to learn modality-invariant features by simultaneously minimizing inter-modality discrepancy and maximizing cross-modality similarity among instances within a single framework. Furthermore, an identity-aware marginal center aggregation strategy is introduced to extract centralized features while preserving diversity through a marginal constraint. Finally, we design a camera-aware learning scheme to enrich discriminability. Extensive experiments on the SYSU-MM01 and RegDB datasets show that MCLNet outperforms the state of the art by a large margin. On the large-scale SYSU-MM01 dataset, our model achieves 65.40% Rank-1 accuracy and 61.98% mAP.
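The abstract names two learnable objectives: a modality confusion term that suppresses modality-specific cues, and an identity-aware marginal center aggregation term that pulls features toward their identity centers while a margin preserves diversity. The PyTorch sketch below illustrates plausible forms of both losses; the margin value, the uniform-prediction confusion objective, and all function names here are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of two losses suggested by the abstract (assumed forms,
# not the authors' implementation).
import torch
import torch.nn.functional as F


def marginal_center_aggregation_loss(features, labels, margin=0.3):
    """Pull each feature toward its identity center, but only when it lies
    farther than `margin`, so some intra-identity diversity is preserved."""
    loss = features.new_zeros(())
    for pid in labels.unique():
        feats = features[labels == pid]             # all samples of one identity
        center = feats.mean(dim=0, keepdim=True)    # identity center
        dist = (feats - center).norm(dim=1)         # distance of each sample to the center
        loss = loss + F.relu(dist - margin).mean()  # penalize only beyond the margin
    return loss / labels.unique().numel()


def modality_confusion_loss(modality_logits):
    """Encourage the backbone to fool a two-way modality classifier:
    its predictions should approach a uniform distribution."""
    log_probs = F.log_softmax(modality_logits, dim=1)
    uniform = torch.full_like(log_probs, 0.5)       # uniform target over 2 modalities
    return F.kl_div(log_probs, uniform, reduction="batchmean")


if __name__ == "__main__":
    feats = torch.randn(8, 256)                     # pooled person features (toy data)
    pids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])   # identity labels
    mod_logits = torch.randn(8, 2)                  # visible-vs-infrared logits
    print(marginal_center_aggregation_loss(feats, pids).item())
    print(modality_confusion_loss(mod_logits).item())
```

In a full pipeline these terms would be weighted and combined with the usual identity classification and triplet losses; the weighting is not specified in the abstract.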