Semi-supervised cross-modality person re-identification based on pseudo label learning

IF 4.2 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Fei Wu , Ruixuan Zhou , Yang Gao , Yujian Feng , Qinghua Huang , Xiao-Yuan Jing
DOI: 10.1016/j.imavis.2025.105602
Journal: Image and Vision Computing, Volume 161, Article 105602
Published: 2025-06-16 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0262885625001908
Citations: 0

Abstract

Visible-infrared person re-identification (RGB-IR Re-ID) aims to find images of the same identity across different modalities. In practice, multiple persons and cameras can provide abundant training samples, and non-negligible modality differences make manual labeling of all samples impractical. How to accurately re-identify cross-modality pedestrians when training with few labeled samples and a large quantity of unlabeled samples is an important research question. However, person re-identification in this scenario, which we call Semi-Supervised Cross-Modality Re-ID (SSCM Re-ID), has not been well studied. In this paper, we propose a cross-modality pseudo label learning (CPL) framework for the SSCM Re-ID task. It consists of three modules: the feature mapping module, the identity alignment module, and the pseudo-label generation module. The feature mapping module is designed to extract shared discriminative features from modality-specific channels, followed by the identity alignment module, which aligns person identities jointly at the global level and the part level. Finally, the pseudo-label generation module selects samples with reliable pseudo labels from the unlabeled samples based on their confidence level. Moreover, we propose a dynamic center-based cross-entropy loss to constrain the distance between similar samples. Experiments on widely used cross-modality Re-ID datasets demonstrate that CPL achieves state-of-the-art SSCM Re-ID performance.
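The abstract describes selecting unlabeled samples "with reliable pseudo labels ... based on the confidence level." The paper's exact selection rule is not given here; a minimal sketch of the common confidence-thresholding approach (the `select_pseudo_labels` function, its `threshold` value, and the softmax-probability input are all illustrative assumptions, not the authors' implementation) might look like:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep unlabeled samples whose maximum class probability exceeds
    a confidence threshold; assign the argmax class as the pseudo label.
    Returns (indices of retained samples, their pseudo labels)."""
    probs = np.asarray(probs)
    conf = probs.max(axis=1)        # confidence = highest class probability
    labels = probs.argmax(axis=1)   # candidate pseudo label per sample
    keep = conf >= threshold        # reliability filter
    return np.flatnonzero(keep), labels[keep]

# Illustrative usage: three unlabeled samples, two identity classes.
probs = [[0.95, 0.05], [0.60, 0.40], [0.10, 0.90]]
idx, lab = select_pseudo_labels(probs, threshold=0.9)
# Only the two high-confidence samples survive the filter.
```

The threshold trades off pseudo-label coverage against noise: a higher value admits fewer but cleaner labels into the next training round.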
Source journal

Image and Vision Computing (Engineering Technology, Engineering: Electrical & Electronic)
CiteScore: 8.50
Self-citation rate: 8.50%
Annual publications: 143
Review time: 7.8 months
Journal description: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.