Joint Augmentation and Part Learning for Unsupervised Clothing Change Person Re-Identification

IF 8.0 | CAS Tier 1 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, THEORY & METHODS)
Zhiqi Pang;Lingling Zhao;Yang Liu;Gaurav Sharma;Chunyu Wang
{"title":"Joint Augmentation and Part Learning for Unsupervised Clothing Change Person Re-Identification","authors":"Zhiqi Pang;Lingling Zhao;Yang Liu;Gaurav Sharma;Chunyu Wang","doi":"10.1109/TIFS.2025.3550063","DOIUrl":null,"url":null,"abstract":"Clothing change person re-identification (CC-ReID) is a crucial task in intelligent surveillance, aiming to match images of the same person wearing different clothing. Promising performance in existing CC-ReID methods is achieved at the cost of labor-intensive manual annotation of identity labels. While some researchers have explored unsupervised CC-ReID, these methods still depend on additional deep learning models for preprocessing. To eliminate the need for additional models and improve performance, we propose a joint augmentation and part learning (JAPL) framework that obtains clothing change positive pairs in an unsupervised fashion by synergistically combining augmentation-based invariant learning (AugIL) and part-based invariant learning (ParIL). AugIL first constructs clothing change pseudo-positive pairs and then encourages the model to focus on clothing-invariant information by enhancing feature consistency between the pseudo-positive pairs. ParIL beneficially encourages high similarity between inter-cluster clothing change positive pair using part images and a prediction sharpening loss. PartIL also introduces a soft consistency loss that promotes clothing-invariant feature learning by encouraging consistency of class vectors between the real features actually used for CC-ReID and the part features. Experimental results on multiple ReID datasets demonstrate that the proposed JAPL not only surpasses existing unsupervised methods but also achieves competitive performance compared to some supervised CC-ReID methods.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"2944-2956"},"PeriodicalIF":8.0000,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10922132/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Clothing change person re-identification (CC-ReID) is a crucial task in intelligent surveillance, aiming to match images of the same person wearing different clothing. The promising performance of existing CC-ReID methods is achieved at the cost of labor-intensive manual annotation of identity labels. While some researchers have explored unsupervised CC-ReID, these methods still depend on additional deep learning models for preprocessing. To eliminate the need for additional models and improve performance, we propose a joint augmentation and part learning (JAPL) framework that obtains clothing change positive pairs in an unsupervised fashion by synergistically combining augmentation-based invariant learning (AugIL) and part-based invariant learning (ParIL). AugIL first constructs clothing change pseudo-positive pairs and then encourages the model to focus on clothing-invariant information by enhancing feature consistency between the pseudo-positive pairs. ParIL encourages high similarity between inter-cluster clothing change positive pairs using part images and a prediction sharpening loss. ParIL also introduces a soft consistency loss that promotes clothing-invariant feature learning by encouraging consistency of class vectors between the real features actually used for CC-ReID and the part features. Experimental results on multiple ReID datasets demonstrate that the proposed JAPL not only surpasses existing unsupervised methods but also achieves competitive performance compared to some supervised CC-ReID methods.
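
The paper's code is not reproduced here; as an illustration only, the following minimal PyTorch sketch shows the general idea behind the two objectives named in the abstract: a feature-consistency loss between an image and a clothing-augmented view (an AugIL-style pseudo-positive pair) and a prediction sharpening loss for cluster predictions (ParIL-style). The function names, the augmentation, and the temperature value are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the loss ideas described in the abstract.
# NOT the authors' implementation; all names and hyperparameters below
# (clothing_augment, temperature=0.5, etc.) are illustrative assumptions.
import torch
import torch.nn.functional as F

def consistency_loss(feat_orig, feat_aug):
    """AugIL-style feature consistency: pull an image and its
    clothing-augmented view (a pseudo-positive pair) together by
    maximizing cosine similarity (assumed formulation)."""
    feat_orig = F.normalize(feat_orig, dim=1)
    feat_aug = F.normalize(feat_aug, dim=1)
    return (1.0 - (feat_orig * feat_aug).sum(dim=1)).mean()

def sharpening_loss(logits, temperature=0.5):
    """ParIL-style prediction sharpening: push cluster predictions toward
    a low-entropy (confident) target by matching each prediction to its
    own temperature-sharpened version (assumed formulation)."""
    prob = F.softmax(logits, dim=1)
    sharp_target = F.softmax(logits.detach() / temperature, dim=1)
    return F.kl_div(prob.log(), sharp_target, reduction="batchmean")

# Usage sketch: `backbone` maps images to features, `classifier` maps
# features to cluster logits, and `clothing_augment` is a hypothetical
# color/texture transform simulating a clothing change.
# feat_o = backbone(images)
# feat_a = backbone(clothing_augment(images))
# loss = consistency_loss(feat_o, feat_a) + sharpening_loss(classifier(feat_a))
```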
Source journal

IEEE Transactions on Information Forensics and Security
Category: Engineering Technology – Electrical & Electronic Engineering
CiteScore: 14.40
Self-citation rate: 7.40%
Articles per year: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance and systems applications that incorporate these features.