IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society - Latest Articles

Uni-ISP: Toward Unifying the Learning of ISPs From Multiple Mobile Cameras
IF 13.7
Lingen Li; Mingde Yao; Xingyu Meng; Muquan Yu; Tianfan Xue; Jinwei Gu
DOI: 10.1109/TIP.2025.3607617 | Vol. 34, pp. 6126-6137 | Published: 2025-09-15
Abstract: Modern end-to-end image signal processors (ISPs) can learn complex mappings from RAW/XYZ data to sRGB (and vice versa), opening new possibilities in image processing. However, the growing diversity of camera models, particularly in mobile devices, renders the development of individual ISPs unsustainable due to their limited versatility and adaptability across varied camera systems. In this paper, we introduce Uni-ISP, a novel pipeline that unifies ISP learning for diverse mobile cameras, delivering a highly accurate and adaptable processor. The core of Uni-ISP is leveraging device-aware embeddings through learning forward/inverse ISPs and its special training scheme. By doing so, Uni-ISP not only improves the performance of forward and inverse ISPs but also unlocks new applications previously inaccessible to conventional learned ISPs. To support this work, we construct a real-world 4K dataset, FiveCam, comprising more than 2,400 pairs of sRGB-RAW images captured synchronously by five smartphone cameras. Extensive experiments validate Uni-ISP’s accuracy in learning forward and inverse ISPs (with improvements of +2.4dB/1.5dB PSNR), versatility in enabling new applications, and adaptability to new camera models.
Citations: 0
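For readers unfamiliar with device-aware conditioning, the sketch below shows one minimal way a single learned ISP could serve several cameras: a per-camera embedding modulates a shared convolutional network with a FiLM-style scale and shift. The layer sizes, the 4-channel packed RAW input, and all names (SharedISP, device_embed) are illustrative assumptions, not the architecture of Uni-ISP.

```python
import torch
import torch.nn as nn

class SharedISP(nn.Module):
    """One learned ISP shared across cameras, conditioned on a device embedding."""
    def __init__(self, num_cameras: int, embed_dim: int = 32, width: int = 64):
        super().__init__()
        self.device_embed = nn.Embedding(num_cameras, embed_dim)   # per-camera code
        self.film = nn.Linear(embed_dim, 2 * width)                # per-device scale/shift
        self.encode = nn.Sequential(nn.Conv2d(4, width, 3, padding=1), nn.ReLU())
        self.decode = nn.Conv2d(width, 3, 3, padding=1)            # packed RAW -> sRGB

    def forward(self, raw: torch.Tensor, camera_id: torch.Tensor) -> torch.Tensor:
        feat = self.encode(raw)                                    # B x width x H x W
        scale, shift = self.film(self.device_embed(camera_id)).chunk(2, dim=-1)
        feat = feat * (1 + scale[..., None, None]) + shift[..., None, None]
        return torch.sigmoid(self.decode(feat))                    # sRGB in [0, 1]

raw = torch.rand(2, 4, 128, 128)            # 4-channel packed RAW patches
cam = torch.tensor([0, 3])                  # which camera each patch came from
srgb = SharedISP(num_cameras=5)(raw, cam)   # one network serves all devices
print(srgb.shape)                           # torch.Size([2, 3, 128, 128])
```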
Continuous Feature Representation for Camouflaged Object Detection
IF 13.7
Ze Song; Xudong Kang; Xiaohui Wei; Jinyang Liu; Zheng Lin; Shutao Li
DOI: 10.1109/TIP.2025.3602657 | Vol. 34, pp. 5672-5685 | Published: 2025-09-08
Abstract: Camouflaged object detection (COD) aims to discover objects that are seamlessly embedded in the environment. Existing COD methods have made significant progress by typically representing features in a discrete way with arrays of pixels. However, limited by discrete representation, these methods need to align features of different scales during decoding, which causes some subtle discriminative clues to become blurred. This severely hampers the task of identifying camouflaged objects from such subtle clues. To address this issue, we propose a novel continuous feature representation network (CFRN), which aims to represent features of different scales as a continuous function for COD. Specifically, a Swin transformer encoder is first exploited to explore the global context between camouflaged objects and the background. Then, an object-focusing module (OFM) deployed layer by layer is designed to deeply mine subtle discriminative clues, thereby highlighting the body of camouflaged objects and suppressing other distracting objects at different scales. Finally, a novel frequency-based implicit feature decoder (FIFD) is proposed, which directly decodes the predictions at arbitrary coordinates in the continuous function with implicit neural representations, thus propagating clearer discriminative clues. Extensive experiments on four challenging COD benchmarks demonstrate that our method significantly outperforms state-of-the-art methods. The source code will be available at https://github.com/SongZeHNU/CFRN.
Citations: 0
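The sketch below illustrates the general idea of an implicit, coordinate-based decoder: features are sampled at arbitrary continuous coordinates and combined with a frequency encoding of those coordinates to predict a per-point value. The sampling scheme, frequency encoding, and MLP sizes are assumptions for illustration, not the paper's FIFD module.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDecoder(nn.Module):
    """Predict a per-point logit at arbitrary continuous coordinates."""
    def __init__(self, feat_dim: int = 64, hidden: int = 128, num_freq: int = 6):
        super().__init__()
        self.num_freq = num_freq
        coord_dim = 2 * 2 * num_freq                      # sin/cos per frequency, per axis
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + coord_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, feat: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feat: B x C x H x W discrete feature map, coords: B x N x 2 in [-1, 1]
        grid = coords.unsqueeze(1)                                  # B x 1 x N x 2
        sampled = F.grid_sample(feat, grid, align_corners=False)    # B x C x 1 x N
        sampled = sampled.squeeze(2).permute(0, 2, 1)               # B x N x C
        freqs = (2 ** torch.arange(self.num_freq, device=coords.device)) * math.pi
        ang = coords.unsqueeze(-1) * freqs                          # B x N x 2 x num_freq
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(2)  # frequency encoding
        return self.mlp(torch.cat([sampled, enc], dim=-1))          # B x N x 1

feat = torch.rand(1, 64, 32, 32)                 # low-resolution feature map
coords = torch.rand(1, 4096, 2) * 2 - 1          # arbitrary query locations
logits = ImplicitDecoder()(feat, coords)         # decode at any resolution
print(logits.shape)                              # torch.Size([1, 4096, 1])
```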
NPC-SPU: Nonlinear Phase Coding-Based Stereo Phase Unwrapping for Efficient 3D Measurement
IF 13.7
Ruiming Yu; Hongshan Yu; Wei Sun; Yaonan Wang; Naveed Akhtar; Kemao Qian
DOI: 10.1109/TIP.2025.3602644 | Vol. 34, pp. 5642-5657 | Published: 2025-09-08
Abstract: 3D imaging based on phase-shifting structured light is widely used in industrial measurement due to its non-contact nature. However, it typically requires a large number of additional images (multi-frequency heterodyne (M-FH) method) or introduces intensity features that compromise accuracy (space domain modulation phase-shifting (SDM-PS) method) for phase unwrapping, and it remains sensitive to motion. To overcome these issues, this article proposes a nonlinear phase coding-based stereo phase unwrapping (NPC-SPU) method that requires no additional patterns while maintaining measurement accuracy. In the encoding stage, a novel nonlinear distortion feature is introduced, while the signal-to-noise ratio of the phase codeword is preserved. In the decoding stage, a local phase unwrapping method that does not require additional auxiliary information is first proposed, closely associating it with the distortion information in the local wrapped phase. Then, a pre-calibrated stereo constraint system is used to filter potential matching phases, significantly reducing phase ambiguity and computational costs. Finally, to avoid the time-consuming and complex intensity kernel matching used in traditional methods, we propose a local phase correlation matching (LPCM) technique that enables lightweight and robust phase unwrapping. Experimental results demonstrate that this algorithm significantly enhances 3D reconstruction performance in scenarios with large depth, large disparity, complex colored structures, and dynamic scenes. Specifically, in dynamic environments (20mm/s), the proposed method achieves a lower measurement error rate (0.7829% vs. 6.4962%) with only 3 patterns, compared to the traditional three-frequency heterodyne (T-FH) method (using 9 patterns). Additionally, its measurement accuracy outperforms the advanced SDM-PS method, which also uses 3 patterns (0.1102 mm vs. 0.3232 mm).
Citations: 0
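As background for the unwrapping problem discussed above, the sketch below recovers the wrapped phase from N equally shifted sinusoidal fringe images using the standard N-step phase-shifting formula. The synthetic pattern is illustrative; the nonlinear phase coding and stereo matching of NPC-SPU are not reproduced here.

```python
import numpy as np

def wrapped_phase(images: np.ndarray) -> np.ndarray:
    """images: N x H x W captures with equal phase shifts of 2*pi*n/N."""
    n = images.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), images, axes=1)   # sum_n I_n * sin(delta_n)
    den = np.tensordot(np.cos(shifts), images, axes=1)   # sum_n I_n * cos(delta_n)
    return -np.arctan2(num, den)                         # wrapped into [-pi, pi)

# Synthetic 3-step example: a linear phase ramp imaged under three shifted fringes.
h, w, n = 64, 256, 3
true_phase = np.tile(np.linspace(0, 6 * np.pi, w), (h, 1))
shifts = 2 * np.pi * np.arange(n) / n
imgs = np.stack([0.5 + 0.5 * np.cos(true_phase + d) for d in shifts])
phi = wrapped_phase(imgs)                                # wrapped version of true_phase
print(phi.shape)                                         # (64, 256)
```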
Image-Text-Image Knowledge Transfer for Lifelong Person Re-Identification With Hybrid Clothing States
IF 13.7
Qizao Wang; Xuelin Qian; Bin Li; Yanwei Fu; Xiangyang Xue
DOI: 10.1109/TIP.2025.3602745 | Vol. 34, pp. 5584-5597 | Published: 2025-09-01
Abstract: With the continuous expansion of intelligent surveillance networks, lifelong person re-identification (LReID) has received widespread attention, pursuing the need for self-evolution across different domains. However, existing LReID studies accumulate knowledge with the assumption that people would not change their clothes. In this paper, we propose a more practical task, namely lifelong person re-identification with hybrid clothing states (LReID-Hybrid), which takes a series of cloth-changing and same-cloth domains into account during lifelong learning. To tackle the challenges of knowledge granularity mismatch and knowledge presentation mismatch in LReID-Hybrid, we take advantage of the consistency and generalization capabilities of the text space, and propose a novel framework, dubbed Teata, to effectively align, transfer, and accumulate knowledge in an “image-text-image” closed loop. Concretely, to achieve effective knowledge transfer, we design a Structured Semantic Prompt (SSP) learning to decompose the text prompt into several structured pairs to distill knowledge from the image space with a unified granularity of text description. Then, we introduce a Knowledge Adaptation and Projection (KAP) strategy, which tunes text knowledge via a slow-paced learner to adapt to different tasks without catastrophic forgetting. Extensive experiments demonstrate the superiority of our proposed Teata for LReID-Hybrid as well as on conventional LReID benchmarks over advanced methods.
Citations: 0
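The sketch below shows one minimal reading of a "slow-paced learner" over text-side knowledge: fast prompt embeddings are tuned on the current task while an exponential-moving-average copy accumulates knowledge across tasks. The EMA rule, the dimensions, and the blending in forward() are assumptions, not the paper's SSP/KAP design.

```python
import torch
import torch.nn as nn

class SlowPacedPrompts(nn.Module):
    """Fast (plastic) prompt tokens plus a slow EMA copy that accumulates knowledge."""
    def __init__(self, num_tokens: int = 8, dim: int = 512, momentum: float = 0.999):
        super().__init__()
        self.fast = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)
        self.register_buffer("slow", self.fast.detach().clone())
        self.momentum = momentum

    @torch.no_grad()
    def accumulate(self):
        # The slow copy drifts toward the fast prompts, resisting abrupt task-specific shifts.
        self.slow.mul_(self.momentum).add_(self.fast.detach(), alpha=1 - self.momentum)

    def forward(self) -> torch.Tensor:
        # Blend stable and plastic prompts before feeding them to a text encoder.
        return 0.5 * (self.fast + self.slow)

prompts = SlowPacedPrompts()
opt = torch.optim.SGD(prompts.parameters(), lr=0.1)
for step in range(3):                       # stand-in for training on the current domain
    loss = prompts().pow(2).mean()          # placeholder objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    prompts.accumulate()                    # slow update after each optimizer step
```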
CLIP-Based Multi-Modal Feature Learning for Cloth-Changing Person Re-Identification
IF 13.7
Guoqing Zhang; Jieqiong Zhou; Lu Jiang; Yuhui Zheng; Weisi Lin
DOI: 10.1109/TIP.2025.3602641 | Vol. 34, pp. 5570-5583 | Published: 2025-09-01
Abstract: Contrastive Language-Image Pre-training (CLIP) has achieved remarkable results in the field of person re-identification (ReID) due to its excellent cross-modal understanding ability and high scalability. However, the text encoder of CLIP mainly focuses on easy-to-describe attributes such as clothing, and clothing is the main interference factor that reduces recognition accuracy in cloth-changing person ReID (CC ReID). Consequently, directly applying CLIP to the cloth-changing scenario may fail to adapt to such dynamic feature changes, thereby affecting identification precision. To solve this challenge, we propose a CLIP-based multi-modal feature learning framework (CMFF) for CC ReID. Specifically, we first design a pose-aware identity enhancement module (PIE) to enhance the model’s perception of identity-intrinsic information. In this branch, to weaken the interference of clothing information, we apply a ranking loss to minimize the difference between appearance and pose in the feature space. Secondly, we propose a global-local hybrid attention module (GLHA), which fuses head and global features through a cross-attention mechanism, enhancing the global recognition ability of key head information. Finally, considering that existing CLIP-based methods often ignore the potential importance of shallow features, we propose a graph-based multi-layer interactive enhancement module (GMIE), which groups and integrates multi-layer features of the image encoder, aiming to enhance the contextual awareness of multi-scale features. Extensive experiments on multiple popular pedestrian datasets validate the outstanding performance of our proposed CMFF.
Citations: 0
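The sketch below shows a generic way to fuse a local (head-region) token set with global image tokens via cross-attention, in the spirit of the global-local hybrid attention described above. The token layout and dimensions are assumptions, not the paper's GLHA module.

```python
import torch
import torch.nn as nn

class LocalGlobalCrossAttention(nn.Module):
    """Head-region tokens query the global token sequence and fuse the result."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, head_tokens: torch.Tensor, global_tokens: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=head_tokens, key=global_tokens, value=global_tokens)
        return self.norm(head_tokens + fused)        # residual fusion

head_tokens = torch.rand(2, 16, 256)     # tokens pooled from the head region
global_tokens = torch.rand(2, 196, 256)  # tokens covering the whole image
out = LocalGlobalCrossAttention()(head_tokens, global_tokens)
print(out.shape)                         # torch.Size([2, 16, 256])
```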
FocalTransNet: A Hybrid Focal-Enhanced Transformer Network for Medical Image Segmentation
IF 13.7
Miao Liao; Ruixin Yang; Yuqian Zhao; Wei Liang; Junsong Yuan
DOI: 10.1109/TIP.2025.3602739 | Vol. 34, pp. 5614-5627 | Published: 2025-09-01
Abstract: CNNs have demonstrated superior performance in medical image segmentation. To overcome the limitation of only using local receptive fields, previous work has attempted to integrate Transformers into convolutional network components such as encoders, decoders, or skip connections. However, these methods can only establish long-distance dependencies for some specific patterns and usually neglect the loss of fine-grained details during downsampling in multi-scale feature extraction. To address these issues, we present a novel hybrid Transformer network called FocalTransNet. Specifically, we construct a focal-enhanced (FE) Transformer module by introducing dense cross-connections into a CNN-Transformer dual-path structure and deploy the FE Transformer throughout the entire encoder. Different from existing hybrid networks that employ embedding or stacking strategies, the proposed model allows for a comprehensive extraction and deep fusion of both local and global features at different scales. Besides, we propose a symmetric patch merging (SPM) module for downsampling, which can retain the fine-grained details by establishing a specific information compensation mechanism. We evaluated the proposed method on four different medical image segmentation benchmarks. The proposed method outperforms previous state-of-the-art convolutional networks, Transformers, and hybrid networks. The code for FocalTransNet is publicly available at https://github.com/nemanjajoe/FocalTransNet.
Citations: 0
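The sketch below shows a detail-preserving downsampling step built from space-to-depth (pixel unshuffle): resolution halves but every pixel is retained in the channel dimension rather than being discarded by strided pooling. It is an illustrative stand-in for, not a reproduction of, the paper's symmetric patch merging.

```python
import torch
import torch.nn as nn

class DetailPreservingDownsample(nn.Module):
    """Halve resolution without discarding pixels: space-to-depth, then a 1x1 mix."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(2)                     # H/2 x W/2, channels x4
        self.proj = nn.Conv2d(4 * in_ch, out_ch, kernel_size=1)   # mix the four sub-grids

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.unshuffle(x))

x = torch.rand(1, 64, 56, 56)
y = DetailPreservingDownsample(64, 128)(x)
print(y.shape)                                                    # torch.Size([1, 128, 28, 28])
```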
Spiking Variational Graph Representation Inference for Video Summarization
IF 13.7
Wenrui Li; Wei Han; Liang-Jian Deng; Ruiqin Xiong; Xiaopeng Fan
DOI: 10.1109/TIP.2025.3602649 | Vol. 34, pp. 5697-5709 | Published: 2025-09-01
Abstract: With the rise of short video content, efficient video summarization techniques for extracting key information have become crucial. However, existing methods struggle to capture the global temporal dependencies and maintain the semantic coherence of video content. These methods are also affected by noise during multi-channel feature fusion. We propose a Spiking Variational Graph (SpiVG) Network, which enhances information density and reduces computational complexity. First, we design a keyframe extractor based on Spiking Neural Networks (SNN), leveraging the event-driven computation mechanism of SNNs to learn keyframe features autonomously. To enable fine-grained and adaptable reasoning across video frames, we introduce a Dynamic Aggregation Graph Reasoner, which decouples contextual object consistency from semantic perspective coherence. We present a Variational Inference Reconstruction Module to address uncertainty and noise arising during multi-channel feature fusion. In this module, we employ Evidence Lower Bound Optimization (ELBO) to capture the latent structure of multi-channel feature distributions, using posterior distribution regularization to reduce overfitting. Experimental results show that SpiVG surpasses existing methods across multiple datasets such as SumMe, TVSum, VideoXum, and QFVS. Our codes and pre-trained models are available at https://github.com/liwrui/SpiVG.
Citations: 0
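As a primer on the event-driven computation the abstract relies on, the sketch below implements a leaky integrate-and-fire (LIF) neuron layer and uses the resulting firing rate as a crude frame-saliency proxy. The constants and the saliency interpretation are assumptions, not the paper's keyframe extractor.

```python
import torch

def lif_forward(inputs: torch.Tensor, tau: float = 2.0, v_th: float = 1.0) -> torch.Tensor:
    """inputs: T x B x D input currents per time step; returns T x B x D binary spikes."""
    v = torch.zeros_like(inputs[0])                  # membrane potential
    spikes = []
    for x_t in inputs:
        v = v + (x_t - v) / tau                      # leaky integration toward the input
        s_t = (v >= v_th).float()                    # fire once the threshold is crossed
        v = v * (1.0 - s_t)                          # hard reset where a spike occurred
        spikes.append(s_t)
    return torch.stack(spikes)

frame_feats = torch.rand(16, 2, 64)                  # 16 frames (time steps), batch of 2
spike_train = lif_forward(frame_feats)
firing_rate = spike_train.mean(dim=0)                # per-feature activity: a rough saliency proxy
print(firing_rate.shape)                             # torch.Size([2, 64])
```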
ColorAssist: Perception-Based Recoloring for Color Vision Deficiency Compensation
IF 13.7
Liqun Lin; Shangxi Xie; Yanting Wang; Bolin Chen; Ying Xue; Xiahai Zhuang; Tiesong Zhao
DOI: 10.1109/TIP.2025.3602643 | Vol. 34, pp. 5658-5671 | Published: 2025-09-01
Abstract: Image enhancement methods have been widely studied to improve the visual quality of diverse images, implicitly assuming that all human observers have normal vision. However, a large population around the world suffers from Color Vision Deficiency (CVD). Enhancing images to compensate for their perceptions remains a challenging issue. Existing CVD compensation methods have two drawbacks: first, the available datasets and validations have not been rigorously tested by CVD individuals; second, these methods struggle to strike an optimal balance between contrast enhancement and naturalness preservation, which often results in suboptimal outcomes for individuals with CVD. To address these issues, we develop the first large-scale, CVD-individual-labeled dataset called FZU-CVDSet and a CVD-friendly recoloring algorithm called ColorAssist. In particular, we design a perception-guided feature extraction module and a perception-guided diffusion transformer module that jointly achieve efficient image recoloring for individuals with CVD. Comprehensive experiments on both FZU-CVDSet and subjective tests in hospitals demonstrate that the proposed ColorAssist closely aligns with the visual perceptions of individuals with CVD, achieving superior performance compared with the state of the art. The source code is available at https://github.com/xsx-fzu/ColorAssist.
Citations: 0
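The sketch below makes the contrast-versus-naturalness trade-off mentioned above concrete as a two-term objective: match local color contrast in the simulated CVD percept of the recolored image to that of the original, while keeping the recolored image close to the original. The luminance-collapse placeholder for simulate_cvd and the fixed weighting are assumptions; the paper's perception-guided modules are learned rather than hand-weighted like this.

```python
import torch

def grad_mag(img: torch.Tensor) -> torch.Tensor:
    """Mean magnitude of horizontal and vertical color differences."""
    dh = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    dv = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    return dh + dv

def recolor_objective(recolored, original, simulate_cvd, alpha: float = 0.5):
    contrast = (grad_mag(simulate_cvd(recolored)) - grad_mag(original)).pow(2)
    naturalness = (recolored - original).pow(2).mean()
    return alpha * contrast + (1 - alpha) * naturalness

# Placeholder CVD simulator (collapse to luminance); a real one would model
# protan/deutan/tritan color confusion instead.
simulate_cvd = lambda img: img.mean(dim=1, keepdim=True).expand_as(img)
recolored = torch.rand(1, 3, 64, 64, requires_grad=True)
original = torch.rand(1, 3, 64, 64)
loss = recolor_objective(recolored, original, simulate_cvd)
loss.backward()                              # gradients could drive a recoloring network
print(float(loss))
```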
Uncertainty-Aware Cross-Training for Semi-Supervised Medical Image Segmentation
IF 13.7
Kaiwen Huang; Tao Zhou; Huazhu Fu; Yizhe Zhang; Yi Zhou; Xiao-Jun Wu
DOI: 10.1109/TIP.2025.3599783 | Vol. 34, pp. 5543-5556 | Published: 2025-08-29
Abstract: Semi-supervised learning has gained considerable popularity in medical image segmentation tasks due to its capability to reduce reliance on expert-examined annotations. Several mean-teacher (MT) based semi-supervised methods utilize consistency regularization to effectively leverage valuable information from unlabeled data. However, these methods often heavily rely on the student model and overlook the potential impact of cognitive biases within the model. Furthermore, some methods employ co-training using pseudo-labels derived from different inputs, yet generating high-confidence pseudo-labels from perturbed inputs during training remains a significant challenge. In this paper, we propose an Uncertainty-aware Cross-training framework for semi-supervised medical image Segmentation (UC-Seg). Our UC-Seg framework incorporates two distinct subnets to effectively explore and leverage the correlation between them, thereby mitigating cognitive biases within the model. Specifically, we present a Cross-subnet Consistency Preservation (CCP) strategy to enhance feature representation capability and ensure feature consistency across the two subnets. This strategy enables each subnet to correct its own biases and learn shared semantics from both labeled and unlabeled data. Additionally, we propose an Uncertainty-aware Pseudo-label Generation (UPG) component that leverages segmentation results and corresponding uncertainty maps from both subnets to generate high-confidence pseudo-labels. We extensively evaluate the proposed UC-Seg on various medical image segmentation tasks involving different modality images, including MRI, CT, ultrasound, and colonoscopy. The results demonstrate that our method achieves superior segmentation accuracy and generalization performance compared to other state-of-the-art semi-supervised methods. Our code and segmentation maps will be released at https://github.com/taozh2017/UCSeg.
Citations: 0
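The sketch below shows one simple form of uncertainty-aware pseudo-labeling: average the two subnets' softmax outputs, use prediction entropy as the uncertainty map, and keep pseudo-labels only at low-entropy pixels. The averaging and the fixed threshold are assumptions, not the paper's UPG component.

```python
import math
import torch
import torch.nn.functional as F

def make_pseudo_labels(logits_a, logits_b, max_entropy_frac: float = 0.5):
    """logits_*: B x C x H x W from the two subnets; returns labels and a confidence mask."""
    prob = 0.5 * (F.softmax(logits_a, dim=1) + F.softmax(logits_b, dim=1))
    entropy = -(prob * prob.clamp_min(1e-8).log()).sum(dim=1)     # B x H x W uncertainty map
    max_entropy = math.log(prob.shape[1])                         # entropy of a uniform prediction
    confident = entropy < max_entropy_frac * max_entropy          # keep low-uncertainty pixels
    pseudo = prob.argmax(dim=1)                                   # B x H x W hard labels
    return pseudo, confident

logits_a = torch.randn(2, 4, 64, 64)          # subnet A, 4 classes
logits_b = torch.randn(2, 4, 64, 64)          # subnet B
labels, mask = make_pseudo_labels(logits_a, logits_b)
print(labels.shape, mask.float().mean())      # supervise each subnet only where mask is True
```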
Robust Labeling and Invariance Modeling for Unsupervised Cross-Resolution Person Re-Identification
IF 13.7
Zhiqi Pang; Lingling Zhao; Yang Liu; Chunyu Wang; Gaurav Sharma
DOI: 10.1109/TIP.2025.3601443 | Vol. 34, pp. 5557-5569 | Published: 2025-08-28
Abstract: Cross-resolution person re-identification (CR-ReID) aims to match low-resolution (LR) and high-resolution (HR) images of the same individual. To reduce the cost of manual annotation, existing unsupervised CR-ReID methods typically rely on cross-resolution fusion to obtain pseudo-labels and resolution-invariant features. However, the fusion process requires two encoders and a fusion module, which significantly increases computational complexity and reduces efficiency. To address this issue, we propose a robust labeling and invariance modeling (RLIM) framework, which utilizes a single encoder to tackle the unsupervised CR-ReID problem. To obtain pseudo-labels robust to resolution gaps, we develop cross-resolution robust labeling (CRL), which utilizes two clustering criteria to encourage cross-resolution positive pairs to cluster together and exploit the reliable relationships between images. We also introduce random texture augmentation (TexA) to enhance the model’s robustness to noisy textures related to artifacts and backgrounds by randomly adjusting texture strength. During the optimization process, we introduce the resolution-cluster consistency loss, which promotes resolution-invariant feature learning by aligning inter-resolution distances with intra-cluster distances. Experimental results on multiple datasets demonstrate that RLIM not only surpasses existing unsupervised methods, but also achieves performance close to some supervised CR-ReID methods. Code is available at https://github.com/zqpang/RLIM.
Citations: 0
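The sketch below illustrates random texture augmentation in its simplest form: split an image into a smooth base and a high-frequency residual, then rescale the residual by a random factor per sample. The blur and the sampling range are assumptions, not the paper's TexA operation.

```python
import torch
import torch.nn.functional as F

def random_texture_augment(img: torch.Tensor, low: float = 0.2, high: float = 1.5) -> torch.Tensor:
    """img: B x C x H x W in [0, 1]; returns the image with randomly scaled texture."""
    # Cheap blur: downsample then upsample to obtain a smooth base layer.
    base = F.interpolate(F.avg_pool2d(img, kernel_size=4),
                         size=img.shape[-2:], mode="bilinear", align_corners=False)
    texture = img - base                                  # high-frequency (texture) residual
    strength = torch.empty(img.shape[0], 1, 1, 1).uniform_(low, high)
    return (base + strength * texture).clamp(0, 1)

imgs = torch.rand(4, 3, 256, 128)                         # a batch of person crops
aug = random_texture_augment(imgs)
print(aug.shape)                                          # torch.Size([4, 3, 256, 128])
```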