Optimizing domain-generalizable ReID through non-parametric normalization
Amran Bhuiyan, Aijun An, Jimmy Xiangji Huang, Jialie Shen
Pattern Recognition, Volume 162, Article 111356 (published 2025-01-20)
DOI: 10.1016/j.patcog.2025.111356
Citations: 0
Abstract
Optimizing deep neural networks to generalize effectively across diverse visual domains remains a key challenge in computer vision, especially in domain-generalizable person re-identification (ReID). The goal of domain-generalizable ReID is to develop robust deep learning (DL) models that are effective across both known (source) and unseen (target) domains. However, many top-performing ReID methods overfit to the source domain, impairing their generalization ability. Previous approaches have employed Instance Normalization (IN) with learnable parameters to generalize domains and eliminate source domain styles. Recently, some DL frameworks have adopted normalization techniques without learnable parameters. We critically examine non-parametric normalization techniques for optimizing the deep ReID model, emphasizing the advantages of using non-parametric instance normalization as a gating mechanism to extract style-independent features at various abstraction levels within both convolutional neural networks (CNNs) and Vision Transformers (ViT). Our framework offers strategic guidance on the optimal placement of non-parametric IN within the network architecture to ensure effective information flow management in subsequent layers. Additionally, we employ one-dimensional Batch Normalization (BN) without learnable parameters at deeper network levels to remove content-related biases from the source domain. Our integrated approach, termed DualNormNP, systematically optimizes the model’s capacity to generalize across varied domains. Comprehensive evaluations on multiple benchmark ReID datasets demonstrate that our approach surpasses current state-of-the-art ReID methods in terms of generalization performance. Code is available on GitHub: https://github.com/mdamranhossenbhuiyan/DualNormNP
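For readers unfamiliar with parameter-free normalization, the following is a minimal PyTorch sketch of the two ingredients the abstract names: instance normalization without learnable affine terms applied to intermediate feature maps, and one-dimensional batch normalization without learnable parameters applied to the final embedding. The module name, the pooling step, and the placement shown here are illustrative assumptions, not the authors' released DualNormNP implementation (see the linked GitHub repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParameterFreeDualNorm(nn.Module):
    """Sketch: style removal via parameter-free IN on feature maps,
    content de-biasing via parameter-free 1-D BN on the embedding."""

    def __init__(self, embed_dim: int = 2048):
        super().__init__()
        # affine=False -> no learnable gamma/beta; only running statistics
        # (buffers) are kept, so nothing in this module is trained.
        self.embed_bn = nn.BatchNorm1d(embed_dim, affine=False)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) backbone feature map, with C == embed_dim here.
        # Parameter-free instance normalization removes per-sample style
        # statistics (channel-wise mean/variance of each image).
        style_free = F.instance_norm(feat_map)
        # Global average pooling to a 1-D embedding, then parameter-free BN.
        embedding = style_free.mean(dim=(2, 3))
        return self.embed_bn(embedding)


if __name__ == "__main__":
    dummy = torch.randn(8, 2048, 16, 8)          # batch of backbone feature maps
    print(ParameterFreeDualNorm()(dummy).shape)  # torch.Size([8, 2048])
```

Because neither normalization carries learnable weights, the sketch adds no trainable parameters to the backbone; it only reshapes the feature statistics, which is the property the paper exploits for cross-domain generalization.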
Journal Introduction
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.