Title: A simple yet lightweight module for enhancing domain generalization through relative representation
Authors: Meng Cao, Songcan Chen
Journal: Pattern Recognition, Volume 172, Article 112423 (JCR Q1, Computer Science, Artificial Intelligence; impact factor 7.6)
Publication date: 2025-09-15
DOI: 10.1016/j.patcog.2025.112423
URL: https://www.sciencedirect.com/science/article/pii/S0031320325010842
Citations: 0
Abstract
Domain Generalization (DG) learns a model from multiple source domains to combat individual domain differences and ensure generalization to unseen domains. Most existing methods focus on learning domain-invariant absolute representations. However, we empirically observe that such representations often suffer from notable distribution divergence, leading to unstable performance in diverse unseen domains. In contrast, relative representations, constructed w.r.t. a set of anchors, naturally capture geometric relationships and exhibit intrinsic stability within a dataset. Despite this potential, their application to DG remains largely unexplored, due to their common transductive assumption that anchors require access to target-domain data, which is incompatible with the inductive setting of DG. To address this issue, we design Re2SL, a simple and lightweight plug-in module that follows a pre-trained encoder and constructs anchors solely from source-domain prototypes, thereby ensuring a completely inductive design. To our knowledge, Re2SL is the first to explore relative representation for DG. This design is inspired by the insight that ReSidual differences between absolute and domain-specific representations can spontaneously seek stable representations within the same distribution shared across all domains. Leveraging these stable representations, we construct cross-domain ReLative representation to enhance stability and transferability without accessing any target data during training or anchor computation. Empirical studies show that our constructed representation exhibits minimal ℋ-divergence, confirming its stability. Notably, Re2SL achieves up to 4.3% improvement while reducing computational cost by 90%, demonstrating its efficiency.
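The abstract does not spell out how the relative representation is built, but the general technique (Moschella et al.'s relative representations) encodes each sample by its cosine similarities to a set of anchors; the abstract states that Re2SL takes its anchors from source-domain prototypes rather than target data. A minimal sketch under those assumptions, with hypothetical function names (this is an illustration of the generic idea, not the authors' implementation):

```python
import numpy as np

def class_prototypes(features, labels):
    """Anchors as per-class mean features computed on source domains only
    (keeps the construction inductive: no target-domain data is touched)."""
    classes = np.unique(labels)
    return np.stack([features[labels == c].mean(axis=0) for c in classes])

def relative_representation(features, anchors):
    """Encode each sample by its cosine similarity to every anchor."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return f @ a.T  # shape: (n_samples, n_anchors)

# Hypothetical usage: encoder outputs for source samples
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))          # 10 samples, 4-dim absolute features
labels = np.array([0, 1] * 5)             # two source classes
anchors = class_prototypes(feats, labels) # (2, 4) prototype anchors
rel = relative_representation(feats, anchors)  # (10, 2) relative features
```

Because cosine similarities are invariant to rotations and rescalings of the feature space, such relative coordinates tend to be more stable across domains than the absolute features themselves, which is the property the abstract appeals to.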
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.