AdaPAC: Prototypical anchored contrastive test time adaptation for domain generalization
Saeed Karimi, Hamdi Dibeklioglu
Neurocomputing, Volume 650, Article 130839 (published 2025-07-10)
DOI: 10.1016/j.neucom.2025.130839
URL: https://www.sciencedirect.com/science/article/pii/S0925231225015115
Test-time adaptation (TTA) aims to address distribution shifts between training and testing data by adjusting a given model based on the test samples it encounters. This makes it particularly suitable for Domain Generalization (DG) scenarios, where the model has access to online test data during inference. Most current TTA techniques modify the source model using its own predictions on the test data as pseudo-labels. However, under test-time domain shifts the accuracy of these pseudo-labels is not guaranteed, potentially leading to performance declines after adaptation. Additionally, adapting all of the model parameters on test samples can be harmful due to the presence of noisy samples. To tackle these issues, we propose AdaPAC, a prototypical anchored contrastive test-time adaptation approach. It leverages knowledge from the source data, specifically subclass prototypes, and incorporates them into prototypical contrastive learning in the representation space, effectively aligning the test and source distributions. AdaPAC categorizes test samples as reliable or unreliable based on their distances from the anchored prototypes and applies prototypical and instance-wise contrastive learning to them, respectively. Furthermore, we propose a reliable-weight regularization for unreliable samples that encourages the model to update parameters that are sensitive to the distribution shift of reliable test samples. AdaPAC outperforms the current state-of-the-art DG and TTA methods on domain generalization benchmarks including VLCS, PACS, OfficeHome, and TerraIncognita.
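The two mechanisms named in the abstract — splitting test samples into reliable/unreliable by distance to anchored source prototypes, and pulling reliable samples toward their nearest prototype with a contrastive loss — can be sketched roughly as follows. This is an illustrative reconstruction only, not the authors' implementation: the cosine-distance metric, the threshold `tau`, the temperature `temp`, and the InfoNCE form of the prototypical loss are all assumptions.

```python
import numpy as np

def split_by_prototype_distance(features, prototypes, tau=0.5):
    """Label each test feature reliable/unreliable by its cosine distance
    to the nearest source prototype (threshold tau is an assumed
    hyperparameter, not taken from the paper)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = f @ p.T                     # cosine similarity to every prototype
    nearest = sim.argmax(axis=1)      # index of the closest prototype
    dist = 1.0 - sim.max(axis=1)      # cosine distance to that prototype
    return dist < tau, nearest

def prototypical_contrastive_loss(features, prototypes, assignments, temp=0.1):
    """InfoNCE-style loss that pulls each (reliable) feature toward its
    assigned prototype and pushes it away from the other prototypes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = (f @ p.T) / temp
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(features)), assignments].mean()
```

In this sketch, unreliable samples (those failing the `tau` test) would instead receive an instance-wise contrastive loss, as the abstract describes; only the reliable branch is shown here.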
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications as its essential topics.