Explicitly fusing plug-and-play guidance of source prototype into target subspace for domain adaptation
Hao Luo, Zhiqiang Tian, Panpan Jiao, Meiqin Liu, Shaoyi Du, Kai Nan
Information Fusion, Volume 123, Article 103197 (published 2025-05-14). DOI: 10.1016/j.inffus.2025.103197
Citations: 0
Abstract
The widely used maximum mean discrepancy (MMD) criterion has two main drawbacks when reducing cross-domain distribution gaps: first, it reduces the distribution discrepancy globally, potentially ignoring local structural information between domains; second, its performance relies heavily on an often-unstable pseudo-label refinement process. To address these problems, we introduce two universal plug-and-play modules: dynamic prototype pursuit (DPP) regularization and a bi-branch self-training (BST) mechanism. DPP stabilizes MMD from a new inter-class perspective by assigning a source prototype to each target sample, which lets us exploit inter-class data structure for better alignment. BST is a novel non-parametric pseudo-label refinement mechanism that updates the pseudo labels of target data using a classifier trained on the same distribution as the target domain; this sidesteps the distribution-gap issue, making BST more likely to generate accurate target pseudo labels. Importantly, DPP and BST are universal plug-and-play modules for shallow domain adaptation methods. To demonstrate this, we incorporate them into three MMD-based models and evaluate on the Office-Caltech, Reuters21578, and Berlin-Emovo-Tess datasets. Experimental results show that the models equipped with DPP and BST generally outperform their counterparts without them across multiple metrics, including accuracy, F1-score, MCC, and false positive rate. Code for the three DA methods enhanced by the plug-and-play DPP and BST is available at: https://github.com/Evelhz/DPP-and-BST.
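For orientation, the following is a minimal Python sketch of the two mechanisms as the abstract describes them, not the authors' implementation (see the GitHub repository above for that). DPP pairs each target sample with its nearest source class prototype and penalizes the distance; BST relabels target data with a classifier trained on the target-side features themselves. All function names and the choice of a k-NN classifier are illustrative assumptions.

```python
# Illustrative sketch only -- names and classifier choice are assumptions,
# not the authors' code (see https://github.com/Evelhz/DPP-and-BST).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def class_prototypes(Xs, ys):
    """Mean feature vector (prototype) of each source class."""
    classes = np.unique(ys)
    protos = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
    return classes, protos


def dpp_assign(Xt, protos):
    """Pair each target sample with its nearest source prototype
    (Euclidean distance) -- the inter-class pairing DPP builds on."""
    dists = np.linalg.norm(Xt[:, None, :] - protos[None, :, :], axis=2)
    return dists.argmin(axis=1)


def dpp_penalty(Xt, protos, assign):
    """Mean squared distance between target samples and their assigned
    prototypes; a simple stand-in for the DPP regularization term."""
    return np.mean(np.sum((Xt - protos[assign]) ** 2, axis=1))


def bst_refine(Xt, pseudo_labels, k=5):
    """BST-style refinement: retrain a classifier on target features
    with the current pseudo labels (same distribution as the target
    domain), then relabel the target set with it."""
    clf = KNeighborsClassifier(n_neighbors=k).fit(Xt, pseudo_labels)
    return clf.predict(Xt)


# Toy usage with random features standing in for extracted features.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 16)), rng.integers(0, 3, size=100)
Xt = rng.normal(size=(80, 16))

classes, protos = class_prototypes(Xs, ys)
assign = dpp_assign(Xt, protos)
print("DPP penalty:", dpp_penalty(Xt, protos, assign))
refined = bst_refine(Xt, classes[assign])
```

The sketch uses the nearest-prototype pairing to derive initial target pseudo labels (`classes[assign]`) and then refines them with a target-side classifier, mirroring the avoid-the-distribution-gap rationale the abstract gives for BST.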
Journal Description:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.