Adaptive Dispersal and Collaborative Clustering for Few-Shot Unsupervised Domain Adaptation
Yuwu Lu; Haoyu Huang; Wai Keung Wong; Xue Hu; Zhihui Lai; Xuelong Li
IEEE Transactions on Image Processing, vol. 34, pp. 4273-4285, 2025. DOI: 10.1109/TIP.2025.3581007
Abstract
Unsupervised domain adaptation mainly focuses on transferring knowledge from a fully labeled source domain to an unlabeled target domain. However, in some scenarios, labeled data are expensive to collect, which causes an insufficient-label issue in the source domain. To tackle this issue, some works have focused on few-shot unsupervised domain adaptation (FUDA), which transfers predictive models to an unlabeled target domain through a source domain that contains only a few labeled samples. Yet the relationship between the labeled and unlabeled source data is not well exploited when generating pseudo-labels. Additionally, the few-shot setting further hinders the transfer task, as an excessive domain gap is introduced between the source and target domains. To address these issues, we propose an adaptive dispersal and collaborative clustering (ADCC) method for FUDA. Specifically, to counter the shortage of labeled source data, a collaborative clustering algorithm is constructed that expands the labeled source data to obtain more distribution information. Furthermore, to alleviate the negative impact of domain-irrelevant information, we construct an adaptive dispersal strategy that introduces an intermediate domain and pushes both the source and target domains toward it. Extensive experiments on the Office31, Office-Home, miniDomainNet, and VisDA-2017 datasets demonstrate the superior performance of ADCC compared with state-of-the-art FUDA methods.
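The abstract names two mechanisms without giving implementation details: expanding the few labeled source samples via collaborative clustering, and pushing source and target toward an intermediate domain. The sketch below is not the authors' method; it is a minimal illustration of those two ideas under assumptions we make up for clarity (prototype-based pseudo-labeling with a cosine-similarity threshold as a stand-in for the clustering step, and mean-feature interpolation as a stand-in for the intermediate domain). All function names, thresholds, and the interpolation coefficient are hypothetical.

```python
# Hedged sketch, NOT the ADCC implementation: illustrates (1) expanding few-shot
# source labels to unlabeled source features and (2) an intermediate domain
# between source and target. The assignment rule and loss are assumptions.
import numpy as np

def expand_labels_by_clustering(feat_l, y_l, feat_u, n_classes, thresh=0.8):
    """Pseudo-label unlabeled source features whose cosine similarity to the
    nearest class prototype exceeds `thresh` (assumed confidence rule)."""
    # Class prototypes from the few labeled samples.
    protos = np.stack([feat_l[y_l == c].mean(axis=0) for c in range(n_classes)])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8
    fu = feat_u / (np.linalg.norm(feat_u, axis=1, keepdims=True) + 1e-8)
    sims = fu @ protos.T                      # (n_unlabeled, n_classes)
    pseudo = sims.argmax(axis=1)
    keep = sims.max(axis=1) >= thresh         # keep only confident assignments
    return pseudo[keep], feat_u[keep]

def intermediate_domain(feat_src, feat_tgt, lam=0.5):
    """Interpolate source and target feature means to form an intermediate
    domain, and measure how far each domain lies from it (assumed stand-in
    for the adaptive dispersal objective)."""
    mu_s, mu_t = feat_src.mean(axis=0), feat_tgt.mean(axis=0)
    mu_mid = lam * mu_s + (1.0 - lam) * mu_t
    dispersal_loss = (np.linalg.norm(mu_s - mu_mid) ** 2
                      + np.linalg.norm(mu_t - mu_mid) ** 2)
    return mu_mid, dispersal_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feat_l = rng.normal(size=(6, 32))          # few-shot labeled source features
    y_l = np.array([0, 0, 1, 1, 2, 2])
    feat_u = rng.normal(size=(100, 32))        # unlabeled source features
    feat_t = rng.normal(size=(80, 32)) + 0.5   # target features (domain-shifted)
    pseudo, feat_sel = expand_labels_by_clustering(feat_l, y_l, feat_u,
                                                   n_classes=3, thresh=0.1)
    _, loss = intermediate_domain(np.vstack([feat_l, feat_sel]), feat_t)
    print(pseudo.shape, round(float(loss), 3))
```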