{"title":"Robust source-free domain adaptation with anti-adversarial samples training","authors":"","doi":"10.1016/j.neucom.2024.128777","DOIUrl":null,"url":null,"abstract":"<div><div>Unsupervised source-free domain adaptation methods aim to transfer knowledge acquired from labeled source domain to an unlabeled target domain, where the source data are not accessible during target domain adaptation and it is prohibited to minimize domain gap by pairwise calculation of the samples from the source and target domains. Previous approaches assign pseudo label to target data using pre-trained source model to progressively train the target model in a self-learning manner. However, incorrect pseudo label may adversely affect prediction in the target domain. Furthermore, they overlook the generalization ability of the source model, which primarily affects the initial prediction of the target model. Therefore, we propose an effective framework based on adversarial training to train the target model for source-free domain adaptation. Specifically, adversarial training is an effective technique to enhance the robustness of deep neural networks. By generating anti-adversarial examples and adversarial examples, the pseudo label of target data can be corrected further by adversarial training and a more optimal performance in both accuracy and robustness is achieved. Moreover, owing to the inherent domain distribution difference between source and target domains, mislabeled target samples exist inevitably. So a target sample filtering scheme is proposed to refine pseudo label to further improve the prediction capability on the target domain. Experiments conducted on benchmark tasks demonstrate that the proposed method outperforms existing approaches.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":null,"pages":null},"PeriodicalIF":5.5000,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224015480","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Unsupervised source-free domain adaptation methods aim to transfer knowledge acquired from a labeled source domain to an unlabeled target domain, where the source data are not accessible during target domain adaptation, so the domain gap cannot be minimized through pairwise computation over samples from the source and target domains. Previous approaches assign pseudo labels to the target data using the pre-trained source model and progressively train the target model in a self-learning manner. However, incorrect pseudo labels may adversely affect prediction in the target domain. Furthermore, these approaches overlook the generalization ability of the source model, which primarily affects the initial predictions of the target model. Therefore, we propose an effective framework based on adversarial training to train the target model for source-free domain adaptation. Adversarial training is an effective technique for enhancing the robustness of deep neural networks. By generating anti-adversarial examples and adversarial examples, the pseudo labels of the target data can be further corrected through adversarial training, achieving better performance in both accuracy and robustness. Moreover, owing to the inherent distribution difference between the source and target domains, mislabeled target samples inevitably exist. We therefore propose a target sample filtering scheme that refines pseudo labels to further improve prediction on the target domain. Experiments conducted on benchmark tasks demonstrate that the proposed method outperforms existing approaches.
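For intuition, the sketch below illustrates one common way adversarial and anti-adversarial examples can be generated against a sample's pseudo label with an FGSM-style perturbation: the adversarial variant ascends the loss gradient, while the anti-adversarial variant descends it, pulling the sample toward the region the pseudo label predicts. This is a minimal illustration of the general idea only, not the paper's exact procedure; `model`, `x`, `pseudo_label`, and `epsilon` are hypothetical placeholders.

```python
# Minimal PyTorch sketch (assumed setup, not the authors' implementation):
# FGSM-style adversarial vs. anti-adversarial perturbation of a target
# sample with respect to its pseudo label.
import torch
import torch.nn.functional as F

def perturb(model, x, pseudo_label, epsilon=8 / 255, anti=False):
    """Perturb x along (adversarial) or against (anti-adversarial) the
    gradient of the cross-entropy loss computed with the pseudo label."""
    x_pert = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_pert), pseudo_label)
    grad = torch.autograd.grad(loss, x_pert)[0]
    # Adversarial: ascend the loss; anti-adversarial: descend it so the
    # sample becomes easier to classify as its pseudo label.
    sign = -1.0 if anti else 1.0
    x_pert = x_pert + sign * epsilon * grad.sign()
    return torch.clamp(x_pert, 0.0, 1.0).detach()

# Usage sketch: anti-adversarial samples help confirm or correct pseudo
# labels, while adversarial samples are used for robustness training.
# x_anti = perturb(model, x, pseudo_label, anti=True)
# x_adv  = perturb(model, x, pseudo_label, anti=False)
```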
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Its essential topics are neurocomputing theory, practice, and applications.