Discriminative and Contrastive Consistency for Semi-supervised Domain Adaptive Image Classification
Yidan Fan, Wenhuan Lu, Yahong Han
2023 IEEE International Conference on Multimedia and Expo (ICME), July 2023
DOI: 10.1109/ICME55011.2023.00188
Citations: 0
Abstract
With sufficient labeled source data and limited labeled target data, semi-supervised domain adaptation (SSDA) aims to perform well on the unlabeled target domain. Although various strategies have been proposed for SSDA, they fail to fully exploit the limited target labels and to adequately explore domain-invariant knowledge. In this study, we propose a framework that first introduces consistent processing of augmented training data based on contrastive learning. Specifically, supervised contrastive learning is introduced alongside the classical cross-entropy objective to make full use of the limited target labels. Additionally, conventional unsupervised contrastive learning and pseudo-labeling are used to further reduce the intra-domain discrepancy. Finally, an adversarial loss is combined with a sharpening function to obtain a more certain, domain-invariant category center. Experimental results on DomainNet, Office-Home, and Office show the effectiveness of our method. In particular, for the 1-shot setting on Office-Home with AlexNet as the backbone, our method outperforms the previous state of the art by 5.6% in mean accuracy.
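To make the two ingredients named above concrete, the sketch below is a plain-Python rendition of the standard supervised contrastive loss (positives are the other samples sharing the anchor's label) and of a temperature-sharpening function of the kind the abstract describes. This is not the authors' implementation; the function names, the temperature values, and the toy inputs are illustrative assumptions, and embeddings are assumed to be L2-normalized.

```python
import math


def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized embeddings.

    For each anchor i, positives P(i) are the other samples with the
    same label; the denominator ranges over all other samples:
        L_i = -(1/|P(i)|) * sum_{p in P(i)} log(
                  exp(z_i . z_p / tau) / sum_{a != i} exp(z_i . z_a / tau))
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    n = len(embeddings)
    total = 0.0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        # Denominator: similarities to every other sample, any label.
        denom = sum(math.exp(dot(embeddings[i], embeddings[a]) / tau)
                    for a in range(n) if a != i)
        total += -sum(
            math.log(math.exp(dot(embeddings[i], embeddings[p]) / tau) / denom)
            for p in positives
        ) / len(positives)
    return total / n


def sharpen(probs, temperature=0.5):
    """Temperature sharpening: raise each probability to 1/T and
    renormalize; T < 1 pushes the distribution toward its mode,
    yielding a more confident (more certain) prediction."""
    powered = [p ** (1.0 / temperature) for p in probs]
    total = sum(powered)
    return [p / total for p in powered]
```

As a sanity check, embeddings that cluster tightly by class yield a lower supervised contrastive loss than the same labels with spread-out embeddings, and sharpening `[0.6, 0.3, 0.1]` with T = 0.5 raises the leading probability above 0.6 while the result still sums to one.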