{"title":"无监督域自适应的强弱综合半监督","authors":"Xiaohu Lu, H. Radha","doi":"10.1109/ICIP46576.2022.9897242","DOIUrl":null,"url":null,"abstract":"Unsupervised domain adaptation (UDA) focuses on transferring knowledge learned in the labeled source domain to the unlabeled target domain. Semi-supervised learning is a proven strategy for improving UDA performance. In this paper, we propose a novel strong-weak integrated semi-supervision (SWISS) learning strategy for unsupervised domain adaptation. Under the proposed SWISSUDA framework, a strong representative set with high confidence but low diversity target domain samples and a weak representative set with low confidence but high diversity target domain samples are updated constantly during the training process. Both sets are fused randomly to generate an augmented strong-weak training batch with pseudo-labels to train the network during every iteration. Moreover, a novel adversarial logit loss is proposed to reduce the intra-class divergence between source and target domains, which is back-propagated adversarially with a gradient reverse layer between the classifier and the rest of the network. Experimental results based on two popular benchmarks, Office-Home, and DomainNet, show the effectiveness of the proposed SWISS framework with our method achieving the best performance in both Office-Home and DomainNet.","PeriodicalId":387035,"journal":{"name":"2022 IEEE International Conference on Image Processing (ICIP)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Strong-Weak Integrated Semi-Supervision for Unsupervised Domain Adaptation\",\"authors\":\"Xiaohu Lu, H. Radha\",\"doi\":\"10.1109/ICIP46576.2022.9897242\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Unsupervised domain adaptation (UDA) focuses on transferring knowledge learned in the labeled source domain to the unlabeled target domain. Semi-supervised learning is a proven strategy for improving UDA performance. In this paper, we propose a novel strong-weak integrated semi-supervision (SWISS) learning strategy for unsupervised domain adaptation. Under the proposed SWISSUDA framework, a strong representative set with high confidence but low diversity target domain samples and a weak representative set with low confidence but high diversity target domain samples are updated constantly during the training process. Both sets are fused randomly to generate an augmented strong-weak training batch with pseudo-labels to train the network during every iteration. Moreover, a novel adversarial logit loss is proposed to reduce the intra-class divergence between source and target domains, which is back-propagated adversarially with a gradient reverse layer between the classifier and the rest of the network. 
Experimental results based on two popular benchmarks, Office-Home, and DomainNet, show the effectiveness of the proposed SWISS framework with our method achieving the best performance in both Office-Home and DomainNet.\",\"PeriodicalId\":387035,\"journal\":{\"name\":\"2022 IEEE International Conference on Image Processing (ICIP)\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Image Processing (ICIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIP46576.2022.9897242\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Image Processing (ICIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP46576.2022.9897242","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Strong-Weak Integrated Semi-Supervision for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) focuses on transferring knowledge learned in a labeled source domain to an unlabeled target domain. Semi-supervised learning is a proven strategy for improving UDA performance. In this paper, we propose a novel strong-weak integrated semi-supervision (SWISS) learning strategy for unsupervised domain adaptation. Under the proposed SWISSUDA framework, a strong representative set of high-confidence but low-diversity target-domain samples and a weak representative set of low-confidence but high-diversity target-domain samples are updated constantly during training. At every iteration, the two sets are fused randomly to generate an augmented strong-weak training batch with pseudo-labels for training the network. Moreover, a novel adversarial logit loss is proposed to reduce the intra-class divergence between the source and target domains; it is back-propagated adversarially through a gradient reversal layer placed between the classifier and the rest of the network. Experimental results on two popular benchmarks, Office-Home and DomainNet, demonstrate the effectiveness of the proposed SWISS framework, with our method achieving the best performance on both.
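To make the two mechanisms mentioned in the abstract concrete, the sketch below shows, in PyTorch, (1) a standard gradient reversal layer of the kind the adversarial logit loss would be back-propagated through, and (2) a random fusion of pseudo-labeled samples from the strong and weak representative sets into one training batch. This is a minimal illustration, not the authors' released code: the function names, the `strong_ratio` hyperparameter, and the assumption that each set stores `(image, pseudo_label)` pairs are all illustrative choices, and the abstract does not specify how confidence and diversity are measured when the sets are built.

```python
# Illustrative sketch only; names, ratios, and data layout are assumptions,
# not the SWISS authors' implementation.
import random
import torch


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient backward,
    so the preceding layers are updated to *increase* the downstream loss."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing back to the feature extractor.
        return -ctx.lambd * grad_output, None


def fuse_strong_weak_batch(strong_set, weak_set, batch_size, strong_ratio=0.5):
    """Randomly mix pseudo-labeled target samples from the strong set
    (high confidence, low diversity) and the weak set (low confidence,
    high diversity) into one augmented training batch.

    Each set is assumed to be a list of (image_tensor, pseudo_label) pairs;
    `strong_ratio` is an assumed hyperparameter controlling the mix.
    """
    n_strong = int(batch_size * strong_ratio)
    picks = random.sample(strong_set, min(n_strong, len(strong_set))) + \
            random.sample(weak_set, min(batch_size - n_strong, len(weak_set)))
    random.shuffle(picks)
    images = torch.stack([img for img, _ in picks])
    pseudo_labels = torch.tensor([lbl for _, lbl in picks])
    return images, pseudo_labels


# Usage sketch: insert the reversal between the backbone features and the
# classifier so that an adversarial loss on the logits pushes the backbone
# in the opposite direction, e.g.
#   feats = backbone(images)
#   logits = classifier(GradientReversal.apply(feats, 0.1))
```

The gradient reversal layer here is the standard DANN-style construct; the specific form of the adversarial logit loss used in SWISS is not given in the abstract and is therefore not reproduced above.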