Ya Ma, Biao Chen, Ziwei Li, Gang Bai
2022 14th International Conference on Machine Learning and Computing (ICMLC), published 2022-02-18
DOI: 10.1145/3529836.3529858
Self-supervised Domain Adaptation Model Based on Contrastive Learning
Contrastive learning is a typical discriminative self-supervised learning method that can learn from unlabeled data. Unsupervised domain adaptation (UDA) aims to predict labels for unlabeled target-domain data. In this paper, we propose siam-DAN, a self-supervised domain adaptation model that applies the idea of contrastive learning to UDA. The model first uses clustering to obtain pseudo-labels for the target-domain data, then combines them with the labeled source-domain data to construct the positive and negative pairs required for contrastive training. This pushes the representations of same-class samples from both domains to overlap as much as possible, which in turn enables the model to learn domain-invariant features. We evaluate the proposed model on three public benchmarks, Office-31, Office-Home, and VisDA-2017, and achieve competitive results.
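The two steps the abstract describes, clustering the target domain to get pseudo-labels and then training with a contrastive loss whose positives are same-label pairs across both domains, can be sketched roughly as below. This is a minimal illustration of the general recipe, not the authors' implementation: the function names (`pseudo_label_target`, `supervised_contrastive_loss`), the choice of k-means, and the temperature value are all assumptions made here for clarity.

```python
import numpy as np
from sklearn.cluster import KMeans


def pseudo_label_target(target_feats, n_classes, seed=0):
    """Step 1 (assumed clustering method): k-means over target-domain
    features produces a pseudo-label per unlabeled target sample."""
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    return km.fit_predict(target_feats)


def supervised_contrastive_loss(feats, labels, temperature=0.5):
    """Step 2: contrastive loss over a mixed source/target batch.
    `labels` holds true labels for source samples and pseudo-labels for
    target samples; same-label pairs act as positives, all other pairs
    as negatives, pulling same-class representations together."""
    # L2-normalize so similarity is cosine similarity
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    n = len(labels)
    # positives: same label, excluding each sample paired with itself
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # mask out self-similarity before the softmax over all other samples
    logits = sim - np.eye(n) * 1e9
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # mean log-probability of positives per anchor (anchors with no
    # positive in the batch are skipped)
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    loss = -(pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

In a full training loop, `feats` would come from a shared encoder applied to both domains, the loss would be minimized by gradient descent, and the pseudo-labels would typically be refreshed periodically as the representation improves; well-clustered pseudo-labels yield a lower loss than mismatched ones, which is what drives same-class source and target samples to overlap in the representation space.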