{"title":"Iterative Image Translation for Unsupervised Domain Adaptation","authors":"S. Chhabra, Hemanth Venkateswara, Baoxin Li","doi":"10.1145/3476098.3485050","DOIUrl":null,"url":null,"abstract":"In this paper, we propose an image-translation-based unsupervised domain adaptation approach that iteratively trains an image translation and a classification network using each other. In Phase A, a classification network is used to guide the image translation to preserve the content and generate images. In Phase B, the generated images are used to train the classification network. With each step, the classification network and generator improve each other to learn the target domain representation. Detailed analysis and the experiments are testimony of the strength of our approach.","PeriodicalId":390904,"journal":{"name":"Multimedia Understanding with Less Labeling on Multimedia Understanding with Less Labeling","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multimedia Understanding with Less Labeling on Multimedia Understanding with Less Labeling","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3476098.3485050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In this paper, we propose an image-translation-based unsupervised domain adaptation approach that iteratively trains an image-translation network and a classification network, each using the other. In Phase A, the classification network guides the image translation to preserve content while generating images. In Phase B, the generated images are used to train the classification network. With each step, the classification network and the generator improve each other to learn the target-domain representation. Detailed analysis and experiments demonstrate the strength of our approach.
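The alternating structure of the two phases can be illustrated on a toy problem. The sketch below is a minimal, hypothetical analogue of the loop described in the abstract, not the paper's actual networks: the "translator" is a single learned shift that aligns target data with source statistics (Phase A), and the "classifier" is a threshold retrained on source labels plus pseudo-labeled translated target data (Phase B). All names and the mean-matching/midpoint updates are illustrative assumptions.

```python
import random

random.seed(0)

# Toy domains: source is U(-1, 1) with label = (x > 0);
# target is the same distribution shifted by +2, unlabeled.
src_x = [random.uniform(-1, 1) for _ in range(200)]
src_y = [1 if x > 0 else 0 for x in src_x]
tgt_x = [x + 2.0 for x in [random.uniform(-1, 1) for _ in range(200)]]

shift = 0.0   # "translator": t(x) = x - shift (stand-in for the generator)
theta = 0.0   # "classifier": predict 1 if x > theta

def mean(xs):
    return sum(xs) / len(xs)

for _ in range(20):
    # Phase A: update the translator so translated target data matches the
    # source statistics (here: first moments), with the classifier fixed.
    translated = [x - shift for x in tgt_x]
    shift += 0.5 * (mean(translated) - mean(src_x))

    # Phase B: retrain the classifier on source labels plus pseudo-labeled
    # translated target data (threshold = midpoint of the class means).
    translated = [x - shift for x in tgt_x]
    pseudo = [1 if x > theta else 0 for x in translated]
    xs, ys = src_x + translated, src_y + pseudo
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    theta = (mean(pos) + mean(neg)) / 2.0

# After alternating updates, the translator has approximately undone the
# domain shift and the classifier transfers to translated target data.
tgt_true = [1 if (x - 2.0) > 0 else 0 for x in tgt_x]
preds = [1 if (x - shift) > theta else 0 for x in tgt_x]
acc = sum(p == t for p, t in zip(preds, tgt_true)) / len(preds)
```

Each pass mirrors the paper's iteration: the fixed classifier constrains how the translator may move the target data (Phase A), and the improved translation in turn supplies better training data for the classifier (Phase B), so both components improve together.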