Uniform low-rank representation for unsupervised visual domain adaptation
Pengcheng Liu, Peipei Yang, Kaiqi Huang, T. Tan
2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), November 2015
DOI: 10.1109/ACPR.2015.7486497
Citations: 3
Abstract
Visual domain adaptation, which aims to adapt a model learned on a source domain to a target domain, has received much attention in recent years. In this paper, we propose an unsupervised domain adaptation method based on a uniform low-rank representation that captures the intrinsic relationships among source and target samples while suppressing the disturbance caused by noise and outliers. Specifically, we first align the source and target samples in a common subspace using a subspace alignment technique. We then learn a domain-invariant dictionary from the transformed source and target samples. Finally, all transformed samples are encoded by a low-rank representation over the learned dictionary. Extensive experimental results show that our method effectively reduces the domain difference, and it achieves state-of-the-art performance on a widely used visual domain adaptation benchmark.
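The first stage of the pipeline described above — aligning the two domains in a common subspace — can be sketched as follows. This is a minimal illustration of a standard PCA-based subspace alignment scheme (in the spirit of Fernando et al.'s method), not the authors' exact formulation; the function name, dimensions, and data are hypothetical.

```python
import numpy as np

def subspace_alignment(Xs, Xt, d):
    """Project source and target samples into a common d-dim subspace.

    A sketch of generic subspace alignment: compute a PCA basis per
    domain, then map the source basis onto the target basis with the
    alignment matrix M = Ps^T Pt.

    Xs: (ns, D) source samples; Xt: (nt, D) target samples.
    Returns aligned source features Zs and target features Zt.
    """
    def pca_basis(X, d):
        # Top-d principal directions of the centered data.
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T                  # (D, d) orthonormal basis

    Ps = pca_basis(Xs, d)                # source subspace basis
    Pt = pca_basis(Xt, d)                # target subspace basis
    M = Ps.T @ Pt                        # (d, d) alignment matrix
    Zs = Xs @ (Ps @ M)                   # source projected, then aligned
    Zt = Xt @ Pt                         # target projected onto own basis
    return Zs, Zt
```

After this alignment step, both `Zs` and `Zt` live in the same d-dimensional space, so a shared dictionary can be learned over their union and every sample can be given a low-rank representation with respect to it, as the abstract describes.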