VisDA: A Synthetic-to-Real Benchmark for Visual Domain Adaptation
Xingchao Peng, Ben Usman, Neela Kaushik, Dequan Wang, Judy Hoffman, Kate Saenko
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2018. DOI: 10.1109/CVPRW.2018.00271
The success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. However, real training images are expensive to collect and annotate for both computer vision and robotic applications. Synthetic images are easy to generate, but model performance often drops significantly on data from a new deployment domain, a problem known as dataset shift or dataset bias. Changes in the visual domain can include variation in lighting, camera pose, and background, as well as broader changes in how the image data is collected. While this problem has been studied extensively in the domain adaptation literature, progress has been limited by the lack of large-scale challenge benchmarks.
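As a minimal, self-contained sketch (not taken from the paper), the toy Python example below illustrates the dataset-shift effect the benchmark targets: a classifier trained on a "source" (synthetic-like) distribution loses accuracy on a shifted "target" (real-like) distribution. All names and parameters here are illustrative assumptions.

# Toy illustration of dataset shift: a classifier trained on "source" data
# degrades on "target" data drawn from a shifted distribution, mimicking the
# synthetic-to-real gap that VisDA is designed to measure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(n, shift):
    # Two-class Gaussian data; `shift` moves the whole domain, emulating a
    # change in lighting/background statistics between domains.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(n, 2)) + shift
    return X, y

X_src, y_src = make_domain(2000, shift=0.0)   # "synthetic" source domain
X_tgt, y_tgt = make_domain(2000, shift=1.5)   # "real" target domain (shifted)

clf = LogisticRegression().fit(X_src, y_src)
print("source accuracy:", clf.score(X_src, y_src))
print("target accuracy:", clf.score(X_tgt, y_tgt))  # noticeably lower

Running this shows the target-domain accuracy falling well below the source-domain accuracy even though the labeling rule is unchanged, which is the failure mode domain adaptation methods evaluated on VisDA aim to reduce.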