{"title":"基于原型对比学习的弱监督域自适应语义分割","authors":"Anurag Das, Yongqin Xian, Dengxin Dai, B. Schiele","doi":"10.1109/CVPR52729.2023.01481","DOIUrl":null,"url":null,"abstract":"There has been a lot of effort in improving the performance of unsupervised domain adaptation for semantic segmentation task, however, there is still a huge gap in performance when compared with supervised learning. In this work, we propose a common framework to use different weak labels, e.g., image, point and coarse labels from the target domain to reduce this performance gap. Specifically, we propose to learn better prototypes that are representative class features by exploiting these weak labels. We use these improved prototypes for the contrastive alignment of class features. In particular, we perform two different feature alignments: first, we align pixel features with proto-types within each domain and second, we align pixel features from the source to prototype of target domain in an asymmetric way. This asymmetric alignment is beneficial as it preserves the target features during training, which is essential when weak labels are available from the target domain. Our experiments on various benchmarks show that our framework achieves significant improvement compared to existing works and can reduce the performance gap with supervised learning. Code will be available at https://github.com/anurag-198/WDASS.","PeriodicalId":376416,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Weakly-Supervised Domain Adaptive Semantic Segmentation with Prototypical Contrastive Learning\",\"authors\":\"Anurag Das, Yongqin Xian, Dengxin Dai, B. Schiele\",\"doi\":\"10.1109/CVPR52729.2023.01481\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There has been a lot of effort in improving the performance of unsupervised domain adaptation for semantic segmentation task, however, there is still a huge gap in performance when compared with supervised learning. In this work, we propose a common framework to use different weak labels, e.g., image, point and coarse labels from the target domain to reduce this performance gap. Specifically, we propose to learn better prototypes that are representative class features by exploiting these weak labels. We use these improved prototypes for the contrastive alignment of class features. In particular, we perform two different feature alignments: first, we align pixel features with proto-types within each domain and second, we align pixel features from the source to prototype of target domain in an asymmetric way. This asymmetric alignment is beneficial as it preserves the target features during training, which is essential when weak labels are available from the target domain. Our experiments on various benchmarks show that our framework achieves significant improvement compared to existing works and can reduce the performance gap with supervised learning. 
Code will be available at https://github.com/anurag-198/WDASS.\",\"PeriodicalId\":376416,\"journal\":{\"name\":\"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CVPR52729.2023.01481\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR52729.2023.01481","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
There has been much effort to improve unsupervised domain adaptation for the semantic segmentation task; however, a large performance gap remains compared with supervised learning. In this work, we propose a common framework that uses different weak labels from the target domain, e.g., image-level, point, and coarse labels, to reduce this performance gap. Specifically, we propose to learn better prototypes, i.e., representative class features, by exploiting these weak labels. We use these improved prototypes for the contrastive alignment of class features. In particular, we perform two different feature alignments: first, we align pixel features with prototypes within each domain, and second, we align pixel features from the source domain to prototypes of the target domain in an asymmetric way. This asymmetric alignment is beneficial because it preserves the target features during training, which is essential when weak labels are available from the target domain. Our experiments on various benchmarks show that our framework achieves significant improvement over existing works and reduces the performance gap with supervised learning. Code will be available at https://github.com/anurag-198/WDASS.
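To make the two alignments concrete, the sketch below is an illustrative assumption, not the authors' released implementation: class prototypes are taken as the mean of pixel features sharing a (weak or pseudo) label, each pixel feature is pulled toward its own class prototype and pushed away from the others with an InfoNCE-style loss, and the asymmetric source-to-target alignment is modelled by detaching the target prototypes so gradients do not alter target features. All function names, tensor shapes, and the temperature value are hypothetical.

```python
# Minimal sketch of prototypical contrastive alignment, assuming per-pixel
# features of shape (N, D) and integer class labels of shape (N,) derived
# from weak or pseudo labels. Names and hyperparameters are hypothetical,
# not taken from the paper's released code.
import torch
import torch.nn.functional as F


def class_prototypes(feats, labels, num_classes):
    """Mean (then L2-normalized) feature per class; classes absent from the
    batch are left as zero vectors."""
    protos = feats.new_zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return F.normalize(protos, dim=1)


def proto_contrastive_loss(feats, labels, protos, temperature=0.1):
    """InfoNCE-style loss: each pixel feature is pulled toward the prototype
    of its own class and pushed away from the other class prototypes."""
    feats = F.normalize(feats, dim=1)
    logits = feats @ protos.t() / temperature  # (N, num_classes)
    return F.cross_entropy(logits, labels)


# Within-domain alignment: pixels vs. prototypes of the same domain.
# Asymmetric cross-domain alignment: source pixels vs. *detached* target
# prototypes, so gradients do not move the target features.
def asymmetric_alignment_loss(src_feats, src_labels, tgt_protos, temperature=0.1):
    return proto_contrastive_loss(src_feats, src_labels, tgt_protos.detach(), temperature)


if __name__ == "__main__":
    torch.manual_seed(0)
    num_classes, dim = 19, 64
    src_feats, tgt_feats = torch.randn(1024, dim), torch.randn(1024, dim)
    src_labels = torch.randint(0, num_classes, (1024,))
    tgt_labels = torch.randint(0, num_classes, (1024,))  # stand-in for weak labels

    tgt_protos = class_prototypes(tgt_feats, tgt_labels, num_classes)
    loss = (proto_contrastive_loss(tgt_feats, tgt_labels, tgt_protos)
            + asymmetric_alignment_loss(src_feats, src_labels, tgt_protos))
    print(float(loss))
```

The `detach()` on the target prototypes is the one design choice meant to mirror the abstract's asymmetry argument: target features, supervised by the available weak labels, are treated as the reference and are not pushed around by source-domain gradients.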