Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation

J. Niemeijer, J. P. Schäfer
{"title":"结合语义自监督和自训练的语义分割领域适应","authors":"J. Niemeijer, J. P. Schäfer","doi":"10.1109/ivworkshops54471.2021.9669255","DOIUrl":null,"url":null,"abstract":"This work presents a two-staged, unsupervised domain adaptation process for semantic segmentation models by combining a self-training and self-supervision strategy. Self-training (i. e., training a model on self-inferred pseudo-labels) yields competitive results for domain adaptation in recent research. However, self-training depends on high-quality pseudo-labels. On the other hand, self-supervision trains the model on a surrogate task and improves its performance on the target domain without further prerequisites.Therefore, our approach improves the model’s performance on the target domain with a novel surrogate task. To that, we continuously determine class centroids of the feature representations in the network’s pre-logit layer on the source domain. Our surrogate task clusters the pre-logit feature representations on the target domain regarding these class centroids during both training stages. After the first stage, the resulting model delivers improved pseudo-labels for the additional self-training in the second stage. We evaluate our method on two different domain adaptions, a real-world domain change from Cityscapes to the Berkeley Deep Drive dataset and a synthetic to real-world domain change from GTA5 to the Cityscapes dataset. For the real-world domain change, the evaluation shows a significant improvement of the model from 46% mIoU to 54% mIoU on the target domain. For the synthetic to real-world domain change, we achieve an improvement from 38.8% to 46.42% on the real-world target domain.","PeriodicalId":256905,"journal":{"name":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation\",\"authors\":\"J. Niemeijer, J. P. Schäfer\",\"doi\":\"10.1109/ivworkshops54471.2021.9669255\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work presents a two-staged, unsupervised domain adaptation process for semantic segmentation models by combining a self-training and self-supervision strategy. Self-training (i. e., training a model on self-inferred pseudo-labels) yields competitive results for domain adaptation in recent research. However, self-training depends on high-quality pseudo-labels. On the other hand, self-supervision trains the model on a surrogate task and improves its performance on the target domain without further prerequisites.Therefore, our approach improves the model’s performance on the target domain with a novel surrogate task. To that, we continuously determine class centroids of the feature representations in the network’s pre-logit layer on the source domain. Our surrogate task clusters the pre-logit feature representations on the target domain regarding these class centroids during both training stages. After the first stage, the resulting model delivers improved pseudo-labels for the additional self-training in the second stage. We evaluate our method on two different domain adaptions, a real-world domain change from Cityscapes to the Berkeley Deep Drive dataset and a synthetic to real-world domain change from GTA5 to the Cityscapes dataset. 
For the real-world domain change, the evaluation shows a significant improvement of the model from 46% mIoU to 54% mIoU on the target domain. For the synthetic to real-world domain change, we achieve an improvement from 38.8% to 46.42% on the real-world target domain.\",\"PeriodicalId\":256905,\"journal\":{\"name\":\"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ivworkshops54471.2021.9669255\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ivworkshops54471.2021.9669255","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

This work presents a two-stage, unsupervised domain adaptation process for semantic segmentation models that combines a self-training and a self-supervision strategy. Self-training (i.e., training a model on self-inferred pseudo-labels) has yielded competitive domain adaptation results in recent research. However, self-training depends on high-quality pseudo-labels. Self-supervision, on the other hand, trains the model on a surrogate task and improves its performance on the target domain without further prerequisites. Our approach therefore improves the model's performance on the target domain with a novel surrogate task. To that end, we continuously determine class centroids of the feature representations in the network's pre-logit layer on the source domain. During both training stages, the surrogate task clusters the pre-logit feature representations on the target domain around these class centroids. After the first stage, the resulting model delivers improved pseudo-labels for the additional self-training in the second stage. We evaluate our method on two different domain adaptations: a real-world domain change from Cityscapes to the Berkeley Deep Drive dataset, and a synthetic-to-real domain change from GTA5 to the Cityscapes dataset. For the real-world domain change, the evaluation shows a significant improvement of the model from 46% mIoU to 54% mIoU on the target domain. For the synthetic-to-real domain change, we achieve an improvement from 38.8% to 46.42% mIoU on the real-world target domain.
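To make the surrogate task concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: per-class centroids of pre-logit features are maintained as a running average on labeled source batches, and target-domain features are pulled toward their nearest centroid. The class/feature dimensions, the EMA momentum, the cosine-similarity formulation, and the temperature are all assumptions not stated in the abstract; the paper's exact loss may differ.

```python
# Sketch only: the momentum, temperature, and loss form are assumptions,
# not the paper's confirmed formulation.
import torch
import torch.nn.functional as F

class CentroidBank:
    """Exponential moving average of per-class centroids of pre-logit
    features, updated continuously on source-domain batches."""

    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.99):
        self.momentum = momentum  # assumed EMA momentum
        self.centroids = torch.zeros(num_classes, feat_dim)

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # feats: (N, feat_dim) pre-logit features of source-domain pixels
        # labels: (N,) ground-truth class indices for those pixels
        for c in labels.unique():
            mean_c = feats[labels == c].mean(dim=0)
            self.centroids[c] = (
                self.momentum * self.centroids[c] + (1 - self.momentum) * mean_c
            )

def clustering_loss(feats: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Surrogate loss on target-domain features: pull each feature toward
    its nearest source-domain class centroid (a soft clustering objective)."""
    centroids = centroids.to(feats.device)
    # Cosine similarity between every feature and every class centroid: (N, C).
    sim = F.normalize(feats, dim=1) @ F.normalize(centroids, dim=1).t()
    # Treat the nearest centroid as a pseudo-assignment and sharpen the
    # softmax-normalized similarity toward it (0.1 is an assumed temperature).
    assign = sim.argmax(dim=1)
    return F.cross_entropy(sim / 0.1, assign)
```

Because the loss needs no target-domain labels, it can be applied in both training stages exactly as the abstract describes, alongside the supervised loss on the source domain.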
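The abstract does not detail how the stage-one model's predictions become pseudo-labels for stage-two self-training. A common approach, shown here purely as an illustrative sketch, is to keep only high-confidence pixel predictions; the threshold value and ignore index are assumptions.

```python
# Hypothetical confidence-thresholded pseudo-labeling; the paper may
# select pseudo-labels differently.
import torch

@torch.no_grad()
def make_pseudo_labels(logits: torch.Tensor, threshold: float = 0.9,
                       ignore_index: int = 255) -> torch.Tensor:
    """logits: (B, C, H, W) target-domain predictions from the stage-one model.
    Returns per-pixel pseudo-labels with low-confidence pixels masked out."""
    probs = logits.softmax(dim=1)
    conf, labels = probs.max(dim=1)            # confidence and argmax class, (B, H, W)
    labels[conf < threshold] = ignore_index    # drop unreliable pixels from the loss
    return labels
```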