CDAda: A Curriculum Domain Adaptation for Nighttime Semantic Segmentation

Authors: Qi Xu, Yinan Ma, Jing Wu, C. Long, Xiaolin Huang
DOI: 10.1109/ICCVW54120.2021.00331
Published in: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), October 2021

Abstract: Autonomous driving must remain safe in all weather, especially in unfavorable conditions such as night and rain. However, current daytime-trained semantic segmentation networks suffer significant performance degradation at night because of the large domain divergence. In this paper, we propose a novel Curriculum Domain Adaptation method (CDAda) to realize a smooth transfer of semantic knowledge from daytime to nighttime. It consists of two steps: 1) inter-domain style adaptation: fine-tune the daytime-trained model on labeled synthetic nighttime images produced by the proposed frequency-based style transformation (replacing the low-frequency components of daytime images with those of nighttime images); 2) intra-domain gradual self-training: separate the nighttime domain into an easy split and a hard split based on an "entropy + illumination" ranking principle, then gradually adapt the model to the two sub-domains through pseudo supervision on the easy split and entropy minimization on the hard split. To the best of our knowledge, we are the first to extend the idea of intra-domain adaptation to self-training, and we show that treating the two splits differently reduces the distribution divergence within the nighttime domain itself. In particular, for the unlabeled day-night image pairs we adopt, predictions on the daytime images can guide segmentation of the nighttime images by enforcing patch-level consistency. Extensive experiments on the Nighttime Driving, Dark Zurich, and BDD100K-night datasets demonstrate the effectiveness of our approach, which achieves 50.9%, 45.0%, and 33.8% mean IoU, respectively, surpassing existing state-of-the-art approaches.
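The frequency-based style transformation in step 1 can be sketched with NumPy FFTs. The abstract only says that low-frequency components of daytime images are replaced with those of nighttime images; the choice below of swapping the low-frequency *amplitude* spectrum (keeping daytime phase) inside a centered square window, and the window-size parameter `beta`, are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def low_freq_swap(day_img, night_img, beta=0.05):
    """Replace the low-frequency FFT amplitude of a daytime image with
    that of a nighttime image, transferring nighttime 'style' while
    keeping daytime content. `beta` (assumed) scales the half-width of
    the swapped low-frequency window."""
    # Per-channel 2D FFT; shift the zero frequency to the array center
    fft_day = np.fft.fftshift(np.fft.fft2(day_img, axes=(0, 1)), axes=(0, 1))
    fft_night = np.fft.fftshift(np.fft.fft2(night_img, axes=(0, 1)), axes=(0, 1))

    h, w = day_img.shape[:2]
    b = int(np.floor(min(h, w) * beta))   # half-size of the low-freq window
    ch, cw = h // 2, w // 2

    # Swap the amplitude of the centered low-frequency square;
    # the daytime phase (structure/content) is kept unchanged.
    amp_day, pha_day = np.abs(fft_day), np.angle(fft_day)
    amp_night = np.abs(fft_night)
    amp_day[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_night[ch - b:ch + b + 1, cw - b:cw + b + 1]

    fft_mix = amp_day * np.exp(1j * pha_day)
    out = np.fft.ifft2(np.fft.ifftshift(fft_mix, axes=(0, 1)), axes=(0, 1))
    return np.real(out)
```

Because only the lowest frequencies are touched, edges and fine textures of the daytime image survive, while global illumination statistics shift toward the nighttime image — which is why the synthesized images can reuse the daytime labels.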
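The "entropy + illumination" ranking of step 2 can be illustrated as follows. The abstract does not give the weighting between the two cues or the split ratio, so `lam`, the mean-intensity illumination proxy, and the 50/50 split are all assumptions; the idea is simply that low prediction entropy and high brightness mark an image as "easy".

```python
import numpy as np

def prediction_entropy(prob_map, eps=1e-8):
    """Mean per-pixel entropy of a softmax probability map (H, W, C)."""
    ent = -np.sum(prob_map * np.log(prob_map + eps), axis=-1)
    return float(ent.mean())

def split_easy_hard(prob_maps, images, lam=0.5, ratio=0.5):
    """Rank nighttime images by a combined entropy/illumination score
    (lower entropy and higher brightness => easier) and split into an
    easy and a hard subset. `lam` and `ratio` are illustrative."""
    scores = []
    for p, img in zip(prob_maps, images):
        ent = prediction_entropy(p)
        illum = float(img.mean())          # mean intensity as illumination proxy
        scores.append(lam * ent - (1 - lam) * illum)  # lower score = easier
    order = np.argsort(scores)
    k = int(len(order) * ratio)
    return order[:k], order[k:]            # indices of easy, hard images
```

The curriculum then adapts to the easy subset first, so that pseudo-labels are harvested where the model is already confident before the harder, darker images are tackled.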
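The two intra-domain objectives — pseudo supervision on the easy split and entropy minimization on the hard split — can be sketched as NumPy loss computations. The 0.9 confidence threshold is an assumed value, and a real training loop would compute these on framework tensors with gradients; this only shows the shape of each loss.

```python
import numpy as np

def pseudo_label_loss(prob_map, thresh=0.9, eps=1e-8):
    """Easy split: cross-entropy against argmax pseudo-labels, computed
    only on pixels whose top confidence exceeds `thresh` (assumed).
    Against its own argmax label, CE per pixel reduces to -log(max prob)."""
    conf = prob_map.max(axis=-1)
    mask = conf >= thresh
    if not mask.any():
        return 0.0
    return float(-np.log(conf[mask] + eps).mean())

def entropy_min_loss(prob_map, eps=1e-8):
    """Hard split: mean per-pixel prediction entropy. Minimizing it
    sharpens predictions without requiring any labels."""
    ent = -np.sum(prob_map * np.log(prob_map + eps), axis=-1)
    return float(ent.mean())
```

Used together, the first loss exploits reliable predictions on easier nighttime images, while the second nudges the model toward confident outputs on the hardest, darkest images where pseudo-labels would be too noisy to trust.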