Category Differences Matter: A Broad Analysis of Inter-Category Error in Semantic Segmentation

Jingxing Zhou, Jürgen Beyerer
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023
DOI: 10.1109/CVPRW59228.2023.00401
In current evaluation schemes for semantic segmentation, metrics are computed as if every predicted class were equally required to match its ground truth, paying little attention to how false predictions manifest with respect to the object category. In this work, we propose the Critical Error Rate (CER) as a supplement to existing evaluation metrics; it measures the rate of predictions that fall outside the category of the ground-truth class. We conduct a series of experiments evaluating the behavior of different network architectures under various evaluation setups, including domain shift, the introduction of novel classes, and mixtures of the two. With these experiments, we demonstrate essential criteria for network generalization. Furthermore, we ablate the impact of using different class taxonomies on the evaluation of out-of-category error.
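The abstract describes the Critical Error Rate as the rate of predictions whose class lies outside the category of the ground-truth class. The paper's exact formula and taxonomy are not given here, so the sketch below is one plausible reading: a hypothetical class-to-category mapping (in the spirit of Cityscapes-style category groups) and a CER computed as the fraction of labeled pixels whose predicted class falls in a different category than the ground truth.

```python
# Illustrative sketch only: the class IDs, category names, and the exact
# normalization of CER are assumptions, not taken from the paper.
CLASS_TO_CATEGORY = {
    0: "flat",     # e.g. road
    1: "flat",     # e.g. sidewalk
    2: "human",    # e.g. person
    3: "human",    # e.g. rider
    4: "vehicle",  # e.g. car
    5: "vehicle",  # e.g. truck
}

def critical_error_rate(pred, gt):
    """Fraction of pixels whose predicted class belongs to a different
    category than the ground-truth class (out-of-category errors).

    Within-category confusions (e.g. car vs. truck) do not count as
    critical errors under this reading.
    """
    if len(pred) != len(gt) or not gt:
        raise ValueError("pred and gt must be non-empty and equal-length")
    critical = sum(
        CLASS_TO_CATEGORY[p] != CLASS_TO_CATEGORY[g]
        for p, g in zip(pred, gt)
    )
    return critical / len(gt)

# Confusing car (4) with truck (5) stays within "vehicle", so only the
# pixel predicted as person (2) on a road (0) label is critical: CER = 1/4.
print(critical_error_rate([0, 1, 4, 2], [0, 0, 5, 0]))  # → 0.25
```

Under this interpretation, mIoU would still penalize the sidewalk/road and car/truck confusions, while CER isolates the semantically severe, cross-category mistakes the abstract focuses on.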