{"title":"Comparative Analysis of Deep Semantic Segmentation Networks Sensitivity to Input Noise","authors":"Silviu-Dumitru Paval, M. Craus","doi":"10.1109/ICSTCC55426.2022.9931858","DOIUrl":null,"url":null,"abstract":"In this paper we are analyzing the sensitivity of deep semantic segmentation networks output with respect to input image augmentations. Our goal is to introduce a new method for measuring the sensitivity of deep neural networks doing semantic segmentation and thus determine how stable these networks are when image alterations are encountered during real life image capturing. To achieve our goal, we construct a sensitivity analysis model and apply it to some commonly used semantic segmentation CNN architectures (PSPNet, ICNet, DeepLabV3) across a couple types of image degradations. Extrapolating the results we obtain would allow for estimating various CNN models performance for new domains comprised by images of lower quality (captured with different types of camera or light conditions). Our specific experiments for semantic segmentation task revealed that DeepLabV3is more stable to input image degradations than PSPNet and ICNet. However, for some object classes even DeepLabV3is seriously affected by the input noise.","PeriodicalId":220845,"journal":{"name":"2022 26th International Conference on System Theory, Control and Computing (ICSTCC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 26th International Conference on System Theory, Control and Computing (ICSTCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSTCC55426.2022.9931858","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper we analyze the sensitivity of the output of deep semantic segmentation networks with respect to input image augmentations. Our goal is to introduce a new method for measuring the sensitivity of deep neural networks performing semantic segmentation, and thus to determine how stable these networks are when image alterations are encountered during real-life image capture. To achieve this goal, we construct a sensitivity analysis model and apply it to several commonly used semantic segmentation CNN architectures (PSPNet, ICNet, DeepLabV3) across a few types of image degradation. Extrapolating the results we obtain would allow estimating the performance of various CNN models on new domains comprised of lower-quality images (captured with different types of cameras or under different lighting conditions). Our experiments on the semantic segmentation task revealed that DeepLabV3 is more stable to input image degradations than PSPNet and ICNet. However, for some object classes even DeepLabV3 is seriously affected by input noise.
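The abstract does not spell out the sensitivity measure itself, but the general idea it describes (compare a network's predictions on clean and degraded copies of the same image) can be sketched as follows. This is a minimal illustration, not the authors' method: the Gaussian-noise degradation, the per-class IoU agreement score, and the `model.predict` wrapper returning a per-pixel label map are all assumptions introduced here for the example.

```python
import numpy as np


def add_gaussian_noise(image, sigma=10.0, rng=None):
    """One example degradation: additive Gaussian noise on a uint8 image."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.astype(np.float32) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)


def per_class_iou(pred_a, pred_b, num_classes):
    """IoU between two per-pixel label maps, computed class by class."""
    ious = []
    for c in range(num_classes):
        a, b = pred_a == c, pred_b == c
        union = np.logical_or(a, b).sum()
        ious.append(np.logical_and(a, b).sum() / union if union > 0 else np.nan)
    return np.array(ious)


def sensitivity_score(model, images, num_classes, sigma=10.0):
    """Mean disagreement (1 - mIoU) between predictions on clean and degraded
    copies of the same images; higher values mean a more sensitive network."""
    drops = []
    for img in images:
        clean_pred = model.predict(img)                      # hypothetical inference call
        noisy_pred = model.predict(add_gaussian_noise(img, sigma))
        miou = np.nanmean(per_class_iou(clean_pred, noisy_pred, num_classes))
        drops.append(1.0 - miou)
    return float(np.mean(drops))
```

Under this kind of protocol, comparing `sensitivity_score` values for PSPNet, ICNet and DeepLabV3 at increasing degradation strengths would give a ranking of how stable each architecture is to input noise, which is the comparison the paper reports.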