Occlusion-Based Approach for Interpretable Semantic Segmentation

Authors: Rokas Gipiškis, O. Kurasova
Venue: 2023 18th Iberian Conference on Information Systems and Technologies (CISTI)
Published: 2023-06-20
DOI: 10.23919/CISTI58278.2023.10212017 (https://doi.org/10.23919/CISTI58278.2023.10212017)
Citations: 0
Abstract
In this paper, we investigate the application of an occlusion-based approach to the task of interpreting semantic segmentation results. With the increasing deployment of deep learning systems in critical domains, interpretability plays a key role in providing additional information about the model beyond the evaluation metric score. An extended modification of occlusion sensitivity allows the generation of saliency maps based on the effect of occlusions on the evaluation metric. Such a perturbation-based post-hoc interpretability method can be used to visualize the image regions to which the selected segmentation class is most sensitive. We observe that, compared to classification cases, the evaluation metric scores for segmentation remain similar to each other even after occlusions. To produce a wider range of color intensities in the saliency map, we therefore apply normalization and standardization techniques. We also evaluate the results quantitatively using deletion curves.
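The occlusion-sensitivity procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the `segment_fn` interface, the square patch size, the zero baseline, and the use of per-class IoU as the evaluation metric are all assumptions made here for concreteness.

```python
import numpy as np

def occlusion_saliency(image, segment_fn, target_class, patch=8, baseline=0.0):
    """Occlusion-sensitivity saliency map for a segmentation model (sketch).

    Slides an occluding patch over the image, re-runs segmentation, and
    records the drop in a per-class evaluation metric (IoU here, assumed
    for illustration); larger drops mark regions the prediction of
    `target_class` is most sensitive to.
    """
    # Binary mask of the target class on the unoccluded image.
    base_mask = segment_fn(image) == target_class
    H, W = image.shape[:2]
    heat = np.zeros((H, W))
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            pred = segment_fn(occluded) == target_class
            inter = np.logical_and(base_mask, pred).sum()
            union = np.logical_or(base_mask, pred).sum()
            iou = inter / union if union else 1.0
            # Store the metric drop caused by occluding this patch.
            heat[y:y + patch, x:x + patch] = 1.0 - iou
    # Min-max normalization spreads the color intensities in the map,
    # which matters because segmentation scores change little per patch.
    rng = heat.max() - heat.min()
    return (heat - heat.min()) / rng if rng > 0 else heat
```

As a toy usage example, a thresholding "model" `lambda im: (im > 0.5).astype(int)` on an image with a single bright square yields a saliency map that is high inside the square (occluding it destroys the class-1 region) and zero elsewhere.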