{"title":"基于可学习类激活映射的医学图像分割的可解释深度学习","authors":"Kaiyu Wang, Sixing Yin, Yining Wang, Shufang Li","doi":"10.1145/3590003.3590040","DOIUrl":null,"url":null,"abstract":"Medical image segmentation is crucial for facilitating pathology assessment, ensuring reliable diagnosis and monitoring disease progression. Deep-learning models have been extensively applied in automating medical image analysis to reduce human effort. However, the non-transparency of deep-learning models limits their clinical practicality due to the unaffordably high risk of misdiagnosis resulted from the misleading model output. In this paper, we propose a explainability metric as part of the loss function. The proposed explainability metric comes from Class Activation Map(CAM) with learnable weights such that the model can be optimized to achieve desirable balance between segmentation performance and explainability. Experiments found that the proposed model visibly heightened Dice score from to , Jaccard similarity from to and Recall from to respectively compared with U-net. In addition, results make clear that the drawn model outdistances the conventional U-net in terms of explainability performance.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"107 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable Deep Learning for Medical Image Segmentation With Learnable Class Activation Mapping\",\"authors\":\"Kaiyu Wang, Sixing Yin, Yining Wang, Shufang Li\",\"doi\":\"10.1145/3590003.3590040\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Medical image segmentation is crucial for facilitating pathology assessment, ensuring reliable diagnosis and monitoring disease progression. Deep-learning models have been extensively applied in automating medical image analysis to reduce human effort. However, the non-transparency of deep-learning models limits their clinical practicality due to the unaffordably high risk of misdiagnosis resulted from the misleading model output. In this paper, we propose a explainability metric as part of the loss function. The proposed explainability metric comes from Class Activation Map(CAM) with learnable weights such that the model can be optimized to achieve desirable balance between segmentation performance and explainability. Experiments found that the proposed model visibly heightened Dice score from to , Jaccard similarity from to and Recall from to respectively compared with U-net. 
In addition, results make clear that the drawn model outdistances the conventional U-net in terms of explainability performance.\",\"PeriodicalId\":340225,\"journal\":{\"name\":\"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning\",\"volume\":\"107 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3590003.3590040\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3590003.3590040","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Explainable Deep Learning for Medical Image Segmentation With Learnable Class Activation Mapping
Medical image segmentation is crucial for facilitating pathology assessment, ensuring reliable diagnosis, and monitoring disease progression. Deep-learning models have been applied extensively to automate medical image analysis and reduce human effort. However, the opacity of deep-learning models limits their clinical practicality, because misleading model output carries an unacceptably high risk of misdiagnosis. In this paper, we propose an explainability metric as part of the loss function. The metric is derived from a Class Activation Map (CAM) with learnable weights, so that the model can be optimized to strike a desirable balance between segmentation performance and explainability. Experiments show that, compared with U-net, the proposed model visibly improves the Dice score, Jaccard similarity, and Recall. The results also make clear that the proposed model surpasses the conventional U-net in terms of explainability performance.
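To make the idea concrete, the sketch below shows one way such a loss could be assembled: a Dice segmentation loss combined with a CAM-based explainability term whose per-channel weights are learnable parameters trained jointly with the network. This is only a minimal illustration under assumptions, not the paper's exact formulation; the names `CAMExplainabilityLoss`, `cam_weights`, and `lambda_exp` are hypothetical.

```python
# Minimal sketch (PyTorch) of a segmentation loss with a learnable-weight
# CAM explainability term. The exact formulation in the paper may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CAMExplainabilityLoss(nn.Module):
    """Dice loss plus a penalty pulling a learnable-weight CAM toward the mask."""

    def __init__(self, num_channels: int, lambda_exp: float = 0.1):
        super().__init__()
        # Learnable per-channel weights used to build the class activation map.
        self.cam_weights = nn.Parameter(torch.ones(num_channels) / num_channels)
        self.lambda_exp = lambda_exp  # trade-off between segmentation and explainability

    @staticmethod
    def dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
        inter = (probs * target).sum(dim=(1, 2, 3))
        union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

    def forward(self, logits, features, target):
        # logits:   (B, 1, H, W) segmentation output of the network
        # features: (B, C, h, w) feature maps from which the CAM is built
        # target:   (B, 1, H, W) binary ground-truth mask (float)
        probs = torch.sigmoid(logits)
        seg_loss = self.dice_loss(probs, target)

        # Weighted sum of feature channels -> class activation map.
        cam = torch.einsum("bchw,c->bhw", features, self.cam_weights).unsqueeze(1)
        cam = F.interpolate(cam, size=target.shape[-2:], mode="bilinear",
                            align_corners=False)
        cam = torch.sigmoid(cam)

        # Explainability term: the CAM should overlap the annotated region.
        exp_loss = F.binary_cross_entropy(cam, target)
        return seg_loss + self.lambda_exp * exp_loss
```

In this reading, the CAM weights are optimized together with the network parameters, and the single factor `lambda_exp` controls how strongly explainability is traded against segmentation accuracy.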