{"title":"多模态医学影像分割下可靠性学习和可解释决策的证据建模。","authors":"Jianfeng Zhao , Shuo Li","doi":"10.1016/j.compmedimag.2024.102422","DOIUrl":null,"url":null,"abstract":"<div><p>Reliability learning and interpretable decision-making are crucial for multi-modality medical image segmentation. Although many works have attempted multi-modality medical image segmentation, they rarely explore how much reliability is provided by each modality for segmentation. Moreover, the existing approach of decision-making such as the <span><math><mrow><mi>s</mi><mi>o</mi><mi>f</mi><mi>t</mi><mi>m</mi><mi>a</mi><mi>x</mi></mrow></math></span> function lacks the interpretability for multi-modality fusion. In this study, we proposed a novel approach named contextual discounted evidential network (CDE-Net) for reliability learning and interpretable decision-making under multi-modality medical image segmentation. Specifically, the CDE-Net first models the semantic evidence by uncertainty measurement using the proposed evidential decision-making module. Then, it leverages the contextual discounted fusion layer to learn the reliability provided by each modality. Finally, a multi-level loss function is deployed for the optimization of evidence modeling and reliability learning. Moreover, this study elaborates on the framework interpretability by discussing the consistency between pixel attribution maps and the learned reliability coefficients. Extensive experiments are conducted on both multi-modality brain and liver datasets. 
The CDE-Net gains high performance with an average Dice score of 0.914 for brain tumor segmentation and 0.913 for liver tumor segmentation, which proves CDE-Net has great potential to facilitate the interpretation of artificial intelligence-based multi-modality medical image fusion.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102422"},"PeriodicalIF":5.4000,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evidence modeling for reliability learning and interpretable decision-making under multi-modality medical image segmentation\",\"authors\":\"Jianfeng Zhao , Shuo Li\",\"doi\":\"10.1016/j.compmedimag.2024.102422\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Reliability learning and interpretable decision-making are crucial for multi-modality medical image segmentation. Although many works have attempted multi-modality medical image segmentation, they rarely explore how much reliability is provided by each modality for segmentation. Moreover, the existing approach of decision-making such as the <span><math><mrow><mi>s</mi><mi>o</mi><mi>f</mi><mi>t</mi><mi>m</mi><mi>a</mi><mi>x</mi></mrow></math></span> function lacks the interpretability for multi-modality fusion. In this study, we proposed a novel approach named contextual discounted evidential network (CDE-Net) for reliability learning and interpretable decision-making under multi-modality medical image segmentation. Specifically, the CDE-Net first models the semantic evidence by uncertainty measurement using the proposed evidential decision-making module. Then, it leverages the contextual discounted fusion layer to learn the reliability provided by each modality. Finally, a multi-level loss function is deployed for the optimization of evidence modeling and reliability learning. 
Moreover, this study elaborates on the framework interpretability by discussing the consistency between pixel attribution maps and the learned reliability coefficients. Extensive experiments are conducted on both multi-modality brain and liver datasets. The CDE-Net gains high performance with an average Dice score of 0.914 for brain tumor segmentation and 0.913 for liver tumor segmentation, which proves CDE-Net has great potential to facilitate the interpretation of artificial intelligence-based multi-modality medical image fusion.</p></div>\",\"PeriodicalId\":50631,\"journal\":{\"name\":\"Computerized Medical Imaging and Graphics\",\"volume\":\"116 \",\"pages\":\"Article 102422\"},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2024-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computerized Medical Imaging and Graphics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0895611124000995\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0895611124000995","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Evidence modeling for reliability learning and interpretable decision-making under multi-modality medical image segmentation
Reliability learning and interpretable decision-making are crucial for multi-modality medical image segmentation. Although many works have attempted multi-modality medical image segmentation, they rarely explore how much reliability each modality contributes to the segmentation. Moreover, existing decision-making approaches such as the softmax function lack interpretability for multi-modality fusion. In this study, we propose a novel approach named the contextual discounted evidential network (CDE-Net) for reliability learning and interpretable decision-making in multi-modality medical image segmentation. Specifically, CDE-Net first models the semantic evidence via uncertainty measurement using the proposed evidential decision-making module. It then leverages the contextual discounted fusion layer to learn the reliability provided by each modality. Finally, a multi-level loss function is deployed to optimize evidence modeling and reliability learning. Moreover, this study elaborates on the framework's interpretability by discussing the consistency between pixel attribution maps and the learned reliability coefficients. Extensive experiments are conducted on both multi-modality brain and liver datasets. CDE-Net achieves high performance, with an average Dice score of 0.914 for brain tumor segmentation and 0.913 for liver tumor segmentation, demonstrating its great potential to facilitate the interpretation of artificial intelligence-based multi-modality medical image fusion.
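The core idea behind the fusion step can be illustrated with a minimal sketch in standard Dempster-Shafer terms. Note the assumptions: the paper's contextual discounting learns class-dependent reliability coefficients inside a network layer, whereas this sketch uses classical single-coefficient discounting, a two-class frame, and hand-picked per-pixel masses and reliabilities; none of the values or names below come from CDE-Net itself.

```python
from itertools import product

# Frame of discernment for binary segmentation: {tumor, background}.
T, B = frozenset({"tumor"}), frozenset({"background"})
THETA = T | B  # the whole frame, i.e. total ignorance

def discount(m, alpha):
    """Classical Shafer discounting: scale every focal mass by the
    reliability alpha and move the remaining (1 - alpha) onto ignorance."""
    out = {A: alpha * v for A, v in m.items()}
    out[THETA] = out.get(THETA, 0.0) + (1.0 - alpha)
    return out

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions,
    renormalized by the total conflict (mass assigned to empty sets)."""
    joint, conflict = {}, 0.0
    for (A, vA), (C, vC) in product(m1.items(), m2.items()):
        inter = A & C
        if inter:
            joint[inter] = joint.get(inter, 0.0) + vA * vC
        else:
            conflict += vA * vC
    norm = 1.0 - conflict
    return {A: v / norm for A, v in joint.items()}

# Hypothetical per-pixel masses from two modalities (e.g. two MRI sequences).
m_mod1 = {T: 0.7, B: 0.2, THETA: 0.1}
m_mod2 = {T: 0.3, B: 0.5, THETA: 0.2}

# Discount each modality by its (here hand-set) reliability, then fuse.
fused = dempster_combine(discount(m_mod1, 0.9), discount(m_mod2, 0.4))
```

Because the second modality is heavily discounted (reliability 0.4), its vote for "background" is largely transferred to ignorance, and the fused mass favors "tumor" in line with the more reliable modality; this is the mechanism by which learned reliability coefficients make the fusion decision inspectable.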
Journal introduction:
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.