Afsana Ahmed Munia, Moloud Abdar, Mehedi Hasan, Mohammad S. Jalali, Biplab Banerjee, Abbas Khosravi, Ibrahim Hossain, Huazhu Fu, Alejandro F. Frangi
{"title":"用于不确定性驱动医学图像分割的注意力引导分层融合 U-Net","authors":"Afsana Ahmed Munia , Moloud Abdar , Mehedi Hasan , Mohammad S. Jalali , Biplab Banerjee , Abbas Khosravi , Ibrahim Hossain , Huazhu Fu , Alejandro F. Frangi","doi":"10.1016/j.inffus.2024.102719","DOIUrl":null,"url":null,"abstract":"<div><div>Small inaccuracies in the system components or artificial intelligence (AI) models for medical imaging could have significant consequences leading to life hazards. To mitigate those risks, one must consider the precision of the image analysis outcomes (e.g., image segmentation), along with the confidence in the underlying model predictions. U-shaped architectures, based on the convolutional encoder–decoder, have established themselves as a critical component of many AI-enabled diagnostic imaging systems. However, most of the existing methods focus on producing accurate diagnostic predictions without assessing the uncertainty associated with such predictions or the introduced techniques. Uncertainty maps highlight areas in the predicted segmented results, where the model is uncertain or less confident. This could lead radiologists to pay more attention to ensuring patient safety and pave the way for trustworthy AI applications. In this paper, we therefore propose the Attention-guided Hierarchical Fusion U-Net (named AHF-U-Net) for medical image segmentation. We then introduce the uncertainty-aware version of it called UA-AHF-U-Net which provides the uncertainty map alongside the predicted segmentation map. The network is designed by integrating the Encoder Attention Fusion module (EAF) and the Decoder Attention Fusion module (DAF) on the encoder and decoder sides of the U-Net architecture, respectively. The EAF and DAF modules utilize spatial and channel attention to capture relevant spatial information and indicate which channels are appropriate for a given image. Furthermore, an enhanced skip connection is introduced and named the Hierarchical Attention-Enhanced (HAE) skip connection. We evaluated the efficiency of our model by comparing it with eleven well-established methods for three popular medical image segmentation datasets consisting of coarse-grained images with unclear boundaries. Based on the quantitative and qualitative results, the proposed method ranks first in two datasets and second in a third. The code can be accessed at: <span><span>https://github.com/AfsanaAhmedMunia/AHF-Fusion-U-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"115 ","pages":"Article 102719"},"PeriodicalIF":14.7000,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Attention-guided hierarchical fusion U-Net for uncertainty-driven medical image segmentation\",\"authors\":\"Afsana Ahmed Munia , Moloud Abdar , Mehedi Hasan , Mohammad S. Jalali , Biplab Banerjee , Abbas Khosravi , Ibrahim Hossain , Huazhu Fu , Alejandro F. Frangi\",\"doi\":\"10.1016/j.inffus.2024.102719\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Small inaccuracies in the system components or artificial intelligence (AI) models for medical imaging could have significant consequences leading to life hazards. To mitigate those risks, one must consider the precision of the image analysis outcomes (e.g., image segmentation), along with the confidence in the underlying model predictions. 
U-shaped architectures, based on the convolutional encoder–decoder, have established themselves as a critical component of many AI-enabled diagnostic imaging systems. However, most of the existing methods focus on producing accurate diagnostic predictions without assessing the uncertainty associated with such predictions or the introduced techniques. Uncertainty maps highlight areas in the predicted segmented results, where the model is uncertain or less confident. This could lead radiologists to pay more attention to ensuring patient safety and pave the way for trustworthy AI applications. In this paper, we therefore propose the Attention-guided Hierarchical Fusion U-Net (named AHF-U-Net) for medical image segmentation. We then introduce the uncertainty-aware version of it called UA-AHF-U-Net which provides the uncertainty map alongside the predicted segmentation map. The network is designed by integrating the Encoder Attention Fusion module (EAF) and the Decoder Attention Fusion module (DAF) on the encoder and decoder sides of the U-Net architecture, respectively. The EAF and DAF modules utilize spatial and channel attention to capture relevant spatial information and indicate which channels are appropriate for a given image. Furthermore, an enhanced skip connection is introduced and named the Hierarchical Attention-Enhanced (HAE) skip connection. We evaluated the efficiency of our model by comparing it with eleven well-established methods for three popular medical image segmentation datasets consisting of coarse-grained images with unclear boundaries. Based on the quantitative and qualitative results, the proposed method ranks first in two datasets and second in a third. The code can be accessed at: <span><span>https://github.com/AfsanaAhmedMunia/AHF-Fusion-U-Net</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"115 \",\"pages\":\"Article 102719\"},\"PeriodicalIF\":14.7000,\"publicationDate\":\"2024-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253524004974\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253524004974","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Attention-guided hierarchical fusion U-Net for uncertainty-driven medical image segmentation
Small inaccuracies in the system components or artificial intelligence (AI) models used for medical imaging can have significant consequences, including hazards to life. To mitigate these risks, one must consider the precision of the image analysis outcomes (e.g., image segmentation) along with the confidence in the underlying model predictions. U-shaped architectures, based on the convolutional encoder–decoder, have established themselves as a critical component of many AI-enabled diagnostic imaging systems. However, most existing methods focus on producing accurate diagnostic predictions without assessing the uncertainty associated with those predictions or with the techniques that produce them. Uncertainty maps highlight areas in the predicted segmentation where the model is uncertain or less confident. This can direct radiologists' attention where it is most needed, helping to ensure patient safety and paving the way for trustworthy AI applications. In this paper, we therefore propose the Attention-guided Hierarchical Fusion U-Net (AHF-U-Net) for medical image segmentation, together with an uncertainty-aware version, UA-AHF-U-Net, which provides an uncertainty map alongside the predicted segmentation map. The network integrates an Encoder Attention Fusion (EAF) module and a Decoder Attention Fusion (DAF) module on the encoder and decoder sides of the U-Net architecture, respectively. The EAF and DAF modules use spatial and channel attention to capture relevant spatial information and to indicate which channels are appropriate for a given image. Furthermore, an enhanced skip connection, named the Hierarchical Attention-Enhanced (HAE) skip connection, is introduced. We evaluated our model against eleven well-established methods on three popular medical image segmentation datasets consisting of coarse-grained images with unclear boundaries. Based on the quantitative and qualitative results, the proposed method ranks first on two datasets and second on the third. The code can be accessed at: https://github.com/AfsanaAhmedMunia/AHF-Fusion-U-Net.
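The abstract describes the EAF/DAF modules and the uncertainty map only at a high level. The sketch below is a minimal PyTorch illustration of the two underlying ideas in their common form: an attention gate that combines channel attention with spatial attention, and a per-pixel uncertainty map obtained from repeated stochastic (MC-dropout) forward passes. Module names, arguments, and design details here are assumptions for illustration only, not the authors' implementation; refer to the linked repository for the actual code.

```python
import torch
import torch.nn as nn


class ChannelSpatialFusion(nn.Module):
    """Re-weight a feature map with channel attention, then spatial attention
    (a generic stand-in for the kind of gating the EAF/DAF modules describe)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatially, excite per channel (SE-style).
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: one H x W map built from pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # which channels matter for this image
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(pooled)  # where in the image to look


@torch.no_grad()
def mc_dropout_uncertainty(model: nn.Module, image: torch.Tensor, passes: int = 20):
    """Return (mean foreground probability, predictive-entropy map) for a binary
    segmentation model, keeping dropout layers stochastic at inference time."""
    model.train()  # keeps dropout active so repeated passes differ
    probs = torch.stack([torch.sigmoid(model(image)) for _ in range(passes)])
    mean_prob = probs.mean(dim=0)
    entropy = -(
        mean_prob * torch.log(mean_prob + 1e-8)
        + (1 - mean_prob) * torch.log(1 - mean_prob + 1e-8)
    )
    return mean_prob, entropy
```

A typical use of such components would be to wrap encoder and decoder block outputs with the fusion gate and, after training, call the uncertainty routine on the network to obtain an uncertainty map alongside the predicted segmentation, as the abstract describes for UA-AHF-U-Net.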
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.