{"title":"FusionSegNet:一种分层多轴关注和门控特征融合网络,用于超声成像中不确定性建模的乳腺病变分割","authors":"Md Rayhan Ahmed, Patricia Lasserre","doi":"10.1016/j.inffus.2025.103399","DOIUrl":null,"url":null,"abstract":"<div><div>Lesion segmentation in breast ultrasound images (BUS) is challenging due to noise, low contrast appearance, ambiguous boundaries, texture inconsistencies, and inherent uncertainty in lesion appearance. These challenges are further exacerbated by the semantic gap between encoder and decoder features in U-Net-based models. In this paper, we introduce FusionSegNet, a novel lesion segmentation network that integrates several key innovations to address these challenges. First, we propose a Fuzzy Logic-Based Multi-Scale Contextual Network as the encoder to handle noisy and uncertain areas through multi-scale attention and fuzzy membership-based uncertainty estimation. Second, we design a Weighted Multiplicative Fusion Module to effectively merge multi-scale features while suppressing noise. Third, we integrate Hierarchical Multi-Axis Attention in both the encoder and decoder to enhance focus across multiple dimensions, enabling FusionSegNet to better segment targets with varypositions, scalesscalessizesd sizes. Fourth, we introduce a Gated Multi-Scale Feature Aggregation Module that bridges both local and global information for better semantic understanding, and the newly integrated Atrous Attention Fusion Module further refines multi-scale long-range contextual details using different dilation rates. Finally, we design a Gated Multi-Scale Fusion Block which facilitates feature fusion between the encoder and decoder to maintain spatial consistency. Extensive experiments and a comprehensive ablation study on two benchmark BUS datasets validate the superiority of FusionSegNet and its integrated design choices over state-of-the-art methods. FusionSegNet achieves an mDSC of 93.22% on the UDIAT dataset and an mIoU of 80.10% on the BUSI dataset, establishing a new benchmark for lesion segmentation in BUS images. Our code can be found at <span><span>https://github.com/rayhan-ahmed91/FusionSegNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"124 ","pages":"Article 103399"},"PeriodicalIF":15.5000,"publicationDate":"2025-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FusionSegNet: A Hierarchical Multi-Axis Attention and gated feature fusion network for breast lesion segmentation with uncertainty modeling in ultrasound imaging\",\"authors\":\"Md Rayhan Ahmed, Patricia Lasserre\",\"doi\":\"10.1016/j.inffus.2025.103399\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Lesion segmentation in breast ultrasound images (BUS) is challenging due to noise, low contrast appearance, ambiguous boundaries, texture inconsistencies, and inherent uncertainty in lesion appearance. These challenges are further exacerbated by the semantic gap between encoder and decoder features in U-Net-based models. In this paper, we introduce FusionSegNet, a novel lesion segmentation network that integrates several key innovations to address these challenges. First, we propose a Fuzzy Logic-Based Multi-Scale Contextual Network as the encoder to handle noisy and uncertain areas through multi-scale attention and fuzzy membership-based uncertainty estimation. 
Second, we design a Weighted Multiplicative Fusion Module to effectively merge multi-scale features while suppressing noise. Third, we integrate Hierarchical Multi-Axis Attention in both the encoder and decoder to enhance focus across multiple dimensions, enabling FusionSegNet to better segment targets with varypositions, scalesscalessizesd sizes. Fourth, we introduce a Gated Multi-Scale Feature Aggregation Module that bridges both local and global information for better semantic understanding, and the newly integrated Atrous Attention Fusion Module further refines multi-scale long-range contextual details using different dilation rates. Finally, we design a Gated Multi-Scale Fusion Block which facilitates feature fusion between the encoder and decoder to maintain spatial consistency. Extensive experiments and a comprehensive ablation study on two benchmark BUS datasets validate the superiority of FusionSegNet and its integrated design choices over state-of-the-art methods. FusionSegNet achieves an mDSC of 93.22% on the UDIAT dataset and an mIoU of 80.10% on the BUSI dataset, establishing a new benchmark for lesion segmentation in BUS images. Our code can be found at <span><span>https://github.com/rayhan-ahmed91/FusionSegNet</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"124 \",\"pages\":\"Article 103399\"},\"PeriodicalIF\":15.5000,\"publicationDate\":\"2025-06-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525004725\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525004725","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
FusionSegNet: A Hierarchical Multi-Axis Attention and gated feature fusion network for breast lesion segmentation with uncertainty modeling in ultrasound imaging
Lesion segmentation in breast ultrasound (BUS) images is challenging due to noise, low contrast, ambiguous boundaries, texture inconsistencies, and inherent uncertainty in lesion appearance. These challenges are further exacerbated by the semantic gap between encoder and decoder features in U-Net-based models. In this paper, we introduce FusionSegNet, a novel lesion segmentation network that integrates several key innovations to address these challenges. First, we propose a Fuzzy Logic-Based Multi-Scale Contextual Network as the encoder to handle noisy and uncertain areas through multi-scale attention and fuzzy membership-based uncertainty estimation. Second, we design a Weighted Multiplicative Fusion Module to effectively merge multi-scale features while suppressing noise. Third, we integrate Hierarchical Multi-Axis Attention in both the encoder and decoder to enhance focus across multiple dimensions, enabling FusionSegNet to better segment targets with varying positions, scales, and sizes. Fourth, we introduce a Gated Multi-Scale Feature Aggregation Module that bridges local and global information for better semantic understanding, and a newly integrated Atrous Attention Fusion Module that further refines multi-scale, long-range contextual details using different dilation rates. Finally, we design a Gated Multi-Scale Fusion Block that facilitates feature fusion between the encoder and decoder to maintain spatial consistency. Extensive experiments and a comprehensive ablation study on two benchmark BUS datasets validate the superiority of FusionSegNet and its integrated design choices over state-of-the-art methods. FusionSegNet achieves an mDSC of 93.22% on the UDIAT dataset and an mIoU of 80.10% on the BUSI dataset, establishing a new benchmark for lesion segmentation in BUS images. Our code can be found at https://github.com/rayhan-ahmed91/FusionSegNet.
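To make the fusion ideas in the abstract concrete, the following is a minimal PyTorch sketch of two of them: a gated encoder-decoder fusion block and an atrous (dilated-convolution) context fusion module. The class names, channel widths, and dilation rates here are illustrative assumptions, not the authors' implementation; their actual modules are available in the linked repository.

# Minimal PyTorch sketch (illustrative only) of two fusion ideas from the
# abstract: a gated encoder-decoder fusion block and an atrous (dilated)
# context fusion module. Class names, channel widths, and dilation rates
# are assumptions; the authors' actual modules live in their repository.
import torch
import torch.nn as nn

class GatedFusionBlock(nn.Module):
    """Fuse an encoder skip feature with a decoder feature via a learned gate."""
    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 conv produces a per-pixel gate from the concatenated features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([enc, dec], dim=1))  # gate values in [0, 1]
        fused = g * enc + (1.0 - g) * dec            # per-pixel convex mix
        return self.refine(fused)

class AtrousFusion(nn.Module):
    """Aggregate context at several dilation rates, then merge with a 1x1 conv."""
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.merge = nn.Conv2d(len(rates) * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    enc = torch.randn(1, 64, 32, 32)  # encoder skip feature
    dec = torch.randn(1, 64, 32, 32)  # upsampled decoder feature
    out = AtrousFusion(64)(GatedFusionBlock(64)(enc, dec))
    print(out.shape)  # torch.Size([1, 64, 32, 32])

The sigmoid gate turns the fusion into a per-pixel convex combination of skip and decoder features, one common way to narrow the encoder-decoder semantic gap the abstract mentions, while the parallel dilated branches approximate multi-rate long-range context aggregation.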
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.