CFNet: Cross-scale fusion network for medical image segmentation
Amina Benabid, Jing Yuan, Mohammed A.M. Elhassan, Douaa Benabid
Journal of King Saud University - Computer and Information Sciences (Q1, Computer Science, Information Systems; IF 5.2)
DOI: 10.1016/j.jksuci.2024.102123
Published: 2024-07-10 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S131915782400212X
Code: https://github.com/aminabenabid/CFNet
Citations: 0
Abstract
Learning multi-scale feature representations is essential for medical image segmentation. Most existing frameworks are based on a U-shaped architecture in which the high-resolution representation is recovered progressively by connecting each level of the decoder with the corresponding low-resolution representation from the encoder. However, intrinsic defects in complementary feature fusion prevent the U-shape from aggregating efficient global and discriminative features along object boundaries. While Transformers can help model global features, their computational complexity limits their application in real-time medical scenarios. To address these issues, we propose a Cross-scale Fusion Network (CFNet), which combines a cross-scale attention module and a pyramidal module to fuse multi-stage and global context information. Specifically, we first use large-kernel convolution to design the basic building block, which extracts both global and local information. We then propose Bidirectional Atrous Spatial Pyramid Pooling (BiASPP), which employs atrous convolution along bidirectional paths to capture brain tumors of various shapes and sizes. Furthermore, we introduce a cross-stage attention mechanism that reduces redundant information when merging features from two stages with different semantics. Extensive evaluation was performed on five medical image segmentation datasets, including the 3D volumetric BraTS benchmarks. CFNet-L achieves Dice scores of 85.74% and 90.98% for Enhanced Tumor and Whole Tumor on BraTS2018, respectively. Our largest model, CFNet-L, also outperforms other methods on 2D medical images, achieving sensitivity (SE) scores of 71.95%, 82.79%, and 80.79% on STARE, DRIVE, and CHASEDB1, respectively. The code will be available at https://github.com/aminabenabid/CFNet
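The abstract's BiASPP module is built on atrous (dilated) convolution, which spaces kernel taps apart to enlarge the receptive field without adding parameters. As a rough illustration of that operation only (not the authors' implementation — the function name, 1D setting, and "valid" padding are our assumptions), a minimal sketch:

```python
def dilated_conv1d(signal, kernel, dilation):
    """1D atrous (dilated) convolution.

    Kernel taps are spaced `dilation` samples apart, so a k-tap kernel
    covers a receptive field of (k - 1) * dilation + 1 samples while
    keeping only k weights. Uses 'valid' padding, so the output is
    shorter than the input.
    """
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        # Sum the weighted taps at stride `dilation` within the window.
        acc = 0.0
        for j, w in enumerate(kernel):
            acc += w * signal[i + j * dilation]
        out.append(acc)
    return out


# dilation=1 reduces to an ordinary convolution; dilation=2 skips
# every other sample, doubling the receptive field for the same kernel.
print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], 1))  # [6.0, 9.0, 12.0, 15.0]
print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], 2))  # [9.0, 12.0]
```

An ASPP-style pyramid applies several such convolutions with different dilation rates in parallel and fuses the results, which is how objects of varying size (here, tumors) can be captured at a single feature resolution.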
About the journal:
In 2022, the Journal of King Saud University - Computer and Information Sciences will become an author-paid open-access journal. Authors who submit their manuscripts after October 31st, 2021 will be asked to pay an Article Processing Charge (APC) after acceptance of their paper to make their work immediately, permanently, and freely accessible to all. The Journal of King Saud University - Computer and Information Sciences is a refereed international journal that covers all aspects of both the foundations of computer science and its practical applications.