RFLE-Net: Refined Feature Extraction and Low-Loss Feature Fusion Method in Semantic Segmentation of Medical Images

IF 5.8 · CAS Tier 3, Computer Science · JCR Q1, Engineering, Multidisciplinary
Fan Zhang, Zihao Zhang, Huifang Hou, Yale Yang, Kangzhan Xie, Chao Fan, Xiaozhen Ren, Quan Pan
{"title":"RFLE-Net: Refined Feature Extraction and Low-Loss Feature Fusion Method in Semantic Segmentation of Medical Images","authors":"Fan Zhang,&nbsp;Zihao Zhang,&nbsp;Huifang Hou,&nbsp;Yale Yang,&nbsp;Kangzhan Xie,&nbsp;Chao Fan,&nbsp;Xiaozhen Ren,&nbsp;Quan Pan","doi":"10.1007/s42235-025-00688-7","DOIUrl":null,"url":null,"abstract":"<div><p>The application of transformer networks and feature fusion models in medical image segmentation has aroused considerable attention within the academic circle. Nevertheless, two main obstacles persist: (1) the restrictions of the Transformer network in dealing with locally detailed features, and (2) the considerable loss of feature information in current feature fusion modules. To solve these issues, this study initially presents a refined feature extraction approach, employing a double-branch feature extraction network to capture complex multi-scale local and global information from images. Subsequently, we proposed a low-loss feature fusion method-Multi-branch Feature Fusion Enhancement Module (MFFEM), which realizes effective feature fusion with minimal loss. Simultaneously, the cross-layer cross-attention fusion module (CLCA) is adopted to further achieve adequate feature fusion by enhancing the interaction between encoders and decoders of various scales. Finally, the feasibility of our method was verified using the Synapse and ACDC datasets, demonstrating its competitiveness. The average DSC (%) was 83.62 and 91.99 respectively, and the average HD95 (mm) was reduced to 19.55 and 1.15 respectively.</p></div>","PeriodicalId":614,"journal":{"name":"Journal of Bionic Engineering","volume":"22 3","pages":"1557 - 1572"},"PeriodicalIF":5.8000,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Bionic Engineering","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s42235-025-00688-7","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

The application of Transformer networks and feature fusion models in medical image segmentation has attracted considerable attention in the research community. Nevertheless, two main obstacles persist: (1) the limitations of the Transformer network in handling local detailed features, and (2) the considerable loss of feature information in current feature fusion modules. To address these issues, this study first presents a refined feature extraction approach that employs a double-branch feature extraction network to capture complex multi-scale local and global information from images. We then propose a low-loss feature fusion method, the Multi-branch Feature Fusion Enhancement Module (MFFEM), which realizes effective feature fusion with minimal information loss. In addition, a Cross-Layer Cross-Attention fusion module (CLCA) is adopted to further achieve adequate feature fusion by enhancing the interaction between encoders and decoders at various scales. Finally, the feasibility of the method was verified on the Synapse and ACDC datasets, demonstrating its competitiveness: the average DSC (%) was 83.62 and 91.99, respectively, and the average HD95 (mm) was reduced to 19.55 and 1.15, respectively.
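The CLCA module described above builds on cross-attention between encoder and decoder features at different scales. The abstract does not give its internal design, so the following is only a minimal sketch of the generic cross-attention primitive such a module rests on; the class name CrossAttentionFusion, the feature shapes, and the single nn.MultiheadAttention layer are illustrative assumptions, not the authors' implementation.

```python
# Generic cross-attention fusion between decoder and encoder (skip) features.
# Illustrative sketch only; this is NOT the paper's CLCA module.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, dec_feat: torch.Tensor, enc_feat: torch.Tensor) -> torch.Tensor:
        # dec_feat: (B, C, Hd, Wd) decoder features used as queries
        # enc_feat: (B, C, He, We) encoder features used as keys/values
        b, c, hd, wd = dec_feat.shape
        q = dec_feat.flatten(2).transpose(1, 2)   # (B, Hd*Wd, C) query tokens
        kv = enc_feat.flatten(2).transpose(1, 2)  # (B, He*We, C) key/value tokens
        fused, _ = self.attn(q, kv, kv)           # decoder tokens attend to encoder tokens
        fused = self.norm(fused + q)              # residual connection + normalization
        return fused.transpose(1, 2).reshape(b, c, hd, wd)


# Toy usage: fuse a 1/8-scale decoder map with a 1/4-scale encoder skip map.
dec = torch.randn(2, 64, 16, 16)
enc = torch.randn(2, 64, 32, 32)
out = CrossAttentionFusion(dim=64)(dec, enc)
print(out.shape)  # torch.Size([2, 64, 16, 16])
```

The results are reported as average DSC (%) and average HD95 (mm), which are standard segmentation metrics. The sketch below shows one common way to compute them on binary masks; the function names, the voxel-spacing handling, and the combined-percentile HD95 convention are assumptions, since the authors' exact evaluation code is not given.

```python
# Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff Distance (HD95)
# for binary segmentation masks. Illustrative sketch, not the authors' evaluation code.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist


def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:  # both masks empty: perfect agreement by convention
        return 1.0
    return float(2.0 * np.logical_and(pred, gt).sum() / denom)


def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """95th percentile of symmetric surface distances (in mm, given voxel spacing)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels = mask minus its erosion.
    surf_p = pred & ~binary_erosion(pred)
    surf_g = gt & ~binary_erosion(gt)
    pts_p = np.argwhere(surf_p) * np.asarray(spacing)  # spacing must match mask ndim
    pts_g = np.argwhere(surf_g) * np.asarray(spacing)
    if len(pts_p) == 0 or len(pts_g) == 0:
        return float("nan")  # undefined when one surface is empty
    d = cdist(pts_p, pts_g)            # pairwise Euclidean distances
    d_pg = d.min(axis=1)               # prediction surface -> ground-truth surface
    d_gp = d.min(axis=0)               # ground-truth surface -> prediction surface
    return float(np.percentile(np.concatenate([d_pg, d_gp]), 95))


# Toy usage on a 2D slice; real evaluation runs per class over full volumes.
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
gt = np.zeros((64, 64), dtype=np.uint8); gt[22:42, 22:42] = 1
print(dice_coefficient(pred, gt), hd95(pred, gt))
```

In practice these metrics are typically computed per class (per organ on Synapse, per cardiac structure on ACDC) and then averaged across classes and cases.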

Source Journal

Journal of Bionic Engineering (Engineering & Technology - Materials Science: Biomaterials)
CiteScore: 7.10
Self-citation rate: 10.00%
Articles per year: 162
Review time: 10.0 months

About the journal: The Journal of Bionic Engineering (JBE) is a peer-reviewed journal that publishes original research papers and reviews that apply knowledge learned from nature and biological systems to solve concrete engineering problems. Topics covered by JBE include, but are not limited to: mechanisms, kinematics and control of animal locomotion, and the development of mobile robots with walking (running and crawling), swimming or flying abilities inspired by animal locomotion; structures, morphologies, composition and physical properties of natural materials and biomaterials, and the fabrication of new materials mimicking their properties and functions; biomedical materials, artificial organs and tissue engineering for medical applications, as well as rehabilitation equipment and devices; and the development of bioinspired computation methods and artificial intelligence for engineering applications.