Context-Aware Deep Network With Cross-Layer Feature Fusion for Polyp Segmentation

IF 4.3 · CAS Zone 2 (Comprehensive) · JCR Q1: Engineering, Electrical & Electronic
Guanghui Yue;Shangjie Wu;Guibin Zhuo;Gang Li;Wanwan Han;Lei Yang;Jingfeng Du;Bin Jiang;Tianwei Zhou
{"title":"Context-Aware Deep Network With Cross-Layer Feature Fusion for Polyp Segmentation","authors":"Guanghui Yue;Shangjie Wu;Guibin Zhuo;Gang Li;Wanwan Han;Lei Yang;Jingfeng Du;Bin Jiang;Tianwei Zhou","doi":"10.1109/JSEN.2025.3553904","DOIUrl":null,"url":null,"abstract":"Accurate polyp segmentation in colonoscopy images is crucial for early diagnosis and timely treatment of colorectal cancer. Recently, deep learning-based methods have exhibited distinct advantages in polyp segmentation. However, most existing methods often obtain ordinary performance due to the limited ability to extract context information and foreground cues. To overcome these drawbacks, we propose a context-aware deep network (CADNet) with cross-layer feature fusion for polyp segmentation in this article. Specifically, CADNet is designed with an encoder-decoder framework. In the encoder stage, a group guidance context module (GGCM) is proposed for the top-three highest layers to make the network focus more on target regions by aggregating multiscale context information with the prior knowledge from the adjacent high layer. In the decoder stage, a cross-layer feature fusion module (CLFFM) is proposed to obtain rich foreground cues by adaptively fusing low-level spatial details and high-level semantic concepts with the assistance of an attention mechanism. After that, the foreground cues serve as the input for the subsequent decoding stage. Considering that there is a lack of two-modal datasets of polyp segmentation, we construct a new dataset with 1200 images. 
Extensive experiments on our dataset and three public datasets demonstrate that our CADNet has considerable generalization ability across two-modal data and cross-center data and obtains comparable results compared with the state-of-the-art methods.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 9","pages":"16515-16527"},"PeriodicalIF":4.3000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Sensors Journal","FirstCategoryId":"103","ListUrlMain":"https://ieeexplore.ieee.org/document/10945549/","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Accurate polyp segmentation in colonoscopy images is crucial for early diagnosis and timely treatment of colorectal cancer. Recently, deep learning-based methods have exhibited distinct advantages in polyp segmentation. However, most existing methods often obtain ordinary performance due to the limited ability to extract context information and foreground cues. To overcome these drawbacks, we propose a context-aware deep network (CADNet) with cross-layer feature fusion for polyp segmentation in this article. Specifically, CADNet is designed with an encoder-decoder framework. In the encoder stage, a group guidance context module (GGCM) is proposed for the top-three highest layers to make the network focus more on target regions by aggregating multiscale context information with the prior knowledge from the adjacent high layer. In the decoder stage, a cross-layer feature fusion module (CLFFM) is proposed to obtain rich foreground cues by adaptively fusing low-level spatial details and high-level semantic concepts with the assistance of an attention mechanism. After that, the foreground cues serve as the input for the subsequent decoding stage. Considering that there is a lack of two-modal datasets of polyp segmentation, we construct a new dataset with 1200 images. Extensive experiments on our dataset and three public datasets demonstrate that our CADNet has considerable generalization ability across two-modal data and cross-center data and obtains comparable results compared with the state-of-the-art methods.
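The abstract describes the decoder's CLFFM as fusing low-level spatial details with high-level semantic features under an attention mechanism. As a rough illustration of that fusion pattern only (the paper's actual module is learned end-to-end; the function and shapes below are hypothetical, not the authors' implementation), a minimal NumPy sketch might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_cross_layer(low, high):
    """Fuse low-level (C, 2H, 2W) and high-level (C, H, W) features.

    A per-channel attention gate derived from the high-level map
    weights the low-level spatial details before element-wise
    addition -- a simplified stand-in for a CLFFM-style fusion.
    """
    high_up = upsample2x(high)                  # match spatial resolution
    attn = sigmoid(high_up.mean(axis=(1, 2)))   # per-channel gate in (0, 1)
    fused = attn[:, None, None] * low + high_up # gated detail + semantics
    return fused

low = np.random.rand(8, 16, 16)   # low-level features: fine spatial detail
high = np.random.rand(8, 8, 8)    # high-level features: coarse semantics
out = fuse_cross_layer(low, high)
print(out.shape)  # (8, 16, 16)
```

The fused map would then feed the next decoding stage, matching the abstract's description of foreground cues serving as input to subsequent decoding.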
Source Journal
IEEE Sensors Journal
Category: Engineering & Technology — Engineering: Electrical & Electronic
CiteScore: 7.70
Self-citation rate: 14.00%
Articles published: 2058
Average review time: 5.2 months
Journal scope: The fields of interest of the IEEE Sensors Journal are the theory, design, fabrication, manufacturing, and applications of devices for sensing and transducing physical, chemical, and biological phenomena, with emphasis on the electronics and physics aspects of sensors and integrated sensors-actuators. The IEEE Sensors Journal deals with the following:
- Sensor Phenomenology, Modelling, and Evaluation
- Sensor Materials, Processing, and Fabrication
- Chemical and Gas Sensors
- Microfluidics and Biosensors
- Optical Sensors
- Physical Sensors: Temperature, Mechanical, Magnetic, and others
- Acoustic and Ultrasonic Sensors
- Sensor Packaging
- Sensor Networks
- Sensor Applications
- Sensor Systems: Signals, Processing, and Interfaces
- Actuators and Sensor Power Systems
- Sensor Signal Processing for high precision and stability (amplification, filtering, linearization, modulation/demodulation) and under harsh conditions (EMC, radiation, humidity, temperature); energy consumption/harvesting
- Sensor Data Processing (soft computing with sensor data, e.g., pattern recognition, machine learning, evolutionary computation; sensor data fusion; processing of wave, e.g., electromagnetic and acoustic, and non-wave, e.g., chemical, gravity, particle, thermal, radiative and non-radiative sensor data; detection, estimation and classification based on sensor data)
- Sensors in Industrial Practice