{"title":"基于上下文感知的跨层特征融合深度网络息肉分割","authors":"Guanghui Yue;Shangjie Wu;Guibin Zhuo;Gang Li;Wanwan Han;Lei Yang;Jingfeng Du;Bin Jiang;Tianwei Zhou","doi":"10.1109/JSEN.2025.3553904","DOIUrl":null,"url":null,"abstract":"Accurate polyp segmentation in colonoscopy images is crucial for early diagnosis and timely treatment of colorectal cancer. Recently, deep learning-based methods have exhibited distinct advantages in polyp segmentation. However, most existing methods often obtain ordinary performance due to the limited ability to extract context information and foreground cues. To overcome these drawbacks, we propose a context-aware deep network (CADNet) with cross-layer feature fusion for polyp segmentation in this article. Specifically, CADNet is designed with an encoder-decoder framework. In the encoder stage, a group guidance context module (GGCM) is proposed for the top-three highest layers to make the network focus more on target regions by aggregating multiscale context information with the prior knowledge from the adjacent high layer. In the decoder stage, a cross-layer feature fusion module (CLFFM) is proposed to obtain rich foreground cues by adaptively fusing low-level spatial details and high-level semantic concepts with the assistance of an attention mechanism. After that, the foreground cues serve as the input for the subsequent decoding stage. Considering that there is a lack of two-modal datasets of polyp segmentation, we construct a new dataset with 1200 images. Extensive experiments on our dataset and three public datasets demonstrate that our CADNet has considerable generalization ability across two-modal data and cross-center data and obtains comparable results compared with the state-of-the-art methods.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 9","pages":"16515-16527"},"PeriodicalIF":4.3000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Context-Aware Deep Network With Cross-Layer Feature Fusion for Polyp Segmentation\",\"authors\":\"Guanghui Yue;Shangjie Wu;Guibin Zhuo;Gang Li;Wanwan Han;Lei Yang;Jingfeng Du;Bin Jiang;Tianwei Zhou\",\"doi\":\"10.1109/JSEN.2025.3553904\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Accurate polyp segmentation in colonoscopy images is crucial for early diagnosis and timely treatment of colorectal cancer. Recently, deep learning-based methods have exhibited distinct advantages in polyp segmentation. However, most existing methods often obtain ordinary performance due to the limited ability to extract context information and foreground cues. To overcome these drawbacks, we propose a context-aware deep network (CADNet) with cross-layer feature fusion for polyp segmentation in this article. Specifically, CADNet is designed with an encoder-decoder framework. In the encoder stage, a group guidance context module (GGCM) is proposed for the top-three highest layers to make the network focus more on target regions by aggregating multiscale context information with the prior knowledge from the adjacent high layer. In the decoder stage, a cross-layer feature fusion module (CLFFM) is proposed to obtain rich foreground cues by adaptively fusing low-level spatial details and high-level semantic concepts with the assistance of an attention mechanism. After that, the foreground cues serve as the input for the subsequent decoding stage. 
Considering that there is a lack of two-modal datasets of polyp segmentation, we construct a new dataset with 1200 images. Extensive experiments on our dataset and three public datasets demonstrate that our CADNet has considerable generalization ability across two-modal data and cross-center data and obtains comparable results compared with the state-of-the-art methods.\",\"PeriodicalId\":447,\"journal\":{\"name\":\"IEEE Sensors Journal\",\"volume\":\"25 9\",\"pages\":\"16515-16527\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Sensors Journal\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10945549/\",\"RegionNum\":2,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Sensors Journal","FirstCategoryId":"103","ListUrlMain":"https://ieeexplore.ieee.org/document/10945549/","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Context-Aware Deep Network With Cross-Layer Feature Fusion for Polyp Segmentation
Accurate polyp segmentation in colonoscopy images is crucial for the early diagnosis and timely treatment of colorectal cancer. Recently, deep learning-based methods have shown distinct advantages in polyp segmentation. However, most existing methods achieve only mediocre performance because of their limited ability to extract context information and foreground cues. To overcome these drawbacks, this article proposes a context-aware deep network (CADNet) with cross-layer feature fusion for polyp segmentation. Specifically, CADNet adopts an encoder-decoder framework. In the encoder stage, a group guidance context module (GGCM) is applied to the three highest layers, helping the network focus on target regions by aggregating multiscale context information with prior knowledge from the adjacent higher layer. In the decoder stage, a cross-layer feature fusion module (CLFFM) obtains rich foreground cues by adaptively fusing low-level spatial details with high-level semantic concepts under the guidance of an attention mechanism. The resulting foreground cues then serve as the input to the subsequent decoding stage. Because two-modal polyp segmentation datasets are scarce, we construct a new dataset of 1200 images. Extensive experiments on our dataset and three public datasets demonstrate that CADNet generalizes well across two-modal and cross-center data and achieves results comparable to state-of-the-art methods.
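To make the cross-layer fusion idea concrete, the following is a minimal PyTorch sketch of a module in the spirit of the CLFFM described in the abstract: a low-level feature map (rich in spatial detail) and a high-level feature map (rich in semantics) are projected to a common width, fused, and reweighted by channel attention. The class and argument names (CLFFM, low_feat, high_feat) and the squeeze-and-excitation-style attention are illustrative assumptions; the paper's actual design may differ.

```python
# Illustrative sketch only: names and attention design are assumptions,
# not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CLFFM(nn.Module):
    """Fuses low-level spatial detail with high-level semantics via channel attention."""

    def __init__(self, low_channels: int, high_channels: int, out_channels: int):
        super().__init__()
        # Project both inputs to a common channel dimension.
        self.low_proj = nn.Conv2d(low_channels, out_channels, kernel_size=1)
        self.high_proj = nn.Conv2d(high_channels, out_channels, kernel_size=1)
        # Squeeze-and-excitation-style channel attention over the fused map.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_channels, out_channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels // 4, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.out_conv = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser high-level map to the low-level spatial size.
        high = F.interpolate(self.high_proj(high_feat), size=low_feat.shape[2:],
                             mode="bilinear", align_corners=False)
        fused = self.low_proj(low_feat) + high
        # Reweight channels so semantically informative channels dominate.
        fused = fused * self.attn(fused)
        return self.out_conv(fused)


if __name__ == "__main__":
    # Example: fuse a 64-channel low-level map at 88x88 with a 256-channel
    # high-level map at 22x22 into a 64-channel decoder feature.
    clffm = CLFFM(low_channels=64, high_channels=256, out_channels=64)
    low = torch.randn(1, 64, 88, 88)
    high = torch.randn(1, 256, 22, 22)
    print(clffm(low, high).shape)  # torch.Size([1, 64, 88, 88])
```

In an encoder-decoder segmentation network of this kind, one such fusion module per decoder stage would pass its output (the "foreground cues") on to the next, coarser-to-finer decoding step.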
Journal Introduction:
The fields of interest of the IEEE Sensors Journal are the theory, design, fabrication, manufacturing, and applications of devices for sensing and transducing physical, chemical, and biological phenomena, with emphasis on the electronics and physics aspects of sensors and integrated sensor-actuators. IEEE Sensors Journal deals with the following:
-Sensor Phenomenology, Modelling, and Evaluation
-Sensor Materials, Processing, and Fabrication
-Chemical and Gas Sensors
-Microfluidics and Biosensors
-Optical Sensors
-Physical Sensors: Temperature, Mechanical, Magnetic, and others
-Acoustic and Ultrasonic Sensors
-Sensor Packaging
-Sensor Networks
-Sensor Applications
-Sensor Systems: Signals, Processing, and Interfaces
-Actuators and Sensor Power Systems
-Sensor Signal Processing for high precision and stability (amplification, filtering, linearization, modulation/demodulation) and under harsh conditions (EMC, radiation, humidity, temperature); energy consumption/harvesting
-Sensor Data Processing (soft computing with sensor data, e.g., pattern recognition, machine learning, evolutionary computation; sensor data fusion; processing of wave (e.g., electromagnetic and acoustic) and non-wave (e.g., chemical, gravity, particle, thermal, radiative, and non-radiative) sensor data; detection, estimation, and classification based on sensor data)
-Sensors in Industrial Practice