CS-UNet: Cross-scale U-Net with Semantic-position dependencies for retinal vessel segmentation.

IF 1.1 | CAS Tier 3, Computer Science | JCR Q4, Computer Science, Artificial Intelligence
Network-Computation in Neural Systems | Pub Date: 2024-05-01 | Epub Date: 2023-12-05 | DOI: 10.1080/0954898X.2023.2288858
Ying Yang, Shengbin Yue, Haiyan Quan
{"title":"CS-UNet: Cross-scale U-Net with Semantic-position dependencies for retinal vessel segmentation.","authors":"Ying Yang, Shengbin Yue, Haiyan Quan","doi":"10.1080/0954898X.2023.2288858","DOIUrl":null,"url":null,"abstract":"<p><p>Accurate retinal vessel segmentation is the prerequisite for early recognition and treatment of retina-related diseases. However, segmenting retinal vessels is still challenging due to the intricate vessel tree in fundus images, which has a significant number of tiny vessels, low contrast, and lesion interference. For this task, the u-shaped architecture (U-Net) has become the de-facto standard and has achieved considerable success. However, U-Net is a pure convolutional network, which usually shows limitations in global modelling. In this paper, we propose a novel Cross-scale U-Net with Semantic-position Dependencies (CS-UNet) for retinal vessel segmentation. In particular, we first designed a Semantic-position Dependencies Aggregator (SPDA) and incorporate it into each layer of the encoder to better focus on global contextual information by integrating the relationship of semantic and position. To endow the model with the capability of cross-scale interaction, the Cross-scale Relation Refine Module (CSRR) is designed to dynamically select the information associated with the vessels, which helps guide the up-sampling operation. Finally, we have evaluated CS-UNet on three public datasets: DRIVE, CHASE_DB1, and STARE. Compared to most existing state-of-the-art methods, CS-UNet demonstrated better performance.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"134-153"},"PeriodicalIF":1.1000,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Network-Computation in Neural Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1080/0954898X.2023.2288858","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/12/5 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Accurate retinal vessel segmentation is a prerequisite for the early recognition and treatment of retina-related diseases. However, segmenting retinal vessels remains challenging because of the intricate vessel tree in fundus images, which contains a large number of tiny vessels, low contrast, and lesion interference. For this task, the u-shaped architecture (U-Net) has become the de facto standard and has achieved considerable success. However, U-Net is a purely convolutional network, which usually shows limitations in global modelling. In this paper, we propose a novel Cross-scale U-Net with Semantic-position Dependencies (CS-UNet) for retinal vessel segmentation. In particular, we first design a Semantic-position Dependencies Aggregator (SPDA) and incorporate it into each layer of the encoder to better capture global contextual information by integrating the relationship between semantics and position. To endow the model with the capability of cross-scale interaction, a Cross-scale Relation Refine Module (CSRR) is designed to dynamically select the information associated with the vessels, which helps guide the up-sampling operation. Finally, we evaluated CS-UNet on three public datasets: DRIVE, CHASE_DB1, and STARE. Compared with most existing state-of-the-art methods, CS-UNet demonstrates better performance.
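Since the abstract describes the architecture only at a high level, the following is a minimal PyTorch sketch of where modules in the spirit of SPDA and CSRR could attach to a U-Net encoder-decoder. Everything below is an assumption for illustration: SPDAPlaceholder is a simple global channel-reweighting block standing in for the Semantic-position Dependencies Aggregator, and CSRRPlaceholder is a gated skip-connection fusion standing in for the Cross-scale Relation Refine module; neither reproduces the authors' actual design, channel widths, or network depth.

```python
# Illustrative skeleton only: the abstract does not specify SPDA/CSRR internals,
# so both blocks below are hypothetical stand-ins used to show where such
# modules sit in a U-Net style encoder-decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class SPDAPlaceholder(nn.Module):
    """Hypothetical stand-in for the Semantic-position Dependencies Aggregator:
    a squeeze-and-excitation style reweighting that injects global context."""
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
                                nn.Linear(ch // r, ch), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global context vector per channel
        return x * w.unsqueeze(-1).unsqueeze(-1)  # reweight the feature maps

class CSRRPlaceholder(nn.Module):
    """Hypothetical stand-in for the Cross-scale Relation Refine module:
    gates the skip connection with the up-sampled deeper feature before fusion."""
    def __init__(self, skip_ch, deep_ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(skip_ch + deep_ch, skip_ch, 1), nn.Sigmoid())
    def forward(self, skip, deep):
        deep_up = F.interpolate(deep, size=skip.shape[2:], mode="bilinear", align_corners=False)
        g = self.gate(torch.cat([skip, deep_up], dim=1))  # cross-scale gating weights
        return torch.cat([skip * g, deep_up], dim=1)      # refined skip + deep feature

class CSUNetSketch(nn.Module):
    """Two-level encoder/decoder showing where SPDA- and CSRR-like blocks attach."""
    def __init__(self, in_ch=3, n_classes=1, base=32):
        super().__init__()
        self.enc1, self.spda1 = conv_block(in_ch, base), SPDAPlaceholder(base)
        self.enc2, self.spda2 = conv_block(base, base * 2), SPDAPlaceholder(base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.csrr2 = CSRRPlaceholder(base * 2, base * 4)
        self.dec2 = conv_block(base * 2 + base * 4, base * 2)
        self.csrr1 = CSRRPlaceholder(base, base * 2)
        self.dec1 = conv_block(base + base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)
    def forward(self, x):
        e1 = self.spda1(self.enc1(x))                     # encoder stage 1 + aggregator
        e2 = self.spda2(self.enc2(F.max_pool2d(e1, 2)))   # encoder stage 2 + aggregator
        b = self.bottleneck(F.max_pool2d(e2, 2))
        d2 = self.dec2(self.csrr2(e2, b))                 # cross-scale refined fusion
        d1 = self.dec1(self.csrr1(e1, d2))
        return self.head(d1)                              # per-pixel vessel logits

if __name__ == "__main__":
    logits = CSUNetSketch()(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 1, 64, 64])
```

The placement mirrors the abstract: an aggregator follows each encoder stage to inject global context, and the cross-scale block refines each skip connection with the deeper, up-sampled feature before the decoder fuses them and up-samples toward the segmentation map.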

Source journal
Network-Computation in Neural Systems (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 3.70
Self-citation rate: 1.30%
Annual publications: 22
Review time: >12 weeks
Journal description: Network: Computation in Neural Systems welcomes submissions of research papers that integrate theoretical neuroscience with experimental data, emphasizing the utilization of cutting-edge technologies. We invite authors and researchers to contribute their work in the following areas:
Theoretical Neuroscience: This section encompasses neural network modeling approaches that elucidate brain function.
Neural Networks in Data Analysis and Pattern Recognition: We encourage submissions exploring the use of neural networks for data analysis and pattern recognition, including but not limited to image analysis and speech processing applications.
Neural Networks in Control Systems: This category encompasses the utilization of neural networks in control systems, including robotics, state estimation, fault detection, and diagnosis.
Analysis of Neurophysiological Data: We invite submissions focusing on the analysis of neurophysiology data obtained from experimental studies involving animals.
Analysis of Experimental Data on the Human Brain: This section includes papers analyzing experimental data from studies on the human brain, utilizing imaging techniques such as MRI, fMRI, EEG, and PET.
Neurobiological Foundations of Consciousness: We encourage submissions exploring the neural bases of consciousness in the brain and its simulation in machines.