Strided U-Net Model: Retinal Vessels Segmentation using Dice Loss

T. Soomro, O. Hellwich, Ahmed J. Afifi, M. Paul, Junbin Gao, Lihong Zheng
{"title":"Strided U-Net Model: Retinal Vessels Segmentation using Dice Loss","authors":"T. Soomro, O. Hellwich, Ahmed J. Afifi, M. Paul, Junbin Gao, Lihong Zheng","doi":"10.1109/DICTA.2018.8615770","DOIUrl":null,"url":null,"abstract":"Accurate segmentation of vessels is an arduous task in the analysis of medical images, particularly the extraction of vessels from colored retinal fundus images. Many image processing tactics have been implemented for accurate detection of vessels, but many vessels have been dropped. In this paper, we propose a deep learning method based on the convolutional neural network (CNN) with dice loss function for retinal vessel segmentation. To our knowledge, we are the first to form the CNN on the basis of the dice loss function for the extraction of vessels from a colored retinal image. The pre-processing steps are used to eliminate uneven illumination to make the training process more efficient. We implement the CNN model based on a variational auto-encoder (VAE), which is a modified version of U-Net. Our main contribution to the implementation of CNN is to replace all pooling layers with progressive convolution and deeper layers. It takes the retinal image as input and generates the image of segmented output vessels with the same resolution as the input image. The proposed segmentation method showed better performance than the existing methods on the most used databases, namely: DRIVE and STARE. In addition, it gives a sensitivity of 0.739 on the DRIVE database with an accuracy of 0.948 and a sensitivity of 0.748 on the STARE database with an accuracy of 0.947.","PeriodicalId":130057,"journal":{"name":"2018 Digital Image Computing: Techniques and Applications (DICTA)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"47","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA.2018.8615770","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 47

Abstract

Accurate segmentation of vessels is a challenging task in the analysis of medical images, particularly the extraction of vessels from colored retinal fundus images. Many image processing techniques have been applied to detect vessels accurately, but many vessels are still missed. In this paper, we propose a deep learning method for retinal vessel segmentation based on a convolutional neural network (CNN) trained with the dice loss function. To our knowledge, we are the first to train a CNN with the dice loss function for extracting vessels from colored retinal images. Pre-processing steps are used to eliminate uneven illumination and make the training process more efficient. We implement the CNN model based on a variational auto-encoder (VAE)-style architecture, a modified version of U-Net. Our main contribution in the network design is to replace all pooling layers with strided convolutions and deeper layers. The network takes a retinal image as input and generates a segmented vessel map at the same resolution as the input image. The proposed segmentation method outperforms existing methods on the most widely used databases, namely DRIVE and STARE. It achieves a sensitivity of 0.739 with an accuracy of 0.948 on the DRIVE database, and a sensitivity of 0.748 with an accuracy of 0.947 on the STARE database.
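
The abstract names the dice loss as the central training objective. The paper does not publish code, so the following is only a minimal sketch of a soft dice loss for binary vessel masks, assuming a PyTorch setup; the smoothing constant and per-batch reduction are illustrative choices, not taken from the paper.

```python
# Hypothetical soft dice loss sketch (not the authors' implementation).
import torch
import torch.nn as nn


class DiceLoss(nn.Module):
    """Soft dice loss: 1 - 2|P.G| / (|P| + |G|), averaged over the batch."""

    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.smooth = smooth  # avoids division by zero on empty masks

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: (N, 1, H, W) raw network outputs; targets: (N, 1, H, W) in {0, 1}
        probs = torch.sigmoid(logits).view(logits.size(0), -1)
        targets = targets.view(targets.size(0), -1)
        intersection = (probs * targets).sum(dim=1)
        dice = (2.0 * intersection + self.smooth) / (
            probs.sum(dim=1) + targets.sum(dim=1) + self.smooth
        )
        return 1.0 - dice.mean()
```

Minimizing this quantity directly optimizes overlap between predicted and ground-truth vessel pixels, which is why dice-based objectives are often preferred over plain cross-entropy when the foreground (thin vessels) is a small fraction of the image.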
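The other stated contribution is replacing every pooling layer with strided convolutions in a U-Net-style encoder. Below is a hedged sketch of one such encoder stage; the channel widths, kernel sizes, and use of batch normalization are assumptions for illustration only.

```python
# Hypothetical encoder stage that downsamples with a stride-2 convolution
# instead of max-pooling (the "strided" idea from the title).
import torch
import torch.nn as nn


def encoder_stage(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions followed by a stride-2 convolution for downsampling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        # replaces nn.MaxPool2d(2): a learnable, stride-2 downsampling layer
        nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=2, padding=1),
    )


# Example: a 512x512 input is halved to 256x256 while features are learned.
x = torch.randn(1, 1, 512, 512)
print(encoder_stage(1, 32)(x).shape)  # torch.Size([1, 32, 256, 256])
```

Using a strided convolution instead of pooling keeps the downsampling step trainable, which is the rationale such architectures usually give for the swap.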