Modified Anam-Net Based Lightweight Deep Learning Model for Retinal Vessel Segmentation

Syed Irtaza Haider, Khursheed Aurangzeb, Musaed A. Alhussein
{"title":"基于改进Anam-Net的视网膜血管分割轻量级深度学习模型","authors":"Syed Irtaza Haider, Khursheed Aurangzeb, Musaed A. Alhussein","doi":"10.32604/cmc.2022.025479","DOIUrl":null,"url":null,"abstract":"The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low-contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address the aforementioned challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessels segmentation. The proposed deep learning model consists of encoder-decoder architecture along with bottleneck layers that consist of depth-wise squeezing, followed by full-convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 x 3 convolution layers (without spatial pooling in between) instead of a single 3 x 3 convolution layer as proposed in Anam-Net to increase the receptive field and to reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not have an increasing number of filters for decreasing resolution. These modifications do not compromise on the segmentation accuracy, but they do make the architecture significantly lighter in terms of the number of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most of the other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, the attribute of being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate to be used in the screening platforms at the point of care. We evaluated our proposed model on open-access datasets namely, DRIVE, STARE, and CHASE_DB. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, fully convolutional network (FCN), SegNet, CCNet, ResWNet, residual connection-based encoder-decoder network (RCED-Net), and scale-space approx. network (SSANet) in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), and the area under the ROC curve (AUC)} with the scores of {0.8184, 0.8561, 0.9669, and 0.9868} on the DRIVE dataset, the scores of {0.8233, 0.8581, 0.9726, and 0.9901} on the STARE dataset, and the scores of {0.8138, 0.8604, 0.9752, and 0.9906} on the CHASE_DB dataset. Additionally, we perform cross-training experiments on the DRIVE and STARE datasets. 
The result of this experiment indicates the generalization ability and robustness of the proposed model.","PeriodicalId":329824,"journal":{"name":"Computers, Materials & Continua","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Modified Anam-Net Based Lightweight Deep Learning Model for Retinal Vessel Segmentation\",\"authors\":\"Syed Irtaza Haider, Khursheed Aurangzeb, Musaed A. Alhussein\",\"doi\":\"10.32604/cmc.2022.025479\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low-contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address the aforementioned challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessels segmentation. The proposed deep learning model consists of encoder-decoder architecture along with bottleneck layers that consist of depth-wise squeezing, followed by full-convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 x 3 convolution layers (without spatial pooling in between) instead of a single 3 x 3 convolution layer as proposed in Anam-Net to increase the receptive field and to reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not have an increasing number of filters for decreasing resolution. These modifications do not compromise on the segmentation accuracy, but they do make the architecture significantly lighter in terms of the number of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most of the other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, the attribute of being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate to be used in the screening platforms at the point of care. We evaluated our proposed model on open-access datasets namely, DRIVE, STARE, and CHASE_DB. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, fully convolutional network (FCN), SegNet, CCNet, ResWNet, residual connection-based encoder-decoder network (RCED-Net), and scale-space approx. network (SSANet) in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), and the area under the ROC curve (AUC)} with the scores of {0.8184, 0.8561, 0.9669, and 0.9868} on the DRIVE dataset, the scores of {0.8233, 0.8581, 0.9726, and 0.9901} on the STARE dataset, and the scores of {0.8138, 0.8604, 0.9752, and 0.9906} on the CHASE_DB dataset. Additionally, we perform cross-training experiments on the DRIVE and STARE datasets. 
The result of this experiment indicates the generalization ability and robustness of the proposed model.\",\"PeriodicalId\":329824,\"journal\":{\"name\":\"Computers, Materials & Continua\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers, Materials & Continua\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.32604/cmc.2022.025479\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers, Materials & Continua","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32604/cmc.2022.025479","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation, at the cost of high computational complexity. To address these challenges and reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed model combines an encoder-decoder architecture with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we use a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of the single 3 × 3 convolution layer proposed in Anam-Net, to increase the receptive field and reduce the number of trainable parameters. The proposed method uses fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as resolution decreases. These modifications do not compromise segmentation accuracy, but they do make the architecture significantly lighter in terms of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in terms of both segmentation accuracy and model size makes the proposed method a suitable candidate for screening platforms at the point of care. We evaluated the proposed model on the open-access DRIVE, STARE, and CHASE_DB datasets. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, 0.9906} on the CHASE_DB dataset. Additionally, we perform cross-training experiments on the DRIVE and STARE datasets; the results indicate the generalization ability and robustness of the proposed model.
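
The paper itself ships no reference code, but the two architectural ideas described in the abstract are straightforward to illustrate. The following is a minimal PyTorch sketch, under stated assumptions, of (a) the stack of two 3 × 3 convolution layers that replaces Anam-Net's single 3 × 3 layer and (b) a depth-wise squeeze / full-convolution / depth-wise stretch bottleneck. All module names, channel counts, and the squeeze ratio are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of the two building blocks named in the abstract.
# Layer names, channel counts, and the squeeze ratio are assumptions;
# this is not the authors' published implementation.
import torch
import torch.nn as nn


class StackedConvBlock(nn.Module):
    """Two 3x3 convolutions stacked without spatial pooling in between.

    The stack covers a 5x5 effective receptive field at a cost of
    2 * (3*3) * C^2 weights, versus 5*5 * C^2 for an equivalent single
    5x5 layer, which is how stacking enlarges the receptive field
    while keeping the trainable-parameter count down.
    """

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


class BottleneckBlock(nn.Module):
    """Depth-wise squeeze -> full convolution -> depth-wise stretch.

    1x1 convolutions first shrink and finally restore the channel
    depth around one full 3x3 convolution, so the expensive spatial
    filtering runs on a reduced number of channels.
    """

    def __init__(self, channels: int, squeeze_ratio: int = 4):
        super().__init__()
        squeezed = max(channels // squeeze_ratio, 1)
        self.squeeze = nn.Conv2d(channels, squeezed, kernel_size=1)
        self.full_conv = nn.Conv2d(squeezed, squeezed, kernel_size=3, padding=1)
        self.stretch = nn.Conv2d(squeezed, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.squeeze(x))
        out = self.act(self.full_conv(out))
        return self.act(self.stretch(out))
```

A quick shape check such as `BottleneckBlock(64)(torch.randn(1, 64, 48, 48))` confirms that both spatial size and channel depth are preserved, which is what allows such bottlenecks to sit between encoder and decoder stages without further plumbing.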