Robustness of Deep Learning Methods for Ocular Fundus Segmentation: Evaluation of Blur Sensitivity

V. Petrovic, Gorana Gojic, D. Dragan, Dušan B. Gajić, Nebojsa Horvat, R. Turovic, A. Oros
DOI: 10.1109/INISTA49547.2020.9194612
Published in: 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), August 2020
Citations: 3

Abstract

This paper analyzes the sensitivity of deep learning methods for ocular fundus segmentation, using an empirical methodology based on non-adversarially perturbed datasets. The research is motivated by the needs of mass screening and self-administered testing, in which autonomous or semi-autonomous artificial-intelligence methods are required and may be given substandard images with focus problems. Such substandard images are simulated with blurring algorithms of varying design and kernel size, and the networks' sensitivity to them is compared. Each network's result on the unblurred original, drawn from the testing subset of the DRIVE ocular fundus image dataset, serves as the ground truth. The networks studied were VesselUNet (Ronneberger et al. and Huang et al.), VesselGAN (Son et al.), and VesselFCNN (Oliveira et al.). Statistical analysis of the resulting n = 3600 sample shows that sensitivity as a function of kernel size can be fitted with a sigmoid (with a maximum final tolerance of 9.33e-6) and, using robust pairwise Holm-corrected comparisons, that VesselUNet is the least sensitive (p-values < 5e-8). The least disruptive perturbation was Gaussian blur; the most disruptive was motion blur unaligned with the cardinal axes. These findings suggest that the problem merits deeper investigation and that more robust methods for ocular fundus segmentation are worth developing.
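The blur perturbations described in the abstract can be sketched as follows. This is not the paper's implementation — it is a minimal illustration, assuming NumPy/SciPy, of how Gaussian blur and off-axis motion blur of a given kernel size might be applied to an image; the image, kernel size, sigma, and angle below are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def motion_blur_kernel(size, angle_deg):
    """Build a normalized linear motion-blur kernel of the given size,
    oriented at angle_deg (0 = horizontal; 45 = diagonal, i.e. unaligned
    with the cardinal axes)."""
    kernel = np.zeros((size, size))
    center = (size - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    dx, dy = np.cos(theta), np.sin(theta)
    # Rasterize a line segment through the kernel center.
    for t in np.linspace(-center, center, size * 4):
        x = int(round(center + t * dx))
        y = int(round(center + t * dy))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

# Hypothetical stand-in for a fundus image.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

gaussian_blurred = gaussian_filter(image, sigma=2.0)           # least disruptive in the study
motion_blurred = convolve(image, motion_blur_kernel(9, 45.0))  # most disruptive: off-axis motion blur
```

In the paper's findings, Gaussian blur (the first call) was the least disruptive perturbation, while motion blur at an angle unaligned with the cardinal axes (the second) was the most disruptive.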
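The two statistical steps named in the abstract — fitting a sigmoid to sensitivity over kernel size, and Holm-corrected pairwise comparisons — can be sketched as below. The data points are synthetic and merely shaped like the reported trend; none of this reproduces the paper's actual measurements or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(k, L, k0, s):
    """Logistic curve: sensitivity rises toward a plateau L as
    kernel size k passes the midpoint k0 with steepness 1/s."""
    return L / (1.0 + np.exp(-(k - k0) / s))

def holm_correction(pvals):
    """Holm step-down adjustment: scale the i-th smallest p-value
    by (m - i) and enforce monotonicity with a running maximum."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(np.argsort(p)):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# Synthetic (kernel size, sensitivity) points with hypothetical parameters.
rng = np.random.default_rng(1)
kernel_sizes = np.arange(1, 20, dtype=float)
sensitivity = sigmoid(kernel_sizes, 0.8, 9.0, 2.0) + 0.01 * rng.normal(size=kernel_sizes.size)

# Least-squares sigmoid fit, as the abstract describes for its n = 3600 sample.
(L_hat, k0_hat, s_hat), _ = curve_fit(sigmoid, kernel_sizes, sensitivity, p0=[1.0, 10.0, 1.0])
```

The Holm adjustment here is the standard step-down procedure; applied to the paper's pairwise network comparisons, it is what yields the corrected p-values below 5e-8 for VesselUNet being least sensitive.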